Tagged: Quanta Magazine Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 4:46 pm on February 13, 2021 Permalink | Reply
    Tags: "In Violation of Einstein Black Holes Might Have ‘Hair’", According to Einstein’s general theory of relativity black holes have only three observable properties: mass; spin; and charge. Additional properties- or “hair”- do not exist., All of this could allow us to probe ideas such as string theory and quantum gravity in a way that has never been possible before., , Black hole hair hair could be detected by gravitational wave observatories., Black hole hair if it exists is expected to be incredibly short-lived lasting just fractions of a second., , ESA Lisa, Instabilities would effectively give some regions of a black hole’s horizon a stronger gravitational pull than others., Instabilities would make otherwise identical black holes distinguishable., , Quanta Magazine, Some black holes might have instabilities on their event horizons., Yet scientists have begun to wonder if the “no-hair theorem” is strictly true.   

    From Quanta Magazine: “In Violation of Einstein, Black Holes Might Have ‘Hair’”

    From Quanta Magazine

    February 11, 2021
    Jonathan O’Callaghan

    According to Einstein’s general theory of relativity, black holes have only three observable properties: mass, spin and charge. Additional properties, or “hair,” do not exist. Credit: Andriy_A/Shutterstock.

    Identical twins have nothing on black holes. Twins may grow from the same genetic blueprints, but they can differ in a thousand ways — from temperament to hairstyle. Black holes, according to Albert Einstein’s theory of general relativity, can have just three characteristics — mass, spin and charge. If those values are the same for any two black holes, it is impossible to discern one twin from the other. Black holes, they say, have no hair.

    “In classical general relativity, they would be exactly identical,” said Paul Chesler, a theoretical physicist at Harvard University. “You can’t tell the difference.”

    Yet scientists have begun to wonder if the “no-hair theorem” is strictly true. In 2012, a mathematician named Stefanos Aretakis — then at the University of Cambridge (UK) and now at the University of Toronto (CA) — suggested that some black holes might have instabilities [Horizon Instability of Extremal Black Holes] on their event horizons. These instabilities would effectively give some regions of a black hole’s horizon a stronger gravitational pull than others. That would make otherwise identical black holes distinguishable [Physical Review Letters].

    However, his equations only showed that this was possible for so-called extremal black holes — ones that have a maximum value possible for either their mass, spin or charge. And as far as we know, “these black holes cannot exist, at least exactly, in nature,” said Chesler.

    But what if you had a near-extremal black hole, one that approached these extreme values but didn’t quite reach them? Such a black hole should be able to exist, at least in theory. Could it have detectable violations of the no-hair theorem?

    A paper published late last month [Physical Review D] shows that it could. Moreover, this hair could be detected by gravitational wave observatories.

    MIT/Caltech Advanced LIGO at Hanford, WA (US) and Livingston, LA (US), and the VIRGO gravitational wave interferometer near Pisa, Italy.

    “Aretakis basically suggested there was some information that was left on the horizon,” said Gaurav Khanna, a physicist at the University of Massachusetts (US) and the University of Rhode Island (US) and one of the co-authors. “Our paper opens up the possibility of measuring this hair.”

    In particular, the scientists suggest that remnants either of the black hole’s formation or of later disturbances, such as matter falling into the black hole, could create gravitational instabilities on or near the event horizon of a near-extremal black hole. “We would expect that the gravitational signal we would see would be quite different from ordinary black holes that are not extremal,” said Khanna.

    If black holes do have hair — thus retaining some information about their past — this could have implications for the famous black hole information paradox put forward by the late physicist Stephen Hawking, said Lia Medeiros, an astrophysicist at the Institute for Advanced Study in Princeton, New Jersey (US). That paradox distills the fundamental conflict between general relativity and quantum mechanics, the two great pillars of 20th-century physics. “If you violate one of the assumptions [of the information paradox], you might be able to solve the paradox itself,” said Medeiros. “One of the assumptions is the no-hair theorem.”

    The ramifications of that could be broad. “If we can prove the actual space-time of the black hole outside of the black hole is different from what we expect, then I think that is going to have really huge implications for general relativity,” said Medeiros, who co-authored a paper in October [Physical Review Letters] that addressed whether the observed geometry of black holes is consistent with predictions.

    Perhaps the most exciting aspect of this latest paper, however, is that it could provide a way to merge observations of black holes with fundamental physics. Detecting hair on black holes — perhaps the most extreme astrophysical laboratories in the universe — could allow us to probe ideas such as string theory and quantum gravity in a way that has never been possible before.

    “One of the big issues [with] string theory and quantum gravity is that it’s really hard to test those predictions,” said Medeiros. “So if you have anything that’s even remotely testable, that’s amazing.”

    There are major hurdles, however. It’s not certain that near-extremal black holes exist. (The best simulations at the moment typically produce black holes that are 30% away from being extremal, said Chesler.) And even if they do, it’s not clear if gravitational wave detectors would be sensitive enough to spot these instabilities from the hair.

    What’s more, the hair is expected to be incredibly short-lived, lasting just fractions of a second.

    But the paper itself, at least in principle, seems sound. “I don’t think that anybody in the community doubts it,” said Chesler. “It’s not speculative. It just turns out Einstein’s equations are so complicated that we’re discovering new properties of them on a yearly basis.”

    The next step would be to see what sort of signals we should be looking for in our gravitational detectors — either LIGO and Virgo, operating today, or future instruments like the European Space Agency’s space-based LISA instrument.



    ESA/NASA eLISA, a space-based observatory and the future of gravitational wave research.

    “One should now build upon their work and really compute what would be the frequency of this gravitational radiation, and understand how we could measure and identify it,” said Helvi Witek, an astrophysicist at the University of Illinois, Urbana-Champaign (US). “The next step is to go from this very nice and important theoretical study to what would be the signature.”

    There are plenty of reasons to want to do so. While the chances of a detection that would prove the paper correct are slim, such a discovery would not only challenge Einstein’s theory of general relativity but prove the existence of near-extremal black holes.

    “We would love to know if nature would even allow for such a beast to exist,” said Khanna. “It would have pretty dramatic implications for our field.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:18 pm on January 22, 2021 Permalink | Reply
    Tags: "Secret Ingredient Found to Power Supernovas", , , , , , , , Quanta Magazine   

    From Quanta Magazine: “Secret Ingredient Found to Power Supernovas” 

    From Quanta Magazine

    January 21, 2021
    Thomas Lewton

    Three-dimensional supernova simulations have solved the mystery of why they explode at all.

    Still from the video below. Credit: ALCF, D. Radice and H. Nagakura.


    Turbulent matter swirls around the center of a collapsing star. The supernova’s shock wave, shown in blue, gets an extra push from the turbulence, while the dense core at the center will go on to form a neutron star. Credit: D. Vartanyan, A. Burrows; thanks to ALCF, D. Radice and H. Nagakura.

    In 1987, a giant star exploded right next to our own Milky Way galaxy.

    SN 1987A remnant, imaged by ALMA. The inner region is contrasted with the outer shell, lacy white and blue circles, where the blast wave from the supernova is colliding with the envelope of gas ejected from the star prior to its powerful detonation. Image credit: ALMA / ESO / NAOJ / NRAO / Alexandra Angelich, NRAO / AUI / NSF.

    ESO/NRAO/NAOJ ALMA Array on the Chajnantor plateau in Chile’s Atacama Desert, at 5,000 metres.

    It was the brightest and closest supernova since the invention of the telescope some four centuries earlier, and just about every observatory turned to take a look. Perhaps most excitingly, specialized observatories buried deep underground captured shy subatomic particles called neutrinos streaming out of the blast.

    These particles were first proposed as the driving force behind supernovas in 1966, which made their detection a source of comfort to theorists who had been trying to understand the inner workings of the explosions. Yet over the decades, astrophysicists had constantly bumped into what appeared to be a fatal flaw in their neutrino-powered models.

    Neutrinos are famously aloof particles, and questions remained over exactly how neutrinos transfer their energy to the star’s ordinary matter under the extreme conditions of a collapsing star. Whenever theorists tried to model these intricate particle motions and interactions in computer simulations, the supernova’s shock wave would stall and fall back on itself. The failures “entrenched the idea that our leading theory for how supernovas explode maybe doesn’t work,” said Sean Couch, a computational astrophysicist at Michigan State University.

    Of course, the specifics of what goes on deep inside a supernova as it explodes have always been mysterious. It’s a cauldron of extremes, a turbulent soup of transmuting matter, where particles and forces often ignored in our everyday world become critical. Compounding the problem, the explosive interior is largely hidden from view, shrouded by clouds of hot gas. Understanding the details of supernovas “has been a central unsolved problem in astrophysics,” said Adam Burrows, an astrophysicist at Princeton University who has studied supernovas for more than 35 years.

    In recent years, however, theorists have been able to home in on the surprisingly complex mechanisms that make supernovas tick. Simulations that explode have become the norm, rather than the exception, Burrows wrote in Nature this month. Rival research groups’ computer codes are now agreeing [Journal of Physics G: Nuclear and Particle Physics] on how supernova shock waves evolve, while simulations have advanced so far that even the effects of Einstein’s notoriously intricate general relativity are being included [MNRAS]. The role of neutrinos is finally becoming understood.

    “It’s a watershed moment,” said Couch. What they’re finding is that without turbulence, collapsing stars may never form supernovas at all.

    A Chaotic Dance

    For much of a star’s life, the inward pull of gravity is delicately balanced by the outward push of radiation from nuclear reactions inside the star’s core. As the star runs out of fuel, gravity takes hold. The core collapses in on itself — plummeting at 150,000 kilometers per hour — causing temperatures to surge to 100 billion degrees Celsius and fusing the core into a solid ball of neutrons.

    The outer layers of the star continue to fall inward, but as they hit this incompressible neutron core, they bounce off it, creating a shock wave. In order for the shock wave to become an explosion, it must be driven outward with enough energy to escape the pull of the star’s gravity. The shock wave must also fight against the inward spiral of the star’s outermost layers, which are still falling onto the core.

    Until recently, the forces powering the shock wave were only understood in the blurriest of terms. For decades, computers were only powerful enough to run simplified models of the collapsing core. Stars were treated as perfect spheres, with the shock wave emanating from the center the same way in every direction. But as the shock wave moves outward in these one-dimensional models, it slows and then falters.

    Only in the last few years, with the growth of supercomputers, have theorists had enough computing power to model massive stars with the complexity needed to achieve explosions. The best models now integrate details such as the micro-level interactions between neutrinos and matter, the disordered motions of fluids, and recent advances in many different fields of physics — from nuclear physics to stellar evolution. Moreover, theorists can now run many simulations each year [MNRAS], allowing them to freely tweak the models and try out different starting conditions.

    One turning point came in 2015, when Couch and his collaborators ran a three-dimensional computer model of the final minutes of a massive star’s collapse [The Astrophysical Journal Letters]. Although the simulation only mapped out 160 seconds of the star’s life, it illuminated the role of an underappreciated player that helps stalled shock waves turn into fully fledged explosions.

    Hidden inside the belly of the beast, particles twist and turn chaotically. “It’s like boiling water on your stove. There are massive overturns of fluid inside the star, going at thousands of kilometers per second,” said Couch.

    This turbulence creates extra pressure behind the shock wave, pushing it further from the star’s center. Away from the center, the inward pull of gravity is weaker, and there’s less inward-falling matter to temper the shock wave. The turbulent matter bouncing around behind the shock wave also has more time to absorb neutrinos. Energy from the neutrinos then heats the matter and drives the shock wave into an explosion.

    For years, researchers had failed to realize the importance of turbulence, because it only reveals its full impact in simulations run in three dimensions. “What nature does effortlessly, it has taken us decades to achieve as we went up from one dimension to two and three dimensions,” said Burrows.


    Swirling matter surrounds the core of a supernova in the first half second after core collapse. In this simulation, the matter is colored by entropy, a measure of disorder. (Hotter colors like red indicate higher entropies.) Because of the turbulence, the explosion isn’t symmetric. Credit: D. Vartanyan, A. Burrows. Thanks to ALCF, D. Radice and H. Nagakura.

    These simulations have also revealed that turbulence results in an asymmetric explosion, where the star looks a bit like an hourglass. As the explosion pushes outward in one direction, matter keeps falling onto the core in another direction, fueling the star’s explosion further.

    These new simulations are giving researchers a better understanding of exactly how supernovas have shaped the universe we see today. “We can get the correct explosion energy range, and we can get the neutron star masses that we see left behind,” said Burrows. Supernovas are largely responsible for creating the universe’s budget of hefty elements such as oxygen and iron, and theorists are starting to use simulations to predict exactly how much of these heavy elements should be around. “We’re now starting to tackle problems that were unimaginable in the past,” said Tuguldur Sukhbold, a theoretical and computational astrophysicist at Ohio State University.

    The Next Blast

    Despite the exponential rise in computing power, a supernova simulation is far rarer than an observation in the sky. “Twenty years ago there were around 100 supernovae being discovered every year,” said Edo Berger, an astronomer at Harvard University. “Now we’re discovering 10,000 or 20,000 every year,” a rise driven by new telescopes that quickly and repeatedly scan the night sky. By contrast, in a year theorists carry out around 30 computer simulations. A single simulation, re-creating just a few minutes of core collapse, can take many months. “You check in every day and it’s only gone a millisecond,” said Couch. “It’s like watching molasses in the wintertime.”

    The broad accuracy of the new simulations has astrophysicists excited for the next nearby blast. “While we’re waiting for the next supernova [in our galaxy], we have a lot of work to do. We need to improve the theoretical modeling to understand what features we could detect,” said Irene Tamborra, a theoretical astrophysicist at the University of Copenhagen. “You cannot miss the opportunity, because it’s such a rare event.”

    Most supernovas are too far away from Earth for observatories to detect their neutrinos. Supernovas in the immediate vicinity of the Milky Way — like Supernova 1987A — only occur on average about once every half-century [CERN Courier].


    But if one does occur, astronomers will be able to “peer directly into the center of the explosion,” said Berger, by observing its gravitational waves. “Different groups have emphasized different processes as being important in the actual explosion of the star. And those different processes have different gravitational wave and neutrino signatures.”

    While theorists have now broadly reached a consensus on the most important factors driving supernovas, challenges remain. In particular, the outcome of the explosion is “very strongly dictated” by the structure of a star’s core before it collapses, said Sukhbold. Small differences are magnified into a variety of outcomes by the chaotic collapse, and so the evolution of a star before it collapses must also be accurately modeled [The Astrophysical Journal].

    Other questions include the role of intense magnetic fields [The Astrophysical Journal] in a rotating star’s core. “It’s very possible that you can have a hybrid mechanism of magnetic fields and neutrinos,” said Burrows. The way neutrinos change from one type — or “flavor” — into another and how this affects the explosion is also unclear.

    “There are a lot of ingredients that still need to be added to our simulations,” said Tamborra. “If a supernova were to explode tomorrow and it matches our theoretical predictions, then it means that all the ingredients that we are currently missing can safely be neglected. But if this is not the case, then we need to understand why.”

    See the full article here.




     
  • richardmitnick 11:30 am on January 8, 2021 Permalink | Reply
    Tags: "How I Learned to Love and Fear the Riemann Hypothesis", , , Quanta Magazine   

    From Quanta Magazine: “How I Learned to Love and Fear the Riemann Hypothesis” 

    From Quanta Magazine

    January 4, 2021
    Alex Kontorovich


    The Riemann Hypothesis, Explained

    I first heard of the Riemann hypothesis — arguably the most important and notorious unsolved problem in all of mathematics — from the late, great Eli Stein, a world-renowned mathematician at Princeton University. I was very fortunate that Professor Stein decided to reimagine the undergraduate analysis sequence during my sophomore year of college, in the spring of 2000. He did this by writing a rich, broad set of now-famous books on the subject (co-authored with then-graduate student Rami Shakarchi).

    In mathematics, analysis deals with the ideas of calculus in a rigorous, axiomatic way. Stein wrote four books in the series. The first was on Fourier analysis (the art and science of decomposing arbitrary signals into combinations of simple harmonic waves). The next was on complex analysis (which treats functions that have complex numbers as both inputs and outputs), followed by real analysis (which develops, among other things, a rigorous way to measure sizes of sets) and finally functional analysis (which deals broadly with functions of functions). These are core subjects containing foundational knowledge for any working mathematician.

    In Stein’s class, my fellow students and I were the guinea pigs on whom the material for his books was to be rehearsed. We had front-row seats as Eli (as I later came to call him) showcased his beloved subject’s far-reaching consequences: Look at how amazing analysis is, he would say. You can even use it to resolve problems in the distant world of number theory! Indeed, his book on Fourier analysis builds up to a proof of Dirichlet’s theorem on primes in arithmetic progressions, which says, for example, that infinitely many primes leave a remainder of 6 when divided by 35 (since 6 and 35 have no prime factors in common). And his course on complex analysis included a proof of the prime number theorem, which gives an asymptotic estimate for the number of primes below a growing bound. Moreover, I learned that if the Riemann hypothesis is true, we’ll get a much stronger prime number theorem than the one known today. To see why that is and for a closer look under the hood of this famous math problem, please watch the accompanying video at the top of this page.
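
    As a quick, concrete check of the special case quoted above, here is a minimal Python sketch (the helper names are ours, purely for illustration) that lists the primes below 700 leaving a remainder of 6 when divided by 35.

    # A small illustration of the Dirichlet example above: primes that leave
    # a remainder of 6 when divided by 35. Helper names are illustrative.
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    witnesses = [n for n in range(2, 700) if n % 35 == 6 and is_prime(n)]
    print(witnesses)  # [41, 181, 251, 461, 601]

    Dirichlet’s theorem guarantees that this list keeps growing without bound as the search range increases.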

    Despite Eli’s proselytizing to us on the wide-ranging power of analysis, I learned the opposite lesson: Look at how amazing number theory is — you can even use fields as far away as analysis to prove the things you want! Stein’s class helped set me on the path to becoming a number theorist. But as I came to understand more about the Riemann hypothesis over the years, I learned not to make it a focus of my research. It’s just too hard to make progress.

    After Princeton, I went off to graduate school at Columbia University. It was an exciting time to be working in number theory. In 2003, Dan Goldston and Cem Yıldırım announced a spectacular new result about gaps in the primes, only to withdraw the claim soon after. (As Goldston wrote years later on accepting the prestigious Cole Prize for these ideas: “While mathematicians often do not have much humility, we all have lots of experience with humiliation.”) Nevertheless, the ideas became an important ingredient in the Green-Tao theorem, which shows that the set of prime numbers contains arithmetic progressions of any given length. Then, working with János Pintz, Goldston and Yıldırım salvaged enough of their method to prove, in their breakthrough GPY theorem in 2005, that primes will infinitely often have gaps which are arbitrarily small when compared to the average gap. Moreover, if you could improve their result by any amount at all, you would prove that primes infinitely often differ by some bounded constant. And this would be a huge leap toward solving the notoriously difficult twin primes conjecture, which predicts that there are infinitely many pairs of primes that differ by 2.

    A meeting on how to extend the GPY method was immediately organized at the American Institute of Mathematics in San Jose, California. As a bright-eyed and bushy-tailed grad student, I felt extraordinarily lucky to be there among the world’s top experts. By the end of the week, the experts agreed that it was basically impossible to improve the GPY method to get bounded prime gaps. Fortunately, Yitang Zhang did not attend this meeting. Almost a decade later, after years of incredibly hard work in relative isolation, he found a way around the impasse and proved the experts wrong. I guess the moral of my story is that when people organize meetings on how not to solve the Riemann hypothesis (as they do from time to time), don’t go!

    See the full article here.




     
  • richardmitnick 12:59 pm on January 6, 2021 Permalink | Reply
    Tags: "Galaxy-Size Bubbles Discovered Towering Over the Milky Way", , , , , Every astronomer’s major headache: Peering out into space researchers have no depth perception. “We see a 2D map of a 3D universe.”, North Polar Spur, Quanta Magazine   

    From Quanta Magazine: “Galaxy-Size Bubbles Discovered Towering Over the Milky Way” 

    From Quanta Magazine

    January 6, 2021
    Charlie Wood

    In this X-ray view of the entire sky, giant bubbles are clearly visible extending above and below the plane of the Milky Way galaxy. The bubbles likely come from the supermassive black hole at the galaxy’s center. Credit: Jeremy Sanders, Hermann Brunner and the eSASS team (MPE); Eugene Churazov, Marat Gilfanov (on behalf of IKI).

    When Peter Predehl, an astrophysicist at the Max Planck Institute for Extraterrestrial Physics in Germany, first laid eyes on the new map of the universe’s hottest objects, he immediately recognized the aftermath of a galactic catastrophe. A bright yellow cloud billowed tens of thousands of light-years upward from the Milky Way’s flat disk, with a fainter twin reflected below.

    The structure was so obvious that it barely seemed necessary to describe it in writing. But “Nature wouldn’t accept [us] simply sending a picture and saying, ‘OK, we can see this,’” Predehl said. “Therefore, we did some analysis.”

    The results, which Nature published on December 9, have moved a decades-old idea from the fringe into the mainstream.

    In the 1950s, astronomers first spotted a radio wave-emitting arc hanging above — or to the “north” of — the galactic plane. In the decades since, the “North Polar Spur” has become something of a celestial Rorschach test. Some see the scattered innards of an ex-star that’s relatively close by. Others see evidence of a grander explosion.

    The controversy hinges on every astronomer’s major headache: Peering out into space, researchers have no depth perception. “We see a 2D map of a 3D universe,” said Kaustav Das, a researcher at the California Institute of Technology.

    For decades, most astronomers believed that the North Polar Spur was part of the local galactic neighborhood. Some studies concluded that it connects to nearby gas clouds. Others looked at its distortion of background stars and inferred that it’s a supernova remnant — a dusty cloud marking the gravestone of a dead star.

    Yet Yoshiaki Sofue, an astronomer at the University of Tokyo, has always thought the spur looked funky for a stellar debris cloud. Instead, he imagined the arc to be one stretch of a huge unseen structure — a pair of bubbles straddling the galaxy’s heart. He published simulations in 1977 [Astronomy and Astrophysics] that produced digital clouds lining up with the spur, and ever since then he has told anyone who would listen that the spur actually hovers tens of thousands of light-years above the disk. He described it as an expanding shock wave from a galactic calamity dating back millions of years.

    But if Sofue was right, there should also be a twin structure to the south of the galactic plane. Astronomers saw no trace of this counterpart, and most remained unconvinced.

    Then in 2010, the Fermi space telescope caught the faint gamma-ray glow [The Astrophysical Journal] of two humungous lobes, each extending roughly 20,000 light-years from the galaxy’s center. They were too small to trace the North Polar Spur, but they otherwise looked just like the galactic-scale clouds of hot gas Sofue predicted. Astronomers began to wonder: If the galaxy had at least one pair of bubbles, perhaps the spur was part of a second set?

    Credit: Samuel Velasco/Quanta Magazine; source: Peter Predehl et al.

    “The situation dramatically changed after the discovery of the Fermi bubbles,” said Jun Kataoka, an astronomer at Waseda University in Japan who has collaborated with Sofue.

    The new images have further cemented the change of opinion. They came from eROSITA, an orbiting X-ray telescope that launched in 2019 to track dark energy’s effect on galaxy clusters.

    After about ten years of development and integration, the eROSITA X-ray telescope is complete, with 7 mirror modules of 54 mirror shells each, combined with 7 specially built X-ray cameras. The telescope is shown here after final integration at MPE, shortly before transport for further testing. Credit: MPE.

    eROSITA (DLR/MPG) on board the Russian-German space probe Spektrum-Roentgen-Gamma (SRG).

    The eROSITA team released a preliminary map in June, the fruit of the telescope’s first six months of observations.

    The map traces X-ray bubbles that stand an estimated 45,000 light-years tall, engulfing the gamma-ray Fermi bubbles. Their X-rays shine from gas that measures 3 million to 4 million degrees Kelvin as it expands outward at 300 to 400 kilometers per second. And not only does the northern bubble align perfectly with the spur, its mirror image is obvious as well, just as Sofue predicted. “I was particularly happy to see the southern bubble clearly exhibited, so similar to my simulation,” he said.

    Still, a full interpretation of all North Polar Spur observations remains complex; a nearby supernova remnant could have parked itself right in front of the X-ray bubbles by chance, for instance, giving both interpretations elements of truth. In September, Das and collaborators used state-of-the-art observations of distant stars to show that something dusty is hanging out about 450 light-years away [MNRAS] — a stone’s throw, by galactic standards.

    In a composite image featuring both X-ray observations (blue) and gamma-ray observations (red), the X-ray bubbles and the Fermi bubbles are clearly visible. Credit: Peter Predehl.

    But the meaning of eROSITA’s mushroom clouds is clear: Something went bang in the center of the Milky Way around 15 million to 20 million years ago, around the same time hyenas and weasels were emerging on Earth.

    “I think now [the debate] is done, more or less,” said Predehl, who spent 25 years developing eROSITA.

    What exploded? Based on the energy required to make the clouds so big and so hot, there are two plausible sources.

    One possibility is that a wave of tens of thousands of stars popped into being and promptly blew up, behavior familiar from so-called starburst galaxies. But the bubbles appear rather pure, lacking the heavy atomic shrapnel that a cohort of exploding stars should have peppered them with. “The metal abundance is very small, so I don’t believe that the starburst activity happened,” Kataoka said.

    The alternative culprit is the supermassive black hole that sits at the galaxy’s heart. The 4-million-solar-mass leviathan is relatively quiet today. But if a large cloud of gas once strayed too close, the black hole could have switched on like a spotlight. While feasting on the hapless passerby, the black hole would have gobbled down half the cloud while energy from the other half sprayed out above and below the disk, inflating the X-ray bubbles and perhaps the Fermi bubbles too (although the two pairs could also represent separate episodes of activity, Predehl noted).

    Astronomers have long observed other galaxies that shoot out jets above and below their disks, and they’ve wondered what makes the central supermassive black holes in those galaxies churn so much more violently than ours does. The Fermi bubbles, and now the eROSITA bubbles, suggest that the main difference may simply be the passage of time.

    See the full article here.




     
  • richardmitnick 12:58 pm on January 3, 2021 Permalink | Reply
    Tags: "Computer Scientists Break Traveling Salesperson Record", After 44 years there’s finally a better way to find approximate solutions to the notoriously difficult traveling salesperson problem., , , Quanta Magazine, The traveling salesperson problem is one of a handful of foundational problems that theoretical computer scientists turn to again and again to test the limits of efficient computation., The traveling salesperson problem isn’t a problem- it’s an addiction., This optimization problem which seeks the shortest (or least expensive) round trip through a collection of cities has applications ranging from DNA sequencing to ride-sharing logistics.   

    From Quanta Magazine: “Computer Scientists Break Traveling Salesperson Record” 

    From Quanta Magazine

    October 8, 2020 [From Year End Wrap Up.]
    Erica Klarreich

    After 44 years, there’s finally a better way to find approximate solutions to the notoriously difficult traveling salesperson problem.

    Credit: Islenia Mil/Quanta Magazine.

    When Nathan Klein started graduate school two years ago, his advisers proposed a modest plan: to work together on one of the most famous, long-standing problems in theoretical computer science.

    Even if they didn’t manage to solve it, they figured, Klein would learn a lot in the process. He went along with the idea. “I didn’t know to be intimidated,” he said. “I was just a first-year grad student — I don’t know what’s going on.”

    Now, in a paper posted online in July, Klein and his advisers at the University of Washington, Anna Karlin and Shayan Oveis Gharan, have finally achieved a goal computer scientists have pursued for nearly half a century: a better way to find approximate solutions to the traveling salesperson problem.

    This optimization problem, which seeks the shortest (or least expensive) round trip through a collection of cities, has applications ranging from DNA sequencing to ride-sharing logistics. Over the decades, it has inspired many of the most fundamental advances in computer science, helping to illuminate the power of techniques such as linear programming. But researchers have yet to fully explore its possibilities — and not for want of trying.

    The traveling salesperson problem “isn’t a problem, it’s an addiction,” as Christos Papadimitriou, a leading expert in computational complexity, is fond of saying.

    Most computer scientists believe that there is no algorithm that can efficiently find the best solutions for all possible combinations of cities. But in 1976, Nicos Christofides came up with an algorithm that efficiently finds approximate solutions — round trips that are at most 50% longer than the best round trip. At the time, computer scientists expected that someone would soon improve on Christofides’ simple algorithm and come closer to the true solution. But the anticipated progress did not arrive.

    “A lot of people spent countless hours trying to improve this result,” said Amin Saberi of Stanford University.

    Now Karlin, Klein and Oveis Gharan have proved that an algorithm devised a decade ago beats Christofides’ 50% factor, though they were only able to subtract 0.2 billionth of a trillionth of a trillionth of a percent. Yet this minuscule improvement breaks through both a theoretical logjam and a psychological one. Researchers hope that it will open the floodgates to further improvements.

    “This is a result I have wanted all my career,” said David Williamson of Cornell University, who has been studying the traveling salesperson problem since the 1980s.

    The traveling salesperson problem is one of a handful of foundational problems that theoretical computer scientists turn to again and again to test the limits of efficient computation. The new result “is the first step towards showing that the frontiers of efficient computation are in fact better than what we thought,” Williamson said.

    Fractional Progress

    While there is probably no efficient method that always finds the shortest trip, it is possible to find something almost as good: the shortest tree connecting all the cities, meaning a network of connections (or “edges”) with no closed loops. Christofides’ algorithm uses this tree as the backbone for a round-trip tour, adding extra edges to convert it into a round trip.

    Any round-trip route must have an even number of edges into each city, since every arrival is followed by a departure. It turns out that the reverse is also true — if every city in a network has an even number of connections then the edges of the network must trace a round trip.

    The shortest tree connecting all the cities lacks this evenness property, since any city at the end of a branch has just one connection to another city. So to turn the shortest tree into a round trip, Christofides (who died last year) found the best way to connect pairs of cities that have odd numbers of edges. Then he proved that the resulting round trip will never be more than 50% longer than the best possible round trip.

    In doing so, he devised perhaps the most famous approximation algorithm in theoretical computer science — one that usually forms the first example in textbooks and courses.
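
    To make those steps concrete, here is a minimal Python sketch of Christofides’ scheme using the networkx library. It assumes a complete weighted graph whose distances obey the triangle inequality; the function name and structure are our own illustration, not code from any of the papers discussed here.

    import networkx as nx

    def christofides_tour(graph):
        # 1. Shortest tree connecting all the cities (a minimum spanning tree).
        tree = nx.minimum_spanning_tree(graph)

        # 2. Cities touched by an odd number of tree edges need to be paired up.
        odd_cities = [city for city, degree in tree.degree() if degree % 2 == 1]

        # 3. Cheapest way to pair them: a minimum-weight matching on the
        #    subgraph induced by the odd-degree cities.
        matching = nx.min_weight_matching(graph.subgraph(odd_cities))

        # 4. Tree plus matching gives every city an even number of edges, so an
        #    Eulerian circuit exists; skipping repeat visits yields the tour.
        combined = nx.MultiGraph(tree)
        combined.add_edges_from(matching)
        tour, seen = [], set()
        for city, _ in nx.eulerian_circuit(combined):
            if city not in seen:
                seen.add(city)
                tour.append(city)
        return tour + [tour[0]]  # close the loop back to the start

    Because shortcutting repeated cities never lengthens the route when the triangle inequality holds, the tour this returns is at most 50% longer than the best possible round trip, which is Christofides’ guarantee.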

    “Everybody knows the simple algorithm,” said Alantha Newman of Grenoble Alpes University and the National Center for Scientific Research in France. And when you know it, she said, “you know the state of the art” — at least, you did until this past July.

    Computer scientists have long suspected that there should be an approximation algorithm that outperforms Christofides’ algorithm. After all, his simple and intuitive algorithm isn’t always such an effective way to design a traveling salesperson route, since the shortest tree connecting the cities may not be the best backbone you could choose. For instance, if this tree has many branches, each city at the end of a branch will need to be matched with another city, potentially forming lots of expensive new connections.

    In 2010, Oveis Gharan, Saberi and Mohit Singh of the Georgia Institute of Technology started wondering if it might be possible to improve on Christofides’ algorithm by choosing not the shortest tree connecting all the cities, but a random tree from a carefully chosen collection. They took inspiration from an alternate version of the traveling salesperson problem in which you are allowed to travel along a combination of paths — maybe you get to Denver via 3/4 of the route from Chicago to Denver plus 1/4 of the route from Los Angeles to Denver.

    Unlike the regular traveling salesperson problem, this fractional problem can be solved efficiently. And while fractional routes don’t make physical sense, computer scientists have long believed that the best fractional route should be a rough guide to the contours of good ordinary routes.

    So to create their algorithm, Oveis Gharan, Saberi and Singh defined a random process that picks a tree connecting all the cities, so that the probability that a given edge is in the tree equals that edge’s fraction in the best fractional route. There are many such random processes, so the researchers chose one that tends to produce trees with many evenly connected cities. After this random process spits out a specific tree, their algorithm plugs it into Christofides’ scheme for matching cities with odd numbers of edges, to convert it into a round trip.

    Credit: Samuel Velasco/Quanta Magazine.

    This method seemed promising, not just to the three researchers but to other computer scientists. “The intuition is simple,” said Ola Svensson of the Swiss Federal Institute of Technology Lausanne. But “to prove it turns out to be a different beast.”

    The following year, though, Oveis Gharan, Saberi and Singh managed to prove that their algorithm beats Christofides’ algorithm for “graphical” traveling salesperson problems — ones where the distances between cities are represented by a network (not necessarily including all connections) in which every edge has the same length. But the researchers couldn’t figure out how to extend their result to the general traveling salesperson problem, in which some edges may be vastly longer than others.

    “If we have to add a super expensive edge to the matching then we’re screwed, basically,” Karlin said.

    Pushing Back

    Nevertheless, Oveis Gharan emerged from that collaboration with an unshakable belief that their algorithm should beat Christofides’ algorithm for the general traveling salesperson problem. “I never had a doubt,” he said.

    Oveis Gharan kept turning the problem over in his mind over the years that followed. He suspected that a mathematical discipline called the geometry of polynomials, little known in the theoretical computer science world, might have the tools he needed. So when Karlin came to him two years ago suggesting that they co-advise a brilliant new graduate student named Nathan Klein who had double-majored in math and cello, he said, “OK, let’s give it a try — I have this interesting problem.”

    Karlin thought that, if nothing else, it would be a fun opportunity to learn more about the geometry of polynomials. “I really didn’t think we would be able to solve this problem,” she said.

    She and Oveis Gharan had no hesitation about throwing Klein into the deep end of computer science research. Oveis Gharan had himself cut his teeth on the traveling salesperson problem as a graduate student back in 2010. And the two advisers agreed about the merits of assigning hard problems to graduate students, especially during their first couple of years, when they are not under pressure to get results.

    The three dived into an intense collaboration. “It’s all I was thinking about for two years,” Klein said.

    They spent the first year solving a simplified version of the problem, to get a sense of the challenges they were facing. But even after they accomplished that, the general case still felt like a “moonshot,” Klein said.

    Still, they had gotten a feel for their tools — in particular, the geometry of polynomials. A polynomial is a combination of terms made out of numbers and variables raised to powers, such as 3x²y + 8xz⁷. To study the traveling salesperson problem, the researchers distilled a map of cities down to a polynomial that had one variable for each edge between cities, and one term for each tree that could connect all the cities. Numerical factors then weighted these terms to reflect each edge’s value in the fractional solution to the traveling salesperson problem.

    This polynomial, they found, has a coveted property called “real stability,” which means that the complex numbers that make the polynomial evaluate to zero never lie in the upper half of the complex plane. The nice thing about real stability is that it stays in force even when you make many kinds of changes to your polynomial. So, for example, if the researchers wanted to focus on particular cities, they could use a single variable to represent all the different edges leading into a city, and they could set the variables for edges they didn’t care about equal to 1. As they manipulated these simplified polynomials, the results of their manipulations still had real stability, opening the door to a wide assortment of techniques.

    This approach enabled the researchers to get a handle on questions like how often the algorithm would be forced to connect two distant cities. In a nearly 80-page analysis, they managed to show that the algorithm beats out Christofides’ algorithm for the general traveling salesperson problem (the paper has yet to be peer-reviewed, but experts are confident that it’s correct). Once the paper was completed, Oveis Gharan dashed off an email to Saberi, his old doctoral adviser. “I guess I can finally graduate,” he joked.

    While the improvement the researchers established is vanishingly small, computer scientists hope this breakthrough will inspire rapid further progress. That’s what happened back in 2011 when Oveis Gharan, Saberi and Singh figured out the graphical case. Within a year, other researchers had come up with radically different algorithms that greatly improved the approximation factor for the graphical case, which has now been lowered to 40% instead of Christofides’ 50%.

    “When they announced their result [about the graphical case], … that made us think that it’s possible. It made us work for it,” said Svensson, one of the researchers who made further progress in that case. He’s been trying for many years to beat Christofides’ algorithm for the general traveling salesperson problem. “I will try again now I know it’s possible,” he said.

    Over the decades, the traveling salesperson problem has launched many new methods into prominence. Oveis Gharan hopes that it will now play that role for the geometry of polynomials, for which he has become an eager evangelist. In the decade or so since he started learning about this approach, it has helped him prove a wide range of theorems. The tool has “shaped my whole career,” he said.

    The new traveling salesperson result highlights the power of this approach, Newman said. “Definitely it’s an inspiration to look at it more closely.”

    Klein will now have to find a new problem to obsess over. “It’s a bit sad to lose the problem, because it just built up so many structures in my head, and now they’re all kind of gone,” he said. But he couldn’t have asked for a more satisfying introduction to computer science research. “I felt like we pushed back a little bit on something that was unknown.”

    See the full article here.




     
  • richardmitnick 11:41 am on January 3, 2021 Permalink | Reply
    Tags: Quanta Magazine

    From Quanta Magazine: “Landmark Computer Science Proof Cascades Through Physics and Math” 

    From Quanta Magazine

    March 4, 2020 [From Year End Wrap-Up.]
    Kevin Hartnett

    Computer scientists established a new boundary on computationally verifiable knowledge. In doing so, they solved major open problems in quantum mechanics and pure mathematics.

    A new proof in computer science also has implications for researchers in quantum mechanics and pure math. DVDP for Quanta Magazine.

    In 1935, Albert Einstein, working with Boris Podolsky and Nathan Rosen, grappled with a possibility revealed by the new laws of quantum physics: that two particles could be entangled, or correlated, even across vast distances.

    The very next year, Alan Turing formulated the first general theory of computing and proved that there exists a problem that computers will never be able to solve.

    These two ideas revolutionized their respective disciplines. They also seemed to have nothing to do with each other. But now a landmark proof has combined them while solving a raft of open problems in computer science, physics and mathematics.

    The new proof establishes that quantum computers that calculate with entangled quantum bits or qubits, rather than classical 1s and 0s, can theoretically be used to verify answers to an incredibly vast set of problems. The correspondence between entanglement and computing came as a jolt to many researchers.

    “It was a complete surprise,” said Miguel Navascués, who studies quantum physics at the Institute for Quantum Optics and Quantum Information in Vienna.

    The proof’s co-authors set out to determine the limits of an approach to verifying answers to computational problems. That approach involves entanglement. By finding that limit the researchers ended up settling two other questions almost as a byproduct: Tsirelson’s problem in physics, about how to mathematically model entanglement, and a related problem in pure mathematics called the Connes embedding conjecture.

    In the end, the results cascaded like dominoes.

    “The ideas all came from the same time. It’s neat that they come back together again in this dramatic way,” said Henry Yuen of the University of Toronto and an author of the proof, along with Zhengfeng Ji of the University of Technology Sydney, Anand Natarajan and Thomas Vidick of the California Institute of Technology, and John Wright of the University of Texas, Austin. The five researchers are all computer scientists.

    Undecidable Problems

    Turing defined a basic framework for thinking about computation before computers really existed. In nearly the same breath, he showed that there was a certain problem computers were provably incapable of addressing. It has to do with whether a program ever stops.

    Typically, computer programs receive inputs and produce outputs. But sometimes they get stuck in infinite loops and spin their wheels forever. When that happens at home, there’s only one thing left to do.

    “You have to manually kill the program. Just cut it off,” Yuen said.

    Turing proved that there’s no all-purpose algorithm that can determine whether a computer program will halt or run forever. You have to run the program to find out.

    The computer scientists Henry Yuen, Thomas Vidick, Zhengfeng Ji, Anand Natarajan and John Wright co-authored a proof about verifying answers to computational problems and ended up solving major problems in math and quantum physics.
    Credits:(Yuen) Andrea Lao; (Vidick) Courtesy of Caltech; (Ji) Anna Zhu; (Natarajan) David Sella; (Wright) Soya Park.

    “You’ve waited a million years and a program hasn’t halted. Do you just need to wait 2 million years? There’s no way of telling,” said William Slofstra, a mathematician at the University of Waterloo.

    In technical terms, Turing proved that this halting problem is undecidable — even the most powerful computer imaginable couldn’t solve it.
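
    The standard way to see why is a diagonal argument, sketched below in Python. The function halts is hypothetical, and the names are ours; the whole point is that no such all-purpose decider can actually be written.

    # Sketch of Turing's argument. `halts` is a hypothetical all-purpose
    # decider; the contradiction below shows it cannot exist.
    def halts(program, data):
        """Pretend this returns True if program(data) eventually stops."""
        ...

    def paradox(program):
        # Loop forever exactly when the decider says the program halts
        # when run on its own source.
        if halts(program, program):
            while True:
                pass

    # Does paradox(paradox) halt? If halts(paradox, paradox) returns True,
    # paradox loops forever; if it returns False, paradox returns at once.
    # Either answer contradicts the decider, so `halts` cannot exist.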

    After Turing, computer scientists began to classify other problems by their difficulty. Harder problems require more computational resources to solve — more running time, more memory. This is the study of computational complexity.

    Ultimately, every problem presents two big questions: “How hard is it to solve?” and “How hard is it to verify that an answer is correct?”

    Interrogate to Verify

    When problems are relatively simple, you can check the answer yourself. But when they get more complicated, even checking an answer can be an overwhelming task. However, in 1985 computer scientists realized it’s possible to develop confidence that an answer is correct even when you can’t confirm it yourself.

    The method follows the logic of a police interrogation.

    If a suspect tells an elaborate story, maybe you can’t go out into the world to confirm every detail. But by asking the right questions, you can catch your suspect in a lie or develop confidence that the story checks out.

    In computer science terms, the two parties in an interrogation are a powerful computer that proposes a solution to a problem — known as the prover — and a less powerful computer that wants to ask the prover questions to determine whether the answer is correct. This second computer is called the verifier.

    To take a simple example, imagine you’re colorblind and someone else — the prover — claims two marbles are different colors. You can’t check this claim by yourself, but through clever interrogation you can still determine whether it’s true.

    Put the two marbles behind your back and mix them up. Then ask the prover to tell you which is which. If they really are different colors, the prover should answer the question correctly every time. If the marbles are actually the same color — meaning they look identical — the prover will guess wrong half the time.

    “If I see you succeed a lot more than half the time, I’m pretty sure they’re not” the same color, Vidick said.
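
    The statistics behind that confidence are easy to simulate. The toy Python sketch below (our own illustration, with made-up names) plays repeated rounds of the marble interrogation and reports how often the prover answers correctly in each scenario.

    import random

    # Toy model of the colorblind-verifier protocol described above. If the
    # marbles really are different colors, an honest prover can always say
    # whether they were swapped; if they are identical, the prover must guess.
    def interrogate(marbles_differ, rounds=1000):
        correct = 0
        for _ in range(rounds):
            swapped = random.choice([True, False])    # verifier shuffles in secret
            if marbles_differ:
                guess = swapped                       # prover can see the colors
            else:
                guess = random.choice([True, False])  # prover is forced to guess
            correct += (guess == swapped)
        return correct / rounds

    print(interrogate(True))   # close to 1.0
    print(interrogate(False))  # close to 0.5, exposing a false claim over many rounds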

    By asking a prover questions, you can verify solutions to a wider class of problems than you can on your own.

    In 1988, computer scientists considered what happens when two provers propose solutions to the same problem. After all, if you have two suspects to interrogate, it’s even easier to solve a crime, or verify a solution, since you can play them against each other.

    “It gives more leverage to the verifier. You interrogate, ask related questions, cross-check the answers,” Vidick said. If the suspects are telling the truth, their responses should align most of the time. If they’re lying, the answers will conflict more often.

    Similarly, researchers showed that by interrogating two provers separately about their answers, you can quickly verify solutions to an even larger class of problems than you can when you only have one prover to interrogate.

    Computational complexity may seem entirely theoretical, but it’s also closely connected to the real world. The resources that computers need to solve and verify problems — time and memory — are fundamentally physical. For this reason, new discoveries in physics can change computational complexity.

    “If you choose a different set of physics, like quantum rather than classical, you get a different complexity theory out of it,” Natarajan said.

    The new proof is the end result of 21st-century computer scientists confronting one of the strangest ideas of 20th-century physics: entanglement.

    The Connes Embedding Conjecture

    When two particles are entangled, they don’t actually affect each other — they have no causal relationship. Einstein and his co-authors elaborated on this idea in their 1935 paper. Afterward, physicists and mathematicians tried to come up with a mathematical way of describing what entanglement really meant.

    Yet the effort came out a little muddled. Scientists came up with two different mathematical models for entanglement — and it wasn’t clear that they were equivalent to each other.

    In a roundabout way, this potential dissonance ended up producing an important problem in pure mathematics called the Connes embedding conjecture. Eventually, it also served as a fissure that the five computer scientists took advantage of in their new proof.

    The first way of modeling entanglement was to think of the particles as spatially isolated from each other. One is on Earth, say, and the other is on Mars; the distance between them is what prevents causality. This is called the tensor product model.

    But in some situations, it’s not entirely obvious when two things are causally separate from each other. So mathematicians came up with a second, more general way of describing causal independence.

    When the order in which you perform two operations doesn’t affect the outcome, the operations “commute”: 3 x 2 is the same as 2 x 3. In this second model, particles are entangled when their properties are correlated but the order in which you perform your measurements doesn’t matter: Measure particle A to predict the momentum of particle B or vice versa. Either way, you get the same answer. This is called the commuting operator model of entanglement.

    Both descriptions of entanglement use arrays of numbers organized into rows and columns called matrices. The tensor product model uses matrices with a finite number of rows and columns. The commuting operator model uses a more general object that functions like a matrix with an infinite number of rows and columns.

    Over time, mathematicians began to study these matrices as objects of interest in their own right, completely apart from any connection to the physical world. As part of this work, a mathematician named Alain Connes conjectured in 1976 that it should be possible to approximate many infinite-dimensional matrices with finite-dimensional ones. This is one implication of the Connes embedding conjecture.

    The following decade a physicist named Boris Tsirelson posed a version of the problem that grounded it in physics once more. Tsirelson conjectured that the tensor product and commuting operator models of entanglement were roughly equivalent. This makes sense, since they’re theoretically two different ways of describing the same physical phenomenon. Subsequent work showed that because of the connection between matrices and the physical models that use them, the Connes embedding conjecture and Tsirelson’s problem imply each other: Solve one, and you solve the other.

    Yet the solution to both problems ended up coming from a third place altogether.

    Game Show Physics

    In the 1960s, a physicist named John Bell came up with a test for determining whether entanglement was a real physical phenomenon, rather than just a theoretical notion. The test involved a kind of game whose outcome reveals whether something more than ordinary, non-quantum physics is at work.

    Computer scientists would later realize that this test about entanglement could also be used as a tool for verifying answers to very complicated problems.

    But first, to see how the games work, let’s imagine two players, Alice and Bob, and a 3-by-3 grid. A referee assigns Alice a row and tells her to enter a 0 or a 1 in each box so that the digits sum to an odd number. Bob gets a column and has to fill it out so that it sums to an even number. They win if they put the same number in the one place her row and his column overlap. They’re not allowed to communicate.

    Under normal circumstances, the best they can do is win 89% of the time. But under quantum circumstances, they can do better.
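    That 89% figure (more precisely, 8/9) can be checked by brute force over every deterministic classical strategy; shared randomness can't do better, since a randomized strategy is just an average over deterministic ones. A quick sketch, with variable names of our own choosing:

```python
from itertools import product

# Every legal way to fill a row (digits summing to odd) or a column (summing to even).
odd_rows = [r for r in product([0, 1], repeat=3) if sum(r) % 2 == 1]
even_cols = [c for c in product([0, 1], repeat=3) if sum(c) % 2 == 0]

best = 0.0
# A deterministic strategy fixes, in advance, Alice's filling for each possible row
# and Bob's filling for each possible column; the referee picks one of the 9 pairs uniformly.
for alice in product(odd_rows, repeat=3):
    for bob in product(even_cols, repeat=3):
        wins = sum(alice[i][j] == bob[j][i] for i in range(3) for j in range(3))
        best = max(best, wins / 9)

print(best)   # 0.888... = 8/9; no classical strategy wins more often
```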

    Imagine Alice and Bob split a pair of entangled particles. They perform measurements on their respective particles and use the results to dictate whether to write 1 or 0 in each box. Because the particles are entangled, the results of their measurements are going to be correlated, which means their answers will correlate as well — meaning they can win the game 100% of the time.

    Credit: Lucy Reading-Ikkanda/Quanta Magazine.

    So if you see two players winning the game at unexpectedly high rates, you can conclude that they are using something other than classical physics to their advantage. Such Bell-type experiments are now called “nonlocal” games, in reference to the separation between the players. Physicists actually perform them in laboratories.

    “People have run experiments over the years that really show this spooky thing is real,” said Yuen.

    As when analyzing any game, you might want to know how often players can win a nonlocal game, provided they play the best they can. For example, with solitaire, you can calculate how often someone playing perfectly is likely to win.

    But in 2016, William Slofstra proved that there’s no general algorithm for calculating the exact maximum winning probability for all nonlocal games. So researchers wondered: Could you at least approximate the maximum-winning percentage?

    Computer scientists have homed in on an answer using the two models describing entanglement. An algorithm that uses the tensor product model establishes a floor, or minimum value, on the approximate maximum-winning probability for all nonlocal games. Another algorithm, which uses the commuting operator model, establishes a ceiling.

    These algorithms produce more precise answers the longer they run. If Tsirelson’s prediction is true, and the two models really are equivalent, the floor and the ceiling should keep pinching closer together, narrowing in on a single value for the approximate maximum-winning percentage.

    But if Tsirelson’s prediction is false, and the two models are not equivalent, “the ceiling and the floor will forever stay separated,” Yuen said. There will be no way to calculate even an approximate winning percentage for nonlocal games.

    In their new work, the five researchers used this question — about whether the ceiling and floor converge and Tsirelson’s problem is true or false — to solve a separate question about when it’s possible to verify the answer to a computational problem.

    Entangled Assistance

    In the early 2000s, computer scientists began to wonder: How does it change the range of problems you can verify if you interrogate two provers that share entangled particles?

    Most assumed that entanglement worked against verification. After all, two suspects would have an easier time telling a consistent lie if they had some means of coordinating their answers.

    But over the last few years, computer scientists have realized that the opposite is true: By interrogating provers that share entangled particles, you can verify a much larger class of problems than you can without entanglement.

    “Entanglement is a way to generate correlations that you think might help them lie or cheat,” Vidick said. “But in fact you can use that to your advantage.”

    To understand how, you first need to grasp the almost otherworldly scale of the problems whose solutions you could verify through this interactive procedure.

    Imagine a graph — a collection of dots (vertices) connected by lines (edges). You might want to know whether it’s possible to color the vertices using three colors, so that no vertices connected by an edge have the same color. If you can, the graph is “three-colorable.”

    If you hand a pair of entangled provers a very large graph, and they report back that it can be three-colored, you’ll wonder: Is there a way to verify their answer?

    For very big graphs, it would be impossible to check the work directly. So instead, you could ask each prover to tell you the color of one of two connected vertices. If they each report a different color, and they keep doing so every time you ask, you’ll gain confidence that the three-coloring really works.
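    Leaving the entanglement aside for a moment, that spot-check looks something like the toy sketch below; the graph, the colorings and the function names are ours, for illustration only:

```python
import random

def edge_test(edges, prover_a, prover_b, rounds=1000) -> float:
    """Pick a random edge, ask prover A about one endpoint and prover B about
    the other, and check that the reported colors differ."""
    passes = 0
    for _ in range(rounds):
        u, v = random.choice(edges)
        passes += (prover_a[u] != prover_b[v])
    return passes / rounds

# A five-vertex cycle is three-colorable; honest provers answer from a valid coloring.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
honest = {0: "red", 1: "green", 2: "red", 3: "green", 4: "blue"}
print(edge_test(edges, honest, honest))   # 1.0: every check passes

# Provers working from a bad "coloring" (vertex 3 repeats red) get caught eventually.
cheating = dict(honest, **{3: "red"})
print(edge_test(edges, cheating, cheating))   # about 0.8: roughly one edge in five exposes them
```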

    But even this interrogation strategy fails as graphs get really big — with more edges and vertices than there are atoms in the universe. Even the task of stating a specific question (“Tell me the color of XYZ vertex”) is more than you, the verifier, can manage: The amount of data required to name a specific vertex is more than you can hold in your working memory.

    But entanglement makes it possible for the provers to come up with the questions themselves.

    “The verifier doesn’t have to compute the questions. The verifier forces the provers to compute the questions for them,” Wright said.

    The verifier wants the provers to report the colors of connected vertices. If the vertices aren’t connected, then the answers to the questions won’t say anything about whether the graph is three-colored. In other words, the verifier wants the provers to ask correlated questions: One prover asks about vertex ABC and the other asks about vertex XYZ. The hope is that the two vertices are connected to each other, even though neither prover knows which vertex the other is thinking about. (Just as Alice and Bob hope to fill in the same number in the same square even though neither knows which row or column the other has been asked about.)

    If two provers were coming up with these questions completely on their own, there’d be no way to force them to select connected, or correlated, vertices in a way that would allow the verifier to validate their answers. But such correlation is exactly what entanglement enables.

    “We’re going to use entanglement to offload almost everything onto the provers. We make them select questions by themselves,” Vidick said.

    At the end of this procedure, the provers each report a color. The verifier checks whether they’re the same or not. If the graph really is three-colorable, the provers should never report the same color.

    “If there is a three-coloring, the provers will be able to convince you there is one,” Yuen said.

    As it turns out, this verification procedure is another example of a nonlocal game. The provers “win” if they convince you their solution is correct.

    In 2012, Vidick and Tsuyoshi Ito proved that it’s possible to play a wide variety of nonlocal games with entangled provers to verify answers to at least the same number of problems you can verify by interrogating two classical computers. That is, using entangled provers doesn’t work against verification. And last year, Natarajan and Wright proved that interacting with entangled provers actually expands the class of problems that can be verified.

    But computer scientists didn’t know the full range of problems that can be verified in this way. Until now.

    A Cascade of Consequences

    In their new paper, the five computer scientists prove that interrogating entangled provers makes it possible to verify answers to unsolvable problems, including the halting problem.

    “The verification capability of this type of model is really mind-boggling,” Yuen said.

    But the halting problem can’t be solved. And that fact is the spark that sets the final proof in motion.

    Imagine you hand a program to a pair of entangled provers. You ask them to tell you whether it will halt. You’re prepared to verify their answer through a kind of nonlocal game: The provers generate questions and “win” based on the coordination between their answers.

    If the program does in fact halt, the provers should be able to win this game 100% of the time — similar to how if a graph is actually three-colorable, entangled provers should never report the same color for two connected vertices. If it doesn’t halt, the provers should only win by chance — 50% of the time.

    That means if someone asks you to determine the approximate maximum-winning probability for a specific instance of this nonlocal game, you will first need to solve the halting problem. And solving the halting problem is impossible. Which means that calculating the approximate maximum-winning probability for nonlocal games is undecidable, just like the halting problem.

    This in turn means that the answer to Tsirelson’s problem is no — the two models of entanglement are not equivalent. Because if they were, you could pinch the floor and the ceiling together to calculate an approximate maximum-winning probability.

    “There cannot be such an algorithm, so the two [models] must be different,” said David Pérez-García of the Complutense University of Madrid.

    The new paper proves that the class of problems that can be verified through interactions with entangled quantum provers, a class called MIP*, is exactly equal to the class of problems that are no harder than the halting problem, a class called RE. The title of the paper states it succinctly: “MIP* = RE.”

    In the course of proving that the two complexity classes are equal, the computer scientists proved that Tsirelson’s problem is false, which, due to previous work, meant that the Connes embedding conjecture is also false.

    For researchers in these fields, it was stunning that answers to such big problems would fall out from a seemingly unrelated proof in computer science.

    “If I see a paper that says MIP* = RE, I don’t think it has anything to do with my work,” said Navascués, who co-authored previous work tying Tsirelson’s problem and the Connes embedding conjecture together. “For me it was a complete surprise.”

    Quantum physicists and mathematicians are just beginning to digest the proof. Prior to the new work, mathematicians had wondered whether they could get away with approximating infinite-dimensional matrices by using large finite-dimensional ones instead. Now, because the Connes embedding conjecture is false, they know they can’t.

    “Their result implies that’s impossible,” said Slofstra.

    The computer scientists themselves did not aim to answer the Connes embedding conjecture, and as a result, they’re not in the best position to explain the implications of one of the problems they ended up solving.

    “Personally, I’m not a mathematician. I don’t understand the original formulation of the Connes embedding conjecture well,” said Natarajan.

    He and his co-authors anticipate that mathematicians will translate this new result into the language of their own field. In a blog post announcing the proof, Vidick wrote, “I don’t doubt that eventually complexity theory will not be needed to obtain the purely mathematical consequences.”

    Yet as other researchers run with the proof, the line of inquiry that prompted it is coming to a halt. For more than three decades, computer scientists have been trying to figure out just how far interactive verification will take them. They are now confronted with the answer, in the form of a long paper with a simple title and echoes of Turing.

    “There’s this long sequence of works just wondering how powerful” a verification procedure with two entangled quantum provers can be, Natarajan said. “Now we know how powerful it is. That story is at an end.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 1:48 pm on December 27, 2020 Permalink | Reply
    Tags: "After Centuries a Seemingly Simple Math Problem Gets an Exact Solution", A branch of math known as complex analysis, , Quanta Magazine   

    From Quanta Magazine: “After Centuries a Seemingly Simple Math Problem Gets an Exact Solution” 

    From Quanta Magazine

    December 9, 2020
    Steve Nadis

    Mathematicians have long pondered the reach of a grazing goat tied to a fence, only finding approximate answers until now.

    Credit: Dappermouth for Quanta Magazine.

    Here’s a simple-sounding problem: Imagine a circular fence that encloses one acre of grass. If you tie a goat to the inside of the fence, how long a rope do you need to allow the animal access to exactly half an acre?

    It sounds like high school geometry, but mathematicians and math enthusiasts have been pondering this problem in various forms for more than 270 years. And while they’ve successfully solved some versions, the goat-in-a-circle puzzle has refused to yield anything but fuzzy, incomplete answers.

    Even after all this time, “nobody knows an exact answer to the basic original problem,” said Mark Meyerson, an emeritus mathematician at the U.S. Naval Academy. “The solution is only given approximately.”

    But earlier this year, a German mathematician named Ingo Ullisch finally made progress, finding what is considered the first exact solution to the problem — although even that comes in an unwieldy, reader-unfriendly form.

    Ingo Ullisch reached an exact solution for the grazing goat problem by applying a branch of math known as complex analysis.
    Courtesy of Ingo Ullisch.

    “[This] is the first explicit expression that I’m aware of [for the length of the rope],” said Michael Harrison, a mathematician at Carnegie Mellon University. “It certainly is an advance.”

    Of course, it won’t upend textbooks or revolutionize math research, Ullisch concedes, because this problem is an isolated one. “It’s not connected to other problems or embedded within a mathematical theory.” But it’s possible for even fun puzzles like this to give rise to new mathematical ideas and help researchers come up with novel approaches to other problems.

    Into (and Out of) the Barnyard

    The first problem of this type was published in the 1748 issue of the London-based periodical The Ladies Diary: Or, The Woman’s Almanack — a publication that promised to present “new improvements in arts and sciences, and many diverting particulars.”

    The original scenario involves “a horse tied to feed in a Gentlemen’s Park.” In this case, the horse is tied to the outside of a circular fence. If the length of the rope is the same as the circumference of the fence, what is the maximum area upon which the horse can feed? This version was subsequently classified as an “exterior problem,” since it concerned grazing outside, rather than inside, the circle.

    An answer appeared in the Diary’s 1749 edition. It was furnished by “Mr. Heath,” who relied upon “Trial and a Table of Logarithms,” among other resources, to reach his conclusion.

    Heath’s answer — 76,257.86 square yards for a 160-yard rope — was an approximation rather than an exact solution. To illustrate the difference, consider the equation x² − 2 = 0. One could derive an approximate numerical answer, x = 1.4142, but that’s not as accurate or satisfying as the exact solution, x = √2.

    The problem reemerged in 1894 in the first issue of the American Mathematical Monthly, recast as the initial grazer-in-a-fence problem (this time without any reference to farm animals). This type is classified as an interior problem and tends to be more challenging than its exterior counterpart, Ullisch explained. In the exterior problem, you start with the radius of the circle and length of the rope and compute the area. You can solve it through integration.

    “Reversing this procedure — starting with a given area and asking which inputs result in this area — is much more involved,” Ullisch said.

    In the decades that followed, the Monthly published variations on the interior problem, which mainly involved horses (and in at least one case a mule) rather than goats, with fences that were circular, square and elliptical in shape. But in the 1960s, for mysterious reasons, goats started displacing horses in the grazing-problem literature — this despite the fact that goats, according to the mathematician Marshall Fraser, may be “too independent to submit to tethering.”

    Goats in Higher Dimensions

    In 1984, Fraser got creative, taking the problem out of the flat, pastoral realm and into more expansive terrain. He worked out how long a rope is needed to allow a goat to graze in exactly half the volume of an n-dimensional sphere as n goes to infinity. Meyerson spotted a logical flaw in the argument and corrected Fraser’s mistake later that year, but reached the same conclusion: As n approaches infinity, the ratio of the tethering rope to the sphere’s radius approaches √2.

    As Meyerson noted, this seemingly more complicated way of framing the problem — in multidimensional space rather than a field of grass — actually made finding a solution easier. “In infinite dimensions, we have a clean answer, whereas in two dimensions there is not such a clear-cut solution.”

    The grazing goat problem can take two forms, but both usually start with a goat tied to a circular fence. The interior version asks how long a goat’s leash should be if we want it to access exactly half the enclosed area. The exterior version asks how much outside area a goat has access to with a given length of rope and a given fence circumference. (In this case, the rope’s length is equal to the fence’s circumference.) Credit: Samuel Velasco/Quanta Magazine.

    In 1998, Michael Hoffman, also a Naval Academy mathematician, expanded the problem in a different direction after coming across an example of the exterior problem through an online newsgroup. This version sought to quantify the area available to a bull tied outside a circular silo. The problem intrigued Hoffman, and he decided to generalize it to the exterior of not just a circle, but any smooth, convex curve, including ellipses and even unclosed curves.

    “Once you see a problem stated in a simple case, being a mathematician you often try to see how you can generalize it,” Hoffman said.

    Hoffman considered the case in which the leash (of length L) is less than or equal to half the curve’s circumference. First he drew a line tangent to the curve at the point where the bull’s leash is attached. The bull can graze on a semicircle of area πL²/2 bounded by the tangent. Hoffman then devised an exact integral solution for the spaces between the tangent and the curve to determine the total grazing area.

    More recently, the Lancaster University mathematician Graham Jameson worked out the three-dimensional case of the interior problem in detail with his son Nicholas, choosing it because it has received less attention. Since goats can’t move easily in three dimensions, the Jamesons called it the “bird problem” in their 2017 paper [Cambridge Core] : If you tether a bird to a point on the inside of a spherical cage, how long should the tether be to confine the bird to half the cage’s volume?

    “The three-dimensional problem is actually simpler to solve than the two-dimensional one,” the older Jameson said, and the pair arrived at a precise solution. However, since the mathematical form of the answer — which Jameson characterized as “exact (albeit horrible!)” — would have been daunting to the uninitiated, they also used an approximation technique to provide a numerical answer for the tether length that “bird handlers might prefer.”

    Getting His Goat

    Nevertheless, an exact solution to the two-dimensional interior problem from 1894 remained elusive — until Ullisch’s paper earlier this year. Ullisch first heard of the goat problem from a relative in 2001, when he was a child. He started working on it in 2017, after earning a doctorate from the University of Münster. He wanted to try a new approach.

    It was well known by then that the goat problem could be reduced to a single transcendental equation, which by definition includes trigonometric terms like sine and cosine. That could create a roadblock, as many transcendental equations are intractable; x = cos(x), for example, has no exact closed-form solution.

    But Ullisch set up the problem in such a way that he could get a more tractable transcendental equation to work with: sin(β) – β cos(β) − π/2 = 0. And while this equation may also seem unmanageable, he realized he could approach it using complex analysis — a branch of mathematics that applies analytic tools, including those of calculus, to expressions containing complex numbers. Complex analysis has been around for centuries, but as far as Ullisch knows, he was the first to apply this approach to hungry goats.

    With this strategy, he was able to transform his transcendental equation into an equivalent expression for the length of rope that would let the goat graze in half the enclosure. In other words, he finally answered the question with a precise mathematical formulation.

    Unfortunately, there’s a catch. Ullisch’s solution is not something simple like the square root of 2. It’s a bit more abstruse — the ratio of two so-called contour integral expressions, with numerous trigonometric terms thrown into the mix — and it can’t tell you, in a practical sense, how long to make the goat’s leash. Approximations are still required to get a number that’s useful to anyone in animal husbandry.
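    To give a sense of what those approximations look like, here is a small numerical sketch of our own (not Ullisch's method). It finds the rope length directly by bisection on the circle-overlap area, and it also locates the root of the transcendental equation quoted above. Both routes land on a rope of about 1.1587 times the fence's radius, assuming the usual relation r = 2·cos(β/2) between the equation's root and the rope length, a geometric step the article itself doesn't spell out.

```python
from math import acos, sin, cos, sqrt, pi

def grazed_area(r: float) -> float:
    """Area inside a unit-radius fence reachable with a rope of length r tied to the
    fence (standard formula for the overlap of two circles whose centers are 1 apart)."""
    return r * r * acos(r / 2) + acos(1 - r * r / 2) - (r / 2) * sqrt(4 - r * r)

def bisect(f, lo, hi, iters=60):
    """Find the root of an increasing function f by repeated halving."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

rope = bisect(lambda r: grazed_area(r) - pi / 2, 1.0, 2.0)
print(rope)   # about 1.1587 rope lengths per fence radius

beta = bisect(lambda b: sin(b) - b * cos(b) - pi / 2, 1.0, 3.0)
print(beta, 2 * cos(beta / 2))   # root near 1.9057; 2*cos(beta/2) reproduces about 1.1587
```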

    But Ullisch still sees value in having an exact solution, even if it’s not neat and simple. “If we only use numerical values (or approximations), we will never get to know the intrinsic nature of the solution,” he said. “Having a formula can give us further insight into how the solution is composed.”

    Not Giving Up the Goat

    Ullisch has set aside the grazing goat for now, as he’s not sure how to go further with it, but other mathematicians are pursuing their own ideas. Harrison, for instance, has an upcoming paper in Mathematics Magazine in which he exploits properties of the sphere to attack a three-dimensional generalization of the grazing-goat problem.

    “It’s often of value in math to think up new ways of getting an answer — even to a problem that has been solved before,” Meyerson noted, “because maybe it can be generalized for use in other ways.”

    And that’s why so much mathematical ink has been devoted to imaginary farm animals. “My instincts say that no breakthrough mathematics will come from work on the grazing-goat problem,” Harrison said, “but you never know. New math can come from anywhere.”

    Hoffman is more optimistic. The transcendental equation Ullisch came up with is related to the transcendental equations Hoffman investigated in a 2017 paper. Hoffman’s interest in those equations was sparked, in turn, by a 1953 paper that stimulated further work by presenting established methods in a new light. He sees possible parallels in the way Ullisch applied known approaches in complex analysis to transcendental equations, this time in a novel setting involving goats.

    “Not all progress in mathematics comes from people making fundamental breakthroughs,” Hoffman said. “Sometimes it consists of looking at classical approaches and finding a new angle — a new way of putting the pieces together that might eventually lead to new results.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 11:56 am on December 11, 2020 Permalink | Reply
    Tags: "How the Slowest Computer Programs Illuminate Math’s Fundamental Limits", , BusyBeaverology, , Quanta Magazine, The search for long-running computer programs can illuminate the state of mathematical knowledge and even tell us what’s knowable.   

    From Quanta Magazine: “How the Slowest Computer Programs Illuminate Math’s Fundamental Limits” 

    From Quanta Magazine

    December 10, 2020
    John Pavlus

    A visualization of the longest-running five-rule Turing machine currently known. Each column of pixels represents one step in the computation, moving from left to right. Black squares show where the machine has printed a 1. The far right column shows the state of the computation when the Turing machine halts. Credit: Quanta Magazine/Peter Krumins.

    Programmers normally want to minimize the time their code takes to execute. But in 1962, the Hungarian mathematician Tibor Radó posed the opposite problem. He asked: How long can a simple computer program possibly run before it terminates? Radó nicknamed these maximally inefficient but still functional programs “busy beavers.”

    Finding these programs has been a fiendishly diverting puzzle for programmers and other mathematical hobbyists ever since it was popularized in Scientific American’s “Computer Recreations” column in 1984. But in the last several years, the busy beaver game, as it’s known, has become an object of study in its own right, because it has yielded connections to some of the loftiest concepts and open problems in mathematics.

    “In math, there is a very permeable boundary between what’s an amusing recreation and what is actually important,” said Scott Aaronson, a theoretical computer scientist at the University of Texas, Austin who recently published a survey of progress in “BusyBeaverology.”

    The recent work suggests that the search for long-running computer programs can illuminate the state of mathematical knowledge, and even tell us what’s knowable. According to researchers, the busy beaver game provides a concrete benchmark for evaluating the difficulty of certain problems, such as the unsolved Goldbach conjecture and Riemann hypothesis. It even offers a glimpse of where the logical bedrock underlying math breaks down. The logician Kurt Gödel proved the existence of such mathematical terra incognita nearly a century ago. But the busy beaver game can show where it actually lies on a number line, like an ancient map depicting the edge of the world.

    An Uncomputable Computer Game

    The busy beaver game is all about the behavior of Turing machines — the primitive, idealized computers conceived by Alan Turing in 1936. A Turing machine performs actions on an endless strip of tape divided into squares. It does so according to a list of rules. The first rule might say:

    “If the square contains a 0, replace it with a 1, move one square to the right and consult rule 2. If the square contains a 1, leave the 1, move one square to the left and consult rule 3.”

    Each rule has this forking choose-your-own-adventure style. Some rules say to jump back to previous rules; eventually there’s a rule containing an instruction to “halt.” Turing proved that this simple kind of computer is capable of performing any possible calculation, given the right instructions and enough time.

    As Turing noted in 1936, in order to compute something, a Turing machine must eventually halt — it can’t get trapped in an infinite loop. But he also proved that there’s no reliable, repeatable method for distinguishing machines that halt from machines that simply run forever — a fact known as the halting problem.

    The busy beaver game asks: Given a certain number of rules, what’s the maximum number of steps that a Turing machine can take before halting?

    For instance, if you’re only allowed one rule, and you want to ensure that the Turing machine halts, you’re forced to include the halt instruction right away. The busy beaver number of a one-rule machine, or BB(1), is therefore 1.

    But adding just a few more rules instantly blows up the number of machines to consider. Of 6,561 possible machines with two rules, the one that runs the longest — six steps — before halting is the busy beaver. But some others simply run forever. None of these are the busy beaver, but how do you definitively rule them out? Turing proved that there’s no way to automatically tell whether a machine that runs for a thousand or a million steps won’t eventually terminate.

    That’s why finding busy beavers is so hard. There’s no general approach for identifying the longest-running Turing machines with an arbitrary number of instructions; you have to puzzle out the specifics of each case on its own. In other words, the busy beaver game is, in general, “uncomputable.”

    Proving that BB(2) = 6 and that BB(3) = 107 was difficult enough that Radó’s student Shen Lin earned a doctorate for the work in 1965. Radó considered BB(4) “entirely hopeless,” but the case was finally solved in 1983. Beyond that, the values virtually explode; researchers have identified a five-rule Turing machine, for instance, that runs for 47,176,870 steps before stopping, so BB(5) is at least that big. BB(6) is at least 7.4 × 10^36,534. Proving the exact values “will need new ideas and new insights, if it can be done at all,” said Aaronson.
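    The smallest cases are easy to check directly. The toy simulator below (our own sketch; encoding the rules as a Python dictionary is just one convenient choice) runs the known two-rule champion and confirms that it halts after exactly six steps:

```python
def run(machine, max_steps=10_000):
    """Simulate a Turing machine given as {(rule, symbol): (write, move, next_rule)}.
    Returns the number of steps before halting, or None if max_steps is exceeded."""
    tape, head, rule = {}, 0, "A"
    for step in range(1, max_steps + 1):
        write, move, rule = machine[(rule, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
        if rule == "H":                      # the halt instruction
            return step
    return None

# The champion two-rule machine: it writes four 1s and stops after six steps, so BB(2) = 6.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run(bb2))   # 6
```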

    Threshold of Unknowability

    William Gasarch, a computer scientist at the University of Maryland, College Park, said he’s less intrigued by the prospect of pinning down busy beaver numbers than by “the general concept that it’s actually uncomputable.” He and other mathematicians are mainly interested in using the game as a yardstick for gauging the difficulty of important open problems in mathematics — or for figuring out what is mathematically knowable at all.

    The Goldbach conjecture, for instance, asks whether every even integer greater than 2 is the sum of two primes. Proving the conjecture true or false would be an epochal event in number theory, allowing mathematicians to better understand the distribution of prime numbers. In 2015, an anonymous GitHub user named Code Golf Addict published code for a 27-rule Turing machine that halts if — and only if — the Goldbach conjecture is false. It works by counting upward through all even integers greater than 4; for each one, it grinds through all the possible ways to get that integer by adding two others, checking whether the pair is prime. When it finds a suitable pair of primes, it moves up to the next even integer and repeats the process. If it finds an even integer that can’t be summed by a pair of prime numbers, it halts.
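    Stripped of the Turing-machine encoding, that behavior amounts to the following loop. This is a rough Python analogue of ours, not the 27-rule machine itself; the real machine has no cutoff and would simply run forever if the conjecture is true.

```python
def is_prime(k: int) -> bool:
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def hunt_for_counterexample(limit: int = 100_000):
    """March upward through the even numbers; halt only on an even number
    that is not the sum of two primes."""
    n = 4
    while n <= limit:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n          # the machine halts: Goldbach is false
        n += 2
    return None               # no counterexample below the cutoff

print(hunt_for_counterexample())   # None, as expected so far
```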

    Running this mindless machine isn’t a practical way to solve the conjecture, because we can’t know if it will ever halt until it does. But the busy beaver game sheds some light on the problem. If it were possible to compute BB(27), that would provide a ceiling on how long we’d have to wait for the Goldbach conjecture to be settled automatically. That’s because BB(27) corresponds to the maximum number of steps this 27-rule Turing machine would have to execute in order to halt (if it ever did). If we knew that number, we could run the Turing machine for exactly that many steps. If it halted by that point, we’d know the Goldbach conjecture was false. But if it went that many steps and didn’t halt, we’d know for certain that it never would — thus proving the conjecture true.

    The rub is that BB(27) is such an incomprehensibly huge number that even writing it down, much less running the Goldbach-falsifying machine for that many steps, isn’t remotely possible in our physical universe. Nevertheless, that incomprehensibly huge number is still an exact figure whose magnitude, according to Aaronson, represents “a statement about our current knowledge” of number theory.

    In 2016, Aaronson established a similar result in collaboration with Yuri Matiyasevich and Stefan O’Rear. They identified a 744-rule Turing machine that halts if and only if the Riemann hypothesis is false. The Riemann hypothesis also concerns the distribution of prime numbers and is one of the Clay Mathematics Institute’s “Millennium Problems” worth $1 million. Aaronson’s machine will deliver an automatic solution in BB(744) steps. (It works by essentially the same mindless process as the Goldbach machine, iterating upward until it finds a counterexample.)

    Of course, BB(744) is an even more unattainably large number than BB(27). But working to pin down something easier, like BB(5), “may actually turn up some new number theory questions that are interesting in their own right,” Aaronson said. For instance, the mathematician Pascal Michel proved in 1993 that the record-holding five-rule Turing machine exhibits behavior similar to that of the function described in the Collatz conjecture, another famous open problem in number theory.

    “So much of math can be encoded as a question of, ‘Does this Turing machine halt or not?’” Aaronson said. “If you knew all the busy beaver numbers, then you could settle all of those questions.”

    More recently, Aaronson has used a busy-beaver-derived yardstick to gauge what he calls “the threshold of unknowability” for entire systems of mathematics. Gödel’s famous incompleteness theorems of 1931 proved that any set of basic axioms that could serve as a possible logical foundation for mathematics is doomed to one of two fates: Either the axioms will be inconsistent, leading to contradictions (like proving that 0 = 1), or they’ll be incomplete, unable to prove some true statements about numbers (like the fact that 2 + 2 = 4). The axiomatic system underpinning almost all modern math, known as Zermelo-Fraenkel (ZF) set theory, has its own Gödelian boundaries — and Aaronson wanted to use the busy beaver game to establish where they are.

    In 2016, he and his graduate student Adam Yedidia specified a 7,910-rule Turing machine that would only halt if ZF set theory is inconsistent. This means BB(7,910) is a calculation that eludes the axioms of ZF set theory. Those axioms can’t be used to prove that BB(7,910) represents one number instead of another, which is like not being able to prove that 2 + 2 = 4 instead of 5.

    O’Rear subsequently devised a much simpler 748-rule machine that halts if ZF is inconsistent — essentially moving the threshold of unknowability closer, from BB(7,910) to BB(748). “That is a kind of a dramatic thing, that the number [of rules] is not completely ridiculous,” said Harvey Friedman, a mathematical logician and emeritus professor at Ohio State University. Friedman thinks that the number can be brought down even further: “I think maybe 50 is the right answer.” Aaronson suspects that the true threshold may be as close as BB(20).

    Whether near or far, such thresholds of unknowability definitely exist. “This is the vision of the world that we have had since Gödel,” said Aaronson. “The busy beaver function is another way of making it concrete.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 11:29 am on December 11, 2020 Permalink | Reply
    Tags: "The Computer Scientist Who Shrinks Big Data", , For Jelani Nelson [UC Berkeley] algorithms represent a wide-open playground., Jelani Nelson, Quanta Magazine   

    From Quanta Magazine: “The Computer Scientist Who Shrinks Big Data” Jelani Nelson 

    From Quanta Magazine

    December 7, 2020
    Allison Whitten

    Jelani Nelson designs clever algorithms that only have to remember slivers of massive data sets. He also teaches kids in Ethiopia how to code.

    Computer scientist Jelani Nelson with his daughter at their home in Berkeley, California. Credit: Constanza Hevia/Quanta Magazine.

    For Jelani Nelson [UC Berkeley] algorithms represent a wide-open playground. “The design space is just so broad that it’s fun to see what you can come up with,” he said.

    Yet the algorithms Nelson devises obey real-world constraints — chief among them the fact that computers cannot store unlimited amounts of data. This poses a challenge for companies like Google and Facebook, which have vast amounts of information streaming into their servers every minute. They’d like to quickly extract patterns in that data without having to remember it all in real time.

    Nelson, 36, a computer scientist at the University of California, Berkeley, expands the theoretical possibilities for low-memory streaming algorithms. He’s discovered the best procedures for answering on-the-fly questions like “How many different users are there?” (known as the distinct elements problem) and “What are the trending search terms right now?” (the frequent items problem).

    Nelson’s algorithms often use a technique called sketching, which compresses big data sets into smaller components that can be stored using less memory and analyzed quickly.

    For example, in 2016 Nelson and his collaborators devised the best possible algorithm for monitoring things like repeat IP addresses (or frequent users) accessing a server. Instead of keeping track of billions of different IP addresses to identify the users who keep coming back, the algorithm breaks each 10-digit address into smaller two-digit chunks. Then sub-algorithms each focus on remembering a different two-digit chunk — drastically cutting the number of combinations logged in memory from billions of possible 10-digit combinations down to just 100 possible two-digit ones. Finally, by using clever strategies to put the chunks back together, the algorithm reconstructs the original IP addresses with a high degree of accuracy. But the massive memory-saving benefits don’t kick in until the users are identified by numbers much longer than 10 digits, so for now his algorithm is more of a theoretical advance.

    Credit: Antoine Doré/Quanta Magazine.

    Before he started designing cutting-edge algorithms, Nelson was a kid trying to teach himself to code. He grew up in St. Thomas in the U.S. Virgin Islands and learned his first programming languages from a few textbooks he picked up during visits to the U.S. mainland. Today he devotes a lot of time to making it easier for kids to get into computer science. In 2011 he founded AddisCoder, a free summer program in Addis Ababa, Ethiopia (where his mother is from). So far the program has taught coding and computer science to over 500 high school students. Perhaps not surprisingly, given Nelson’s involvement, the course is highly compressed, packing a semester of college-level material into just four weeks.

    Quanta spoke with Nelson about the challenges and trade-offs involved in developing low-memory algorithms, how growing up in the Virgin Islands protected him from America’s race problem, and the story behind AddisCoder. This interview is based on video calls and has been condensed and edited for clarity.

    When did you first get interested in computer science?

    When I was 11, I remember reading a book on HTML and giving myself exercises of websites to make. And I remember I wanted to make a webpage for my little sister. She was probably in kindergarten or first grade and she really loved Rugrats on Nickelodeon. So on her webpage, I wanted to have some of the characters’ images, like Tommy and Chuckie, and I was really worried that I might be violating Nickelodeon’s copyright. So I called Nickelodeon to ask permission to include these images on my sister’s site. They denied permission.

    That was when you were still in the U.S. Virgin Islands, right? What was it like growing up there?

    It felt like a very small town. I grew up on the island of St. Thomas, which has roughly 50,000 people, and the whole island is less than 35 square miles. We didn’t have running water supplied by the government, so we had two big cisterns, which are just giant rooms under the house, to collect water. We still have a giant generator in our front yard in case of power outages and hurricanes.

    And when I was 11, Hurricane Marilyn came and wreaked havoc on the Virgin Islands. I remember we actually stayed at the Marriott the night of the hurricane, and it was a good thing we did that because our house was messed up.

    What’s something you remember about the U.S. Virgin Islands that’s different from here in the States?

    What I’ve noticed a lot in the mainland U.S. is that often there’s this very strange lack of respect, or disrespect, that I receive in some circumstances. I’ve been in the mainland for a long time now, it’s been like 20 years. There have been isolated incidents here and there where usually no one will explicitly tell you that something is happening because of your race, but you get treated very weirdly.

    I did not witness it in my childhood because of where I was. People often ask me about being Black in science in America. But I think in the Virgin Islands, somehow my race was less important down there. Because almost everybody’s Black. It was never like, “Oh, you’re a Black kid who’s succeeding in math and science.” It was like, well, of course I’m a Black kid, everyone’s a Black kid here. I think that growing up in the Virgin Islands shielded me from some of the negative psychological effects of racism in America.

    Now that I’m in the mainland U.S., I do agree there’s a real problem here. But because of what I’ve seen in my own life, I feel that this problem is not intrinsic to the world. It doesn’t have to be this way. There are places in the world where it isn’t this way.


    Jelani Nelson explains how his streaming algorithms answer questions about big data without having to remember everything they’ve seen. Credit: Constanza Hevia/Quanta Magazine.

    Speaking of other places in the world, what led you to start the AddisCoder program in Ethiopia?

    I was a final-semester Ph.D. student; I already had a faculty offer from Harvard, so I was thinking: What do I want to do? And I thought, I’ll go to Ethiopia and visit my relatives and hang out over the summer. And then I thought, well, if I’m going to spend six weeks there, that’s a lot of time to just hang out and do nothing else. Why don’t I teach on the side?

    So I had some friends help me advertise my program to high schools in Addis Ababa. I thought there would be a large number of interested students, so I made a puzzle. You had to solve a math problem. The solution to that math problem gave you an email address, and you could sign up for the class by emailing that address. We got a couple hundred kids who signed up to take the class. It was actually too many even with the puzzle. The classroom we got wasn’t big enough to support that. So I made the first few days of class very hard and fast to encourage students to drop out, which many did.

    How has AddisCoder changed since that first summer in 2011?

    Instead of 80-something students a year, now we’re training a little more than double that, and it’s a boarding camp that we co-organize with the Meles Zenawi Foundation, a nonprofit that was created in honor of the late prime minister of Ethiopia. The students now come from all over the country, and we have a teaching staff of 40.

    A lot of the students have never been outside of their town, or their region. So AddisCoder is the first time they’re seeing kids from all over the country, and then they’re meeting instructors from all over the world. It’s very eye-opening for them.

    I’m sure it’s exciting for them to meet top computer scientists. In your own work, what is your main challenge when developing algorithms?

    An algorithm is just a procedure for solving some task. So your job as an algorithm designer is to come up with a procedure that solves that task as efficiently as possible. And the design space is infinite, right? You could do anything.

    What kinds of problems do your algorithms solve?

    Companies we interact with use “distinct elements” a lot, which is counting the number of distinct items in a stream. So, maybe you count the number of distinct IP addresses that watch a YouTube video.

    It turns out that this is a problem that also can be solved using a low-memory streaming algorithm. The one that’s most often used in practice is something called HyperLogLog. It’s used at Facebook, Google and a bunch of big companies. But the very first optimal low-memory algorithm for distinct elements, in theory, is one that I co-developed in 2010 for my Ph.D. thesis with David Woodruff and Daniel Kane.
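    The flavor of these low-memory counters is easy to convey with a toy, even though it is nothing like the optimal 2010 algorithm or production HyperLogLog. The sketch below is our own, with an illustrative hash choice and constants: it remembers only one small integer per repetition, namely the largest number of leading zero bits seen in any hashed item, and turns the median of a few such records into an order-of-magnitude estimate of the distinct count.

```python
import hashlib
from statistics import median

def leading_zero_bits(item: str, salt: int) -> int:
    """Number of leading zero bits in a 64-bit hash of the item."""
    h = int(hashlib.sha256(f"{salt}:{item}".encode()).hexdigest(), 16) >> (256 - 64)
    return 64 - h.bit_length()      # zeros before the first 1-bit

def rough_distinct_count(stream, repetitions: int = 15) -> float:
    """Keep one number per repetition: the maximum leading-zero count seen.
    Roughly 1 item in 2^k hashes to k leading zeros, so 2^(max + 1) tracks the
    distinct count to within a factor of a few."""
    maxima = [0] * repetitions
    for item in stream:                       # a single pass over the data
        for salt in range(repetitions):
            maxima[salt] = max(maxima[salt], leading_zero_bits(item, salt))
    return median(2 ** (m + 1) for m in maxima)

stream = [f"user-{i % 5000}" for i in range(50_000)]   # 50,000 visits, 5,000 distinct users
print(rough_distinct_count(stream))                    # same order of magnitude as 5,000
```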

    If your algorithm is more memory-efficient than HyperLogLog, why don’t companies use it?

    It’s optimal once the problem is big enough, but with the kinds of problem sizes that people usually deal with, HyperLogLog is more of a practical algorithm.

    Imagine that you’re seeing a stream of packets, and what you want is to count the number of distinct IP addresses that are sending traffic on this link. You want to know how many IP addresses there are. Well, in internet protocol version 4, there are 2^32 IP addresses total, which is about 4 billion. That’s not big enough for our algorithm to win. It really has to be something astronomically big for our algorithms to be better.

    How do streaming algorithms analyze data without having to remember it all?

    Here’s a very simple example. I’m Amazon and I want to know what my gross sales were today. Let’s say there were a million transactions today. There’s a very simple solution to this problem, which is you keep a running sum. Every time there’s a sale, you just add the sale amount to that sum, so you’re just storing one number instead of every transaction.

    It turns out that there are other problems where the data might not seem numerical, but you somehow think of the data as numerical. And then what you’re doing is somehow taking a little bit of information from each piece of data and combining it, and you’re storing those combinations. This process takes the data and summarizes it into a sketch.

    How does that kind of compression work?

    There are many techniques, though a popular one is linear sketching. Let’s say I want to answer the distinct elements problem, where a website like Facebook wants to know how many of their users visit their site each day. Facebook has roughly 3 billion users, so you could imagine creating a data set which has 3 billion dimensions, one for each user. I don’t want to remember the full Facebook user data set. Instead of storing 3 billion dimensions, I’ll store 100 dimensions.

    The first dimension comes from taking a subset of the 3 billion numbers and adding them together, or multiplying them by some coefficient. And then it does that again 100 times, so now you’re only storing 100 numbers. Each of these 100 numbers might depend, in principle, on all of the 3 billion numbers. But it doesn’t store them individually.
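    A bare-bones version of that idea (our own illustration, not Facebook's or Nelson's actual code) is the classic AMS-style linear sketch below. Each user gets a fixed pattern of +1/-1 signs derived from a hash, the sketch is just 100 running sums, and yet from those 100 numbers you can estimate the sum of squared visit counts across all users, typically to within roughly 15 percent for a sketch of this size.

```python
import hashlib
import numpy as np

K = 100  # sketch size: the only numbers we ever store

def signs(user: int) -> np.ndarray:
    """A fixed, hash-derived pattern of +1/-1 for each of the K sketch entries."""
    digest = hashlib.sha256(str(user).encode()).digest()            # 256 bits
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[:K]
    return bits.astype(np.float64) * 2 - 1

rng = np.random.default_rng(0)
sketch = np.zeros(K)
true_counts = {}

# A stream of 50,000 visits from 10,000 possible users, processed one at a time.
for _ in range(50_000):
    user = int(rng.integers(0, 10_000))
    sketch += signs(user)                      # the linear update: add this user's sign pattern
    true_counts[user] = true_counts.get(user, 0) + 1

estimate = float(sketch @ sketch) / K          # estimator of the sum of squared visit counts
truth = sum(c * c for c in true_counts.values())
print(round(estimate), truth)                  # the two numbers should be close
```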

    A lot of big-data algorithms output answers that are within some range of the truth but don’t output the true value itself. Why can’t they produce an exact answer?

    Unfortunately, for a lot of these problems, like the distinct elements problem, you can mathematically prove that if you insist on having the exact correct answer, then there is no algorithm that’s memory-efficient. To get the exact answer, the algorithm would basically have to remember everything it saw.

    Nelson thinks algorithm design is really only limited by the creative capacity of the human mind. Credit: Constanza Hevia/Quanta Magazine.

    So what kinds of sacrifices do you have to make to keep the estimate as accurate as possible?

    The sacrifice is cost — memory cost. The more accuracy you want, the more memory you’re typically going to have to devote to the algorithm. And also, there’s failure probability. Maybe I’m OK with outputting a wrong answer with probability 10% of the time. Or maybe I really want it to be 5% or maybe 1%. The lower I make the failure probability, usually that costs me more memory too.

    Do you think today’s big-data algorithms are limited by the human mind or human engineering?

    I would say the human mind. Can you come up with an algorithm, and can you come up with a proof that there’s no better algorithm? But I should mention that the models we’re working in are constrained by human engineering. Why does it matter that the algorithm uses low memory? It’s because my device has bounded memory. Why does my device have bounded memory? Well, because of some constraints of the device.

    In a sense, then, you could be developing the algorithms of the future. Do you think companies will start using your algorithms at some point?

    Well, I think with our distinct elements algorithm that may very well never happen. What may happen is that people see our result as a proof of concept and they’ll work harder at making their practical algorithms as good as the theory suggests they can be.

    What do you enjoy about finding exactly how good an algorithm can be?

    There is a sense of satisfaction that comes at the end. I like when we don’t just know a better method, but we know the method — when we know that a million years into the future, no matter how clever humanity becomes, it’s mathematically impossible to do anything better.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 1:24 pm on December 4, 2020 Permalink | Reply
    Tags: "Physicists Nail Down the ‘Magic Number’ That Shapes the Universe", A team of four physicists led by Saïda Guellati-Khélifa at the Kastler Brossel Laboratory in Paris reported the most precise measurement yet of the fine-structure constant., , Asked whether a physcist should decide to publish is like an artist deciding that a painting is finished., , For Guellati-Khélifa the hardest part is knowing when to stop and publish., Guellati-Khélifa has been improving her experiment for the past 22 years., Numerically the fine-structure constant denoted by the Greek letter α (alpha) comes very close to the ratio 1/137., , , Quanta Magazine, The constant is everywhere because it characterizes the strength of the electromagnetic force affecting charged particles such as electrons and protons., , The speed of light- c- enjoys all the fame yet c’s numerical value says nothing about nature., The team measured the constant’s value to the 11th decimal place reporting that α = 1/137.03599920611. (The last two digits are uncertain.)   

    From Quanta Magazine: “Physicists Nail Down the ‘Magic Number’ That Shapes the Universe” 

    From Quanta Magazine

    December 2, 2020
    Natalie Wolchover

    The fine-structure constant was introduced in 1916 to quantify the tiny gap between two lines in the spectrum of colors emitted by certain atoms. The closely spaced frequencies are seen here through a Fabry-Pérot interferometer. Credit: Computational Physics Inc.

    As fundamental constants go, the speed of light, c, enjoys all the fame, yet c’s numerical value says nothing about nature; it differs depending on whether it’s measured in meters per second or miles per hour. The fine-structure constant, by contrast, has no dimensions or units. It’s a pure number that shapes the universe to an astonishing degree — “a magic number that comes to us with no understanding,” as Richard Feynman described it. Paul Dirac considered the origin of the number “the most fundamental unsolved problem of physics.”

    Numerically, the fine-structure constant, denoted by the Greek letter α (alpha), comes very close to the ratio 1/137. It commonly appears in formulas governing light and matter. “It’s like in architecture, there’s the golden ratio,” said Eric Cornell, a Nobel Prize-winning physicist at the University of Colorado, Boulder and the National Institute of Standards and Technology. “In the physics of low-energy matter — atoms, molecules, chemistry, biology — there’s always a ratio” of bigger things to smaller things, he said. “Those ratios tend to be powers of the fine-structure constant.”

    The constant is everywhere because it characterizes the strength of the electromagnetic force affecting charged particles such as electrons and protons. “In our everyday world, everything is either gravity or electromagnetism. And that’s why alpha is so important,” said Holger Müller, a physicist at the University of California, Berkeley. Because 1/137 is small, electromagnetism is weak; as a consequence, charged particles form airy atoms whose electrons orbit at a distance and easily hop away, enabling chemical bonds. On the other hand, the constant is also just big enough: Physicists have argued that if it were something like 1/138, stars would not be able to create carbon, and life as we know it wouldn’t exist.

    Physicists have more or less given up on a century-old obsession over where alpha’s particular value comes from; they now acknowledge that the fundamental constants could be random, decided in cosmic dice rolls during the universe’s birth. But a new goal has taken over.

    Physicists want to measure the fine-structure constant as precisely as possible. Because it’s so ubiquitous, measuring it precisely allows them to test their theory of the interrelationships between elementary particles — the majestic set of equations known as the Standard Model of particle physics.

    Standard Model of Particle Physics. Credit: Latham Boyle and Mardus of Wikimedia Commons.

    Any discrepancy between ultra-precise measurements of related quantities could point to novel particles or effects not accounted for by the standard equations. Cornell calls these kinds of precision measurements a third way of experimentally discovering the fundamental workings of the universe, along with particle colliders and telescopes.

    Today, in a new paper in the journal Nature, a team of four physicists led by Saïda Guellati-Khélifa at the Kastler Brossel Laboratory in Paris reported the most precise measurement yet of the fine-structure constant. The team measured the constant’s value to the 11th decimal place, reporting that α = 1/137.03599920611. (The last two digits are uncertain.)

    With a margin of error of just 81 parts per trillion, the new measurement is nearly three times more precise than the previous best measurement [Science] in 2018 by Müller’s group at Berkeley, the main competition. (Guellati-Khélifa made the most precise measurement before Müller’s in 2011.) Müller said of his rival’s new measurement of alpha, “A factor of three is a big deal. Let’s not be shy about calling this a big accomplishment.”
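    As a rough check on those figures: taking the 2018 Berkeley measurement's margin of error to be about 0.2 parts per billion (an assumed value, not quoted in the article), the improvement works out to a factor of roughly two and a half — "nearly three":

        # Quick arithmetic check of the quoted precision comparison (Python).
        # The 81-parts-per-trillion figure is from the article; the 2018
        # Berkeley uncertainty is an assumed value used only for illustration.
        paris_2020_rel_err    = 81e-12    # relative uncertainty, 2020 Paris result
        berkeley_2018_rel_err = 0.2e-9    # relative uncertainty, 2018 Berkeley result (assumed)
        print(berkeley_2018_rel_err / paris_2020_rel_err)   # about 2.5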

    4
    Saïda Guellati-Khélifa in her laboratory in Paris. Credit: Jean-François Dars and Anne Papillaut.

    Guellati-Khélifa has been improving her experiment for the past 22 years. She gauges the fine-structure constant by measuring how strongly rubidium atoms recoil when they absorb a photon. (Müller does the same with cesium atoms.) The recoil velocity reveals how heavy rubidium atoms are — the hardest factor to gauge in a simple formula for the fine-structure constant. “It’s always the least accurate measurement that’s the bottleneck, so any improvement in that leads to an improvement in the fine-structure constant,” Müller explained.
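    The "simple formula" referred to above is, in its standard form (reconstructed here from the usual definition of the Rydberg constant rather than quoted from the paper):

    \alpha^2 = \frac{2 R_\infty}{c} \cdot \frac{m_{\mathrm{Rb}}}{m_e} \cdot \frac{h}{m_{\mathrm{Rb}}}

    The Rydberg constant R∞ and the ratio of the rubidium atom's mass m_Rb to the electron mass m_e are already known to extraordinary precision; the recoil measurement supplies the remaining factor, h/m_Rb, where h is Planck's constant.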

    The Paris experimenters begin by cooling the rubidium atoms almost to absolute zero, then dropping them in a vacuum chamber. As the cloud of atoms falls, the researchers use laser pulses to put the atoms in a quantum superposition of two states — kicked by a photon and not kicked. The two possible versions of each atom travel on separate trajectories until more laser pulses bring the halves of the superposition back together. The more an atom recoils when kicked by light, the more out of phase it is with the unkicked version of itself. The researchers measure this difference to reveal the atoms’ recoil velocity. “From the recoil velocity, we extract the mass of the atom, and the mass of the atom is directly involved in the determination of the fine-structure constant,” Guellati-Khélifa said.
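    A minimal numerical sketch, in Python, of how those pieces combine. The constants below are approximate published reference values chosen for illustration (and assume rubidium-87); they are not the inputs used in the paper:

        # Illustrative reconstruction of alpha from a measured h/m (Python).
        # All constants are approximate reference values used only to show
        # how the formula works; they are not the paper's actual inputs.
        import math

        h     = 6.62607015e-34      # Planck constant, J*s (exact in the SI)
        c     = 299792458.0         # speed of light, m/s (exact in the SI)
        R_inf = 10973731.568        # Rydberg constant, 1/m (approximate)
        m_e   = 9.1093837015e-31    # electron mass, kg (approximate)
        u     = 1.66053906660e-27   # atomic mass unit, kg (approximate)
        m_rb  = 86.909180531 * u    # mass of a rubidium-87 atom, kg (approximate)

        h_over_m = h / m_rb         # the quantity the recoil experiment determines
        alpha = math.sqrt((2 * R_inf / c) * (m_rb / m_e) * h_over_m)
        print(1 / alpha)            # roughly 137.036

    In the real experiment the logic runs the other way: the measured recoil fixes h/m_Rb, and everything else is already known well enough that the uncertainty in α is dominated by that single number.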

    In such precise experiments, every detail matters. Table 1 of the new paper is an “error budget” listing 16 sources of error and uncertainty that affect the final measurement. These include gravity and the Coriolis force created by Earth’s rotation — both painstakingly quantified and compensated for. Much of the error budget comes from foibles of the laser, which the researchers have spent years perfecting.

    For Guellati-Khélifa, the hardest part is knowing when to stop and publish. She and her team stopped the week of February 17, 2020, just as the coronavirus was gaining a foothold in France. Asked whether deciding to publish is like an artist deciding that a painting is finished, Guellati-Khélifa said, “Exactly. Exactly. Exactly.”

    Surprisingly, her new measurement differs from Müller’s 2018 result in the seventh digit, a bigger discrepancy than the margin of error of either measurement. This means — barring some fundamental difference between rubidium and cesium — that one or both of the measurements has an unaccounted-for error. The Paris group’s measurement is the more precise, so it takes precedence for now, but both groups will improve their setups and try again.
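    Lining the two numbers up shows where they part ways. The 2018 Berkeley value and the absolute uncertainties below are assumptions included only to illustrate the comparison, since the article quotes just the new Paris number:

        # Where the two measurements of 1/alpha disagree (Python).
        # paris_2020 is the article's quoted value; berkeley_2018 and both
        # uncertainty figures are assumed here purely for illustration.
        paris_2020    = 137.03599920611
        berkeley_2018 = 137.035999046
        diff = paris_2020 - berkeley_2018
        print(diff)                                   # ~1.6e-7, i.e. the 7th decimal place
        combined = (1.1e-8**2 + 2.7e-8**2) ** 0.5     # rough combined margin of error
        print(diff / combined)                        # several times larger than that margin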

    Though the two measurements differ, they closely match the value of alpha inferred from precise measurements of the electron’s g-factor, a constant related to its magnetic moment, or the torque that the electron experiences in a magnetic field. “You can connect the fine-structure constant to the g-factor with a hell of a lot of math,” said Cornell. “If there are any physical effects missing from the equations [of the Standard Model], we would be getting the answer wrong.”
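    That "hell of a lot of math" is, in outline, the perturbation series of quantum electrodynamics for the electron's anomalous magnetic moment; its first couple of terms are the standard result, shown here for orientation rather than taken from the article:

    a_e \equiv \frac{g-2}{2} = \frac{\alpha}{2\pi} - 0.328\ldots\left(\frac{\alpha}{\pi}\right)^{2} + \cdots

    Measuring g extremely precisely and inverting this series yields an independent value of α, which is what the recoil measurements are checked against.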

    Instead, the measurements match beautifully, largely ruling out some proposals for new particles [Physical Review Letters]. The agreement between the best g-factor measurements and Müller’s 2018 measurement was hailed as the Standard Model’s greatest triumph. Guellati-Khélifa’s new result is an even better match. “It’s the most precise agreement between theory and experiment,” she said.

    And yet she and Müller have both set about making further improvements. The Berkeley team has switched to a new laser with a broader beam (allowing it to strike their cloud of cesium atoms more evenly), while the Paris team plans to replace their vacuum chamber, among other things.

    What kind of person puts such a vast effort into such scant improvements? Guellati-Khélifa named three traits: “You have to be rigorous, passionate and honest with yourself.” Müller said in response to the same question, “I think it’s exciting because I love building shiny nice machines. And I love applying them to something important.” He noted that no one can single-handedly build a high-energy collider like Europe’s Large Hadron Collider. But by constructing an ultra-precise instrument rather than a super-energetic one, Müller said, “you can do measurements relevant to fundamental physics, but with three or four people.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     