Tagged: Quanta Magazine (US)

  • richardmitnick 6:03 pm on January 21, 2022 Permalink | Reply
    Tags: Quanta Magazine (US), "Computer Scientists Eliminate Pesky Quantum Computations", Proof that any quantum algorithm can be rearranged to move measurements performed in the middle of the calculation to the end of the process., The basic difference between quantum computers and the computers we have at home is the way each stores information., Instead of encoding information in the 0s and 1s of typical bits quantum computers encode information in higher-dimensional combinations of bits called qubits., This collapse possibly affects all the other qubits in the system., Virtually all algorithms require knowing the value of a computation as it’s in progress., 28 years ago computer scientists established that for quantum algorithms you can wait until the end of a computation to make intermediate measurements without changing the final result., "BQL" and "BQuL", If at any point in a calculation you need to access the information contained in a qubit and you measure it the qubit collapses.

    From Quanta Magazine (US): “Computer Scientists Eliminate Pesky Quantum Computations” 

    From Quanta Magazine (US)

    January 19, 2022
    Nick Thieme

    Credit: Samuel Velasco/Quanta Magazine.

    As quantum computers have become more functional, our understanding of them has remained muddled. Work by a pair of computer scientists [Symposium on Theory of Computing] has clarified part of the picture, providing insight into what can be computed with these futuristic machines.

    “It’s a really nice result that has implications for quantum computation,” said John Watrous of The University of Waterloo (CA).

    The research, posted in June 2020 by Bill Fefferman and Zachary Remscrim of The University of Chicago (US), proves that any quantum algorithm can be rearranged to move measurements performed in the middle of the calculation to the end of the process, without changing the final result or drastically increasing the amount of memory required to carry out the task. Previously, computer scientists thought that the timing of those measurements affected memory requirements, creating a bifurcated view of the complexity of quantum algorithms.

    “This has been quite annoying,” said Fefferman. “We’ve had to talk about two complexity classes — one with intermediate measurements and one without.”

    This issue applies exclusively to quantum computers due to the unique way they work. The basic difference between quantum computers and the computers we have at home is the way each stores information. Instead of encoding information in the 0s and 1s of typical bits, quantum computers encode information in higher-dimensional combinations of bits called qubits.

    This approach enables denser information storage and sometimes faster calculations. But it also presents a problem. If at any point in a calculation you need to access the information contained in a qubit and you measure it, the qubit collapses from a delicate combination of simultaneously possible bits into a single definite one, possibly affecting all the other qubits in the system.
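The collapse just described can be sketched as a toy single-qubit simulation (an illustrative sketch; the helper `measure` and the statevector convention are assumptions, not anything from the article):

```python
import numpy as np

def measure(state, rng):
    """Measure a single-qubit statevector; return (outcome, collapsed state)."""
    p0 = abs(state[0]) ** 2                 # Born rule: probability of reading 0
    outcome = 0 if rng.random() < p0 else 1
    collapsed = np.zeros(2, dtype=complex)
    collapsed[outcome] = 1.0                # the superposition becomes one definite bit
    return outcome, collapsed

rng = np.random.default_rng(0)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition of 0 and 1
outcome, collapsed = measure(plus, rng)
again, _ = measure(collapsed, rng)
assert again == outcome  # once collapsed, the qubit keeps giving the same answer
```

The delicate superposition is gone after the first call: repeating the measurement just re-reads the definite bit.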

    This can be a problem because virtually all algorithms require knowing the value of a computation as it’s in progress. For instance, an algorithm may contain a statement like “If the variable x is a number, multiply it by 10; if not, leave it alone.” Performing these steps would seem to require knowing what x is at that moment in the computation — a potential challenge for quantum computers, where measuring the state of a particle (to determine what x is) inherently changes it.

    But 28 years ago, computer scientists proved it’s possible to avoid this kind of no-win situation. They established that for quantum algorithms, you can wait until the end of a computation to make intermediate measurements without changing the final result.

    An essential part of that result showed that you can push intermediate measurements to the end of a computation without drastically increasing the total running time. These features of quantum algorithms — that measurements can be delayed without affecting the answer or the runtime — came to be called the principle of deferred measurement.

    This principle fortifies quantum algorithms, but at a cost. Deferring measurements uses a great deal of extra memory space, essentially one extra qubit per deferred measurement. While one bit per measurement might take only a tiny toll on a classical computer with 4 trillion bits, it’s prohibitive given the limited number of qubits currently in the largest quantum computers.
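The trade-off can be illustrated with a two-qubit toy (a sketch under simplified assumptions, not the paper's construction): a mid-circuit measurement that controls a later gate is replaced by a CNOT onto the target, and everything is measured at the end. The output statistics are identical, at the price of keeping one extra qubit coherent.

```python
import numpy as np

# Toy illustration of the principle of deferred measurement. Circuit: put
# qubit 0 in superposition, then "if qubit 0 reads 1, flip qubit 1".
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],       # basis order |00>, |01>, |10>, |11>
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Version 1: measure qubit 0 mid-circuit, flip qubit 1 classically if needed.
q0 = H @ np.array([1.0, 0.0])
p1 = abs(q0[1]) ** 2                 # chance the intermediate result is 1
dist_mid = np.array([1 - p1, p1])    # final distribution of qubit 1

# Version 2: defer the measurement, replacing the classical "if" with a CNOT,
# and keep one more qubit coherent until the very end.
state = np.kron(q0, np.array([1.0, 0.0]))
state = CNOT @ state
probs = abs(state) ** 2              # measure everything at the end
dist_end = np.array([probs[0] + probs[2], probs[1] + probs[3]])  # marginal on qubit 1

assert np.allclose(dist_mid, dist_end)   # identical output statistics
```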

    Google 53-qubit “Sycamore” superconducting processor quantum computer.

    IBM Unveils Breakthrough 127-Qubit Quantum Processor. Credit: IBM Corp.

    Fefferman and Remscrim’s work resolves this issue in a surprising way. With an abstract proof, they show that subject to a few caveats, anything calculable with intermediate measurements can be calculated without them. Their proof offers a memory-efficient way to defer intermediate measurements — circumventing the memory problems that such measurements created.


    “In the most standard scenario, you don’t need intermediate measurements,” Fefferman said.

    Fefferman and Remscrim achieved their result by showing that a representative problem called “well-conditioned matrix powering” is, in a way, equivalent to a different kind of problem with important properties.

    The “well-conditioned matrix powering” problem effectively asks you to find the values for particular entries in a type of matrix (an array of numbers), given some conditions. Fefferman and Remscrim proved that matrix powering is just as hard as any other quantum computing problem that allows for intermediate measurements. This set of problems is called “BQL”, and the team’s work meant that matrix powering could serve as a representative for all other problems in that class — so anything they proved about matrix powering would be true for all other problems involving intermediate measurements.
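Stripped of its quantum trappings, the task can be pictured classically (a toy sketch; the complexity-theoretic version also constrains the matrix's conditioning and the precision of the answer):

```python
import numpy as np

# Classical shadow of "matrix powering": given a matrix M, a power k and
# indices (i, j), return the entry (M^k)[i, j].
def matrix_power_entry(M, k, i, j):
    return np.linalg.matrix_power(M, k)[i, j]

M = np.array([[0.5, 0.5],
              [0.5, 0.5]])   # a well-behaved example; M @ M == M here
print(matrix_power_entry(M, 10, 0, 1))   # → 0.5
```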

    At this point, the researchers took advantage of some of their earlier work. In 2016, Fefferman and Cedric Lin proved that a related problem called “well-conditioned matrix inversion” was equivalent to the hardest problem in a very similar class of problems called “BQuL”. This class is like BQL’s little sibling. It’s identical to BQL, except that it comes with the requirement that every problem in the class must also be reversible.

    In quantum computing, the distinction between reversible and irreversible measurements is essential. If a calculation measures a qubit, it collapses the state of the qubit, making the initial information impossible to recover. As a result, all measurements in quantum algorithms are innately irreversible.

    That means that BQuL is not just the reversible version of BQL; it’s also BQL without any intermediate measurements (because intermediate measurements, like all quantum measurements, would be irreversible, violating the signal condition of the class). The 2016 work proved that matrix inversion is a prototypical quantum calculation without intermediate measurements — that is, a fully representative problem for BQuL.

    The new paper builds on that by connecting the two, proving that well-conditioned matrix powering, which represents all problems with intermediate measurements, can be reduced to well-conditioned matrix inversion, which represents all problems that cannot feature intermediate measurements. In other words, any quantum computing problem with intermediate measurements can be reduced to a quantum computing problem without intermediate measurements.
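The paper's reduction is far more delicate, but the classical kinship between powering and inversion shows up already in a textbook identity (a standard fact, not the paper's construction): for a matrix A whose eigenvalues all lie inside the unit circle, the inverse of I − A is the sum of all powers of A.

```python
import numpy as np

# Neumann series: (I - A)^{-1} = I + A + A^2 + A^3 + ...
# An inverse assembled out of matrix powers.
A = np.array([[0.2, 0.1],
              [0.0, 0.3]])   # eigenvalues 0.2 and 0.3, safely inside the unit circle
inv = np.linalg.inv(np.eye(2) - A)
series = sum(np.linalg.matrix_power(A, k) for k in range(50))
assert np.allclose(inv, series)
```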

    This means that for quantum computers with limited memory, researchers no longer need to worry about intermediate measurements when classifying the memory needs of different types of quantum algorithms.

    In 2020, a group of researchers at Princeton University (US) — Ran Raz, Uma Girish and Wei Zhan — independently proved a slightly weaker but nearly identical result that they posted three days after Fefferman and Remscrim’s work. Raz and Girish later extended the result, proving that intermediate measurements can be deferred in both a time-efficient and space-efficient way for a more limited class of computers.

    Altogether, the recent work provides a much better understanding of how limited-memory quantum computation works. With this theoretical guarantee, researchers have a road map for translating their theory into applied algorithms. Quantum algorithms are now free, in a sense, to proceed without the prohibitive costs of deferred measurements.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine (US) is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 4:45 pm on January 21, 2022 Permalink | Reply
    Tags: "Any Single Galaxy Reveals the Composition of an Entire Universe", A group of scientists may have stumbled upon a radical new way to do cosmology., Cosmic density of matter, Quanta Magazine (US), The Cosmology and Astrophysics with Machine Learning Simulations (CAMELS) project, Theoretical Astrophysics

    From Quanta Magazine (US): “Any Single Galaxy Reveals the Composition of an Entire Universe” 

    From Quanta Magazine (US)

    January 20, 2022
    Charlie Wood

    Credit: Kaze Wong / CAMELS collaboration.

    In the CAMELS project, coders simulated thousands of universes with diverse compositions, arrayed at the end of this video as cubes.

    A group of scientists may have stumbled upon a radical new way to do cosmology.

    Cosmologists usually determine the composition of the universe by observing as much of it as possible. But these researchers have found that a machine learning algorithm can scrutinize a single simulated galaxy and predict the overall makeup of the digital universe in which it exists — a feat analogous to analyzing a random grain of sand under a microscope and working out the mass of Eurasia. The machines appear to have found a pattern that might someday allow astronomers to draw sweeping conclusions about the real cosmos merely by studying its elemental building blocks.

    “This is a completely different idea,” said Francisco Villaescusa-Navarro, a theoretical astrophysicist at The Flatiron Institute Center for Computational Astrophysics (US) and lead author of the work. “Instead of measuring these millions of galaxies, you can just take one. It’s really amazing that this works.”

    It wasn’t supposed to. The improbable find grew out of an exercise Villaescusa-Navarro gave to Jupiter Ding, a Princeton University (US) undergraduate: Build a neural network that, knowing a galaxy’s properties, can estimate a couple of cosmological attributes. The assignment was meant merely to familiarize Ding with machine learning. Then they noticed that the computer was nailing the overall density of matter.

    “I thought the student made a mistake,” Villaescusa-Navarro said. “It was a little bit hard for me to believe, to be honest.”

    The results of the investigation that followed appeared on January 6 in a paper submitted for publication. The researchers analyzed 2,000 digital universes generated by The Cosmology and Astrophysics with Machine Learning Simulations (CAMELS) project [The Astrophysical Journal]. These universes had a range of compositions, containing between 10% and 50% matter, with the rest made up of Dark Energy, which drives the universe to expand faster and faster. (Our actual cosmos consists of roughly one-third Dark Matter and visible matter and two-thirds Dark Energy.) As the simulations ran, Dark Matter and visible matter swirled together into galaxies. The simulations also included rough treatments of complicated events like supernovas and jets that erupt from supermassive black holes.

    Ding’s neural network studied nearly 1 million simulated galaxies within these diverse digital universes. From its godlike perspective, it knew each galaxy’s size, composition, mass, and more than a dozen other characteristics. It sought to relate this list of numbers to the density of matter in the parent universe.
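A toy stand-in for that setup, with synthetic data and a plain least-squares fit in place of the real CAMELS simulations and neural network (everything below is fabricated for illustration), looks like this:

```python
import numpy as np

# Toy stand-in for the experiment: a hidden "matter density" shapes each
# galaxy's properties, and a fit recovers it from one property vector.
rng = np.random.default_rng(0)
n = 4000
omega = rng.uniform(0.1, 0.5, n)                     # hidden cosmic density
X = np.column_stack([
    2 * omega + rng.normal(0, 0.05, n),  # a property tracking density (rotation-like)
    rng.normal(0, 1, n),                 # an irrelevant property
    np.ones(n),                          # intercept term
])
coef, *_ = np.linalg.lstsq(X[:3000], omega[:3000], rcond=None)  # "train"
pred = X[3000:] @ coef                                          # "test"
rel_err = np.median(np.abs(pred - omega[3000:]) / omega[3000:])
print(f"median relative error from a single 'galaxy': {rel_err:.1%}")
```

Here the density is recoverable only because one fabricated property was built to track it; the surprise in the real work is that simulated galaxies carry such a signal at all.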

    It succeeded. When tested on thousands of fresh galaxies from dozens of universes it hadn’t previously examined, the neural network was able to predict the cosmic density of matter to within 10%. “It doesn’t matter which galaxy you are considering,” Villaescusa-Navarro said. “No one imagined this would be possible.”

    “That one galaxy can get [the density to] 10% or so, that was very surprising to me,” said Volker Springel, an expert in simulating galaxy formation at The MPG Institute for Astrophysics [MPG Institut für Astrophysik](DE) who was not involved in the research.

    The algorithm’s performance astonished researchers because galaxies are inherently chaotic objects. Some form all in one go, and others grow by eating their neighbors. Giant galaxies tend to hold onto their matter, while supernovas and black holes in dwarf galaxies might eject most of their visible matter. Still, every galaxy had somehow managed to keep close tabs on the overall density of matter in its universe.

    One interpretation is “that the universe and/or galaxies are in some ways much simpler than we had imagined,” said Pauline Barmby, an astronomer at The Western University (CA). Another is that the simulations have unrecognized flaws.

    The team spent half a year trying to understand how the neural network had gotten so wise. They checked to make sure the algorithm hadn’t just found some way to infer the density from the coding of the simulation rather than the galaxies themselves. “Neural networks are very powerful, but they are super lazy,” Villaescusa-Navarro said.

    Through a series of experiments, the researchers got a sense of how the algorithm was divining the cosmic density. By repeatedly retraining the network while systematically obscuring different galactic properties, they zeroed in on the attributes that mattered most.
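The ablation idea can be sketched as permutation importance (an illustrative stand-in, not the team's exact procedure): scramble one property at a time; the properties whose scrambling hurts predictions most are the ones that mattered most.

```python
import numpy as np

# Permutation-importance sketch on synthetic "galaxies" (all data fabricated).
rng = np.random.default_rng(1)
n = 2000
omega = rng.uniform(0.1, 0.5, n)
X = np.column_stack([2 * omega + rng.normal(0, 0.05, n),  # informative property
                     rng.normal(0, 1, n),                 # uninformative property
                     np.ones(n)])
coef, *_ = np.linalg.lstsq(X, omega, rcond=None)

def mse(Xm):
    return np.mean((Xm @ coef - omega) ** 2)

base = mse(X)
factors = {}
for k, name in [(0, "rotation-like"), (1, "noise")]:
    Xs = X.copy()
    Xs[:, k] = rng.permutation(Xs[:, k])  # destroy this property's information
    factors[name] = mse(Xs) / base
    print(f"{name}: error inflates by factor {factors[name]:.1f}")
```

Scrambling the informative column inflates the error dramatically; scrambling the noise column barely moves it.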

    Near the top of the list was a property related to a galaxy’s rotation speed, which corresponds to how much matter (dark and otherwise) sits in the galaxy’s central zone. The finding matches physical intuition, according to Springel. In a universe overflowing with Dark Matter, you’d expect galaxies to grow heavier and spin faster. So you might guess that rotation speed would correlate with the cosmic matter density, although that relationship alone is too rough to have much predictive power.

    The neural network found a much more precise and complicated relationship between 17 or so galactic properties and the matter density. This relationship persists despite galactic mergers, stellar explosions and black hole eruptions. “Once you get to more than [two properties], you can’t plot it and squint at it by eye and see the trend, but a neural network can,” said Shaun Hotchkiss, a cosmologist at The University of Auckland (NZ).

    While the algorithm’s success raises the question of how many of the universe’s traits might be extracted from a thorough study of just one galaxy, cosmologists suspect that real-world applications will be limited. When Villaescusa-Navarro’s group tested their neural network on a different property — cosmic clumpiness — it found no pattern. And Springel expects that other cosmological attributes, such as the accelerating expansion of the universe due to Dark Energy, have little effect on individual galaxies.

    The research does suggest that, in theory, an exhaustive study of the Milky Way and perhaps a few other nearby galaxies could enable an exquisitely precise measurement of our universe’s matter. Such an experiment, Villaescusa-Navarro said, could give clues to other numbers of cosmic import such as the sum of the unknown masses of the universe’s three types of neutrinos.


    But in practice, the technique would have to first overcome a major weakness. The CAMELS collaboration cooks up its universes using two different recipes. A neural network trained on one of the recipes makes bad density guesses when given galaxies that were baked according to the other. The cross-prediction failure indicates that the neural network is finding solutions unique to the rules of each recipe. It certainly wouldn’t know what to do with the Milky Way, a galaxy shaped by the real laws of physics. Before applying the technique to the real world, researchers will need to either make the simulations more realistic or adopt more general machine learning techniques — a tall order.

    “I’m very impressed by the possibilities, but one needs to avoid being too carried away,” Springel said.

    But Villaescusa-Navarro takes heart that the neural network was able to find patterns in the messy galaxies of two independent simulations. The digital discovery raises the odds that the real cosmos may be hiding a similar link between the large and the small.

    “It’s a very beautiful thing,” he said. “It establishes a connection between the whole universe and a single galaxy.”

    The Dark Energy Survey

    Dark Energy Camera [DECam], built at DOE’s Fermi National Accelerator Laboratory (US).

    NSF NOIRLab (US) Cerro Tololo Inter-American Observatory (CL) Víctor M. Blanco 4-meter Telescope, which houses the Dark Energy Camera (DECam), at Cerro Tololo, Chile, at an altitude of 7,200 feet.

    NSF NOIRLab (US) Cerro Tololo Inter-American Observatory (CL), approximately 80 km east of La Serena, Chile, at an altitude of 2,200 meters.

    Timeline of the Inflationary Universe WMAP.

    The Dark Energy Survey is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure that will reveal the nature of the mysterious dark energy that is accelerating the expansion of our Universe. The Dark Energy Survey began searching the Southern skies on August 31, 2013.

    According to Albert Einstein’s Theory of General Relativity, gravity should lead to a slowing of the cosmic expansion. Yet, in 1998, two teams of astronomers studying distant supernovae made the remarkable discovery that the expansion of the universe is speeding up.

    Saul Perlmutter (center) [The Supernova Cosmology Project] shared the 2006 Shaw Prize in Astronomy, the 2011 Nobel Prize in Physics, and the 2015 Breakthrough Prize in Fundamental Physics with Brian P. Schmidt (right) and Adam Riess (left) [The High-z Supernova Search Team] for providing evidence that the expansion of the universe is accelerating.

    To explain cosmic acceleration, cosmologists are faced with two possibilities: either 70% of the universe exists in an exotic form, now called Dark Energy, that exhibits a gravitational force opposite to the attractive gravity of ordinary matter, or General Relativity must be replaced by a new theory of gravity on cosmic scales.

    The Dark Energy Survey is designed to probe the origin of the accelerating universe and help uncover the nature of Dark Energy by measuring the 14-billion-year history of cosmic expansion with high precision. More than 400 scientists from over 25 institutions in the United States, Spain, the United Kingdom, Brazil, Germany, Switzerland, and Australia are working on the project. The collaboration built and is using an extremely sensitive 570-Megapixel digital camera, DECam, mounted on the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory, high in the Chilean Andes, to carry out the project.

    Over six years (2013-2019), the Dark Energy Survey collaboration used 758 nights of observation to carry out a deep, wide-area survey to record information from 300 million galaxies that are billions of light-years from Earth. The survey imaged 5000 square degrees of the southern sky in five optical filters to obtain detailed information about each galaxy. A fraction of the survey time was used to observe smaller patches of sky roughly once a week to discover and study thousands of supernovae and other astrophysical transients.

    Fritz Zwicky discovered Dark Matter in the 1930s while observing the movement of the Coma Cluster. Some 30 years later, Vera Rubin, a woman in STEM denied the Nobel Prize, did most of the work on Dark Matter.

    Fritz Zwicky.
    Coma cluster via NASA/ESA Hubble, the original example of Dark Matter discovered during observations by Fritz Zwicky and confirmed 30 years later by Vera Rubin.
    In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate it had a mass 500 times greater than that previously calculated by Edwin Hubble. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.

    Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that the outskirts of galaxies rotate just as fast as their centers, whereas, with only the visible matter supplying gravity, the outer stars should orbit more slowly, much as the outer planets of the solar system orbit the Sun more slowly than the inner ones. The only way to explain these flat rotation curves is if each visible galaxy is embedded in some much larger, invisible structure, as if the galaxy were only the label at the center of a much bigger disk, whose gravity keeps the rotation speed constant from center to edge.
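The expected slowdown is ordinary Newtonian mechanics (a textbook estimate with assumed round numbers, not figures from the article):

```python
import numpy as np

# If a galaxy's visible mass M were all there is, orbital speed beyond that
# mass should fall as v = sqrt(G*M/r). Observed rotation curves stay flat.
G = 6.674e-11                         # gravitational constant, SI units
M = 2e41                              # kg, rough visible mass of a big galaxy (assumed)
r = np.array([1, 2, 4]) * 3.086e20    # 10, 20 and 40 kiloparsecs, in metres
v_kepler = np.sqrt(G * M / r) / 1e3   # km/s
print("Keplerian prediction (km/s):", np.round(v_kepler))
# each doubling of radius should cut v by a factor sqrt(2); real galaxies don't do this
```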

    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.
    Astronomer Vera Rubin, who worked on Dark Matter, at the Lowell Observatory in 1965. Credit: The Carnegie Institution for Science.

    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970.

    Vera Rubin, who worked on Dark Matter, measuring spectra. Credit: Emilio Segrè Visual Archives, AIP/SPL.
    Dark Matter Research

    LBNL LZ Dark Matter Experiment (US) xenon detector at Sanford Underground Research Facility(US) Credit: Matt Kapust.

    Lambda Cold Dark Matter, accelerated expansion of the universe. Credit: Alex Mittelmann.

    DAMA at Gran Sasso uses sodium iodide housed in copper to hunt for dark matter LNGS-INFN.

    Yale HAYSTAC axion dark matter experiment at Yale’s Wright Lab.

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB (CA) deep in Sudbury’s Creighton Mine.

    The LBNL LZ Dark Matter Experiment (US) Dark Matter project at SURF, Lead, SD, USA.

    DAMA-LIBRA Dark Matter experiment at the Italian National Institute for Nuclear Physics’ (INFN’s) Gran Sasso National Laboratories (LNGS) located in the Abruzzo region of central Italy.

    DARWIN Dark Matter experiment. A design study for a next-generation, multi-ton dark matter detector in Europe at The University of Zurich [Universität Zürich](CH).

    PandaX II Dark Matter experiment at Jin-ping Underground Laboratory (CJPL) in Sichuan, China.

    Inside the Axion Dark Matter eXperiment at The University of Washington (US). Credit: Mark Stone, U. of Washington.

    See the full article here.



  • richardmitnick 2:35 pm on January 21, 2022 Permalink | Reply
    Tags: "In a Numerical Coincidence Some See Evidence for String Theory", "Massive Gravity", "String universality": a monopoly of string theories among viable fundamental theories of nature, Asymptotically safe quantum gravity, Graviton: A graviton is a closed string-or loop-in its lowest-energy vibration mode in which an equal number of waves travel clockwise and counterclockwise around the loop., Lorentz invariance: the same laws of physics must hold from all vantage points., Quanta Magazine (US)

    From Quanta Magazine (US): “In a Numerical Coincidence Some See Evidence for String Theory” 

    From Quanta Magazine (US)

    January 21, 2022
    Natalie Wolchover

    Dorine Leenders for Quanta Magazine.

    In a quest to map out a quantum theory of gravity, researchers have used logical rules to calculate how much Einstein’s theory must change. The result matches string theory perfectly.

    Quantum gravity researchers use “α” to denote the size of the biggest quantum correction to Albert Einstein’s Theory of General Relativity.

    Recently, three physicists calculated a number pertaining to the quantum nature of gravity. When they saw the value, “we couldn’t believe it,” said Pedro Vieira, one of the three.

    Gravity’s quantum-scale details are not something physicists usually know how to quantify, but the trio attacked the problem using an approach that has lately been racking up stunners in other areas of physics. It’s called the bootstrap.

    To bootstrap is to deduce new facts about the world by figuring out what’s compatible with known facts — science’s version of picking yourself up by your own bootstraps. With this method, the trio found a surprising coincidence: Their bootstrapped number closely matched the prediction for the number made by string theory. The leading candidate for the fundamental theory of gravity and everything else, string theory holds that all elementary particles are, close-up, vibrating loops and strings.

    Vieira, Andrea Guerrieri of The Tel Aviv University (IL), and João Penedones of The EPFL (Swiss Federal Institute of Technology in Lausanne) [École polytechnique fédérale de Lausanne](CH) reported their number and the match with string theory’s prediction in Physical Review Letters in August 2021. Quantum gravity theorists have been reading the tea leaves ever since.

    Some interpret the result as a new kind of evidence for string theory, a framework that sorely lacks even the prospect of experimental confirmation, due to the pointlike minuteness of the postulated strings.

    “The hope is that you could prove the inevitability of string theory using these ‘bootstrap’ methods,” said David Simmons-Duffin, a theoretical physicist at The California Institute of Technology (US). “And I think this is a great first step towards that.”

    From left: Pedro Vieira, Andrea Guerrieri and João Penedones.
    Credit: Gabriela Secara / The Perimeter Institute for Theoretical Physics (CA); Courtesy of Andrea Guerrieri; The Swiss National Centres of Competence in Research (NCCR) SwissMAP (CH).

    Irene Valenzuela, a theoretical physicist at the Institute for Theoretical Physics at The Autonomous University of Madrid [Universidad Autónoma de Madrid](ES), agreed. “One of the questions is if string theory is the unique theory of quantum gravity or not,” she said. “This goes along the lines that string theory is unique.”

    Other commentators saw that as too bold a leap, pointing to caveats about the way the calculation was done.

    Einstein, Corrected

    The number that Vieira, Guerrieri and Penedones calculated is the minimum possible value of “α” (alpha). Roughly, “α” is the size of the first and largest mathematical term that you have to add to Albert Einstein’s gravity equations in order to describe, say, an interaction between two gravitons — the presumed quantum units of gravity.

    Albert Einstein’s 1915 Theory of General Relativity paints gravity as curves in the space-time continuum created by matter and energy. It perfectly describes large-scale behavior such as a planet orbiting a star. But when matter is packed into too-small spaces, General Relativity short-circuits. “Some correction to Einsteinian gravity has to be there,” said Simon Caron-Huot, a theoretical physicist at McGill University (CA).

    Physicists can tidily organize their lack of knowledge of gravity’s microscopic nature using a scheme devised in the 1960s by Kenneth Wilson and Steven Weinberg: They simply add a series of possible “corrections” to General Relativity that might become important at short distances. Say you want to predict the chance that two gravitons will interact in a certain way. You start with the standard mathematical term from Relativity, then add new terms (using any and all relevant variables as building blocks) that matter more as distances get smaller. These mocked-up terms are fronted by unknown numbers labeled “α”, “β”, “γ” and so on, which set their sizes. “Different theories of quantum gravity will lead to different such corrections,” said Vieira, who has joint appointments at The Perimeter Institute for Theoretical Physics (CA), and The International Centre for Theoretical Physics at The South American Institute for Fundamental Research [Instituto sul-Americano de Pesquisa Fundamental] (BR). “So these corrections are our first way to tell such possibilities apart.”

    In practice, “α” has only been explicitly calculated in string theory, and even then only for highly symmetric 10-dimensional universes. The English string theorist Michael Green and colleagues determined in the 1990s that in such worlds “α” must be at least 0.1389. In a given stringy universe it might be higher; how much higher depends on the string coupling constant, or a string’s propensity to spontaneously split into two. (This coupling constant varies between versions of string theory, but all versions unite in a master framework called “M-theory”, where string coupling constants correspond to different positions in an extra 11th dimension.)

    Meanwhile, alternative quantum gravity ideas remain unable to make predictions about “α”. And since physicists can’t actually detect gravitons — the force of gravity is too weak — they haven’t been able to directly measure “α” as a way of investigating and testing quantum gravity theories.

    Then a few years ago, Penedones, Vieira and Guerrieri started talking about using the bootstrap method to constrain what can happen during particle interactions. They first successfully applied the approach to particles called pions. “We said, OK, here it’s working very well, so why not go for gravity?” Guerrieri said.

    Bootstrapping the Bound

    The trick of using accepted truths to constrain unknown possibilities was devised by particle physicists in the 1960s, then forgotten, then revived to fantastic effect over the past decade by researchers with supercomputers, which can solve the formidable formulas that bootstrapping tends to produce.

    Guerrieri, Vieira and Penedones set out to determine what “α” has to be in order to satisfy two consistency conditions. The first, known as unitarity, states that the probabilities of different outcomes must always add up to 100%. The second, known as Lorentz invariance, says that the same laws of physics must hold from all vantage points.

    The trio specifically considered the range of values of “α” permitted by those two principles in supersymmetric 10D universes. Not only is the calculation simple enough to pull off in that setting (not so, currently, for “α” in 4D universes like our own), but it also allowed them to compare their bootstrapped range to string theory’s prediction that “α” in that 10D setting is 0.1389 or higher.

    Unitarity and Lorentz invariance impose constraints on what can happen in a two-graviton interaction in the following way: When the gravitons approach and scatter off each other, they might fly apart as two gravitons, or morph into three gravitons or any number of other particles. As you crank up the energies of the approaching gravitons, the chance they’ll emerge from the encounter as two gravitons changes — but unitarity demands that this probability never surpass 100%. Lorentz invariance means the probability can’t depend on how an observer is moving relative to the gravitons, restricting the form of the equations. Together the rules yield a complicated bootstrapped expression that “α” must satisfy. Guerrieri, Penedones and Vieira programmed the Perimeter Institute’s computer clusters to solve for values that make the two-graviton interactions unitary and Lorentz-invariant.

    The computer spit out its lower bound for “α”: 0.14, give or take a hundredth — an extremely close and potentially exact match with string theory’s lower bound of 0.1389. In other words, string theory seems to span the whole space of allowed “α” values — at least in the 10D place where the researchers checked. “That was a huge surprise,” Vieira said.

    10-Dimensional Coincidence

    What might the numerical coincidence mean? According to Simmons-Duffin, whose work a few years ago helped drive the bootstrap’s resurgence, “they’re trying to tackle a question [that’s] fundamental and important. Which is: To what extent does string theory as we know it cover the space of all possible theories of quantum gravity?”

    String theory emerged in the 1960s as a putative picture of the stringy glue that binds composite particles called mesons. A different description ended up prevailing for that purpose, but years later people realized that string theory could set its sights higher: If strings are small — so small they look like points — they could serve as nature’s elementary building blocks. Electrons, photons and so on would all be the same kind of fundamental string strummed in different ways. The theory’s selling point is that it gives a quantum description of gravity: A graviton is a closed string, or loop, in its lowest-energy vibration mode, in which an equal number of waves travel clockwise and counterclockwise around the loop. This feature would underlie macroscopic properties of gravity like the corkscrew-patterned polarization of gravitational waves.

    But matching the theory to all other aspects of reality takes some fiddling. To get rid of negative energies that would correspond to unphysical, faster-than-light particles, string theory needs a property called “Supersymmetry”, which doubles the number of its string vibration modes. Every vibration mode corresponding to a matter particle must come with another mode signifying a force particle. String theory also requires the existence of 10 space-time dimensions for the strings to wiggle around in. Yet we haven’t found any supersymmetric partner particles, and our universe looks 4D, with three dimensions of space and one of time.


    Both of these data points present something of a problem.

    If string theory describes our world, Supersymmetry must be broken here. That means the partner particles, if they exist, must be far heavier than the known set of particles — too heavy to muster in experiments. And if there really are 10 dimensions, six must be curled up so small they’re imperceptible to us — tight little knots of extra directions you can go in at any point in space. These “compactified” dimensions in a 4D-looking universe could have countless possible arrangements, all affecting strings (and numbers like “α”) differently.

    Broken Supersymmetry and invisible dimensions have led many quantum gravity researchers to seek or prefer alternative, non-stringy ideas.

    Mordehai Milgrom, MOND theorist, is an Israeli physicist and professor in the department of Condensed Matter Physics at The Weizmann Institute of Science (IL) in Rehovot, Israel http://cosmos.nautil.us


    But so far the rival approaches have struggled to produce the kind of concrete calculations about things like graviton interactions that string theory can.

    Some physicists hope to see string theory win hearts and minds by default, by being the only microscopic description of gravity that’s logically consistent. If researchers can prove “string universality,” as this is sometimes called — a monopoly of string theories among viable fundamental theories of nature — we’ll have no choice but to believe in hidden dimensions and an inaudible orchestra of strings.

    To string theory sympathizers, the new bootstrap calculation opens a route to eventually proving string universality, and it gets the journey off to a rip-roaring start.

    Other researchers disagree with those implications. Astrid Eichhorn, a theoretical physicist at The South Danish University [Syddansk Universitet](DK) and The Ruprecht Karl University of Heidelberg [Ruprecht-Karls-Universität Heidelberg](DE) who specializes in a non-stringy approach called asymptotically safe quantum gravity, told me, “I would consider the relevant setting to collect evidence for or against a given quantum theory of gravity to be four-dimensional and non-supersymmetric” universes, since this “best describes our world, at least so far.”

    Eichhorn pointed out that there might be unitary, Lorentz-invariant descriptions of gravitons in 4D that don’t make any sense in 10D. “Simply by this choice of setting one might have ruled out alternative quantum gravity approaches” that are viable, she said.

    Vieira acknowledged that string universality might hold only in 10 dimensions, saying, “It could be that in 10D with supersymmetry, there’s only string theory, and when you go to 4D, there are many theories.” But, he said, “I doubt it.”

    Another critique, though, is that even if string theory saturates the range of allowed “α” values in the 10-dimensional setting the researchers probed, that doesn’t stop other theories from lying in the permitted range. “I don’t see any practical way we’re going to conclude that string theory is the only answer,” said Andrew Tolley of Imperial College London (UK).

    Just the Beginning

    Assessing the meaning of the coincidence will become easier if bootstrappers can generalize and extend similar results to more settings. “At the moment, many, many people are pursuing these ideas in various variations,” said Alexander Zhiboedov, a theoretical physicist at The European Organization for Nuclear Research [Organización Europea para la Investigación Nuclear][Organisation européenne pour la recherche nucléaire] [Europäische Organisation für Kernforschung](CH) [CERN], Europe’s particle physics laboratory.

    Guerrieri, Penedones and Vieira have already completed a “dual” bootstrap calculation, which bounds “α” from below by ruling out solutions less than the minimum rather than solving for viable “α” values above the bound, as they did previously. This dual calculation shows that their computer clusters didn’t simply miss smaller allowed “α” values, which would correspond to additional viable quantum gravity theories outside string theory’s range.

    They also plan to bootstrap the lower bound for worlds with nine large dimensions, where string theory calculations are still under some control (since only one dimension is curled up), to look for more evidence of a correlation. Aside from “α”, bootstrappers also aim to calculate “β” and “γ” — the allowed sizes of the second- and third-biggest quantum gravity corrections — and they have ideas for how to approach harder calculations about worlds where supersymmetry is broken or nonexistent, as it appears to be in reality. In this way they’ll try to carve out the space of allowed quantum gravity theories, and test string universality in the process.

    Claudia de Rham, a theorist at Imperial College, emphasized the need to be “agnostic,” noting that bootstrap principles are useful for exploring more ideas than just string theory. She and Tolley have used positivity — the rule that probabilities are always positive — to constrain a theory called “Massive Gravity”, which may or may not be a realization of string theory. They discovered potentially testable consequences, showing that massive gravity only satisfies positivity if certain exotic particles exist. De Rham sees bootstrap principles and positivity bounds as “one of the most exciting research developments at the moment” in fundamental physics.

    “No one has done this job of taking everything we know and taking consistency and putting it together,” said Zhiboedov. It’s “exciting,” he added, that theorists have work to do “at a very basic level.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine (US) is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 1:56 pm on January 3, 2022 Permalink | Reply
    Tags: "Mathematicians Outwit Hidden Number Conspiracy", A new proof has debunked a conspiracy that mathematicians feared might haunt the number line., An improved solution to a particular formulation of the Chowla conjecture., , , Chowla conjecture, Connected numbers represent exceptions to Chowla’s conjecture in which the factorization of one integer actually does bias that of another., Consider the number 1001 which is divisible by the primes 7; 11; and 13. In Tao’s graph it shares edges with 1008; 1012 and 1014 (by addition) as well as with 994; 990 and 988 (by subtraction)., Eigenvalues, Expander graphs, Expander graphs have previously led to new discoveries in theoretical computer science; group theory and other areas of math., Harald Helfgott of The University of Göttingen [Georg-August-Universität Göttingen](DE), Helfgott and Radziwiłł have expander graphs available for problems in number theory as well., Helfgott and Radziwiłł’s solution to the logarithmic Chowla conjecture marked a significant quantitative improvement on Tao’s result., Linking two arithmetic operations that usually live independently of one another., Liouville function, Maksym Radziwiłł of The California Institute of Technology (US), Many of number theory’s most important problems arise when mathematicians think about how multiplication and addition relate in terms of the prime numbers., , Problems about primes that involve addition have plagued mathematicians for centuries., Proving the Chowla conjecture is a “sort of warmup or steppingstone” to answering those more intractable problems., Quanta Magazine (US), Tao proved an easier version of the problem called the logarithmic Chowla conjecture., Terence Tao of The University of California-Los Angeles (US), The primes themselves are defined in terms of multiplication: They’re divisible by no numbers other than themselves and 1., The twin primes conjecture asserts that there are infinitely many primes that differ by only 2 (like 11 and 
13)., There could be this vast conspiracy that every time a number n decides to be prime it has some secret agreement with its neighbor n + 2 saying you’re not allowed to be prime anymore., This work has given mathematicians another set of tools for understanding arithmetic’s fundamental building blocks-the prime numbers., When multiplied together they construct the rest of the integers.   

    From Quanta Magazine (US): “Mathematicians Outwit Hidden Number Conspiracy” 

    From Quanta Magazine (US)

    January 3, 2022
    Jordana Cepelewicz

    A new proof has debunked a conspiracy that mathematicians feared might haunt the number line. In doing so, it has given them another set of tools for understanding arithmetic’s fundamental building blocks: the prime numbers.

    In a paper posted last March, Harald Helfgott of The University of Göttingen [Georg-August-Universität Göttingen](DE) and Maksym Radziwiłł of The California Institute of Technology (US) presented an improved solution to a particular formulation of the Chowla conjecture, a question about the relationships between integers.

    The conjecture predicts that whether one integer has an even or odd number of prime factors does not influence whether the next or previous integer also has an even or odd number of prime factors. That is, nearby numbers do not collude about some of their most basic arithmetic properties.

    That seemingly straightforward inquiry is intertwined with some of math’s deepest unsolved questions about the primes themselves. Proving the Chowla conjecture is a “sort of warmup or steppingstone” to answering those more intractable problems, said Terence Tao of The University of California-Los Angeles (US).

    Terence Tao developed a strategy for using expander graphs to answer a version of the Chowla conjecture but couldn’t quite make it work. Courtesy of UCLA.

    And yet for decades, that warmup was a nearly impossible task itself. It was only a few years ago that mathematicians made any progress, when Tao proved an easier version of the problem called the logarithmic Chowla conjecture. But while the technique he used was heralded as innovative and exciting, it yielded a result that was not precise enough to help make additional headway on related problems, including ones about the primes. Mathematicians hoped for a stronger and more widely applicable proof instead.

    Now, Helfgott and Radziwiłł have provided just that. Their solution, which pushes techniques from graph theory squarely into the heart of number theory, has reignited hope that the Chowla conjecture will deliver on its promise — ultimately leading mathematicians to the ideas they’ll need to confront some of their most elusive questions.

    Conspiracy Theories

    Many of number theory’s most important problems arise when mathematicians think about how multiplication and addition relate in terms of the prime numbers.

    The primes themselves are defined in terms of multiplication: They’re divisible by no numbers other than themselves and 1, and when multiplied together they construct the rest of the integers. But problems about primes that involve addition have plagued mathematicians for centuries. For instance, the twin primes conjecture asserts that there are infinitely many primes that differ by only 2 (like 11 and 13). The question is challenging because it links two arithmetic operations that usually live independently of one another. “It’s difficult because we are mixing two worlds,” said Oleksiy Klurman of The University of Bristol (UK).
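To make the twin primes definition concrete, here is a short Python sketch (an illustration added for this post, not from the article) listing the twin prime pairs below 100:

```python
def is_prime(n: int) -> bool:
    """Trial division: True if n is prime."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Twin primes: pairs of primes that differ by exactly 2.
twins = [(p, p + 2) for p in range(2, 100) if is_prime(p) and is_prime(p + 2)]
print(twins)
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
```

Trial division is plenty at this scale; the conjecture, of course, is about whether such pairs keep appearing forever.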

    Maksym Radziwiłł. Caltech.

    Harald Helfgott. University of Göttingen.

    Intuition tells mathematicians that adding 2 to a number should completely change its multiplicative structure — meaning there should be no correlation between whether a number is prime (a multiplicative property) and whether the number two units away is prime (an additive property). Number theorists have found no evidence to suggest that such a correlation exists, but without a proof, they can’t exclude the possibility that one might emerge eventually.

    “For all we know, there could be this vast conspiracy that every time a number n decides to be prime, it has some secret agreement with its neighbor n + 2 saying you’re not allowed to be prime anymore,” said Tao.

    No one has come close to ruling out such a conspiracy. That’s why, in 1965, Sarvadaman Chowla formulated a slightly easier way to think about the relationship between nearby numbers. He wanted to show that whether an integer has an even or odd number of prime factors — a condition known as the “parity” of its number of prime factors — should not in any way bias the number of prime factors of its neighbors.

    This statement is often understood in terms of the Liouville function, which assigns integers a value of −1 if they have an odd number of prime factors (like 12, which is equal to 2 × 2 × 3) and +1 if they have an even number (like 10, which is equal to 2 × 5). The conjecture predicts that there should be no correlation between the values that the Liouville function takes for consecutive numbers.
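The Liouville function is easy to compute directly by counting prime factors with multiplicity. A minimal Python sketch (an added illustration, not from the article):

```python
def liouville(n: int) -> int:
    """Liouville function: (-1)**Omega(n), where Omega counts
    prime factors with multiplicity."""
    count = 0
    d = 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:          # whatever remains is a final prime factor
        count += 1
    return -1 if count % 2 else 1

print(liouville(12))  # -1: 12 = 2 * 2 * 3 has three prime factors
print(liouville(10))  # +1: 10 = 2 * 5 has two prime factors
```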

    Many state-of-the-art methods for studying prime numbers break down when it comes to measuring parity, which is precisely what Chowla’s conjecture is all about. Mathematicians hoped that by solving it, they’d develop ideas they could apply to problems like the twin primes conjecture.

    For years, though, it remained no more than that: a fanciful hope. Then, in 2015, everything changed.

    Dispersing Clusters

    Radziwiłł and Kaisa Matomäki of The University of Turku [Turun yliopisto](FI) didn’t set out to solve the Chowla conjecture. Instead, they wanted to study the behavior of the Liouville function over short intervals. They already knew that, on average, the function is +1 half the time and −1 half the time. But it was still possible that its values might cluster, cropping up in long concentrations of either all +1s or all −1s.

    In 2015, Matomäki and Radziwiłł proved that those clusters almost never occur [Annals of Mathematics]. Their work, published the following year, established that if you choose a random number and look at, say, its hundred or thousand nearest neighbors, roughly half have an even number of prime factors and half an odd number.
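The statistical flavor of that result shows up even in a toy numerical experiment (an added sketch, nothing like the Annals proof): pick an arbitrary starting point, compute Liouville values over a window of a thousand consecutive integers, and tally the signs.

```python
import random

def liouville(n: int) -> int:
    """(-1)**Omega(n), where Omega counts prime factors with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

random.seed(0)
start = random.randrange(10**6, 10**7)   # an arbitrary faraway point
window = [liouville(n) for n in range(start, start + 1000)]
plus = window.count(1)
print(plus, 1000 - plus)  # typically close to a 500/500 split
```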

    “That was the big piece that was missing from the puzzle,” said Andrew Granville of The University of Montreal [Université de Montréal](CA). “They made this unbelievable breakthrough that revolutionized the whole subject.”

    It was strong evidence that numbers aren’t complicit in a large-scale conspiracy — but the Chowla conjecture is about conspiracies at the finest level. That’s where Tao came in. Within months, he saw a way to build on Matomäki and Radziwiłł’s work to attack a version of the problem that’s easier to study, the logarithmic Chowla conjecture. In this formulation, smaller numbers are given larger weights so that they are just as likely to be sampled as larger integers.

    Tao had a vision for how a proof of the logarithmic Chowla conjecture might go. First, he would assume that the logarithmic Chowla conjecture is false — that there is in fact a conspiracy between the number of prime factors of consecutive integers. Then he’d try to demonstrate that such a conspiracy could be amplified: An exception to the Chowla conjecture would mean not just a conspiracy among consecutive integers, but a much larger conspiracy along entire swaths of the number line.

    He would then be able to take advantage of Radziwiłł and Matomäki’s earlier result, which had ruled out larger conspiracies of exactly this kind. A counterexample to the Chowla conjecture would imply a logical contradiction — meaning it could not exist, and the conjecture had to be true.

    But before Tao could do any of that, he had to come up with a new way of linking numbers.

    A Web of Lies

    Tao started by capitalizing on a defining feature of the Liouville function. Consider the numbers 2 and 3. Both have an odd number of prime factors and therefore share a Liouville value of −1. But because the Liouville function is multiplicative, multiples of 2 and 3 also have the same sign pattern as each other.

    That simple fact carries an important implication. If 2 and 3 both have an odd number of prime factors due to some secret conspiracy, then there’s also a conspiracy between 4 and 6 — numbers that differ not by 1 but by 2. And it gets worse from there: A conspiracy between adjacent integers would also imply conspiracies between all pairs of their multiples.

    “For any prime, these conspiracies will propagate,” Tao said.

    To better understand this widening conspiracy, Tao thought about it in terms of a graph — a collection of vertices connected by edges. In this graph, each vertex represents an integer. If two numbers differ by a prime and are also divisible by that prime, they’re connected by an edge.

    For example, consider the number 1001, which is divisible by the primes 7, 11 and 13. In Tao’s graph, it shares edges with 1,008, 1,012 and 1,014 (by addition), as well as with 994, 990 and 988 (by subtraction). Each of these numbers is in turn connected to many other vertices.
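Those edges are easy to generate: if a prime p divides n, then p automatically divides n ± p, so both edge conditions hold and the neighbors of n are simply n ± p for each prime factor p. A short Python sketch (added for illustration):

```python
def prime_divisors(n: int) -> list[int]:
    """Distinct prime factors of n, by trial division."""
    primes, d = [], 2
    while d * d <= n:
        if n % d == 0:
            primes.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        primes.append(n)
    return primes

def tao_neighbors(n: int) -> list[int]:
    """Vertices joined to n: m = n +/- p for each prime p dividing n
    (p then divides m as well, so both edge conditions hold)."""
    return [n + s * p for p in prime_divisors(n) for s in (+1, -1)]

print(prime_divisors(1001))  # [7, 11, 13]
print(tao_neighbors(1001))   # [1008, 994, 1012, 990, 1014, 988]
```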

    Samuel Velasco/Quanta Magazine

    Taken together, those edges encode broader networks of influence: Connected numbers represent exceptions to Chowla’s conjecture in which the factorization of one integer actually does bias that of another.

    To prove his logarithmic version of the Chowla conjecture, Tao needed to show that this graph has too many connections to be a realistic representation of values of the Liouville function. In the language of graph theory, that meant showing that his graph of interconnected numbers had a specific property — that it was an “expander” graph.

    Expander Walks

    An expander is an ideal yardstick for measuring the scope of a conspiracy. It’s a highly connected graph, even though it has relatively few edges compared to its number of vertices. That makes it difficult to create a cluster of interconnected vertices that don’t interact much with other parts of the graph.

    If Tao could show that his graph was a local expander — that any given neighborhood on the graph had this property — he’d prove that a single breach of the Chowla conjecture would spread across the number line, a clear violation of Matomäki and Radziwiłł’s 2015 result.

    “The only way to have correlations is if the entire population sort of shares that correlation,” said Tao.

    Proving that a graph is an expander often translates to studying random walks along its edges. In a random walk, each successive step is determined by chance, as if you were wandering through a city and flipping a coin at each intersection to decide whether to turn left or right. If the streets of that city form an expander, it’s possible to get pretty much anywhere by taking random walks of relatively few steps.

    But walks on Tao’s graph are strange and circuitous. It’s impossible, for instance, to jump directly from 1,001 to 1,002; that requires at least three steps. A random walk along this graph starts at an integer, adds or subtracts a random prime that divides it, and moves to another integer.
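That walk can be sketched in a few lines of Python (an added toy illustration; the actual analysis concerns integers far too large to factor this way):

```python
import random

def prime_divisors(n: int) -> list[int]:
    """Distinct prime factors of n, by trial division."""
    primes, d = [], 2
    while d * d <= n:
        if n % d == 0:
            primes.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        primes.append(n)
    return primes

def random_walk(start: int, steps: int, seed: int = 0) -> list[int]:
    """Random walk on Tao's graph: each step adds or subtracts a
    randomly chosen prime divisor of the current integer."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        n = path[-1]
        divisors = prime_divisors(n)
        if not divisors:        # n reached 1 or 0: the walk is stuck
            break
        p = rng.choice(divisors)
        path.append(n + rng.choice((p, -p)))
    return path

print(random_walk(1001, 5))  # a short wander through 1001's neighborhood
```

Note that each step requires factoring the current integer, which is exactly the obstruction the article describes for very large numbers.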

    It’s not obvious that repeating this process only a few times can lead to any point in a given neighborhood, which should be the case if the graph really is an expander. In fact, when the integers on the graph get big enough, it’s no longer clear how to even create random paths: Breaking numbers down into their prime factors — and therefore defining the graph’s edges — becomes prohibitively difficult.

    “It’s a scary thing, counting all these walks,” Helfgott said.

    When Tao tried to show that his graph was an expander, “it was a little too hard,” he said. He developed a new approach instead, based on a measure of randomness called entropy. This allowed him to circumvent the need to show the expander property — but at a cost.

    He could solve the logarithmic Chowla conjecture [Forum of Mathematics, Pi], but less precisely than he’d wanted to. In an ideal proof of the conjecture, independence between integers should always be evident, even along small sections of the number line. But with Tao’s proof, that independence doesn’t become visible until you sample over an astronomical number of integers.

    “It’s not quantitatively very strong,” said Joni Teräväinen of the University of Turku.

    Moreover, it wasn’t clear how to extend his entropy method to other problems.

    “Tao’s work was a complete breakthrough,” said James Maynard of The University of Oxford (UK), but because of those limitations, “it couldn’t possibly give those things that would lead to the natural next steps in the direction of problems more like the twin primes conjecture.”

    Five years later, Helfgott and Radziwiłł managed to do what Tao couldn’t — by extending the conspiracy he’d identified even further.

    Enhancing the Conspiracy

    Tao had built a graph that connected two integers if they differed by a prime and were divisible by that prime. Helfgott and Radziwiłł considered a new, “naïve” graph that did away with that second condition, connecting numbers merely if subtracting one from the other yielded a prime.

    The effect was an explosion of edges. On this naïve graph, 1,001 didn’t have just six connections with other vertices, it had hundreds. But the graph was also much simpler than Tao’s in a key way: Taking random walks along its edges didn’t require knowledge of the prime divisors of very large integers. That, along with the greater density of edges, made it much easier to demonstrate that any neighborhood in the naïve graph had the expander property — that you’re likely to get from any vertex to any other in a small number of random steps.

    Helfgott and Radziwiłł needed to show that this naïve graph approximated Tao’s graph. If they could show that the two graphs were similar, they would be able to infer properties of Tao’s graph by looking at theirs instead. And because they already knew their graph was a local expander, they’d be able to conclude that Tao’s was, too (and therefore that the logarithmic Chowla conjecture was true).

    But given that the naïve graph had so many more edges than Tao’s, the resemblance was buried, if it existed at all.

    “What does it even mean when you’re saying these graphs look like each other?” Helfgott said.

    Hidden Resemblance

    While the graphs don’t look like each other on the surface, Helfgott and Radziwiłł set out to prove that they approximate each other by translating between two perspectives. In one, they looked at the graphs as graphs; in the other, they looked at them as objects called matrices.

    First they represented each graph as a matrix, which is an array of values that in this case encoded connections between vertices. Then they subtracted the matrix that represented the naïve graph from the matrix that represented Tao’s graph. The result was a matrix that represented the difference between the two.

    Helfgott and Radziwiłł needed to prove that certain parameters associated with this matrix, called eigenvalues, were all small. This is because a defining characteristic of an expander graph is that its associated matrix has one large eigenvalue while the rest are significantly smaller. If Tao’s graph, like the naïve one, was an expander, then it too would have one large eigenvalue — and those two large eigenvalues would nearly cancel out when one matrix was subtracted from the other, leaving a set of eigenvalues that were all small.
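The spectral signature described here can be seen on a toy example (added for illustration; this is not the authors' computation). The Petersen graph is a small 3-regular expander, and its adjacency matrix has one large eigenvalue, the degree 3, with all the rest well separated from it:

```python
import numpy as np

# Petersen graph: a classic 3-regular expander on 10 vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]        # outer 5-cycle
edges += [(i, i + 5) for i in range(5)]                  # spokes
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]    # inner pentagram

A = np.zeros((10, 10))
for u, v in edges:
    A[u, v] = A[v, u] = 1

# Descending spectrum: one large eigenvalue (the degree, 3),
# then 1 (five times) and -2 (four times): the expander signature.
eig = np.sort(np.linalg.eigvalsh(A))[::-1]
print(np.round(eig, 6))
```

Subtracting the adjacency matrices of two expanders with the same degree cancels those matching top eigenvalues, which is the cancellation Helfgott and Radziwiłł exploited.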

    But eigenvalues are tricky to study by themselves. Instead, an equivalent way to prove that all the eigenvalues of this matrix were small involved a return to graph theory. And so, Helfgott and Radziwiłł converted this matrix (the difference between the matrices representing their naïve graph and Tao’s more complicated one) back into a graph itself.

    They then proved that this graph contained few random walks — of a certain length and in compliance with a handful of other properties — that looped back to their starting points. This implied that most random walks on Tao’s graph had essentially canceled out random walks on the naïve expander graph — meaning that the former could be approximated by the latter, and both were therefore expanders.

    A Way Forward

    Helfgott and Radziwiłł’s solution to the logarithmic Chowla conjecture marked a significant quantitative improvement on Tao’s result. They could sample over far fewer integers to arrive at the same outcome: The parity of the number of prime factors of an integer is not correlated with that of its neighbors.

    “That’s a very strong statement about how prime numbers and divisibility look random,” said Ben Green of Oxford.

    But the work is perhaps even more exciting because it provides “a natural way to attack the problem,” Matomäki said — exactly the intuitive approach that Tao first hoped for six years ago.

    Expander graphs have previously led to new discoveries in theoretical computer science, group theory and other areas of math. Now, Helfgott and Radziwiłł have made them available for problems in number theory as well. Their work demonstrates that expander graphs have the power to reveal some of the most basic properties of arithmetic — dispelling potential conspiracies and starting to disentangle the complex interplay between addition and multiplication.

    “Suddenly, when you’re using the graph language, it’s seeing all this structure in the problem that you couldn’t really see beforehand,” Maynard said. “That’s the magic.”

    See the full article here.



  • richardmitnick 12:53 pm on December 10, 2021 Permalink | Reply
    Tags: "Gravitational Waves Should Permanently Distort Space-Time", , , , , , , , , Quanta Magazine (US)   

    From Quanta Magazine (US): “Gravitational Waves Should Permanently Distort Space-Time” 

    From Quanta Magazine (US)

    December 8, 2021
    Katie McCormick

    A black hole collision should forever scar space-time. Credit: Alfred Pasieka / Science Source.

    The first detection of gravitational waves in 2016 provided decisive confirmation of Einstein’s general theory of relativity. But another astounding prediction remains unconfirmed: According to general relativity, every gravitational wave should leave an indelible imprint on the structure of space-time. It should permanently strain space, displacing the mirrors of a gravitational wave detector even after the wave has passed.

    Caltech/MIT Advanced aLIGO
    Caltech/MIT Advanced aLIGO detector installation, Livingston, LA, USA.

    Since that first detection almost six years ago, physicists have been trying to figure out how to measure this so-called “memory effect.”

    “The memory effect is absolutely a strange, strange phenomenon,” said Paul Lasky, an astrophysicist at Monash University (AU). “It’s really deep stuff.”

    Their goals are broader than just glimpsing the permanent space-time scars left by a passing gravitational wave. By exploring the links between matter, energy and space-time, physicists hope to come to a better understanding of Stephen Hawking’s black hole information paradox, which has been a major focus of theoretical research for going on five decades. “There’s an intimate connection between the memory effect and the symmetry of space-time,” said Kip Thorne, a physicist at The California Institute of Technology (US) whose work on gravitational waves earned him part of the 2017 Nobel Prize in Physics. “It is connected ultimately to the loss of information in black holes, a very deep issue in the structure of space and time.”

    A Scar in Space-Time

    Why would a gravitational wave permanently change space-time’s structure? It comes down to general relativity’s intimate linking of space-time and energy.

    First consider what happens when a gravitational wave passes by a gravitational wave detector. The Laser Interferometer Gravitational-Wave Observatory (LIGO) has two arms positioned in an L shape [see Livingston, LA installation above]. If you imagine a circle circumscribing the arms, with the center of the circle at the arms’ intersection, a gravitational wave will periodically distort the circle, squeezing it vertically, then horizontally, alternating until the wave has passed. The difference in length between the two arms will oscillate — behavior that reveals the distortion of the circle, and the passing of the gravitational wave.

    According to the memory effect, after the passing of the wave, the circle should remain permanently deformed by a tiny amount. The reason why has to do with the particularities of gravity as described by general relativity.

    The objects that LIGO detects are so far away, their gravitational pull is negligibly weak. But a gravitational wave has a longer reach than the force of gravity. So, too, does the property responsible for the memory effect: the gravitational potential.

    In simple Newtonian terms, a gravitational potential measures how much energy an object would gain if it fell from a certain height. Drop an anvil off a cliff, and the speed of the anvil at the bottom can be used to reconstruct the “potential” energy that falling off the cliff can impart.
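The bookkeeping in the anvil example can be written out explicitly. Here is a minimal Python sketch, where the cliff height and anvil mass are made-up numbers purely for illustration:

```python
import math

g = 9.81        # gravitational acceleration at Earth's surface, m/s^2
height = 50.0   # hypothetical cliff height, m
mass = 40.0     # hypothetical anvil mass, kg

# Energy conservation (ignoring air resistance): the potential energy
# lost in the fall, m*g*h, reappears as kinetic energy, (1/2)*m*v^2,
# so the impact speed is v = sqrt(2*g*h), independent of the mass.
potential_energy = mass * g * height
impact_speed = math.sqrt(2 * g * height)

print(f"potential energy released: {potential_energy:.0f} J")
print(f"impact speed: {impact_speed:.1f} m/s")
```

Measuring the speed at the bottom is exactly what lets you reconstruct the potential drop, which is the sense in which the potential is encoded in the anvil's motion.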

    But in general relativity, where space-time is stretched and squashed in different directions depending on the motions of bodies, a potential dictates more than just the potential energy at a location — it dictates the shape of space-time.

    “The memory is nothing but the change in the gravitational potential,” said Thorne, “but it’s a relativistic gravitational potential.” The energy of a passing gravitational wave creates a change in the gravitational potential; that change in potential distorts space-time, even after the wave has passed.

    How, exactly, will a passing wave distort space-time? The possibilities are literally infinite, and, puzzlingly, these possibilities are also equivalent to one another. In this manner, space-time is like an infinite game of Boggle. The classic Boggle game has 16 six-sided dice arranged in a four-by-four grid, with a letter on each side of each die. Each time a player shakes the grid, the dice clatter around and settle into a new arrangement of letters. Most configurations are distinguishable from one another, but all are equivalent in a larger sense. They are all at rest in the lowest-energy state that the dice could possibly be in. When a gravitational wave passes through, it shakes the cosmic Boggle board, changing space-time from one wonky configuration to another. But space-time remains in its lowest-energy state.

    Super Symmetries

    That characteristic — that you can change the board, but in the end things fundamentally stay the same — suggests the presence of hidden symmetries in the structure of space-time. Within the past decade, physicists have explicitly made this connection.

    The story starts back in the 1960s, when four physicists wanted to better understand general relativity. They wondered what would happen in a hypothetical region infinitely far from all mass and energy in the universe, where gravity’s pull can be neglected, but gravitational radiation cannot. They started by looking at the symmetries this region obeyed.

    They already knew the symmetries of the world according to special relativity, where space-time is flat and featureless. In such a smooth world, everything looks the same regardless of where you are, which direction you’re facing, and the speed at which you’re moving. These properties correspond to the translational, rotational and boost symmetries, respectively. The physicists expected that infinitely far from all the matter in the universe, in a region referred to as “asymptotically flat,” these simple symmetries would reemerge.

    To their surprise, they found an infinite set of symmetries in addition to the expected ones. The new “supertranslation” symmetries indicated that individual sections of space-time could be stretched, squeezed and sheared, and the behavior in this infinitely distant region would remain the same.

    In the 1980s, Abhay Ashtekar, a physicist at The Pennsylvania State University (US), discovered that the memory effect was the physical manifestation of these symmetries. In other words, a supertranslation was exactly what would cause the Boggle universe to pick a new but equivalent way to warp space-time.

    His work connected these abstract symmetries in a hypothetical region of the universe to real effects. “To me that’s the exciting thing about measuring the memory effect — it’s just proving these symmetries are really physical,” said Laura Donnay, a physicist at The Vienna University of Technology (TU Wien)[Technische Universität Wien](AT). “Even very good physicists don’t quite grasp that they act in a nontrivial way and give you physical effects. And the memory effect is one of them.”

    Probing a Paradox

    The point of the Boggle game is to search the seemingly random arrangement of letters on the grid to find words. Each new configuration hides new words, and hence new information.

    Like Boggle, space-time has the potential to store information, which could be the key to solving the infamous black hole information paradox. Briefly, the paradox is this: Information cannot be created or destroyed. So where does the information about particles go after they fall into a black hole and are re-emitted as information-less Hawking radiation?

    In 2016, Andrew Strominger, a physicist at Harvard University (US), along with Stephen Hawking [The University of Cambridge (UK)] and Malcolm Perry [The University of Cambridge (UK) and Queen Mary University of London (UK)] realized that the horizon of a black hole has the same supertranslation symmetries as those in asymptotically flat space. And by the same logic as before, there would be an accompanying memory effect. This meant the infalling particles could alter space-time near the black hole, thereby changing its information content. This offered a possible solution to the information paradox. Knowledge of the particles’ properties wasn’t lost — it was permanently encoded in the fabric of space-time.

    “The fact that you can say something interesting about black hole evaporation is pretty cool,” said Sabrina Pasterski, a theoretical physicist at Princeton University (US). “The starting point of the framework has already had interesting results. And now we’re pushing the framework even further.”

    Pasterski and others have launched a new research program relating statements about gravity and other areas of physics to these infinite symmetries. In chasing the connections, they’ve discovered new, exotic memory effects. Pasterski established a connection between a different set of symmetries and a spin memory effect, where space-time becomes gnarled and twisted from gravitational waves that carry angular momentum.

    A Ghost in the Machine

    Alas, LIGO scientists haven’t yet seen evidence of the memory effect. The change in the distance between LIGO’s mirrors from a gravitational wave is minuscule — about one-thousandth the width of a proton — and the memory effect is predicted to be 20 times smaller.

    LIGO’s placement on our noisy planet worsens matters. Low-frequency seismic noise mimics the memory effect’s long-term changes in the mirror positions, so disentangling the signal from noise is tricky business.

    Earth’s gravitational pull also tends to restore LIGO’s mirrors to their original position, erasing its memory. So even though the kinks in space-time are permanent, the changes in the mirror position — which enable us to measure the kinks — are not. Researchers will need to measure the displacement of the mirrors caused by the memory effect before gravity has time to pull them back down.

    While detecting the memory effect caused by a single gravitational wave is infeasible with current technology, astrophysicists like Lasky and Patricia Schmidt of The University of Birmingham (UK) have thought up clever workarounds. “What you can do is effectively stack up the signal from multiple mergers,” said Lasky, “accumulating evidence in a very statistically rigorous way.”

    Lasky and Schmidt have independently predicted that they’ll need over 1,000 gravitational wave events to accumulate enough statistics to confirm they’ve seen the memory effect. With ongoing improvements to LIGO, as well as contributions from the VIRGO detector in Italy and KAGRA in Japan, Lasky thinks reaching 1,000 detections is a few short years away.
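The logic behind stacking follows from how signal-to-noise grows when independent events are combined. Here is a rough back-of-the-envelope sketch in Python; the per-event signal-to-noise ratio and the detection threshold are illustrative guesses, not LIGO's actual figures:

```python
import math

per_event_snr = 0.1   # assumed memory-effect signal-to-noise of one merger
threshold = 3.0       # assumed statistical threshold for a confident claim

# For N independent events, the stacked signal-to-noise ratio grows
# like sqrt(N), so crossing the threshold requires N >= (threshold/snr)^2.
events_needed = math.ceil((threshold / per_event_snr) ** 2)
print(f"events needed with these assumptions: {events_needed}")
```

With these made-up inputs the estimate lands near a thousand events, the same order of magnitude that Lasky and Schmidt quote.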



    Caltech/MIT Advanced aLigo Hanford, WA, USA installation.

    VIRGO Gravitational Wave interferometer, near Pisa, Italy.

    KAGRA Large-scale Cryogenic Gravitational Wave Telescope Project (JP).

    LIGO Virgo Kagra Masses in the Stellar Graveyard. Credit: Frank Elavsky and Aaron Geller at Northwestern University (US).

    “It is such a special prediction,” said Schmidt. “It’s quite exciting to see if it’s actually true.”

    Correction: December 9, 2021
    The original version of this article attributed the original discovery of the connection between supertranslation symmetries and the memory effect to Andrew Strominger in 2014. In fact, that connection had previously been known. The 2014 discovery by Strominger was between supertranslation symmetries, the memory effect and a third topic.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine (US) is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 12:16 pm on November 12, 2021 Permalink | Reply
    Tags: "A New Theory for Systems That Defy Newton’s Third Law", Quanta Magazine (US)

    From Quanta Magazine (US): “A New Theory for Systems That Defy Newton’s Third Law”

    From Quanta Magazine (US)

    November 11, 2021
    Stephen Ornes

    By programming a fleet of robots to behave nonreciprocally — blue cars react to red cars differently than red cars react to blue cars — a team of researchers elicited spontaneous phase transitions. Credit: Kristen Norman for Quanta Magazine.

    Newton’s third law tells us that for every action, there’s an equal reaction going the opposite way. It has been reassuring us for more than 300 years, explaining why we don’t fall through the floor (the floor pushes up on us too), and why paddling a boat makes it glide through water. When a system is in equilibrium, no energy goes in or out and such reciprocity is the rule. Mathematically, these systems are elegantly described with statistical mechanics, the branch of physics that explains how collections of objects behave. This allows researchers to fully model the conditions that give rise to phase transitions in matter, when one state of matter transforms into another, such as when water freezes.

    But many systems exist and persist far from equilibrium. Perhaps the most glaring example is life itself. We’re kept out of equilibrium by our metabolism, which converts matter into energy. A human body that settles into equilibrium is a dead body.

    In such systems, Newton’s third law becomes moot. Equal-and-opposite falls apart. “Imagine two particles,” said Vincenzo Vitelli, a condensed matter theorist at The University of Chicago (US), “where A interacts with B in a different way than how B interacts with A.” Such nonreciprocal relationships show up in systems like neuron networks and particles in fluids and even, on a larger scale, in social groups. Predators eat prey, for example, but prey doesn’t eat its predators.

    For these unruly systems, statistical mechanics falls short in representing phase transitions. Out of equilibrium, nonreciprocity dominates. Flocking birds show how easily the law is broken: Because they can’t see behind them, individuals change their flight patterns in response to the birds ahead of them. So bird A doesn’t interact with bird B in the same way that bird B interacts with bird A; it’s not reciprocal. Cars barreling down a highway or stuck in traffic are similarly nonreciprocal. Engineers and physicists who work with metamaterials — which get their properties from structure, rather than substance — have harnessed nonreciprocal elements to design acoustic, quantum and mechanical devices.

    Many of these systems are kept out of equilibrium because individual constituents have their own power source — ATP for cells, gas for cars. But all these extra energy sources and mismatched reactions make for a complex dynamical system beyond the reach of statistical mechanics. How can we analyze phases in such ever-changing systems?

    Vitelli and his colleagues see an answer in mathematical objects called exceptional points. Generally, an exceptional point in a system is a singularity, a spot where two or more characteristic properties become indistinguishable and mathematically collapse into one. At an exceptional point, the mathematical behavior of a system differs dramatically from its behavior at nearby points, and exceptional points often describe curious phenomena in systems — like lasers — in which energy is gained and lost continuously.
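The simplest illustration of an exceptional point is a standard textbook toy model, not anything from the paper itself: a 2x2 matrix describing two coupled modes, one with gain and one with loss. Its two eigenvalues merge into one as the coupling strength drops to the gain/loss rate:

```python
import numpy as np

def mode_eigenvalues(coupling, gamma=1.0):
    """Eigenvalues of a two-mode system with gain +i*gamma and loss -i*gamma.

    They equal +/- sqrt(coupling**2 - gamma**2): real and distinct for
    coupling > gamma, but coalescing at the exceptional point
    coupling = gamma, where the two characteristic properties of the
    system become mathematically indistinguishable.
    """
    H = np.array([[1j * gamma, coupling],
                  [coupling, -1j * gamma]])
    return np.linalg.eigvals(H)

for g in (2.0, 1.0, 0.5):
    ev = mode_eigenvalues(g)
    print(f"coupling={g}: eigenvalue separation = {abs(ev[0] - ev[1]):.3f}")
```

At coupling 2.0 the separation is about 3.464; at the exceptional point, coupling 1.0, it collapses to zero, which is the "collapse into one" described above.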

    Now the team has found [Nature] that these exceptional points also control phase transitions in nonreciprocal systems. Exceptional points aren’t new; physicists and mathematicians have studied them for decades in a variety of settings. But they’ve never been associated so generally with this type of phase transition. “That’s what no one has thought about before, using these in the context of nonequilibrium systems,” said the physicist Cynthia Reichhardt of DOE’s Los Alamos National Laboratory (US). “So you can bring all the machinery that we already have about exceptional points to study these systems.”

    The new work also draws connections among a range of areas and phenomena that, for years, haven’t seemed to have anything to say to each other. “I believe their work represents rich territory for mathematical development,” said Robert Kohn of the Courant Institute of Mathematical Sciences at New York University (US).

    When Symmetry Breaks

    The work began not with birds or neurons, but with quantum weirdness. A few years ago, two of the authors of the new paper — Ryo Hanai, a postdoctoral researcher at The University of Chicago (US), and Peter Littlewood, Hanai’s adviser — were investigating a kind of quasiparticle called a polariton. (Littlewood is on the scientific advisory board of The Flatiron Institute Center for Computational Astrophysics (US), a research division of The Simons Foundation (US), which also funds this editorially independent publication.)

    A quasiparticle isn’t a particle per se. It’s a collection of quantum behaviors that, en masse, look as if they should be connected to a particle. A polariton shows up when photons (the particles responsible for light) couple with excitons (which themselves are quasiparticles). Polaritons have exceptionally low mass, which means they can move very fast and can form a state of matter called a Bose-Einstein condensate (BEC) — in which separate atoms all collapse into a single quantum state — at higher temperatures than other particles.

    However, using polaritons to create a BEC is complicated. It’s leaky. Some photons continuously escape the system, which means light must be pumped continuously into the system to make up the difference. That means it’s out of equilibrium. “From the theory side, that’s what was interesting to us,” said Hanai.

    To Hanai and Littlewood, it was analogous to creating lasers. “Photons are leaking out all the time, but nonetheless you maintain some coherent state,” said Littlewood. This is because of the constant addition of new energy powering the laser. They wanted to know: How does being out of equilibrium affect the transition into BEC or other exotic quantum states of matter? And, in particular, how does that change affect the system’s symmetry?

    The concept of symmetry is at the heart of phase transitions. Liquids and gases are considered highly symmetric because if you found yourself hurtling through them in a molecule-size jet, the spray of particles would look the same in every direction. Fly your ship through a crystal or other solid, though, and you’ll see that molecules occupy straight rows, with the patterns you see determined by where you are. When a material changes from a liquid or gas to a solid, researchers say its symmetry “breaks.”

    In physics, one of the most well-studied phase transitions shows up in magnetic materials. The atoms in a magnetic material like iron or nickel each have something called a magnetic moment, which is basically a tiny individual magnetic field. In magnets, these magnetic moments all point in the same direction and collectively produce a magnetic field. But if you heat the material enough — even with a candle, in high school science demonstrations — those magnetic moments become jumbled. Some point one way, and others a different way. The overall magnetic field is lost, and symmetry is restored. When it cools, the moments again align, breaking that free-form symmetry, and magnetism is restored.

    The flocking of birds can also be viewed as a breaking of symmetry: Instead of flying in random directions, they align like the spins in a magnet. But there is an important difference: A ferromagnetic phase transition is easily explained using statistical mechanics because it’s a system in equilibrium.

    But birds — and cells, bacteria and cars in traffic — add new energy to the system. “Because they have a source of internal energy, they behave differently,” said Reichhardt. “And because they don’t conserve energy, it appears out of nowhere, as far as the system is concerned.”

    Beyond Quantum

    Hanai and Littlewood started their investigation into BEC phase transitions by thinking about ordinary, well-known phase transitions. Consider water: Even though liquid water and steam look different, Littlewood said, there’s basically no symmetry distinction between them. Mathematically, at the point of the transition, the two states are indistinguishable. In a system in equilibrium, that point is called a critical point.

    Critical phenomena show up all over the place — in cosmology, high-energy physics, even biological systems. But in all these examples, researchers couldn’t find a good model for the condensates that form when quantum mechanical systems are coupled to the environment, undergoing constant damping and pumping.

    Hanai and Littlewood suspected that critical points and exceptional points had to share some important properties, even if they clearly arose from different mechanisms. “Critical points are sort of an interesting mathematical abstraction,” said Littlewood, “where you can’t tell the difference between these two phases. Exactly the same thing happens in these polariton systems.”

    They also knew that under the mathematical hood, a laser — technically a state of matter — and a polariton-exciton BEC had the same underlying equations. In a paper published in 2019 [Physical Review Letters], the researchers connected the dots, proposing a new and, crucially, universal mechanism by which exceptional points give rise to phase transitions in quantum dynamical systems.

    “We believe that was the first explanation for those transitions,” said Hanai.

    At about the same time, Hanai said, they realized that even though they were studying a quantum state of matter, their equations weren’t dependent on quantum mechanics. Did the phenomenon they were studying apply to even bigger and more general phenomena? “We started to suspect that this idea [connecting a phase transition to an exceptional point] could be applied to classical systems as well.”

    But to chase that idea, they’d need help. They approached Vitelli and Michel Fruchart, a postdoctoral researcher in Vitelli’s lab, who study unusual symmetries in the classical realm. Their work extends to metamaterials, which are rich in nonreciprocal interactions; they may, for example, exhibit different reactions to being pressed on one side or another and can also display exceptional points.

    Vitelli and Fruchart were immediately intrigued. Was some universal principle playing out in the polariton condensate, some fundamental law about systems where energy isn’t conserved?

    Getting in Sync

    Now a quartet, the researchers began looking for general principles underpinning the connection between nonreciprocity and phase transitions. For Vitelli, that meant thinking with his hands. He has a habit of building physical mechanical systems to illustrate difficult, abstract phenomena. In the past, for example, he’s used Legos to build lattices that become topological materials that move differently on the edges than in the interior.

    “Even though what we’re talking about is theoretical, you can demonstrate it with toys,” he said.

    But for exceptional points, he said, “Legos aren’t enough.” He realized that it would be easier to model nonreciprocal systems using building blocks that could move on their own but were governed by nonreciprocal rules of interaction.

    So the team whipped up a fleet of two-wheeled robots programmed to behave nonreciprocally. These robot assistants are small, cute and simple. The team programmed them all with certain color-coded behaviors. Red ones would align with other reds, and the blues with other blues. But here’s the nonreciprocity: The red ones would also orient themselves in the same direction as the blues, while the blues would point in the opposite direction of the reds. This arrangement guarantees that no agent will ever get what it wants.

    Each robot is programmed to align with others of the same color, but they’re also programmed to behave nonreciprocally: Red ones want to align with blue ones, while blue ones want to point in the opposite direction. The result is a spontaneous phase transition, as they all begin rotating in place.

    The group scattered the robots across the floor and turned them all on at the same time. Almost immediately, a pattern emerged. The robots began to move, turning slowly but simultaneously, until they were all rotating, basically in place, in the same direction. Rotation wasn’t built into the robots, Vitelli said. “It’s due to all these frustrated interactions. They’re perpetually frustrated in their motions.”
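A two-agent caricature of these rules is easy to simulate. The dynamics below are an assumed toy model, not the authors' actual robot controller: a "red" heading turns toward the "blue" one, while the blue heading turns away from the red one, so neither relationship is reciprocal:

```python
import math

dt = 0.01                          # integration time step
theta_red, theta_blue = 0.0, 1.0   # initial headings, radians

for _ in range(2000):
    # Red aligns with blue; blue anti-aligns with red (targets red + pi).
    d_red = math.sin(theta_blue - theta_red)
    d_blue = math.sin(theta_red + math.pi - theta_blue)
    theta_red += dt * d_red
    theta_blue += dt * d_blue

# The gap between the headings stays locked while both drift together:
# the frustrated pair rotates endlessly instead of settling down.
print(f"heading gap: {theta_blue - theta_red:.3f} rad (started at 1.000)")
print(f"red heading after 20 time units: {theta_red:.3f} rad")
```

The gap between the two headings never closes, and both keep turning at a steady rate: a two-agent version of the robots' perpetual, frustrated rotation.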

    It’s tempting to let the charm of a fleet of spinning, frustrated robots overshadow the underlying theory, but those rotations exactly demonstrated a phase transition for a system out of equilibrium. And the symmetry-breaking that they demonstrated lines up mathematically with the same phenomenon Hanai and Littlewood found when looking at exotic quantum condensates.

    To better explore that comparison, the researchers turned to the mathematical field of bifurcation theory. A bifurcation is a qualitative change in the behavior of a dynamical system, often taking the form of one state splitting into two.

    The researchers also created simulations of two groups of agents moving at constant speed with different relationships to each other. At left, the two groups move randomly. In the next frame, blue and red agents fly in the same direction, spontaneously breaking symmetry and displaying flocking behavior. When the two groups fly in opposite directions, there’s a similar antiflocking phase. In a nonreciprocal situation, at right, a new phase appears where they run in circles — another case of spontaneous symmetry breaking.

    Mathematicians draw bifurcation diagrams (the simplest look like pitchforks) to analyze how the states of a system respond to changes in their parameters. Often, a bifurcation divides stability from instability; it may also divide different types of stable states. It’s useful in studying systems associated with mathematical chaos, where small changes in the starting point (one parameter at the outset) can trigger outsize changes in the outcomes. The system shifts from non-chaotic to chaotic behaviors through a cascade of bifurcation points. Bifurcations have a long-standing connection to phase transitions, and the four researchers built on that link to better understand nonreciprocal systems.
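The pitchfork shape comes from the simplest normal form, a standard textbook example rather than anything specific to the new paper: the equation dx/dt = r*x - x**3, whose single steady state splits as the parameter r crosses zero:

```python
import math

def steady_states(r):
    """Fixed points of dx/dt = r*x - x**3, the canonical pitchfork.

    For r <= 0 the only fixed point is x = 0 (stable). For r > 0 the
    zero state turns unstable and two new stable states appear at
    +/- sqrt(r): one branch splits into three, like a pitchfork's tines.
    """
    if r <= 0:
        return [0.0]
    return [-math.sqrt(r), 0.0, math.sqrt(r)]

print(steady_states(-1.0))  # before the bifurcation: [0.0]
print(steady_states(1.0))   # after: [-1.0, 0.0, 1.0]
```

Plotting these fixed points against r traces out the pitchfork diagram, with the parameter dividing one stable regime from another.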

    That meant they also had to think about the energy landscape. In statistical mechanics, the energy landscape of a system shows how energy changes form (such as from potential to kinetic) in space. At equilibrium, phases of matter correspond to the minima — the valleys — of the energy landscape. But this interpretation of phases of matter requires the system to end up at those minima, said Fruchart.

    Vitelli said perhaps the most important aspect of the new work is that it reveals the limitations of the existing language that physicists and mathematicians use to describe systems in flux. When equilibrium is a given, he said, statistical mechanics frames the behavior and phenomena in terms of minimizing the energy — since no energy is added or lost. But when a system is out of equilibrium, “by necessity, you can no longer describe it with our familiar energy language, but you still have a transition between collective states,” he said. The new approach relaxes the fundamental assumption that to describe a phase transition you must minimize energy.

    “When we assume there is no reciprocity, we can no longer define our energy,” Vitelli said, “and we have to recast the language of these transitions into the language of dynamics.”

    Looking for Exotic Phenomena

    The work has wide implications. To demonstrate how their ideas work together, the researchers analyzed a range of nonreciprocal systems. Because the kinds of phase transitions they’ve connected to exceptional points can’t be described by energy considerations, these exceptional-point symmetry shifts can only occur in nonreciprocal systems. That suggests that beyond reciprocity lie a range of phenomena in dynamical systems that could be described with the new framework.

    And now that they’ve laid the foundation, Littlewood said, they’ve begun to investigate just how widely it can be applied. “We’re beginning to generalize this to other dynamical systems we didn’t think had the same properties,” he said.

    Vitelli said almost any dynamical system with nonreciprocal behaviors would be worth probing with this new approach. “It’s really a step towards a general theory of collective phenomena in systems whose dynamics is not governed by an optimization principle.”

    Littlewood said he’s most excited about looking for phase transitions in one of the most complicated dynamical systems of all — the human brain. “Where we’re going next is neuroscience,” he said. He points out that neurons have been shown to come in “many flavors,” sometimes excited, sometimes inhibited. “That is nonreciprocal, pretty clearly.” That means their connections and interactions might be accurately modeled using bifurcations, and by looking for phase transitions in which the neurons synchronize and show cycles. “It’s a really exciting direction we’re exploring,” he said, “and the mathematics works.”

    Mathematicians are excited too. Kohn, at the Courant Institute, said the work may have connections to other mathematical topics — like turbulent transport or fluid flow — that researchers haven’t yet recognized. Nonreciprocal systems may turn out to exhibit phase transitions or other spatial patterns for which an appropriate mathematical language is currently lacking.

    “This work may be full of new opportunities, and maybe we’ll need new math,” Kohn said. “That’s sort of the heart of how mathematics and physics connect, to the benefit of both. Here’s a sandbox that we haven’t noticed so far, and here’s a list of things we might do.”

    See the full article here.



  • richardmitnick 2:07 pm on November 10, 2021 Permalink | Reply
    Tags: "Laws of Logic Lead to New Restrictions on the Big Bang", “de Sitter” space, Cosmologists Close in on Logical Laws for the Big Bang, Inflation theory, Quanta Magazine (US), The universe’s first moments have always been a mysterious era when the quantum nature of gravity would have been on full display.

    From Quanta Magazine (US): “Laws of Logic Lead to New Restrictions on the Big Bang”

    From Quanta Magazine (US)

    November 10, 2021
    Charlie Wood

    Cosmologists Close in on Logical Laws for the Big Bang

    Physicists are translating commonsense principles into strict mathematical constraints on how our universe must have behaved at the beginning of time.

    Patterns in the ever-expanding arrangement of galaxies might reveal secrets of the universe’s first moments. Credit: Dave Whyte for Quanta Magazine.


    For over 20 years, physicists have had reason to feel envious of certain fictional fish: specifically, the fish inhabiting the fantastic space of M.C. Escher’s Circle Limit III woodcut, which shrink to points as they approach the circular boundary of their ocean world. If only our universe had the same warped shape, theorists lament, they might have a much easier time understanding it.

    M.C. Escher’s Circle Limit III (1959). Credit: M.C. Escher.

    Escher’s fish lucked out because their world comes with a cheat sheet — its edge. On the boundary of an Escher-esque ocean, anything complicated happening inside the sea casts a kind of shadow, which can be described in relatively simple terms. In particular, theories addressing the quantum nature of gravity can be reformulated on the edge in well-understood ways. The technique gives researchers a back door for studying otherwise impossibly complicated questions. Physicists have spent decades exploring this tantalizing link.

    Inconveniently, the real universe looks more like the Escher world turned inside out. This “de Sitter” space has a positive curvature; it expands continuously everywhere. With no obvious boundary on which to study the straightforward shadow theories, theoretical physicists have been unable to transfer their breakthroughs from the Escher world.

    “The closer we get to the real world, the fewer tools we have and the less we understand the rules of the game,” said Daniel Baumann, a cosmologist at The University of Amsterdam [Universiteit van Amsterdam](NL).

    But some Escher advances may finally be starting to bleed through. The universe’s first moments have always been a mysterious era when the quantum nature of gravity would have been on full display. Now multiple groups are converging on a novel way to indirectly evaluate descriptions of that flash of creation. The key is a new notion of a cherished law of reality known as unitarity, the expectation that all probabilities must add up to 100%. By determining what fingerprints a unitary birth of the universe should have left behind, researchers are developing powerful tools to check which theories clear this lowest of bars in our shifty and expanding space-time.

    Unitarity in de Sitter space “was not understood at all,” said Massimo Taronna, a theoretical physicist at The National Institute for Nuclear Physics [Istituto Nazionale di Fisica Nucleare] (IT). “There is a huge jump that has happened in the last couple of years.”

    Spoiler Alert

    The unfathomable ocean that theorists aim to plumb is a brief but dramatic stretch of space and time that many cosmologists believe set the stage for all we see today. During this hypothetical era, known as inflation, the infant universe would have ballooned at a truly incomprehensible rate, inflated by an unknown entity akin to Dark Energy.

    Cosmologists are dying to know exactly how inflation might have happened and what exotic fields might have driven it, but this era of cosmic history remains hidden. Astronomers can see only the output of inflation — the arrangement of matter hundreds of thousands of years after the Big Bang, as revealed by the cosmos’s earliest light.
    CMB per European Space Agency (EU) Planck.

    Their challenge is that countless inflationary theories match the final observable state. Cosmologists are like film buffs struggling to narrow down the possible plots of Thelma and Louise from its final frame: the Thunderbird hanging frozen in midair.

    Yet the task may not be impossible. Just as currents in the Escher-like ocean can be deciphered from their shadows on its boundary, perhaps theorists can read the inflationary story from its final cosmic scene. In recent years, Baumann and other physicists have sought to do just that with a strategy called bootstrapping.

    Cosmic bootstrappers strive to winnow the crowded field of inflationary theories with little more than logic. The general idea is to disqualify theories that fly in the face of common sense — as translated into stringent mathematical requirements. In this way, they “hoist themselves up by their bootstraps,” using math to evaluate theories that can’t be distinguished using current astronomical observations.

    One such commonsense property is unitarity, an elevated name for the obvious fact that the sum of the odds of all possible events must be 1. Put simply, flipping a coin must produce a heads or a tails. Bootstrappers can tell at a glance whether a theory in the Escher-like “anti-de Sitter” space is unitary by looking at its shadow on the boundary, but inflationary theories have long resisted such simple treatment, because the expanding universe has no obvious edge.

    Physicists can check a theory for unitarity by laboriously calculating its predictions from moment to moment and verifying that the odds always add up to 1, the equivalent of watching a whole movie with an eye for plot holes. What they really want is a way to glance at the end of an inflationary theory — the film’s final frame — and instantly know whether unitarity has been violated during any previous scene.

    But the concept of unitarity is linked closely to the passage of time, and they’ve struggled to understand what shape the fingerprints of unitarity would take in this final frame, which is a static, timeless snapshot. “For many years the confusion was, ‘How the hell can I get information about time evolution … in an object where time doesn’t exist at all?’” said Enrico Pajer, a theoretical cosmologist at The University of Cambridge (UK).

    Last year, Pajer helped bring the confusion to an end. He and his colleagues found a way to figure out if a particular theory of inflation is unitary by looking only at the universe it produces.

    In the Escher world, checking shadow theories for unitarity can be done on a cocktail napkin. These boundary theories are, in practice, quantum theories of the sort we might use to understand particle collisions. To check one for unitarity, physicists describe two particles pre-crash with a mathematical object called a matrix, and post-crash with another matrix. For a unitary collision, the product of the two matrices is the identity, the matrix equivalent of 1.

    Where do physicists get these matrices? They start with the pre-crash matrix. When space holds still, a movie of a particle collision looks the same played forward or backward, so researchers can apply a simple operation to the initial matrix to find the final matrix. Multiply those two together, check the product, and they’re done.
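    The matrix check can be illustrated with a toy numerical sketch (an illustration of generic quantum unitarity, not the boundary-theory computation the physicists actually perform): for a unitary process, the matrix multiplied by its conjugate transpose gives the identity, so total probability is conserved.

    ```python
    import numpy as np

    # Toy illustration of the unitarity check described above.
    # (A generic sketch, not the actual boundary-theory calculation.)
    rng = np.random.default_rng(0)

    # QR decomposition of a random complex matrix yields a unitary Q,
    # standing in for the "pre-crash" matrix of a collision.
    m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    q, _ = np.linalg.qr(m)

    # The "movie played backward" operation: the conjugate transpose.
    q_reversed = q.conj().T

    # For a unitary process, the product of the two matrices is the identity.
    product = q_reversed @ q
    assert np.allclose(product, np.eye(4))

    # Equivalently, probabilities always sum to 1: the norm of any
    # quantum state is unchanged by the process.
    state = rng.normal(size=4) + 1j * rng.normal(size=4)
    state /= np.linalg.norm(state)
    assert np.isclose(np.linalg.norm(q @ state), 1.0)
    ```

    A non-unitary matrix would fail both assertions: some probability would leak away or appear from nowhere between the start and end of the "movie."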

    But expanding space ruins everything. Cosmologists can work out the post-inflation matrix. Unlike particle collisions, however, an inflating cosmos looks quite different in reverse, so until recently it was unclear how to determine the pre-inflation matrix.

    “For cosmology we would have to exchange the end of inflation with the beginning of inflation,” Pajer said, “which is crazy.”

    Last year, Pajer, along with his colleagues Harry Goodhew and Sadra Jazayeri, figured out how to calculate the initial matrix. The Cambridge group rewrote the final matrix to accommodate complex numbers as well as real numbers. They also defined a transformation involving swapping positive energies for negative energies — analogous to what physicists might do in the particle collision context.

    But had they found the right transformation?

    Pajer then set out to verify that these two matrices really do capture unitarity. Using a more generic theory of inflation, Pajer and Scott Melville, also at Cambridge, played the birth of the universe forward frame by frame, looking for unitarity violations in the traditional way. In the end, they showed that this painstaking process gave the same result as the matrix method.

    The new method allows them to skip the moment-by-moment calculation. For a general theory involving particles of any mass and any spin communing via any force, they could see if it is unitary by checking the final outcome. They had discovered how to reveal the plot without watching the movie.

    The new matrix test, known as the cosmological optical theorem, soon proved its power. Pajer and Melville found that a lot of possible theories violated unitarity. In fact, the researchers ended up with so few valid possibilities that they wondered if they could make some predictions. Even without a specific theory of inflation in hand, could they tell astronomers what to search for?

    Cosmic Triangle Test

    One revealing imprint of inflation is the way galaxies are distributed across the sky. The simplest pattern is the two-point correlation function, which, roughly speaking, gives the odds of finding two galaxies separated by particular distances. In other words, it tells you where the universe’s matter is.
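    The idea of a two-point correlation can be illustrated with a minimal pair-counting sketch (toy 2D data and a naive estimator, not a production cosmology pipeline): count pairs of "galaxies" at each separation and compare with a uniform random catalog of the same size.

    ```python
    import numpy as np

    # Toy two-point clustering estimate: excess of galaxy pairs over a
    # random catalog at each separation. Assumptions: 2D positions in a
    # unit box, naive O(n^2) pair counting, made-up clustered data.
    rng = np.random.default_rng(1)

    def pair_separations(points):
        """All pairwise distances between rows of an (n, 2) array."""
        diff = points[:, None, :] - points[None, :, :]
        d = np.sqrt((diff ** 2).sum(-1))
        iu = np.triu_indices(len(points), k=1)
        return d[iu]

    # "Galaxies" clumped around a few dense spots, versus a uniform catalog.
    centers = rng.uniform(0, 1, size=(5, 2))
    galaxies = (centers[rng.integers(0, 5, 300)]
                + rng.normal(scale=0.02, size=(300, 2)))
    randoms = rng.uniform(0, 1, size=(300, 2))

    bins = np.linspace(0, 0.2, 11)
    dd, _ = np.histogram(pair_separations(galaxies), bins=bins)
    rr, _ = np.histogram(pair_separations(randoms), bins=bins)

    # A large excess of close pairs over random signals clustering.
    xi = dd / np.maximum(rr, 1) - 1.0
    assert xi[0] > 0  # clustered data has far more close pairs than random
    ```

    Real surveys use far more careful estimators and three-dimensional positions, but the logic is the same: the shape of this excess as a function of separation is the statistic that all inflationary theories must reproduce.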

    Our universe’s matter is spread out in a special way, observations have found, with dense spots stuffed full of galaxies that come in a variety of sizes. The theory of inflation arose in part to explain this peculiar finding.

    Lucy Reading-Ikkanda/Quanta Magazine.

    The universe started out quite smooth overall, the thinking goes, but quantum wiggles imprinted space with tiny dollops of extra matter. As space expanded, these dense spots stretched out even as the tiny ripples continued to arise. When inflation stopped, the young cosmos was left with dense spots ranging from small to large, which would go on to become galaxies and galaxy clusters.

    All theories of inflation nail this two-point correlation function. To distinguish between competing theories, researchers need to measure subtler, higher-point correlations — relationships between the angles formed by a trio of galaxies, for instance.

    Typically, cosmologists propose a theory of inflation involving certain exotic particles, and then play it forward to calculate the three-point correlation functions it would leave in the sky, giving astronomers a target to search for. In this way, researchers tackle theories one by one. “There are many, many, many possible things you could look for. Infinitely many, in fact,” said Daan Meerburg, a cosmologist at The University of Groningen [Rijksuniversiteit Groningen] (NL).

    Pajer has turned that process around. Inflation is thought to have left ripples in the fabric of space in the form of gravitational waves. Pajer and his collaborators started with all possible three-point functions describing these gravitational waves and checked them with the matrix test, eliminating any functions that failed unitarity.

    In the case of a certain type of gravitational wave, the group found that unitary three-point functions are few and far between. In fact, only three pass the test, the researchers announced in a preprint posted in September. The result “is very remarkable,” said Meerburg, who was not involved. If astronomers ever detect primordial gravitational waves — and efforts are ongoing — these will be the first signs of inflation to look for.

    Positive Signs

    The cosmological optical theorem guarantees that the probabilities of all possible events add up to 1, just as a coin is certain to have two sides. But there is another way of thinking about unitarity: The odds of each event must be positive. No coin can have a negative chance of landing on tails.

    Victor Gorbenko, a theoretical physicist at Stanford University (US), Lorenzo Di Pietro of The University of Trieste [Università degli Studi di Trieste](IT), and Shota Komatsu of The European Organization for Nuclear Research [Organisation européenne pour la recherche nucléaire] [Europäische Organisation für Kernforschung](CH) [CERN] recently approached unitarity in de Sitter space from this perspective. What would the sky look like, they wondered, in bizarro universes that broke this law of positivity?

    Taking inspiration from the Escher world, they were intrigued by the fact that anti-de Sitter space and de Sitter space share one fundamental feature: Viewed properly, each can look the same at all scales. Zoom in near the boundary of Escher’s Circle Limit III woodcut, and the shrimpy fish have identical proportions to the whoppers in the middle. Similarly, quantum ripples in the inflating universe generated dense spots large and small. This common property, “conformal symmetry,” allowed Gorbenko’s group to port a popular mathematical technique for breaking apart boundary theories between the two worlds.

    Video: David Kaplan explores the leading cosmological explanation for the origin of the universe.
    Filming by Petr Stepanek. Editing and motion graphics by MK12. Music by Pete Calandra and Scott P. Schreer.

    In practice, this tool let them take the end of inflation in any universe — the hodgepodge of density ripples — and break it into a sum of wavelike patterns. For unitary universes, they found, each wave would have a positive coefficient. Any theories predicting negative waves would be no good. They described their test in a preprint in August. Simultaneously, an independent group led by João Penedones of The EPFL (Swiss Federal Institute of Technology in Lausanne) [École polytechnique fédérale de Lausanne] (CH) arrived at the same result.
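    The flavor of such a test can be conveyed with a toy decomposition (purely illustrative: the real analysis expands cosmological correlators in conformal partial waves, not the simple cosine basis used here). Break a simulated pattern of ripples into wavelike components, then check the sign of every coefficient.

    ```python
    import numpy as np

    # Toy positivity test: decompose a "ripple pattern" into waves and
    # require every coefficient to be positive. Illustrative only; the
    # actual method uses conformal partial waves, not cosines.
    x = np.linspace(0, 2 * np.pi, 256, endpoint=False)

    def wave_coefficients(pattern, n_waves=5):
        """Project a pattern onto cosine waves cos(k*x), k = 1..n_waves."""
        return np.array([2 * np.mean(pattern * np.cos(k * x))
                         for k in range(1, n_waves + 1)])

    def passes_positivity(pattern):
        c = wave_coefficients(pattern)
        # Coefficients that are essentially zero are allowed;
        # genuinely negative ones flag a forbidden theory.
        return bool(np.all(c > -1e-12))

    # A pattern built from waves with positive weights passes the test...
    good = 0.5 * np.cos(x) + 0.3 * np.cos(2 * x) + 0.1 * np.cos(3 * x)
    # ...while a negative weight anywhere fails it.
    bad = 0.5 * np.cos(x) - 0.3 * np.cos(2 * x)

    print(passes_positivity(good))  # positive wave content only
    print(passes_positivity(bad))   # contains a negative wave
    ```

    The power of the approach is the same as in this sketch: the final pattern alone, decomposed in the right basis, reveals whether the theory that produced it obeyed the rules.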

    The positivity test is more exact than the cosmological optical theorem, but less ready for real data. Both positivity groups made simplifications, including stripping out gravity and assuming flawless de Sitter structure, that will need to be modified to fit our messy, gravitating universe. But Gorbenko calls these steps “concrete and doable.”

    Cause for Hope

    Now that bootstrappers are closing in on the notion of what unitarity looks like for the outcome of a de Sitter expansion, they can move on to other classic bootstrapping rules, such as the expectation that causes should come before effects. It’s not currently clear how to see the traces of causality in a timeless snapshot, but the same was once true of unitarity.

    “That’s the most exciting thing that we still don’t fully understand,” said Taronna, who has been working with Charlotte Sleight, a theoretical physicist at Durham University (UK), to reformulate Escher-world results for a more realistic universe. “We don’t know what is not causal in de Sitter.”

    As bootstrappers learn the ropes of de Sitter space, they hope to zero in on a few correlation functions that next-generation telescopes might actually spot — and the few theories of inflation, or even gravity, that could have produced them. If they can pull it off, our swollen universe might someday look as transparent as the world of Escher’s fish.

    “After many years of working in de Sitter,” Taronna said, “we are finally starting to understand what the rules of a mathematically consistent theory of quantum gravity are.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine (US) is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 11:43 am on November 4, 2021 Permalink | Reply
    Tags: "Researchers Revise Recipe for Building a Rocky Planet Like Earth", , , , , Quanta Magazine (US), Seeing infant disks of dust and gas surrounding young stars.   

    From Quanta Magazine (US) : “Researchers Revise Recipe for Building a Rocky Planet Like Earth” 

    From Quanta Magazine (US)

    November 3, 2021
    Jonathan O’Callaghan

    Pebble accretion may explain where Earth and its water came from. Credit: Ana Kova/Quanta Magazine.

    Bob O’Dell wasn’t quite sure what he was looking at. It was 1992, and he had just got his hands on new images from the Hubble Space Telescope that zoomed in on young stars in the Orion Nebula. O’Dell had been hoping to study the nebula itself, an interesting region of star formation relatively close to Earth. Yet something else caught his attention. Several of the stars didn’t look like stars at all, but were instead enveloped by a dim shroud. They seemed to form a “silhouette against the nebula,” said O’Dell.

    At first O’Dell and his colleagues thought they might be seeing an image artifact resulting from Hubble’s warped primary mirror, which had been molded ever so slightly into the wrong shape and would be fixed by a space shuttle mission in 1993. “We really wondered if this was a residual effect of the flawed primary mirror,” said O’Dell, who had been Hubble’s project scientist. Soon, however, they saw more and more of the phenomena in the images, even after the mirror was fixed, and realized it wasn’t a flaw at all. They were actually seeing infant disks of dust and gas surrounding young stars. They were, for the first time, witnessing the birth of planets.

    O’Dell’s discovery of protoplanetary disks sparked a transformation in our understanding of planet formation. In the following decades, astronomers would realize that our classical idea of how planets form — small rocks clump into bigger rocks, which then clump further — might not be correct. For the gas giants, such as Jupiter and Saturn, a model called pebble accretion, in which a dominant object gobbles up smaller rocks, would come to replace the old views of how such monstrous worlds come to be.

    These Hubble Space Telescope images provided the first direct evidence of protoplanetary disks around distant stars. Credit: C.R. O’Dell (Rice University (US)) and The National Aeronautics and Space Administration (US).

    The rocky worlds of the inner solar system are trickier. Planetary scientists have intensely debated whether pebble accretion can explain how Earth and its neighbors arose, or whether the older view is still most likely. The clash has played out over the past few years in journal articles [Science Advances] and even, more recently, in a castle in the Bavarian Alps.

    The debate doesn’t only affect great mysteries such as the origin of Earth — and its water. The answer will also help reveal just how prevalent Earth-like worlds are across the universe. Are such worlds a cosmic fluke, merely a combination of chance events that make the prospects of life elsewhere in the universe slim? Or are habitable planets a certainty in solar systems with just the right ingredients, making us but one of many?

    “It’s part of the human experience to ask how the world around us formed,” said Konstantin Batygin, a planetary scientist at The California Institute of Technology (US). If it formed from pebbles, it will have huge consequences for how many more worlds like ours are out there.

    Invasion of the Pebbles

    In 2012, Anders Johansen and Michiel Lambrechts, astronomers at Lund University [Lunds universitet] (SE), made a bold prediction [Astronomy & Astrophysics]. For much of the preceding few decades, astronomers had believed that planets such as Earth and Jupiter grew from the gradual accumulation of asteroid-like objects, planetesimals, that collided with each other in young solar systems. This process, known as planetesimal accretion, would be slow — perhaps taking up to 100 million years to form a planet. But it made sense. We could see lots of asteroids in our solar system, and it seemed reasonable to assume there were many more when it formed 4.5 billion years ago, enough to form all the worlds we see today.

    The ALMA telescope array can create exceptionally high resolution images, allowing researchers to examine planet-forming disks around other stars. Credit: Sergio Otarola (The European Southern Observatory [Observatoire européen austral][Europäische Südsternwarte](EU)(CL)/The National Astronomical Observatory of Japan [国立天文台](JP)/The National Radio Astronomy Observatory (US))

    But there were problems. No one was quite sure how planetesimals themselves formed — how they made the jump from tiny dust grains to city-size rocks, a problem known as the meter-size barrier. The presence of liquid water on Earth was confusing, as it relied on the chance arrival of water-bearing asteroids. And most troubling, planetesimal accretion would take far too long to build Saturn, Uranus and Neptune. By the time their solid cores formed — after tens of millions of years — it would be too late for them to accumulate enough gas from the protoplanetary disk to become gas giants, as “most disks go away in a few million years,” said André Izidoro, a planetary scientist at Rice University.

    Johansen and Lambrechts proposed a new model. Instead of multiple planetesimals colliding together, they instead suggested that a single dominant planetesimal could grow to a huge size in a short amount of time — just a few million years — by sweeping up material inside a protoplanetary disk “like a vacuum cleaner,” said Johansen. This material would consist of tiny seedlike rocks that surrounded young stars. They called the idea pebble accretion.

    Pebbles are extremely small, just a few millimeters to centimeters in size, whereas planetesimals are much larger, up to hundreds of kilometers wide, like many of the asteroids we see in the solar system today. Both would be found in a star’s protoplanetary disk, with the latter occasionally smashing into one another.

    In 2014, just two years after Johansen and Lambrechts published their pebble model, observations revealed that disks were indeed full of pebbles. A network of 66 telescopes called ALMA (the Atacama Large Millimeter/submillimeter Array) revealed up to 100 Earth masses’ worth of pebbles inside a protoplanetary disk surrounding a young star [The Astrophysical Journal], including wide gaps created by growing planets carving out their orbits. Inside these disks, pebbles were everywhere. ALMA “showed that protoplanetary disks are born with enormous mass reservoirs of small pebbles, not planetesimals,” said Lambrechts.

    ALMA observations of protoplanetary disk around HL Tauri in 2014 revealed hidden structures, including the presence of pebbles in the disk. Credit: ALMA (ESO/NAOJ/NRAO)

    Before long, most scientists came to agree that pebble accretion formed the giant planets. It just seemed to be the only way for them to grow fast enough. “For the cores of the giant planets there is no doubt pebble accretion is the solution,” said Alessandro Morbidelli, a planetary scientist at the Côte d’Azur Observatory in France.

    Yet, while it seemingly explained the formation of Jupiter, Saturn, Uranus and Neptune, pebble accretion raised considerable questions about the formation of the terrestrial planets: Mercury, Venus, Earth and Mars. “In principle one could form the terrestrial planets with planetesimal accretion,” said Lambrechts. “But now there’s this invasion of the pebbles.”

    In the pebble accretion model, you begin with a protoplanetary disk around a young star, just like in the planetesimal accretion model. Both models then require planetesimals to form via a phenomenon called streaming instability. Essentially, dust and pebbles experience drag as they encounter the gas surrounding the star. This causes the pebbles to clump together, until some clumps “are so massive that they become gravitationally bound, and they collapse into planetesimals” up to hundreds of kilometers wide, said Joanna Drążkowska, an astrophysicist at The Ludwig Maximilians University of Munich [Ludwig-Maximilians-Universität München](DE). The clumps may then rotate as they form, which gives them two lobes. “This is exactly what we see” in outer solar system objects such as Arrokoth, said Drążkowska. The process is expected to be incredibly quick, perhaps taking only 100 years.

    From here, the two models diverge. Under planetesimal accretion, these planetesimals form everywhere in the disk, leaving few pebbles behind. Over tens of millions of years, the large planetesimals collide and merge, eventually giving rise to the terrestrial planets we see today.

    In pebble accretion, just a few planetesimals become dominant. These planetesimals begin to sweep up pebbles in the protoplanetary disk, which stream down onto the surface of the planetesimal in long riverlike filaments. It is an extremely energetic process, with hot magma oceans glowing on the surface as pebbles rain down. “These planets would shine,” said Lambrechts. The process is very efficient; Earth would grow to its full size in just a few million years, compared to perhaps 100 million years in planetesimal accretion.

    One of the most interesting outcomes of pebble accretion is that it gives a direct prediction of how habitable planets form. Rather than relying on water-rich asteroids to haphazardly collide with protoplanets, the model suggests that incoming icy pebbles from the outer solar system could provide a steady supply of water to a planet like Earth, an idea known as pebble snow. “The nice thing about pebble snow is that it becomes predictable,” said Johansen. “The amount of water and carbon and nitrogen that comes down to Earth is something that can be calculated.”

    Thus, if the pebble accretion model for terrestrial planet formation is correct, it may bode well for the prospects of other life in the universe. Whereas under planetesimal accretion the existence of water on Earth was a chance event, in pebble accretion it might be expected in a planetary system like our own. Take a proto-Earth and put it around a similar star in a similar position, and the amount of water it collects could be the same. Habitable worlds would not be chance events; their existence would be a calculable outcome if a planetary system has the right ingredients. “One can use this as a starting point for understanding prebiotic chemistry and the origin of life,” said Johansen.

    The Great Architect

    Pebble accretion seems like an attractive idea. It solves the problem of rapid planet growth, it explains the presence of water on Earth, and we can even observe pebbles in developing exoplanetary systems. “With ALMA we know now pebbles are concentrated in particular regions that lead to planetesimal formation and potentially planets,” said Paola Pinilla, a planetary formation scientist at The University College London (UK).

    Yet, while it provides a good explanation of giant planet growth, pebble accretion has some notable issues when it comes to terrestrial planets.

    First, where did the pebbles in the inner solar system come from? In recent years, planetary scientists have come to believe that Jupiter, the largest planet in our solar system, was the primary force shaping the destiny of the planets. “The emergent picture is that Jupiter was the great architect of the solar system,” said Batygin.

    Soon after Jupiter’s rapid formation, it created a barrier between the inner and outer solar system, preventing material from the “mass-rich” outer regions from flowing to the “mass-starved” inner terrestrial planets, said Batygin. “The giant planets blocked the flux of dust and pebbles,” said Morbidelli. Pebbles in the inner disk may have dissipated before the terrestrial planets could form, and without more material coming in from the outer solar system, there simply would not have been enough material to make Earth.

    Journey to the Birth of the Solar System 360 VR
    Join David Kaplan on a virtual-reality tour showing how the sun, the Earth and the other planets came to be. Quanta Magazine and Chorus Films.

    Even if there was enough material, pebble accretion runs into another problem: It is extremely efficient, but perhaps too much so. If Earth and the other terrestrial planets did form by pebble accretion, it is not clear why they did not grow larger and larger, eventually becoming super-Earths — worlds somewhere between Earth and Neptune in size, which seem to be relatively common in other planetary systems. “The difficulty with pebble accretion is it’s either not very efficient or it’s very efficient,” said Sean Raymond, an astronomer at the Astrophysics Laboratory of Bordeaux in France. “It’s rarely in between. And to work for the terrestrial planets, you need to have just the right amount of stuff.” Too little material and planets simply never grow. Too much and the planets grow too large too quickly “and the solar system would have super-Earths rather than terrestrial planets,” said Raymond.

    These issues have caused considerable debate among planetary scientists in the last few years, with much ongoing research from both sides of the argument. In September, Morbidelli and his colleagues published an article in Nature Astronomy based on studies of the protoplanet Vesta that suggested how planetesimals would explain the current configuration of the solar system. The study suggests that a ring of planetesimals once orbited the sun at the current location of Earth. In time, this ring formed two large planets — Earth and Venus — toward the middle of the ring, with two smaller worlds — Mars and Mercury — on the flanks.

    Others, however, continue to investigate ways in which pebbles may birth terrestrial planets. In February, Johansen and colleagues described how our own solar system could have formed in this way. Then last month, Drążkowska and colleagues used pebble accretion to explain why super-Earths are relatively uncommon around other sunlike stars.

    At a workshop at Ringberg Castle in Germany last month, the debate flowed freely. Some, like Johansen and Lambrechts, remain very much in favor of a pebble accretion model for terrestrial planets. “There’s very strong evidence that this is the dominant process,” said Johansen. Others are less convinced. “I think pebble accretion is a very important process to understand planet formation, but I don’t think it’s the process that built the terrestrial planets in our solar system,” said Thorsten Kleine, a planetary scientist at The University of Münster [Westfälische Wilhelms-Universität Münster](DE). The two processes could also have worked in concert, with pebble accretion creating planetesimals that then merged after Jupiter cut off the flow of incoming pebbles.

    Some hope that turning to cosmochemistry, the study of the compositions of cosmic objects, might reveal the answer. Johansen pointed out that if the planetesimal accretion model were correct, we would expect to find asteroids of a similar composition to Earth, given that they were likely the building blocks of our world. Yet this is not the case. “I think that’s a limitation of the classical model, because they haven’t been able to find one,” he said. “There are really no meteorites that look like Earth at all.”

    If Earth formed via pebble accretion, however, we might have expected to see a “much higher abundance” of volatile elements like nitrogen and carbon on Earth, delivered by pebbles coming from the outer solar system, said Conel Alexander, a cosmochemist at The Carnegie Institution for Science (US). “We just don’t see that,” he said. Scientists like Alexander hope that combining new ideas of cosmochemistry with modeling of the early solar system could provide some useful answers. “Both the modelers and the cosmochemists have a bit of work to do,” he said.

    Elsewhere, continued studies of exoplanetary systems could reveal more information. Already more than 5,000 protoplanetary disks have been observed by ALMA, said Pinilla, from very young disks of less than 1 million years to disks up to 30 million years in age. On one occasion, we have even seen a giant planet, a world called PDS 70b, being born inside such a disk, with more sightings hoped for in the future. Some disks show the glow of dust, indicating the possible presence of colliding planetesimals — although how many isn’t clear. Upcoming observations from the James Webb Space Telescope (JWST), set to launch in December, alongside work from ALMA, could provide invaluable additional clues.

    National Aeronautics and Space Administration (US)/European Space Agency [Agence spatiale européenne][Europäische Weltraumorganisation](EU)/Canadian Space Agency [Agence Spatiale Canadienne](CA) James Webb Space Telescope, annotated. Scheduled for launch in October 2021, delayed to December 2021.

    Much remains uncertain. The major question now is: Was our planet the result of repeated collisions between huge asteroid-like bodies, or are we standing on top of a world made of trillions upon trillions of tiny, perhaps ice-rich cosmic pebbles? Solving that fundamental question will provide a window not only into our own past, but into Earth-like worlds everywhere.

    “If we bring chemical traces from JWST, and the pebble distribution from ALMA, we can have some hints of what types of planets can form in the inner parts of the disks,” said Pinilla.

    See the full article here.



  • richardmitnick 12:10 pm on October 19, 2021 Permalink | Reply
    Tags: "A Hint of Dark Matter Sends Physicists Looking to the Skies", European Organization for Nuclear Research (Organisation européenne pour la recherche nucléaire)(EU) [CERN] CAST Axion Solar Telescope., , Quanta Magazine (US)   

    From Quanta Magazine (US) : “A Hint of Dark Matter Sends Physicists Looking to the Skies” 

    From Quanta Magazine (US)

    October 19, 2021
    Jonathan O’Callaghan

    The CAST axion telescope tracks the sun from inside a building at the CERN laboratory near Geneva. Credit: CERN / Madalin-Mihai Rosu.

    Approximately 85% of the mass in the universe is missing — we can infer its existence, we just can’t see it. Over the years, a number of different explanations for this dark matter have been proposed, from undiscovered particles to black holes. One idea in particular, however, is drawing renewed attention: the axion. And researchers are turning to the skies to track it down.

    Axions are hypothetical lightweight particles whose existence would resolve two major problems. The first, fussed over since the 1960s, is the strong charge-parity (CP) problem, which asks why the quarks and gluons that make up protons and neutrons obey a certain symmetry. Axions would show that an unseen field is responsible.

    The second is dark matter. Axions “are excellent dark matter candidates,” said Asimina Arvanitaki, a theoretical physicist at The Perimeter Institute for Theoretical Physics (CA). Axions would clump together in exactly the ways we expect dark matter to, and they have just the right properties to explain why they’re so hard to find — namely, they’re extremely light and reluctant to interact with regular matter.

    Earlier this year, a group of scientists reported that they might have spotted evidence [Physical Review Letters] of axions being produced by neutron stars — collapsed stars that are so dense, a tiny sample little bigger than a grain of sand would weigh as much as an aircraft carrier. Ever since the 1980s, physicists have thought [Physical Review Letters] that if axions do exist, they should be produced inside the hot cores of neutron stars, where neutrons and protons smash together at high energies.

    Axions should be billions of times less massive than electrons, so they would be able to escape a dense neutron star’s innards into space. Here they would encounter the extremely strong magnetic field of the neutron star. In the presence of such a strong magnetic field, axions are predicted to turn into ordinary photons, or particles of light. (This property forms the basis for earthbound axion searches such as the Axion Dark Matter Experiment, which uses powerful magnets to try to spot the transformation in action.)
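    In the simplest (massless-axion) limit, the probability of an axion converting to a photon while crossing a uniform transverse magnetic field grows with the square of the coupling, the field strength and the path length. The sketch below is a back-of-the-envelope estimate, not taken from the article; the coupling value and the CAST-like field and magnet-length numbers are illustrative assumptions.

```python
# Rough estimate of the axion-to-photon conversion probability in a
# uniform magnetic field, P ~ (g_agamma * B * L / 2)**2, worked in
# natural units. All parameter values below are assumptions for
# illustration, not measured quantities.

G_AGAMMA = 1e-10            # axion-photon coupling in GeV^-1 (assumed)
TESLA_TO_GEV2 = 1.95e-16    # 1 tesla expressed in GeV^2 (natural units)
METER_TO_INV_GEV = 5.07e15  # 1 meter expressed in GeV^-1 (natural units)

def conversion_probability(g_agamma, b_tesla, length_m):
    """Probability that an axion converts to a photon while crossing a
    transverse field of b_tesla over length_m, in the massless limit."""
    b = b_tesla * TESLA_TO_GEV2           # field in GeV^2
    length = length_m * METER_TO_INV_GEV  # path length in GeV^-1
    return (g_agamma * b * length / 2) ** 2

# CAST-like numbers: a ~9 T field over a ~9.3 m magnet bore.
p = conversion_probability(G_AGAMMA, 9.0, 9.3)
print(f"P(axion -> photon) ~ {p:.1e}")  # of order 1e-17
```

The tiny result (of order one in a hundred quadrillion per axion) is why these searches need intense axion sources like the sun and long exposure times.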

    Axions flying through the neutron star’s magnetic field would be transformed into X-ray photons.

    These X-rays are hard to spot, however. Most known neutron stars are rapidly spinning pulsars, which release copious amounts of X-rays anyway — no axions needed. That’s why the new research focused on a group of seven neutron stars in our galaxy known as the “magnificent seven,” so named because they are the only known neutron stars not to rotate rapidly. “They are the most boring neutron stars you could possibly imagine,” said Benjamin Safdi, a physicist at The University of California-Berkeley (US), and a co-author on the study. “They’re just sitting there.”

    In the study, published in Physical Review Letters [above], Safdi and his colleagues suggest that all but one of these neutron stars show an excess of higher-energy X-rays that “could possibly be explained by the existence of axions,” Safdi said. The team does not claim a definitive discovery, but rather highlights the discrepancy for further investigation.

    Yet the trouble with looking into space for evidence of paradigm-breaking discoveries is that unlike an ultra-clean laboratory on Earth, space has a lot of things going on. We could simply be observing some other astrophysical process, unrelated to axions, or the excess X-ray signal might not really be there at all. Safdi’s team plans to investigate the matter further with additional instruments, like NASA’s NuSTAR X-ray telescope, which can observe higher-energy X-rays than other space telescopes can.

    “By observing these higher-energy X-rays, we could separate out a potential signal of axions,” said Safdi.

    Other axion searches use our sun, which is expected to produce axions in its interior that then stream into space. A long-running experiment at CERN in Switzerland called the CERN Axion Solar Telescope (CAST) [above] points a 10-meter-long superconducting magnet at the sun. The magnet would turn any incoming axions into X-ray photons, which would then get picked up by a detector placed at the back end of the magnet.

    CAST hasn’t found any axions, but its results, like those of other searches taking place, provide useful constraints on axion characteristics, such as when axions might morph into photons. Work has begun on CAST’s successors, which will use bigger and more powerful magnets. By 2024, the Baby International Axion Observatory (BabyIAXO) will be switched on at a German accelerator center called DESY.

    CAD drawing of BabyIAXO in its final form. Image: DESY (Deutsches Elektronen-Synchrotron).

    It will be 100 times more sensitive than CAST and will serve as the precursor to the full IAXO experiment, which will be “yet another factor of 100 better,” said Igor Irastorza, one of the leaders of CAST.

    Researchers are also exploring indirect ways to spot the influence of axions out in space. Some white dwarfs — the remnant cores of stars like our sun that have exhausted their fuel — appear to be cooling down faster than expected. One possibility might be that axions are escaping from the dead stars, taking energy with them. The rapid cooling is “exactly what one would expect if there are axions draining energy from this star,” said Irastorza. (A definitive link cannot yet be drawn, however.)

    Elsewhere, black holes have been touted as prime laboratories for probing the existence of axions by looking for signs of a process called superradiance, a phenomenon where lightweight particles — such as axions — could slow the spin of a black hole anywhere from 10% to 90% by causing it to lose energy and angular momentum. “If you see a very quickly spinning black hole, you know that this process did not happen,” said Masha Baryakhtar, a particle physicist at The University of Washington (US). But if we can measure the masses and spins of enough black holes, such as with the LIGO and Virgo gravitational wave detectors, we could begin to look for patterns that might “match up with the calculations of what they should be if axions are there,” said Baryakhtar.

    As axions have slowly become one of the most tantalizing dark matter candidates, researchers have come up with ever more elaborate ways to find a wisp of a particle that may not even exist. “The field is just exploding,” said Arvanitaki. And though earthbound searches “haven’t seen anything of note,” said Jesse Thaler, a particle physicist at The Massachusetts Institute of Technology (US), looking up might turn out to be the most promising way to track them down. “Because axions or other dark matter-like particles are so feebly interacting, you need a big number somewhere to ratchet it up to something you could see,” said Thaler. “And one of the biggest numbers you can imagine would be [to] leverage the entirety of the universe as a detector.”


  • richardmitnick 12:09 pm on October 17, 2021 Permalink | Reply
    Tags: "‘Impossible’ Particle Discovery Adds Key Piece to the Strong Force Puzzle", Quanta Magazine (US), The double-charm tetraquark, The tetraquark now presents theorists with a solid target against which to test their mathematical machinery for approximating the strong force.

    From Quanta Magazine (US): “‘Impossible’ Particle Discovery Adds Key Piece to the Strong Force Puzzle” 

    From Quanta Magazine (US)

    September 27, 2021
    Charlie Wood

    A simple model can explain groupings of two or three quarks, but it fails to explain tetraquarks. Credit: Samuel Velasco/Quanta Magazine.

    This spring, at a meeting of Syracuse University (US)’s quark physics group, Ivan Polyakov announced that he had uncovered the fingerprints of a semi-mythical particle.

    “We said, ‘This is impossible. What mistake are you making?’” recalled Sheldon Stone, the group’s leader.

    Polyakov went away and double-checked his analysis of data from the Large Hadron Collider beauty (LHCb) experiment, which the Syracuse group is part of.

    The LHCb experiment at the European Organization for Nuclear Research (CERN).

    The evidence held. It showed that a particular set of four fundamental particles called quarks can form a tight clique, contrary to the belief of most theorists. The LHCb collaboration reported the discovery of the composite particle, dubbed the double-charm tetraquark, at a conference in July and in two papers posted earlier this month that are now undergoing peer review.


    The unexpected discovery of the double-charm tetraquark highlights an uncomfortable truth. While physicists know the exact equation that defines the strong force — the fundamental force that binds quarks together to make the protons and neutrons in the hearts of atoms, as well as other composite particles like tetraquarks — they can rarely solve this strange, endlessly iterative equation, so they struggle to predict the strong force’s effects.

    The tetraquark now presents theorists with a solid target against which to test their mathematical machinery for approximating the strong force. Honing their approximations represents physicists’ main hope for understanding how quarks behave inside and outside atoms — and for teasing apart the effects of quarks from subtle signs of new fundamental particles that physicists are pursuing.

    Quark Cartoon

    The bizarre thing about quarks is that physicists can approach them at two levels of complexity. In the 1960s, grappling with a zoo of newly discovered composite particles, they developed the cartoonish “quark model,” which simply says that quarks glom together in complementary sets of three to make the proton, the neutron and other so-called baryons, while pairs of quarks make up various types of “meson” particles.

    Gradually, though, a deeper theory known as quantum chromodynamics (QCD) emerged. It painted the proton as a seething mass of quarks roped together by tangled strings of “gluon” particles, the carriers of the strong force. Experiments have confirmed many aspects of QCD, but no known mathematical techniques can systematically unravel the theory’s central equation.

    Somehow, the quark model can stand in for the far more complicated truth, at least when it comes to the menagerie of baryons and mesons discovered in the 20th century. But the model failed to anticipate the fleeting tetraquarks and five-quark “pentaquarks” that started showing up in the 2000s. These exotic particles surely stem from QCD, but for nearly 20 years, theorists have been stumped as to how.

    “We just don’t know the pattern yet, which is embarrassing,” said Eric Braaten, a particle theorist at The Ohio State University (US).

    The newest tetraquark sharpens the mystery.

    It showed up in the debris of roughly 200 collisions at the LHCb experiment, where protons smash into each other 40 million times each second, giving quarks uncountable opportunities to cavort in all the ways nature permits. Quarks come in six “flavors” of different masses, with heavier quarks appearing more rarely. Each of those 200-odd collisions generated enough energy to make two charm-flavored quarks, which weigh more than the lightweight quarks that make up protons, but less than the gigantic “beauty” quarks that are LHCb’s main quarry. The middleweight charm quarks also got close enough to attract each other and rope in two lightweight antiquarks. Polyakov’s analysis suggested that the four quarks banded together for a glorious 12 sextillionths of a second before an energy fluctuation conjured up two extra quarks and the group disintegrated into three mesons.

    For a tetraquark, that’s an eternity. Previous tetraquarks have contained quarks paired with their equally massive opposing antiquarks, and they tended to puff into nothingness thousands of times faster. The new tetraquark’s formation and subsequent stability surprised Stone’s group, who expected charm quarks to attract each other even more weakly than the quark-antiquark pairs that bind more ephemeral tetraquarks. It’s a fresh clue to the strong force enigma.
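    That quoted lifetime can be sanity-checked against a decay width using the energy-time uncertainty relation, Γ ≈ ħ/τ. This is a back-of-the-envelope sketch, not a figure from the article; the lifetime value is simply the "12 sextillionths of a second" quoted above, rounded to 1.2 × 10⁻²⁰ s.

```python
# Back-of-the-envelope: convert the tetraquark's quoted lifetime
# (~12 sextillionths of a second, i.e. ~1.2e-20 s) into a decay width
# via the uncertainty relation Gamma = hbar / tau. Purely illustrative.

HBAR_EV_S = 6.582e-16  # reduced Planck constant in eV*s

def width_from_lifetime(tau_seconds):
    """Decay width in eV implied by a lifetime tau, via Gamma = hbar/tau."""
    return HBAR_EV_S / tau_seconds

gamma_ev = width_from_lifetime(1.2e-20)
print(f"Gamma ~ {gamma_ev / 1e3:.0f} keV")  # prints "Gamma ~ 55 keV"
```

A width of tens of keV is extraordinarily narrow for a hadron — typical strongly decaying particles have widths of many MeV, thousands of times larger — which is why the article calls this lifetime "an eternity" for a tetraquark.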

    Quark Rules of Thumb

    One of the few theorists to foresee why two charm quarks might mingle was Jean-Marc Richard, now at the Institute of Physics of the 2 Infinities in Lyon, France. In 1982, he and two colleagues studied a simple quark model and initially found that four quarks would rather form two pairs — mesons. A quark pair can tango much as a proton and electron can. But add two more, and the newcomers tend to get in the way, weakening the attraction and dooming the collective particle.

    But the theorists also noticed a loophole [Physical Review D]: Lopsided quartets can stick together, if the larger pair is heavy enough not to take much notice of the lighter pair. The question was, how skewed would the masses have to be?

    Further analysis by Richard and a colleague predicted that it’s not necessary to go all the way to the most gargantuan quarks; a pair of middleweight charm quarks [Physics Letters B] could anchor a tetraquark. But alternative extensions of the quark model predicted different tipping points, and the existence of the double-charm tetraquark remained doubtful. “There were more guesses that it would not exist than there were that it would exist,” Braaten said.

    The same was true of “lattice QCD” computer simulations, a powerful approach to approximating QCD. These simulations capture the richness of the theory by analyzing quarks and gluons interacting at points on a fine grid instead of throughout a smooth space. All lattice QCD simulations agreed that the heaviest quarks could make tetraquarks. But when researchers swapped in charm quarks, most simulations found that double-charm tetraquarks couldn’t form.

    Now the LHCb experiment has made a definitive ruling: Charm quarks can bind a tetraquark together. (Only barely, though — the physicists calculate that if the composite particle had just one-hundredth of a percent more mass, two mesons would win out instead.) Theorists now have a new benchmark for their models.
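    To see why "one-hundredth of a percent" is so striking, compare the binding energy to the particle's total mass. The numbers below are illustrative round values assumed for this sketch (a tetraquark mass near 3875 MeV and a gap of a few tenths of an MeV below the two-meson threshold), not figures taken from the article.

```python
# Sanity check on "one-hundredth of a percent": the double-charm
# tetraquark sits only a fraction of an MeV below the energy of two
# free mesons. Round, assumed values for illustration only.

TETRAQUARK_MASS_MEV = 3875.0  # approximate total mass (assumed)
BINDING_MEV = 0.36            # approximate gap to two-meson threshold (assumed)

fraction = BINDING_MEV / TETRAQUARK_MASS_MEV
print(f"binding / mass ~ {fraction:.2e}")  # ~9e-5, i.e. about 0.01%
```

A binding energy some ten thousand times smaller than the particle's mass is what makes the tetraquark such a delicate — and therefore discriminating — test case for strong-force approximations.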

    For lattice QCD practitioners, the new tetraquark highlights the problem that key details about the midsize quarks may be getting lost between their lattice points. Lightweight quarks can zip around enough to allow their movement to be captured even against a coarse grid. And researchers can deal with heavy, more stationary quarks by pinning them to one spot. But charm quarks inhabit an awkward middle ground, and researchers think they’ll need to zoom in to better discern their behavior. “We need, most likely, a finer lattice,” said Pedro Bicudo, a lattice QCD specialist at the University of Lisbon in Portugal.

    More capable lattice QCD simulations will have far-reaching benefits. Particle physicists’ main goal in experiments like LHCb is to find signs of new fundamental particles, such as those that might make up the universe’s dark matter. To do so, they must be able to distinguish the dance of charm quarks and their kin from other, more novel influences.

    “Anywhere the charm quark is important, this [discovery] will spread there,” Bicudo said.

