Tagged: Quanta Magazine (US)

  • richardmitnick 11:00 am on April 1, 2022
    Tags: "Massive Black Holes Shown to Act Like Quantum Particles", , , , , Quanta Magazine (US),   

    From Quanta Magazine (US): “Massive Black Holes Shown to Act Like Quantum Particles” 

    March 29, 2022
    Charlie Wood


    An entire gravitational wave can be known through the behavior of just one of its countless particles. Credit: DVDP for Quanta Magazine.

    When two black holes collide, the titanic crash ripples out through the very fabric of the cosmos.

    Physicists have used Albert Einstein’s Theory of General Relativity to predict the rough contours of these gravitational waves as they pass through Earth, and wave after wave has been confirmed by the LIGO and Virgo gravitational-wave detectors.

    But physicists are starting to flounder as they attempt to use Einstein’s thorny equations to extract ultra-precise shapes of all possible reverberations. These currently unknowable details will be essential to fully understand the fine ripples that next-generation observatories should pick up.

    Relief, however, may be coming from a seemingly unlikely direction.

    Over the past few years, physicists specializing in the arcane behavior of quantum particles have turned their mathematical machinery toward black holes, which, at a distance, resemble particles. Several groups have recently made a surprising finding. They have shown that the behavior of a gravitational (or electromagnetic) wave can be fully known through the actions of just one of its countless particles, as if we could learn the precise silhouette of a tsunami after examining a single water molecule.

    “I would not have thought it possible, and I’m still having a little bit of trouble wrapping my head around it,” said Radu Roiban, a theoretical physicist at The Pennsylvania State University who was not involved in the research.

    The results could help future researchers interpret the sharper quivers in space-time that future observatories will record. They also mark the next step in understanding how theories of quantum particles capture events taking place at our larger level of reality.

    “What’s the precise connection between these quantum ideas and the real world? That’s what [their research] is about,” said Zvi Bern, a theoretical particle physicist at the Bhaumik Institute for Theoretical Physics at The University of California-Los Angeles. “It [provides] a much better understanding of that than we had before.”

    Quantum Cheat Codes

    In principle, most physicists expect that quantum equations can also handle big objects. We are, after all, largely clouds of electrons and quarks. In practice, however, Newton’s laws suffice. If we’re calculating the arc of a cannonball, it doesn’t make sense to start with an electron.

    “No one in their right mind would do it by saying ‘Let’s consider the quantum theory, solve that problem, and extract the classical physics,’” Bern said. “That would be idiotic.”

    But gravitational wave astronomy is driving physicists to consider desperate measures. When two black holes spiral toward each other and slam together, the shape of the resulting agitation of space-time depends on their masses, spins and other properties. To fully understand the cosmic rumbles felt at gravitational wave facilities, physicists calculate ahead of time how various black hole pairings will jiggle space-time. Einstein’s equations of general relativity are too complicated to solve exactly, so some of LIGO/Virgo’s waveforms come from precise supercomputer simulations, some of which take a month to run. The LIGO/Virgo collaboration relies on a collection of hundreds of thousands of waveforms, cobbled together from these simulations and other quicker but rougher methods.

    Particle physicists, at least in some cases, believe they can get faster and more accurate results. From a zoomed-out perspective, black holes look a bit like massive particles, and physicists have spent decades thinking about what happens when particles go bump in the vacuum.

    “Over the years we’ve gotten extremely good at quantum scattering in gravity,” Bern said. “We have all these amazing tools that allow us to do these very complicated calculations.”

    The main tools of the trade are known as amplitudes, mathematical expressions that give the odds of quantum events. A “four-point” amplitude, for instance, describes two particles coming in and two particles going out. In recent years, Bern and other theorists applied four-point quantum amplitudes to the motion of colossal, classical black holes, matching — and in some cases exceeding — the precision of certain pieces of cutting-edge waveform calculations.

    “It’s amazing how quickly these people [have advanced],” said Alessandra Buonanno, a director of The MPG Institute for Gravitational Physics [MPG Institut für Gravitationsphysik] (Albert Einstein Institute)(DE) and an award-winning theorist specializing in predicting the shape of gravitational waves. “They are really pushing this.”

    All in One

    Classical physicists have steered clear of amplitudes for good reason. They are rife with infinities. Even a collision described by a four-point function — two particles in, two out — can temporarily generate any number of short-lived particles. The more of these transient particles a calculation considers, the more “loops” it is said to have, and the more accurate it is.

    It gets worse. A four-point function can have an infinite number of possible loops. But when two black holes come together, a four-point function isn’t the only possibility. Researchers must also consider the five-point function (a collision spitting out one particle of radiation), as well as the six-point function (a collision producing two particles) and so on. A gravitational wave can be thought of as a collection of an infinite number of “graviton” particles, and an ideal calculation would cover them all — with an infinite number of functions, each with an infinite number of loops.

    Credit: Merrill Sherman/Quanta Magazine.

    In this quantum haystack of infinite width and depth, amplitude researchers need to identify the classical needles that would contribute to the shape of the wave.

    One clue popped up in 2017 [Physical Review D], when Walter Goldberger of Yale University and Alexander Ridgway of The California Institute of Technology studied the classical radiation thrown off by two colliding objects with a sort of electric charge. They took inspiration from a curious relationship between gravity and the other forces (known as the double copy) and used it to turn the charged objects into black hole analogues. They calculated the shape of the waves that rolled outward and found an expression that was surprisingly simple, and strikingly quantum.

    “You sort of have to close your eyes to some terms,” said Donal O’Connell, a theorist at The University of Edinburgh (SCT). “But it looked to me that the thing they’d computed was a five-point amplitude.”

    Intrigued, O’Connell and his collaborators probed further. They first used a general quantum framework to calculate simple properties of a collision between two large classical bodies. Then in July 2021 they extended this approach to calculate certain classical wave properties, confirming that the five-point amplitude was, in fact, the right tool for the job.

    The researchers had stumbled upon an unexpected pattern in the amplitude haystack. It showed that they didn’t need an infinite number of amplitudes to study classical waves. Instead, they could stop at the five-point amplitude — which involves only a single particle of radiation.

    “This five-point amplitude really is the thing,” O’Connell said. “Each graviton or each photon that makes up the wave, it doesn’t care about the fact that there’s another one.”

    Further calculations revealed why the five-point amplitude tells us everything we need to know about the classical world.

    Quantum results have two defining features. They have uncertainty baked into them. Electrons, for instance, spread into a fuzzy cloud. In addition, the equations describing them, such as Schrödinger’s equation, feature a constant of nature known as Planck’s constant.

    Classical systems, such as a gravitational wave rippling through Earth, are perfectly crisp and can be described with nary a Planck’s constant in sight. These properties gave O’Connell’s group a litmus test for determining which parts of which amplitudes were classical: They must have no uncertainty, and there can be no Planck’s constant in the final description. The group found that the simplest five-point amplitude had two “fragments,” one with Planck’s constant and one without. The first fragment was a quantum piece that could be safely ignored. The second was the classical radiation — the useful part for gravitational wave astronomy.

    They then turned their attention to the no-loop six-point amplitude — the emission of two radiation particles. This amplitude gives the wave’s uncertainty, because having two radiation particles is like measuring the field twice. At first glance the amplitude was hard to interpret, with Planck’s constants all over the place.

    But when they computed the result in detail, many of the terms with Planck’s constant canceled each other out. In the end, O’Connell and his collaborators found that the six-point uncertainty also fell into a classical fragment and a quantum one. The classical uncertainty turned out to be zero, as it must. And the quantum part did not. In other words, the six-point amplitude had no classical information at all. In retrospect, the result seemed somewhat inevitable. But before investigating the fragments in detail, the researchers had naively expected that the six-point amplitude might still have some subtle classical meaning.

    “This is pure quantum. That was a bit of a shocker at least for me,” O’Connell said.

    O’Connell had studied a force related to electromagnetism. So to check if the result also held true for gravity, Ruth Britto at Trinity College Dublin, the University of Dublin, Ireland and others used various technical shortcuts to calculate the no-loop six-point amplitude for two massive particles. They found that it too has no classical content.

    “It’s hard to believe until you do the calculations,” said Riccardo Gonzo, also at Trinity College Dublin, who worked on both results.

    Similar logic leads the researchers to expect that at higher loops, all amplitudes with more than five points will be either all quantum, and thus ignorable, or expressible as a simpler function of known amplitudes. An unending parade of uncertainty relationships all but guarantees it.

    “The expectation is that quantum field theory does describe classical physics,” Roiban said. “It turns out that it is in this way that it does it, by having zero uncertainty in some states.”

    The upshot is that classical waves are easier to describe in the language of quantum mechanics than researchers feared. “A gravitational wave, or a wave of any kind, is something big and floppy. It should depend on many little things,” said Roiban. But “once you know the collision plus one photon or one graviton in the final state, then you know everything.”

    Spiraling Toward Mergers

    When LIGO/Virgo picks up gravitational waves, the signal is as much as 10% noise. Future detectors such as the space-based LISA may record ripples in space-time with 99% fidelity or better.

    At that level of crispness, researchers expect gravitational waves to reveal a wealth of information, such as the stiffness of merging neutron stars. The recent progress in predicting the shape of waves using quantum amplitudes raises hopes that researchers will be able to unlock that information.

    “If this turns out to really be the case,” Buonanno said, “it would be fantastic. I think it will simplify the calculation at the end, but we just have to see.”

    For now, though, the calculation of real, astrophysical waveforms from amplitudes remains an ambitious project. Four- and five-point amplitudes capture what happens when black holes “scatter,” or slingshot off each other, and the technique can currently be extrapolated to understand simple mergers where black holes don’t spin. But in their present state these amplitudes struggle to fully describe the more complicated mergers that gravitational wave observatories detect. Amplitude researchers believe they can tweak their methods to calculate realistic waveforms for a wide variety of mergers, but they haven’t done so yet.

    Beyond gravitational waves, the general nature of the research suggests that the way the uncertainty principle organizes the quantum haystack could prove useful in other areas of quantum theory. The infinite array of relationships between amplitudes could enable independent cross-checks, for example, providing valuable guidance for calculations that can take months. And it may serve as a sharp test for distinguishing quantum theories that can describe our macro world from those that can’t.

    “In the past it was intuition,” Roiban said. “Now it’s a clear-cut criterion. It’s a calculation, and it’s hard to argue with a calculation.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine (US) is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 11:43 am on February 18, 2022
    Tags: "Machine Learning Becomes a Mathematical Collaborator", , , DeepMind, Quanta Magazine (US)   

    From Quanta Magazine (US): “Machine Learning Becomes a Mathematical Collaborator” 

    February 15, 2022
    Kelsey Houston-Edwards

    Two recent collaborations between mathematicians and DeepMind demonstrate the potential of machine learning to help researchers generate new mathematical conjectures.

    Credit: Señor Salme for Quanta Magazine.

    Mathematicians often work together when they’re searching for insight into a hard problem. It’s a kind of freewheeling collaborative process that seems to require a uniquely human touch.

    But in two new results, the role of human collaborator has been replaced in part by a machine. The papers were completed at the end of November and summarized in a recent Nature article.

    “The things that I love about mathematics are its intuitive and creative aspects,” said Geordie Williamson, a mathematician at the University of Sydney and co-author of one of the papers. “The [machine learning] models were supporting that in a way that I hadn’t felt from computers before.”

    Two separate groups of mathematicians worked alongside DeepMind, a branch of Alphabet, Google’s parent company, dedicated to the development of advanced artificial intelligence systems.

    András Juhász and Marc Lackenby of the University of Oxford taught DeepMind’s machine learning models to look for patterns in geometric objects called knots. The models detected connections that Juhász and Lackenby elaborated to bridge two areas of knot theory that mathematicians had long speculated should be related. In separate work, Williamson used machine learning to refine an old conjecture that connects graphs and polynomials.

    Computers have aided in mathematical research for years, as proof assistants that make sure the logical steps in a proof really work and as brute force tools that can chew through huge amounts of data to search for counterexamples to conjectures.

    The new work represents a different form of human-machine collaboration. It demonstrates that by selectively incorporating machine learning into the generative phase of research, mathematicians can uncover leads that might have been hard to find without machine assistance.

    “The most amazing thing about this work — and it really is a big breakthrough — is the fact that all the pieces came together and that these people worked as a team,” said Radmila Sazdanovic of North Carolina State University. “It’s a truly transdisciplinary collaboration.”

    Some observers, however, view the collaboration as less of a sea change in the way mathematical research is conducted. While the computers pointed the mathematicians toward a range of possible relationships, the mathematicians themselves needed to identify the ones worth exploring.

    “All the hard work was done by the human mathematicians,” wrote Ernest Davis, a computer scientist at New York University, in an email.

    Patterns in Data

    Machine learning predicts outputs from inputs: Feed a model health data and it will output a diagnosis; show it an image of an animal and it will reply with the name of the species.

    This is often done using a machine learning approach called supervised learning in which researchers essentially teach the computer to make predictions by giving it many examples.

    For instance, imagine you want to teach a model to identify whether an image contains a cat or a dog. Researchers start by feeding the model many examples of each animal. Based on that training data, the computer constructs an extremely complicated mathematical function, which is essentially a machine for making predictions. Once the predictive function is established, researchers show the model a new image, and it will respond with the probability that the image is a cat or a dog.
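
    As a minimal sketch of that train-then-predict loop (using scikit-learn’s logistic regression and two made-up numeric features per animal as hypothetical stand-ins for real image data and real models):

# A minimal supervised-learning sketch in the cat/dog spirit described above.
# The "images" are just two synthetic numeric features per animal, so only the
# train-then-predict pattern carries over to real systems.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: feature vectors for "cats" (label 0) and "dogs" (label 1).
cats = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(100, 2))
dogs = rng.normal(loc=[4.0, 3.0], scale=0.5, size=(100, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 100 + [1] * 100)

# "Training" builds the predictive function from the labeled examples.
model = LogisticRegression().fit(X, y)

# A new, unseen example: the model replies with a probability for each label.
new_animal = [[3.8, 2.9]]
print(model.predict_proba(new_animal))  # [[p(cat), p(dog)]]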

    To make supervised learning useful as a research tool, mathematicians had to find the right questions for DeepMind to tackle. They needed problems that involved mathematical objects for which a lot of training data was available — a criterion that many mathematical investigations don’t meet.

    They also needed to find a way to take advantage of DeepMind’s powerful ability to perceive hidden connections, while also navigating its significant limitations as a collaborator. Often, machine learning works as a black box, producing outputs from inputs according to rules that human beings can’t decipher.

    “[The computer] could see really unusual things, but also struggled to explain very effectively,” said Alex Davies, a researcher at DeepMind.

    The mathematicians weren’t looking for DeepMind to merely output correct answers. To really advance the field they needed to also know why the connections held — a step that the computer couldn’t take.

    Bridging Invariants

    In 2018, Williamson and Demis Hassabis, the CEO and co-founder of DeepMind, were both elected as fellows of the Royal Society, a British organization of distinguished scientists. During a coffee break at the admissions ceremony, they discovered a mutual interest.

    “I’d thought a little bit about how machine learning could help mathematics, and he’d thought a lot about it,” said Williamson. “We just kind of bounced ideas off each other.”

    They decided that a branch of mathematics known as knot theory would be the ideal testing ground for a human-computer collaboration. It involves mathematical objects called knots, which you can think of as tangled loops of string. Knot theory fits the requirements for machine learning because it has abundant data — there are many millions of relatively simple knots — and because many properties of knots can be easily computed using existing software.

    Williamson suggested that DeepMind contact Lackenby, an established knot theorist, to find a specific problem to work on.

    Juhász and Lackenby understood the strengths and weaknesses of machine learning. Given those, they hoped to use it to find novel connections between different types of invariants, which are properties used to distinguish knots from each other.

    Two knots are considered different when it’s impossible to untangle them (without cutting them) so that they look like each other. Invariants are inherent properties of the knot that do not change during the untangling process (hence the name “invariant”). So if two knots have different values for an invariant, they can never be manipulated into one another.

    There are many different types of knot invariants, characterized by how they describe the knot. Some are more geometric, others are algebraic, and some are combinatorial. However, mathematicians have been able to prove very little about the relationships between invariants from different fields. They typically don’t know whether different invariants actually measure the same feature of a knot from multiple perspectives.

    Juhász and Lackenby saw an opportunity for machine learning to spot connections between different categories of invariants. From these connections they could gain a deeper insight into the nature of knot invariants.

    Signature Verification

    To pursue Juhász and Lackenby’s question, researchers at DeepMind developed a data set with over 2 million knots. For each knot, they computed different invariants. Then they used machine learning to search for patterns that tied invariants together. The computer perceived many, most of which were not especially interesting to the mathematicians.

    “We saw quite a few patterns that were either known or were known not to be true,” said Lackenby. “As mathematicians, we weeded out quite a lot of the stuff the machine learning was sending to us.”

    Unlike Juhász and Lackenby, the machine learning system does not understand the underlying mathematical theory. The input data was computed from knot invariants, but the computer only sees lists of numbers.

    “As far as the machine learning system was concerned, these could have been sales records of various kinds of foods at McDonald’s,” said Davis.

    Eventually the two mathematicians settled on trying to teach the computer to output an important algebraic invariant called the “signature” of a knot, based only on information about the knot’s geometric invariants.

    After Juhász and Lackenby identified the problem, researchers at DeepMind began to build the specific machine learning algorithm. They trained the computer to take 30 geometric invariants of a knot as an input and to output the knot’s signature. It worked well, and after a few weeks of work, DeepMind could accurately predict the signature of most knots.

    Next, the researchers needed to find out how the model was making these predictions. To do this, the team at DeepMind turned to a technique known as saliency analysis, which can be used to tease out which of the many inputs are most responsible for producing the output. They slightly changed the value of each input, one at a time, and examined which change had the most dramatic impact on the output.

    If an algorithm is designed to predict whether an image shows a cat, researchers performing saliency analysis will blur tiny sections of the picture and then check whether the computer still recognizes the cat. They might find, for instance, that the pixels in the corner of the image are less important than those that compose the cat’s ear.
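
    A minimal sketch of that perturbation idea, assuming a generic prediction function (the toy predict_toy function and its inputs below are hypothetical; DeepMind’s actual tooling is not reproduced here): nudge each input slightly and rank the inputs by how much the output moves.

# Perturbation-based saliency: which inputs matter most to the prediction?
import numpy as np

def saliency(predict, x, eps=1e-3):
    """Return how much the prediction changes when each input is nudged by eps."""
    base = predict(x)
    scores = []
    for i in range(len(x)):
        bumped = x.copy()
        bumped[i] += eps
        scores.append(abs(predict(bumped) - base))
    return np.array(scores)

# Toy stand-in for a trained model whose output mostly depends on inputs 0 and 3.
def predict_toy(v):
    return 5 * v[0] + 0.1 * v[1] + 0.01 * v[2] + 3 * v[3]

x = np.array([1.0, 1.0, 1.0, 1.0])
print(saliency(predict_toy, x))  # inputs 0 and 3 dominate, much as the cusp invariants did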

    When the researchers applied saliency analysis to the data, they observed that three of the 30 geometric invariants seemed especially important to how the model was making predictions. All three of these invariants measure features of the cusp, which is a hollow tube encasing the knot, like the rubber coating around a cable.

    Based on this information, Juhász and Lackenby constructed a formula which relates the signature of a knot to those three geometric invariants. The formula also uses another common invariant, the volume of a sphere with the knot carved out of it. When they tested the formula on specific knots, it seemed to work, but that wasn’t enough to establish a new mathematical theorem. The mathematicians were looking for a precise statement that they could prove was always valid — and that was harder.

    “It just wasn’t quite working out,” said Lackenby.

    Juhász and Lackenby’s intuition, built up through years of studying similar problems, told them that the formula was still missing something. They realized they needed to introduce another geometric invariant, something called the injectivity radius, which roughly measures the length of certain curves related to the knot. It was a step that used the mathematicians’ trained intuition, but it was enabled by the particular insights they were able to glean from the many unedited connections identified by DeepMind’s model.

    “The good thing is that [machine learning models] have completely different strengths and weaknesses than humans do,” said Adam Zsolt Wagner of Tel Aviv University.

    The modification was successful. By combining information about the injectivity radius with the three geometric invariants DeepMind had singled out, Juhász and Lackenby created a failproof formula for computing the signature of a knot. The final result had the spirit of a real collaboration.

    “It was definitely an iterative process involving both the machine learning experts from DeepMind and us,” said Lackenby.

    Converting Graphs Into Polynomials

    Building on the momentum of the knot theory project, in early 2020 DeepMind turned back to Williamson to see if he wanted to test a similar process in his field, representation theory. Representation theory is a branch of math that looks for ways of combining basic elements of mathematics like symmetries to make more sophisticated objects.

    Within this field, Kazhdan-Lusztig polynomials are particularly important. They are based on ways of rearranging objects — such as by swapping the order of two objects in a list — called permutations. Each Kazhdan-Lusztig polynomial is built from a pair of permutations and encodes information about their relationship. They’re also very mysterious, and it is often difficult to compute their coefficients.

    Mathematicians and DeepMind used machine learning to search for a formula for converting Bruhat graphs into polynomials. Credit: Geordie Williamson.

    Given this, mathematicians try to understand Kazhdan-Lusztig polynomials in terms of easier objects to work with called Bruhat graphs. Each vertex on a Bruhat graph represents a permutation of a specific number of objects. Edges connect vertices whose permutations differ by swapping just two elements.
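
    As a toy illustration of that simplified description (the full construction also orders and labels the edges, which is omitted here), the short sketch below lists the permutations of three objects and joins any two that differ by a single swap; the function name bruhat_edges is just a convenient label for this illustration.

# Build the (simplified) Bruhat-style graph on permutations of 1..n:
# vertices are permutations, edges join permutations that differ by one swap.
from itertools import permutations, combinations

def bruhat_edges(n):
    edges = []
    for perm in permutations(range(1, n + 1)):
        for i, j in combinations(range(n), 2):
            swapped = list(perm)
            swapped[i], swapped[j] = swapped[j], swapped[i]
            swapped = tuple(swapped)
            if perm < swapped:          # record each undirected edge only once
                edges.append((perm, swapped))
    return edges

for edge in bruhat_edges(3):
    print(edge)   # 6 vertices and 9 edges for permutations of three objects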

    In the 1980s, George Lusztig and Matthew Dyer independently predicted that there should be a relationship between a Bruhat graph and a Kazhdan-Lusztig polynomial. The relationship would be useful because the polynomial is more fundamental, while the graph is simpler to compute.

    And, just like the problem of predicting one knot invariant by using another, this problem was well suited to DeepMind’s abilities. The DeepMind team started by training the model on nearly 20,000 paired Bruhat graphs and Kazhdan-Lusztig polynomials.

    Soon it was able to frequently predict the right Kazhdan-Lusztig polynomial from a Bruhat graph. But to write down a recipe for getting from one to the other, Williamson needed to know how the computer was making its predictions.

    A Formula, if You Can Prove It

    Here, again, the DeepMind researchers turned to saliency techniques. Bruhat graphs are huge, but the computer’s predictions were based mostly on a small number of edges. Edges that represented exchanging faraway numbers (like 1 and 9) were more important for the predictions than edges connecting permutations that flipped nearby numbers (like 4 and 5). It was a lead that Williamson then had to develop.

    “Alex [Davies] is telling me these edges, for whatever reason, are way more important than others,” said Williamson. “The ball was back in my court, and I kind of stared at these for a few months.”

    Williamson ultimately devised 10 or so formulas for converting Bruhat graphs into Kazhdan-Lusztig polynomials. The DeepMind team checked them against millions of examples of Bruhat graphs. For Williamson’s first several formulas, the DeepMind team quickly found examples that didn’t work — places the recipes failed.

    But eventually Williamson found a formula that seems likely to stick. It involves breaking the Bruhat graph into pieces which resemble cubes and using that information to compute the associated polynomial. DeepMind researchers have since verified the formula on millions of examples. Now it’s up to Williamson and other mathematicians to prove the recipe always works.

    Using computers to check for counterexamples is a standard part of mathematical research. But the recent collaborations make computers useful in a new way. For data-heavy problems, machine learning can help guide mathematicians in novel directions, much like a colleague making a casual suggestion.

    See the full article here.



     
  • richardmitnick 12:07 pm on January 28, 2022
    Tags: "Researchers Build AI That Builds AI", , , Neural networks: a form of AI that learns to discern patterns in data., Quanta Magazine (US), Stochastic gradient descent   

    From Quanta Magazine (US): “Researchers Build AI That Builds AI” 

    January 25, 2022
    Anil Ananthaswamy

    Credit: Olivia Fields for Quanta Magazine.

    Artificial intelligence is largely a numbers game. When deep neural networks, a form of AI that learns to discern patterns in data, began surpassing traditional algorithms 10 years ago, it was because we finally had enough data and processing power to make full use of them.

    Today’s neural networks are even hungrier for data and power. Training them requires carefully tuning the values of millions or even billions of parameters that characterize these networks, representing the strengths of the connections between artificial neurons. The goal is to find nearly ideal values for them, a process known as optimization, but training the networks to reach this point isn’t easy. “Training could take days, weeks or even months,” said Petar Veličković, a staff research scientist at DeepMind in London.

    That may soon change. Boris Knyazev of The University of Guelph (CA) and his colleagues have designed and trained a “hypernetwork” — a kind of overlord of other neural networks — that could speed up the training process. Given a new, untrained deep neural network designed for some task, the hypernetwork predicts the parameters for the new network in fractions of a second, and in theory could make training unnecessary. Because the hypernetwork learns the extremely complex patterns in the designs of deep neural networks, the work may also have deeper theoretical implications.

    For now, the hypernetwork performs surprisingly well in certain settings, but there’s still room for it to grow — which is only natural given the magnitude of the problem. If they can solve it, “this will be pretty impactful across the board for machine learning,” said Veličković.

    Getting Hyper

    Currently, the best methods for training and optimizing deep neural networks are variations of a technique called stochastic gradient descent (SGD). Training involves minimizing the errors the network makes on a given task, such as image recognition. An SGD algorithm churns through lots of labeled data to adjust the network’s parameters and reduce the errors, or loss. Gradient descent is the iterative process of climbing down from high values of the loss function to some minimum value, which represents good enough (or sometimes even the best possible) parameter values.
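
    As a minimal sketch of that loop, the toy below fits a single parameter to noisy data by stochastic gradient descent: pick a random labeled example, compute the gradient of its error, and step downhill. The data and learning rate are made up for illustration; real training runs the same loop over millions or billions of parameters.

# Stochastic gradient descent on a one-parameter model y = w * x.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = 3.0 * x + rng.normal(scale=0.1, size=1000)   # data generated with true slope 3

w = 0.0      # the lone "network parameter"
lr = 0.05    # learning rate: the step size down the loss landscape
for step in range(2000):
    i = rng.integers(len(x))                # one random labeled example
    grad = 2 * (w * x[i] - y[i]) * x[i]     # gradient of that example's squared error
    w -= lr * grad                          # step downhill
print(w)  # close to 3.0, the value that minimizes the loss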

    But this technique only works once you have a network to optimize. To build the initial neural network, typically made up of multiple layers of artificial neurons that lead from an input to an output, engineers must rely on intuitions and rules of thumb. These architectures can vary in terms of the number of layers of neurons, the number of neurons per layer, and so on.

    Gradient descent takes a network down its “loss landscape,” where higher values represent greater errors, or loss. The algorithm tries to find the global minimum value to minimize loss.
    Credit: Samuel Velasco/Quanta Magazine. Source: http://www.math.stackexchange.com

    One can, in theory, start with lots of architectures, then optimize each one and pick the best. “But training [takes] a pretty nontrivial amount of time,” said Mengye Ren, now a visiting researcher at Google Brain. It would be impossible to train and test every candidate network architecture. “[It doesn’t] scale very well, especially if you consider millions of possible designs.”

    So in 2018, Ren, along with his former University of Toronto (CA) colleague Chris Zhang and their adviser Raquel Urtasun, tried a different approach. They designed what they called a graph hypernetwork (GHN) to find the best deep neural network architecture to solve some task, given a set of candidate architectures.

    The name outlines their approach. “Graph” refers to the idea that the architecture of a deep neural network can be thought of as a mathematical graph — a collection of points, or nodes, connected by lines, or edges. Here the nodes represent computational units (usually, an entire layer of a neural network), and edges represent the way these units are interconnected.

    Here’s how it works. A graph hypernetwork starts with any architecture that needs optimizing (let’s call it the candidate). It then does its best to predict the ideal parameters for the candidate. The team then sets the parameters of an actual neural network to the predicted values and tests it on a given task. Ren’s team showed that this method could be used to rank candidate architectures and select the top performer.

    When Knyazev and his colleagues came upon the graph hypernetwork idea, they realized they could build upon it. In their new paper, the team shows how to use GHNs not just to find the best architecture from some set of samples, but also to predict the parameters for the best network such that it performs well in an absolute sense. And in situations where the best is not good enough, the network can be trained further using gradient descent.

    “It’s a very solid paper. [It] contains a lot more experimentation than what we did,” Ren said of the new work. “They work very hard on pushing up the absolute performance, which is great to see.”

    Training the Trainer

    Knyazev and his team call their hypernetwork GHN-2, and it improves upon two important aspects of the graph hypernetwork built by Ren and colleagues.

    First, they relied on Ren’s technique of depicting the architecture of a neural network as a graph. Each node in the graph encodes information about a subset of neurons that do some specific type of computation. The edges of the graph depict how information flows from node to node, from input to output.

    The second idea they drew on was the method of training the hypernetwork to make predictions for new candidate architectures. This requires two other neural networks. The first enables computations on the original candidate graph, resulting in updates to information associated with each node, and the second takes the updated nodes as input and predicts the parameters for the corresponding computational units of the candidate neural network. These two networks also have their own parameters, which must be optimized before the hypernetwork can correctly predict parameter values.

    Credit: Samuel Velasco/Quanta Magazine. Source: arxiv.org/abs/2110.13100

    To do this, you need training data — in this case, a random sample of possible artificial neural network (ANN) architectures. For each architecture in the sample, you start with a graph, and then you use the graph hypernetwork to predict parameters and initialize the candidate ANN with the predicted parameters. The ANN then carries out some specific task, such as recognizing an image. You calculate the loss made by the ANN and then — instead of updating the parameters of the ANN to make a better prediction — you update the parameters of the hypernetwork that made the prediction in the first place. This enables the hypernetwork to do better the next time around. Now, iterate over every image in some labeled training data set of images and every ANN in the random sample of architectures, reducing the loss at each step, until it can do no better. At some point, you end up with a trained hypernetwork.
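
    The sketch below mirrors that loop in miniature, using PyTorch purely as an illustrative stand-in: the real GHN-2 encodes architectures as graphs and uses graph neural networks, none of which is reproduced here. A tiny “hypernetwork” predicts the weights of a one-layer target network from a made-up architecture descriptor, and the task loss is backpropagated into the hypernetwork’s own parameters rather than the target’s.

# Toy hypernetwork training loop (illustrative only).
import torch

torch.manual_seed(0)
in_dim, out_dim = 4, 2

# Hypernetwork: maps a (hypothetical) architecture descriptor to target-net weights.
hyper = torch.nn.Sequential(
    torch.nn.Linear(3, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, in_dim * out_dim + out_dim),
)
opt = torch.optim.Adam(hyper.parameters(), lr=1e-2)

# Toy task data and a toy descriptor for one candidate architecture.
x = torch.randn(256, in_dim)
y = x @ torch.randn(in_dim, out_dim)            # targets from a hidden linear map
arch = torch.tensor([1.0, float(in_dim), float(out_dim)])

for step in range(500):
    pred = hyper(arch)                           # predicted parameters
    W = pred[: in_dim * out_dim].reshape(in_dim, out_dim)
    b = pred[in_dim * out_dim:]
    loss = ((x @ W + b - y) ** 2).mean()         # loss of the target net on the task
    opt.zero_grad()
    loss.backward()                              # gradients update the hypernetwork
    opt.step()
print(loss.item())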

    Knyazev’s team took these ideas and wrote their own software from scratch, since Ren’s team didn’t publicize their source code. Then Knyazev and colleagues improved upon it. For starters, they identified 15 types of nodes that can be mixed and matched to construct almost any modern deep neural network. They also made several advances to improve the prediction accuracy.

    Most significantly, to ensure that GHN-2 learns to predict parameters for a wide range of target neural network architectures, Knyazev and colleagues created a unique data set of 1 million possible architectures. “To train our model, we created random architectures [that are] as diverse as possible,” said Knyazev.

    As a result, GHN-2’s predictive prowess is more likely to generalize well to unseen target architectures. “They can, for example, account for all the typical state-of-the-art architectures that people use,” said Thomas Kipf, a research scientist at Google Research’s Brain Team in Amsterdam. “That is one big contribution.”

    Impressive Results

    The real test, of course, was in putting GHN-2 to work. Once Knyazev and his team trained it to predict parameters for a given task, such as classifying images in a particular data set, they tested its ability to predict parameters for any random candidate architecture. This new candidate could have similar properties to the million architectures in the training data set, or it could be different — somewhat of an outlier. In the former case, the target architecture is said to be in distribution; in the latter, it’s out of distribution. Deep neural networks often fail when making predictions for the latter, so testing GHN-2 on such data was important.

    Armed with a fully trained GHN-2, the team predicted parameters for 500 previously unseen random target network architectures. Then these 500 networks, their parameters set to the predicted values, were pitted against the same networks trained using stochastic gradient descent. The new hypernetwork often held its own against thousands of iterations of SGD, and at times did even better, though some results were more mixed.

    For a data set of images known as CIFAR-10, GHN-2’s average accuracy on in-distribution architectures was 66.9%, which approached the 69.2% average accuracy achieved by networks trained using 2,500 iterations of SGD. For out-of-distribution architectures, GHN-2 did surprisingly well, achieving about 60% accuracy. In particular, it achieved a respectable 58.6% accuracy for a specific well-known deep neural network architecture called ResNet-50. “Generalization to ResNet-50 is surprisingly good, given that ResNet-50 is about 20 times larger than our average training architecture,” said Knyazev, speaking at NeurIPS 2021, the field’s flagship meeting.

    GHN-2 didn’t fare quite as well with ImageNet, a considerably larger data set: On average, it was only about 27.2% accurate. Still, this compares favorably with the average accuracy of 25.6% for the same networks trained using 5,000 steps of SGD. (Of course, if you continue using SGD, you can eventually — at considerable cost — end up with 95% accuracy.) Most crucially, GHN-2 made its ImageNet predictions in less than a second, whereas using SGD to obtain the same performance as the predicted parameters took, on average, 10,000 times longer on their graphical processing unit (the current workhorse of deep neural network training).

    “The results are definitely super impressive,” Veličković said. “They basically cut down the energy costs significantly.”

    And when GHN-2 finds the best neural network for a task from a sampling of architectures, and that best option is not good enough, at least the winner is now partially trained and can be optimized further. Instead of unleashing SGD on a network initialized with random values for its parameters, one can use GHN-2’s predictions as the starting point. “Essentially we imitate pre-training,” said Knyazev.

    Beyond GHN-2

    Despite these successes, Knyazev thinks the machine learning community will at first resist using graph hypernetworks. He likens it to the resistance faced by deep neural networks before 2012. Back then, machine learning practitioners preferred hand-designed algorithms rather than the mysterious deep nets. But that changed when massive deep nets trained on huge amounts of data began outperforming traditional algorithms. “This can go the same way.”

    In the meantime, Knyazev sees lots of opportunities for improvement. For instance, GHN-2 can only be trained to predict parameters to solve a given task, such as classifying either CIFAR-10 or ImageNet images, but not at the same time. In the future, he imagines training graph hypernetworks on a greater diversity of architectures and on different types of tasks (image recognition, speech recognition and natural language processing, for instance). Then the prediction can be conditioned on both the target architecture and the specific task at hand.

    And if these hypernetworks do take off, the design and development of novel deep neural networks will no longer be restricted to companies with deep pockets and access to big data. Anyone could get in on the act. Knyazev is well aware of this potential to “democratize deep learning,” calling it a long-term vision.

    However, Veličković highlights a potentially big problem if hypernetworks like GHN-2 ever do become the standard method for optimizing neural networks. With graph hypernetworks, he said, “you have a neural network — essentially a black box — predicting the parameters of another neural network. So when it makes a mistake, you have no way of explaining [it].”

    Of course, this is already largely the case for neural networks. “I wouldn’t call it a weakness,” said Veličković. “I would call it a warning sign.”

    Kipf, however, sees a silver lining. “Something [else] got me most excited about it.” GHN-2 showcases the ability of graph neural networks to find patterns in complicated data.

    Normally, deep neural networks find patterns in images or text or audio signals, which are fairly structured types of information. GHN-2 finds patterns in the graphs of completely random neural network architectures. “That’s very complicated data.”

    And yet, GHN-2 can generalize — meaning it can make reasonable predictions of parameters for unseen and even out-of-distribution network architectures. “This work shows us a lot of patterns are somehow similar in different architectures, and a model can learn how to transfer knowledge from one architecture to a different one,” said Kipf. “That’s something that could inspire some new theory for neural networks.”

    If that’s the case, it could lead to a new, greater understanding of those black boxes.

    See the full article here.



     
  • richardmitnick 6:03 pm on January 21, 2022
    Tags: "BQL" and "BQuL", "Computer Scientists Eliminate Pesky Quantum Computations", 28 years ago computer scientists established that for quantum algorithms you can wait until the end of a computation to make intermediate measurements without changing the final result., , , If at any point in a calculation you need to access the information contained in a qubit and you measure it the qubit collapses., Instead of encoding information in the 0s and 1s of typical bits quantum computers encode information in higher-dimensional combinations of bits called qubits., Proof that any quantum algorithm can be rearranged to move measurements performed in the middle of the calculation to the end of the process., Quanta Magazine (US), , , The basic difference between quantum computers and the computers we have at home is the way each stores information., This collapse possibly affects all the other qubits in the system., Virtually all algorithms require knowing the value of a computation as it’s in progress.   

    From Quanta Magazine (US): “Computer Scientists Eliminate Pesky Quantum Computations” 

    January 19, 2022
    Nick Thieme

    Credit: Samuel Velasco/Quanta Magazine.

    As quantum computers have become more functional, our understanding of them has remained muddled. Work by a pair of computer scientists [Symposium on Theory of Computing] has clarified part of the picture, providing insight into what can be computed with these futuristic machines.

    “It’s a really nice result that has implications for quantum computation,” said John Watrous of The University of Waterloo (CA).

    The research, posted in June 2020 by Bill Fefferman and Zachary Remscrim of The University of Chicago (US), proves that any quantum algorithm can be rearranged to move measurements performed in the middle of the calculation to the end of the process, without changing the final result or drastically increasing the amount of memory required to carry out the task. Previously, computer scientists thought that the timing of those measurements affected memory requirements, creating a bifurcated view of the complexity of quantum algorithms.

    “This has been quite annoying,” said Fefferman. “We’ve had to talk about two complexity classes — one with intermediate measurements and one without.”

    This issue applies exclusively to quantum computers due to the unique way they work. The basic difference between quantum computers and the computers we have at home is the way each stores information. Instead of encoding information in the 0s and 1s of typical bits, quantum computers encode information in higher-dimensional combinations of bits called qubits.

    This approach enables denser information storage and sometimes faster calculations. But it also presents a problem. If at any point in a calculation you need to access the information contained in a qubit and you measure it, the qubit collapses from a delicate combination of simultaneously possible bits into a single definite one, possibly affecting all the other qubits in the system.

    This can be a problem because virtually all algorithms require knowing the value of a computation as it’s in progress. For instance, an algorithm may contain a statement like “If the variable x is a number, multiply it by 10; if not, leave it alone.” Performing these steps would seem to require knowing what x is at that moment in the computation — a potential challenge for quantum computers, where measuring the state of a particle (to determine what x is) inherently changes it.

    But 28 years ago, computer scientists proved it’s possible to avoid this kind of no-win situation. They established that for quantum algorithms, you can wait until the end of a computation to make intermediate measurements without changing the final result.

    An essential part of that result showed that you can push intermediate measurements to the end of a computation without drastically increasing the total running time. These features of quantum algorithms — that measurements can be delayed without affecting the answer or the runtime — came to be called the principle of deferred measurement.
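
    A small numpy sketch can make the principle concrete in the usual textbook toy setup (this is not the construction from the papers discussed here): measuring a qubit mid-circuit and classically controlling a gate on the result gives the same final outcome statistics as replacing that step with a controlled gate and measuring everything at the end, at the price of carrying the extra qubit along.

# Deferred measurement on two qubits A and B, with A prepared in |+> and B in |0>.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Protocol 1: measure A in the middle, then flip B only if the result was 1.
dist_intermediate = {}
for a, amp in [(0, plus[0]), (1, plus[1])]:
    p = abs(amp) ** 2                        # probability of measuring A = a
    b_state = X @ ket0 if a == 1 else ket0   # classically controlled X on B
    for b in (0, 1):
        dist_intermediate[(a, b)] = dist_intermediate.get((a, b), 0.0) + p * abs(b_state[b]) ** 2

# Protocol 2: defer the measurement -- apply CNOT (A controls B), measure at the end.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi = CNOT @ np.kron(plus, ket0)
dist_deferred = {(a, b): abs(psi[2 * a + b]) ** 2 for a in (0, 1) for b in (0, 1)}

print(dist_intermediate)  # {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}
print(dist_deferred)      # the same statistics, with the measurement pushed to the end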

    This principle fortifies quantum algorithms, but at a cost. Deferring measurements uses a great deal of extra memory space, essentially one extra qubit per deferred measurement. While one bit per measurement might take only a tiny toll on a classical computer with 4 trillion bits, it’s prohibitive given the limited number of qubits currently in the largest quantum computers.

    Google 53-qubit “Sycamore” superconducting processor quantum computer.

    IBM Unveils Breakthrough 127-Qubit Quantum Processor. Credit: IBM Corp.

    Fefferman and Remscrim’s work resolves this issue in a surprising way. With an abstract proof, they show that subject to a few caveats, anything calculable with intermediate measurements can be calculated without them. Their proof offers a memory-efficient way to defer intermediate measurements — circumventing the memory problems that such measurements created.

    “In the most standard scenario, you don’t need intermediate measurements,” Fefferman said.

    Fefferman and Remscrim achieved their result by showing that a representative problem called “well-conditioned matrix powering” is, in a way, equivalent to a different kind of problem with important properties.

    The “well-conditioned matrix powering” problem effectively asks you to find the values for particular entries in a type of matrix (an array of numbers), given some conditions. Fefferman and Remscrim proved that matrix powering is just as hard as any other quantum computing problem that allows for intermediate measurements. This set of problems is called “BQL”, and the team’s work meant that matrix powering could serve as a representative for all other problems in that class — so anything they proved about matrix powering would be true for all other problems involving intermediate measurements.

    At this point, the researchers took advantage of some of their earlier work. In 2016, Fefferman and Cedric Lin proved that a related problem called “well-conditioned matrix inversion” was equivalent to the hardest problem in a very similar class of problems called “BQuL”. This class is like BQL’s little sibling. It’s identical to BQL, except that it comes with the requirement that every problem in the class must also be reversible.

    In quantum computing, the distinction between reversible and irreversible measurements is essential. If a calculation measures a qubit, it collapses the state of the qubit, making the initial information impossible to recover. As a result, all measurements in quantum algorithms are innately irreversible.

    That means that BQuL is not just the reversible version of BQL; it’s also BQL without any intermediate measurements (because intermediate measurements, like all quantum measurements, would be irreversible, violating the defining condition of the class). The 2016 work proved that matrix inversion is a prototypical quantum calculation without intermediate measurements — that is, a fully representative problem for BQuL.

    The new paper builds on that by connecting the two, proving that well-conditioned matrix powering, which represents all problems with intermediate measurements, can be reduced to well-conditioned matrix inversion, which represents all problems that cannot feature intermediate measurements. In other words, any quantum computing problem with intermediate measurements can be reduced to a quantum computing problem without intermediate measurements.

    This means that for quantum computers with limited memory, researchers no longer need to worry about intermediate measurements when classifying the memory needs of different types of quantum algorithms.

    In 2020, a group of researchers at Princeton University (US) — Ran Raz, Uma Girish and Wei Zhan — independently proved a slightly weaker but nearly identical result that they posted three days after Fefferman and Remscrim’s work. Raz and Girish later extended the result, proving that intermediate measurements can be deferred in both a time-efficient and space-efficient way for a more limited class of computers.

    Altogether, the recent work provides a much better understanding of how limited-memory quantum computation works. With this theoretical guarantee, researchers have a road map for translating their theory into applied algorithms. Quantum algorithms are now free, in a sense, to proceed without the prohibitive costs of deferred measurements.

    See the full article here.



     
  • richardmitnick 4:45 pm on January 21, 2022
    Tags: "Any Single Galaxy Reveals the Composition of an Entire Universe", A group of scientists may have stumbled upon a radical new way to do cosmology., , Cosmic density of matter, , , , , , , Quanta Magazine (US), The Cosmology and Astrophysics with Machine Learning Simulations (CAMELS) project, Theoretical Astrophysics   

    From Quanta Magazine (US): “Any Single Galaxy Reveals the Composition of an Entire Universe” 

    January 20, 2022
    Charlie Wood

    Credit: Kaze Wong / CAMELS collaboration.


    In the CAMELS project, coders simulated thousands of universes with diverse compositions, arrayed at the end of this video as cubes.

    A group of scientists may have stumbled upon a radical new way to do cosmology.

    Cosmologists usually determine the composition of the universe by observing as much of it as possible. But these researchers have found that a machine learning algorithm can scrutinize a single simulated galaxy and predict the overall makeup of the digital universe in which it exists — a feat analogous to analyzing a random grain of sand under a microscope and working out the mass of Eurasia. The machines appear to have found a pattern that might someday allow astronomers to draw sweeping conclusions about the real cosmos merely by studying its elemental building blocks.

    “This is a completely different idea,” said Francisco Villaescusa-Navarro, a theoretical astrophysicist at The Flatiron Institute Center for Computational Astrophysics (US) and lead author of the work. “Instead of measuring these millions of galaxies, you can just take one. It’s really amazing that this works.”

    It wasn’t supposed to. The improbable find grew out of an exercise Villaescusa-Navarro gave to Jupiter Ding, a Princeton University (US) undergraduate: Build a neural network that, knowing a galaxy’s properties, can estimate a couple of cosmological attributes. The assignment was meant merely to familiarize Ding with machine learning. Then they noticed that the computer was nailing the overall density of matter.

    “I thought the student made a mistake,” Villaescusa-Navarro said. “It was a little bit hard for me to believe, to be honest.”

    The results of the investigation that followed appeared in a paper posted on January 6 and submitted for publication. The researchers analyzed 2,000 digital universes generated by The Cosmology and Astrophysics with Machine Learning Simulations (CAMELS) project [The Astrophysical Journal]. These universes had a range of compositions, containing between 10% and 50% matter with the rest made up of Dark Energy, which drives the universe to expand faster and faster. (Our actual cosmos consists of roughly one-third Dark Matter and visible matter and two-thirds Dark Energy.) As the simulations ran, Dark Matter and visible matter swirled together into galaxies. The simulations also included rough treatments of complicated events like supernovas and jets that erupt from supermassive black holes.

    Ding’s neural network studied nearly 1 million simulated galaxies within these diverse digital universes. From its godlike perspective, it knew each galaxy’s size, composition, mass, and more than a dozen other characteristics. It sought to relate this list of numbers to the density of matter in the parent universe.

    It succeeded. When tested on thousands of fresh galaxies from dozens of universes it hadn’t previously examined, the neural network was able to predict the cosmic density of matter to within 10%. “It doesn’t matter which galaxy you are considering,” Villaescusa-Navarro said. “No one imagined this would be possible.”
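
    As a concrete, if highly simplified, illustration of that procedure, the sketch below fabricates synthetic “galaxies” whose 17 properties depend weakly on a hidden matter density, trains a small neural network on many such universes, and tests it on universes it has never seen. Everything here, from the fake property generator to the network size and error metric, is an assumption made for illustration; it is not the CAMELS data or the authors’ architecture.

```python
# A minimal, synthetic stand-in for the procedure described above
# (fake data and a small network -- not the CAMELS simulations or the authors' model).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def fake_universe(omega_m, n_galaxies=300):
    """Fabricate galaxy property vectors that depend (noisily) on the matter density."""
    props = rng.normal(size=(n_galaxies, 17))
    props[:, 0] += 5.0 * omega_m        # e.g., a rotation-speed-like property tied to omega_m
    return props, np.full(n_galaxies, omega_m)

def suite(n_universes):
    """A suite of universes with matter fractions between 10% and 50%."""
    data = [fake_universe(om) for om in rng.uniform(0.1, 0.5, n_universes)]
    return np.vstack([d[0] for d in data]), np.concatenate([d[1] for d in data])

X_train, y_train = suite(100)           # training universes
X_test, y_test = suite(20)              # universes the network has never seen

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

rel_err = np.abs(model.predict(X_test) - y_test) / y_test
print(f"median relative error on unseen universes: {np.median(rel_err):.2%}")
```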

    “That one galaxy can get [the density to] 10% or so, that was very surprising to me,” said Volker Springel, an expert in simulating galaxy formation at The MPG Institute for Astrophysics [MPG Institut für Astrophysik](DE) who was not involved in the research.

    The algorithm’s performance astonished researchers because galaxies are inherently chaotic objects. Some form all in one go, and others grow by eating their neighbors. Giant galaxies tend to hold onto their matter, while supernovas and black holes in dwarf galaxies might eject most of their visible matter. Still, every galaxy had somehow managed to keep close tabs on the overall density of matter in its universe.

    One interpretation is “that the universe and/or galaxies are in some ways much simpler than we had imagined,” said Pauline Barmby, an astronomer at The Western University (CA). Another is that the simulations have unrecognized flaws.

    The team spent half a year trying to understand how the neural network had gotten so wise. They checked to make sure the algorithm hadn’t just found some way to infer the density from the coding of the simulation rather than the galaxies themselves. “Neural networks are very powerful, but they are super lazy,” Villaescusa-Navarro said.

    Through a series of experiments, the researchers got a sense of how the algorithm was divining the cosmic density. By repeatedly retraining the network while systematically obscuring different galactic properties, they zeroed in on the attributes that mattered most.
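
    One generic way to run that kind of obscuring experiment is a feature-ablation loop like the sketch below: hide one galactic property at a time, retrain, and see which omission hurts the prediction most. It assumes the X_train/X_test arrays and synthetic setup from the earlier sketch, and the authors’ actual procedure may well have differed in detail.

```python
# A generic feature-ablation sketch (assumes X_train, y_train, X_test, y_test
# from the previous snippet; the paper's exact procedure may differ).
import numpy as np
from sklearn.neural_network import MLPRegressor

def error_without(feature_idx):
    """Retrain with one galactic property hidden and report the test error."""
    keep = [i for i in range(X_train.shape[1]) if i != feature_idx]
    m = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    m.fit(X_train[:, keep], y_train)
    return np.median(np.abs(m.predict(X_test[:, keep]) - y_test) / y_test)

scores = {i: error_without(i) for i in range(X_train.shape[1])}
# properties whose removal degrades the prediction the most mattered the most
for i, err in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(f"property {i}: median relative error without it = {err:.2%}")
```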

    Near the top of the list was a property related to a galaxy’s rotation speed, which corresponds to how much matter (dark and otherwise) sits in the galaxy’s central zone. The finding matches physical intuition, according to Springel. In a universe overflowing with Dark Matter, you’d expect galaxies to grow heavier and spin faster. So you might guess that rotation speed would correlate with the cosmic matter density, although that relationship alone is too rough to have much predictive power.

    The neural network found a much more precise and complicated relationship between 17 or so galactic properties and the matter density. This relationship persists despite galactic mergers, stellar explosions and black hole eruptions. “Once you get to more than [two properties], you can’t plot it and squint at it by eye and see the trend, but a neural network can,” said Shaun Hotchkiss, a cosmologist at The University of Auckland (NZ).

    While the algorithm’s success raises the question of how many of the universe’s traits might be extracted from a thorough study of just one galaxy, cosmologists suspect that real-world applications will be limited. When Villaescusa-Navarro’s group tested their neural network on a different property — cosmic clumpiness — it found no pattern. And Springel expects that other cosmological attributes, such as the accelerating expansion of the universe due to Dark Energy, have little effect on individual galaxies.

    The research does suggest that, in theory, an exhaustive study of the Milky Way and perhaps a few other nearby galaxies could enable an exquisitely precise measurement of our universe’s matter. Such an experiment, Villaescusa-Navarro said, could give clues to other numbers of cosmic import such as the sum of the unknown masses of the universe’s three types of neutrinos.

    Neutrinos. Credit: Universe Today.

    But in practice, the technique would have to first overcome a major weakness. The CAMELS collaboration cooks up its universes using two different recipes. A neural network trained on one of the recipes makes bad density guesses when given galaxies that were baked according to the other. The cross-prediction failure indicates that the neural network is finding solutions unique to the rules of each recipe. It certainly wouldn’t know what to do with the Milky Way, a galaxy shaped by the real laws of physics. Before applying the technique to the real world, researchers will need to either make the simulations more realistic or adopt more general machine learning techniques — a tall order.
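
    The failure mode has a simple schematic form: train on universes cooked up with one recipe and test on universes from another. The toy below fakes two “recipes” by tying a galaxy property to the matter density in different ways; the numbers and functional forms are invented purely to illustrate why a network tuned to one recipe misfires on the other, and have nothing to do with the actual CAMELS suites.

```python
# A schematic of the cross-recipe failure (synthetic stand-in, not the CAMELS suites).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def suite(slope, n_universes=80, n_galaxies=200):
    """Two 'recipes' differ in how a galaxy property responds to the matter density."""
    X, y = [], []
    for om in rng.uniform(0.1, 0.5, n_universes):
        props = rng.normal(size=(n_galaxies, 17))
        props[:, 0] += slope * om
        X.append(props)
        y.append(np.full(n_galaxies, om))
    return np.vstack(X), np.concatenate(y)

X_a, y_a = suite(slope=5.0)     # recipe A
X_b, y_b = suite(slope=2.0)     # recipe B, a different subgrid "recipe"

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X_a, y_a)
for name, (X, y) in [("same recipe", (X_a, y_a)), ("other recipe", (X_b, y_b))]:
    err = np.median(np.abs(model.predict(X) - y) / y)
    print(f"{name}: median relative error {err:.2%}")
```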

    “I’m very impressed by the possibilities, but one needs to avoid being too carried away,” Springel said.

    But Villaescusa-Navarro takes heart that the neural network was able to find patterns in the messy galaxies of two independent simulations. The digital discovery raises the odds that the real cosmos may be hiding a similar link between the large and the small.

    “It’s a very beautiful thing,” he said. “It establishes a connection between the whole universe and a single galaxy.”

    _____________________________________________________________________________________
    The Dark Energy Survey

    Dark Energy Camera [DECam] built at DOE’s Fermi National Accelerator Laboratory(US).

    NOIRLab National Optical Astronomy Observatory(US) Cerro Tololo Inter-American Observatory(CL) Victor M Blanco 4m Telescope which houses the Dark-Energy-Camera – DECam at Cerro Tololo, Chile at an altitude of 7200 feet.

    NOIRLab(US)NSF NOIRLab NOAO (US) Cerro Tololo Inter-American Observatory(CL) approximately 80 km to the East of La Serena, Chile, at an altitude of 2200 meters.

    Timeline of the Inflationary Universe WMAP.

    The Dark Energy Survey is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure that will reveal the nature of the mysterious dark energy that is accelerating the expansion of our Universe. The Dark Energy Survey began searching the Southern skies on August 31, 2013.

    According to Albert Einstein’s Theory of General Relativity, gravity should lead to a slowing of the cosmic expansion. Yet, in 1998, two teams of astronomers studying distant supernovae made the remarkable discovery that the expansion of the universe is speeding up.

    Saul Perlmutter (center) [The Supernova Cosmology Project] shared the 2006 Shaw Prize in Astronomy, the 2011 Nobel Prize in Physics, and the 2015 Breakthrough Prize in Fundamental Physics with Brian P. Schmidt (right) and Adam Riess (left) [The High-z Supernova Search Team] for providing evidence that the expansion of the universe is accelerating.

    To explain cosmic acceleration, cosmologists are faced with two possibilities: either 70% of the universe exists in an exotic form, now called Dark Energy, that exhibits a gravitational force opposite to the attractive gravity of ordinary matter, or General Relativity must be replaced by a new theory of gravity on cosmic scales.

    The Dark Energy Survey is designed to probe the origin of the accelerating universe and help uncover the nature of Dark Energy by measuring the 14-billion-year history of cosmic expansion with high precision. More than 400 scientists from over 25 institutions in the United States, Spain, the United Kingdom, Brazil, Germany, Switzerland, and Australia are working on the project. The collaboration built and is using an extremely sensitive 570-Megapixel digital camera, DECam, mounted on the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory, high in the Chilean Andes, to carry out the project.

    Over six years (2013-2019), the Dark Energy Survey collaboration used 758 nights of observation to carry out a deep, wide-area survey to record information from 300 million galaxies that are billions of light-years from Earth. The survey imaged 5000 square degrees of the southern sky in five optical filters to obtain detailed information about each galaxy. A fraction of the survey time was used to observe smaller patches of sky roughly once a week to discover and study thousands of supernovae and other astrophysical transients.
    _____________________________________________________________________________________

    Fritz Zwicky discovered Dark Matter in the 1930s while observing the movement of the Coma Cluster. Some 30 years later, Vera Rubin, a woman in STEM who was denied the Nobel Prize, did most of the work on Dark Matter.

    Fritz Zwicky.
    Coma cluster via NASA/ESA Hubble, the original example of Dark Matter discovered during observations by Fritz Zwicky and confirmed 30 years later by Vera Rubin.
    In modern times, it was astronomer Fritz Zwicky, in the 1930s, who made the first observations of what we now call dark matter. His 1933 observations of the Coma Cluster of galaxies seemed to indicate it had a mass 500 times greater than that previously calculated by Edwin Hubble. Furthermore, this extra mass seemed to be completely invisible. Although Zwicky’s observations were initially met with much skepticism, they were later confirmed by other groups of astronomers.

    Thirty years later, astronomer Vera Rubin provided a huge piece of evidence for the existence of dark matter. She discovered that stars in the outskirts of galaxies orbit just as fast as stars closer to the center, whereas, based on the visible matter alone, the outer stars should move more slowly, much as the outer planets of the solar system orbit more slowly than the inner ones. That is what logic dictates we should see in galaxies, too. But we do not. The only way to explain this is if the visible galaxy is merely the central part of some much larger structure, like the label at the center of a vinyl LP, with unseen matter extending far beyond it and keeping the rotation speed consistent from center to edge.

    Vera Rubin, following Zwicky, postulated that the missing structure in galaxies is dark matter. Her ideas were met with much resistance from the astronomical community, but her observations have been confirmed and are seen today as pivotal proof of the existence of dark matter.
    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science).

    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970.

    Vera Rubin measuring spectra, worked on Dark Matter(Emilio Segre Visual Archives AIP SPL).
    Dark Matter Research

    LBNL LZ Dark Matter Experiment (US) xenon detector at Sanford Underground Research Facility(US) Credit: Matt Kapust.

    Lambda Cold Dark Matter, Accelerated Expansion of The Universe. http scinotions.com the-cosmic-inflation-suggests-the-existence-of-parallel-universes. Credit: Alex Mittelmann.

    DAMA at Gran Sasso uses sodium iodide housed in copper to hunt for dark matter LNGS-INFN.

    Yale HAYSTAC axion dark matter experiment at Yale’s Wright Lab.

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB (CA) deep in Sudbury’s Creighton Mine.

    The LBNL LZ Dark Matter Experiment (US) Dark Matter project at SURF, Lead, SD, USA.

    DAMA-LIBRA Dark Matter experiment at the Italian National Institute for Nuclear Physics’ (INFN’s) Gran Sasso National Laboratories (LNGS) located in the Abruzzo region of central Italy.

    DARWIN Dark Matter experiment. A design study for a next-generation, multi-ton dark matter detector in Europe at The University of Zurich [Universität Zürich](CH).

    PandaX II Dark Matter experiment at Jin-ping Underground Laboratory (CJPL) in Sichuan, China.

    Inside the Axion Dark Matter eXperiment at The University of Washington (US). Credit: Mark Stone/University of Washington.
    ______________________________________________________

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 2:35 pm on January 21, 2022 Permalink | Reply
    Tags: "In a Numerical Coincidence Some See Evidence for String Theory", "Massive Gravity", "String universality": a monopoly of string theories among viable fundamental theories of nature, , Asymptotically safe quantum gravity, , Graviton: A graviton is a closed string-or loop-in its lowest-energy vibration mode in which an equal number of waves travel clockwise and counterclockwise around the loop., Lorentz invariance: the same laws of physics must hold from all vantage points., , , Quanta Magazine (US), , , ,   

    From Quanta Magazine (US): “In a Numerical Coincidence Some See Evidence for String Theory” 

    From Quanta Magazine (US)

    January 21, 2022
    Natalie Wolchover

    Dorine Leenders for Quanta Magazine.

    In a quest to map out a quantum theory of gravity, researchers have used logical rules to calculate how much Einstein’s theory must change. The result matches string theory perfectly.

    Quantum gravity researchers use “α” to denote the size of the biggest quantum correction to Albert Einstein’s Theory of General Relativity.

    Recently, three physicists calculated a number pertaining to the quantum nature of gravity. When they saw the value, “we couldn’t believe it,” said Pedro Vieira, one of the three.

    Gravity’s quantum-scale details are not something physicists usually know how to quantify, but the trio attacked the problem using an approach that has lately been racking up stunners in other areas of physics. It’s called the bootstrap.

    To bootstrap is to deduce new facts about the world by figuring out what’s compatible with known facts — science’s version of picking yourself up by your own bootstraps. With this method, the trio found a surprising coincidence: Their bootstrapped number closely matched the prediction for the number made by string theory. The leading candidate for the fundamental theory of gravity and everything else, string theory holds that all elementary particles are, close-up, vibrating loops and strings.

    Vieira, Andrea Guerrieri of The Tel Aviv University (IL), and João Penedones of The EPFL (Swiss Federal Institute of Technology in Lausanne) [École polytechnique fédérale de Lausanne](CH) reported their number and the match with string theory’s prediction in Physical Review Letters in August 2021. Quantum gravity theorists have been reading the tea leaves ever since.

    Some interpret the result as a new kind of evidence for string theory, a framework that sorely lacks even the prospect of experimental confirmation, due to the pointlike minuteness of the postulated strings.

    “The hope is that you could prove the inevitability of string theory using these ‘bootstrap’ methods,” said David Simmons-Duffin, a theoretical physicist at The California Institute of Technology (US). “And I think this is a great first step towards that.”

    From left: Pedro Vieira, Andrea Guerrieri and João Penedones.
    Credit: Gabriela Secara / The Perimeter Institute for Theoretical Physics (CA); Courtesy of Andrea Guerrieri; The Swiss National Centres of Competence in Research (NCCRs) [Pôle national suisse de recherche en recherche][Schweizerisches Nationales Kompetenzzentrum für Forschung](CH) SwissMAP (CH)

    Irene Valenzuela, a theoretical physicist at the Institute for Theoretical Physics at The Autonomous University of Madrid [Universidad Autónoma de Madrid](ES), agreed. “One of the questions is if string theory is the unique theory of quantum gravity or not,” she said. “This goes along the lines that string theory is unique.”

    Other commentators saw that as too bold a leap, pointing to caveats about the way the calculation was done.

    Einstein, Corrected

    The number that Vieira, Guerrieri and Penedones calculated is the minimum possible value of “α” (alpha). Roughly, “α” is the size of the first and largest mathematical term that you have to add to Albert Einstein’s gravity equations in order to describe, say, an interaction between two gravitons — the presumed quantum units of gravity.

    Albert Einstein’s 1915 Theory of General Relativity paints gravity as curves in the space-time continuum created by matter and energy. It perfectly describes large-scale behavior such as a planet orbiting a star. But when matter is packed into too-small spaces, General Relativity short-circuits. “Some correction to Einsteinian gravity has to be there,” said Simon Caron-Huot, a theoretical physicist at McGill University (CA).

    Physicists can tidily organize their lack of knowledge of gravity’s microscopic nature using a scheme devised in the 1960s by Kenneth Wilson and Steven Weinberg: They simply add a series of possible “corrections” to General Relativity that might become important at short distances. Say you want to predict the chance that two gravitons will interact in a certain way. You start with the standard mathematical term from Relativity, then add new terms (using any and all relevant variables as building blocks) that matter more as distances get smaller. These mocked-up terms are fronted by unknown numbers labeled “α”, “β”, “γ” and so on, which set their sizes. “Different theories of quantum gravity will lead to different such corrections,” said Vieira, who has joint appointments at The Perimeter Institute for Theoretical Physics (CA), and The International Centre for Theoretical Physics at The South American Institute for Fundamental Research [Instituto sul-Americano de Pesquisa Fundamental] (BR). “So these corrections are our first way to tell such possibilities apart.”

    In practice, “α” has only been explicitly calculated in string theory, and even then only for highly symmetric 10-dimensional universes. The English string theorist Michael Green and colleagues determined in the 1990s that in such worlds “α” must be at least 0.1389. In a given stringy universe it might be higher; how much higher depends on the string coupling constant, or a string’s propensity to spontaneously split into two. (This coupling constant varies between versions of string theory, but all versions unite in a master framework called “M-theory”, where string coupling constants correspond to different positions in an extra 11th dimension.)

    Meanwhile, alternative quantum gravity ideas remain unable to make predictions about “α”. And since physicists can’t actually detect gravitons — the force of gravity is too weak — they haven’t been able to directly measure “α” as a way of investigating and testing quantum gravity theories.

    Then a few years ago, Penedones, Vieira and Guerrieri started talking about using the bootstrap method to constrain what can happen during particle interactions. They first successfully applied the approach to particles called pions. “We said, OK, here it’s working very well, so why not go for gravity?” Guerrieri said.

    Bootstrapping the Bound

    The trick of using accepted truths to constrain unknown possibilities was devised by particle physicists in the 1960s, then forgotten, then revived to fantastic effect over the past decade by researchers with supercomputers, which can solve the formidable formulas that bootstrapping tends to produce.

    Guerrieri, Vieira and Penedones set out to determine what “α” has to be in order to satisfy two consistency conditions. The first, known as unitarity, states that the probabilities of different outcomes must always add up to 100%. The second, known as Lorentz invariance, says that the same laws of physics must hold from all vantage points.

    The trio specifically considered the range of values of “α” permitted by those two principles in supersymmetric 10D universes. Not only is the calculation simple enough to pull off in that setting (not so, currently, for “α” in 4D universes like our own), but it also allowed them to compare their bootstrapped range to string theory’s prediction that “α” in that 10D setting is 0.1389 or higher.

    Unitarity and Lorentz invariance impose constraints on what can happen in a two-graviton interaction in the following way: When the gravitons approach and scatter off each other, they might fly apart as two gravitons, or morph into three gravitons or any number of other particles. As you crank up the energies of the approaching gravitons, the chance they’ll emerge from the encounter as two gravitons changes — but unitarity demands that this probability never surpass 100%. Lorentz invariance means the probability can’t depend on how an observer is moving relative to the gravitons, restricting the form of the equations. Together the rules yield a complicated bootstrapped expression that “α” must satisfy. Guerrieri, Penedones and Vieira programmed the Perimeter Institute’s computer clusters to solve for values that make the two-graviton interactions unitary and Lorentz-invariant.
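
    The real computation involves graviton scattering amplitudes in ten dimensions and heavy numerics, but the flavor of “demand consistency, then ask a computer for the extreme allowed coefficient” can be conveyed with a toy linear program. In the sketch below, a hypothetical low-energy coefficient is written as a positive-weighted sum over unknown heavy states (positivity standing in for unitarity), subject to a fixed normalization, and the solver finds the extreme allowed value. The mass grid, the sum rule and the coefficient are all invented for illustration; this is not the Guerrieri-Penedones-Vieira setup.

```python
# A toy "bootstrap" bound (illustrative only; not the actual 10D graviton computation).
import numpy as np
from scipy.optimize import linprog

masses = np.linspace(1.0, 50.0, 500)      # hypothetical heavy states above a mass gap at 1
# unknown non-negative spectral weights rho_i (positivity as a stand-in for unitarity)
# sum rule (normalization):   sum_i rho_i / m_i**2 = 1
# coefficient to be bounded:  c = sum_i rho_i / m_i**4
A_eq = (1.0 / masses**2)[None, :]
b_eq = [1.0]

# maximize c  <=>  minimize -c, subject to rho_i >= 0
res = linprog(c=-(1.0 / masses**4), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(masses), method="highs")
print("largest allowed coefficient:", -res.fun)   # 1.0, saturated by a state at the gap
```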

    The computer spit out its lower bound for “α”: 0.14, give or take a hundredth — an extremely close and potentially exact match with string theory’s lower bound of 0.1389. In other words, string theory seems to span the whole space of allowed “α” values — at least in the 10D place where the researchers checked. “That was a huge surprise,” Vieira said.

    10-Dimensional Coincidence

    What might the numerical coincidence mean? According to Simmons-Duffin, whose work a few years ago helped drive the bootstrap’s resurgence, “they’re trying to tackle a question [that’s] fundamental and important. Which is: To what extent does string theory as we know it cover the space of all possible theories of quantum gravity?”

    String theory emerged in the 1960s as a putative picture of the stringy glue that binds composite particles called mesons. A different description ended up prevailing for that purpose, but years later people realized that string theory could set its sights higher: If strings are small — so small they look like points — they could serve as nature’s elementary building blocks. Electrons, photons and so on would all be the same kind of fundamental string strummed in different ways. The theory’s selling point is that it gives a quantum description of gravity: A graviton is a closed string, or loop, in its lowest-energy vibration mode, in which an equal number of waves travel clockwise and counterclockwise around the loop. This feature would underlie macroscopic properties of gravity like the corkscrew-patterned polarization of gravitational waves.

    But matching the theory to all other aspects of reality takes some fiddling. To get rid of negative energies that would correspond to unphysical, faster-than-light particles, string theory needs a property called “Supersymmetry”, which doubles the number of its string vibration modes. Every vibration mode corresponding to a matter particle must come with another mode signifying a force particle. String theory also requires the existence of 10 space-time dimensions for the strings to wiggle around in. Yet we haven’t found any supersymmetric partner particles, and our universe looks 4D, with three dimensions of space and one of time.

    Standard Model of Supersymmetry

    Both of these data points present something of a problem.

    If string theory describes our world, Supersymmetry must be broken here. That means the partner particles, if they exist, must be far heavier than the known set of particles — too heavy to muster in experiments. And if there really are 10 dimensions, six must be curled up so small they’re imperceptible to us — tight little knots of extra directions you can go in at any point in space. These “compactified” dimensions in a 4D-looking universe could have countless possible arrangements, all affecting strings (and numbers like “α”) differently.

    Broken Supersymmetry and invisible dimensions have led many quantum gravity researchers to seek or prefer alternative, non-stringy ideas.

    Mordehai Milgrom, MOND theorist, is an Israeli physicist and professor in the department of Condensed Matter Physics at The Weizmann Institute of Science (IL) in Rehovot, Israel. Credit: http://cosmos.nautil.us

    MOND rotation curves and the MOND Tully-Fisher relation.

    But so far the rival approaches have struggled to produce the kind of concrete calculations about things like graviton interactions that string theory can.

    Some physicists hope to see string theory win hearts and minds by default, by being the only microscopic description of gravity that’s logically consistent. If researchers can prove “string universality,” as this is sometimes called — a monopoly of string theories among viable fundamental theories of nature — we’ll have no choice but to believe in hidden dimensions and an inaudible orchestra of strings.

    To string theory sympathizers, the new bootstrap calculation opens a route to eventually proving string universality, and it gets the journey off to a rip-roaring start.

    Other researchers disagree with those implications. Astrid Eichhorn, a theoretical physicist at The South Danish University [Syddansk Universitet](DK) and The Ruprecht Karl University of Heidelberg [Ruprecht-Karls-Universität Heidelberg](DE) who specializes in a non-stringy approach called asymptotically safe quantum gravity, told me, “I would consider the relevant setting to collect evidence for or against a given quantum theory of gravity to be four-dimensional and non-supersymmetric” universes, since this “best describes our world, at least so far.”

    Eichhorn pointed out that there might be unitary, Lorentz-invariant descriptions of gravitons in 4D that don’t make any sense in 10D. “Simply by this choice of setting one might have ruled out alternative quantum gravity approaches” that are viable, she said.

    Vieira acknowledged that string universality might hold only in 10 dimensions, saying, “It could be that in 10D with supersymmetry, there’s only string theory, and when you go to 4D, there are many theories.” But, he said, “I doubt it.”

    Another critique, though, is that even if string theory saturates the range of allowed “α” values in the 10-dimensional setting the researchers probed, that doesn’t stop other theories from lying in the permitted range. “I don’t see any practical way we’re going to conclude that string theory is the only answer,” said Andrew Tolley of Imperial College London (UK).

    Just the Beginning

    Assessing the meaning of the coincidence will become easier if bootstrappers can generalize and extend similar results to more settings. “At the moment, many, many people are pursuing these ideas in various variations,” said Alexander Zhiboedov, a theoretical physicist at The European Organization for Nuclear Research [Organización Europea para la Investigación Nuclear][Organisation européenne pour la recherche nucléaire] [Europäische Organisation für Kernforschung](CH) [CERN], Europe’s particle physics laboratory.

    Guerrieri, Penedones and Vieira have already completed a “dual” bootstrap calculation, which bounds “α” from below by ruling out solutions less than the minimum rather than solving for viable “α” values above the bound, as they did previously. This dual calculation shows that their computer clusters didn’t simply miss smaller allowed “α” values, which would correspond to additional viable quantum gravity theories outside string theory’s range.

    They also plan to bootstrap the lower bound for worlds with nine large dimensions, where string theory calculations are still under some control (since only one dimension is curled up), to look for more evidence of a correlation. Aside from “α”, bootstrappers also aim to calculate “β” and “γ” — the allowed sizes of the second- and third-biggest quantum gravity corrections— and they have ideas for how to approach harder calculations about worlds where supersymmetry is broken or nonexistent, as it appears to be in reality. In this way they’ll try to carve out the space of allowed quantum gravity theories, and test string universality in the process.

    Claudia de Rham, a theorist at Imperial College, emphasized the need to be “agnostic,” noting that bootstrap principles are useful for exploring more ideas than just string theory. She and Tolley have used positivity — the rule that probabilities are always positive — to constrain a theory called “Massive Gravity”, which may or may not be a realization of string theory. They discovered potentially testable consequences, showing that massive gravity only satisfies positivity if certain exotic particles exist. De Rham sees bootstrap principles and positivity bounds as “one of the most exciting research developments at the moment” in fundamental physics.

    “No one has done this job of taking everything we know and taking consistency and putting it together,” said Zhiboedov. It’s “exciting,” he added, that theorists have work to do “at a very basic level.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 1:56 pm on January 3, 2022 Permalink | Reply
    Tags: "Mathematicians Outwit Hidden Number Conspiracy", A new proof has debunked a conspiracy that mathematicians feared might haunt the number line., An improved solution to a particular formulation of the Chowla conjecture., , , Chowla conjecture, Connected numbers represent exceptions to Chowla’s conjecture in which the factorization of one integer actually does bias that of another., Consider the number 1001 which is divisible by the primes 7; 11; and 13. In Tao’s graph it shares edges with 1008; 1012 and 1014 (by addition) as well as with 994; 990 and 988 (by subtraction)., Eigenvalues, Expander graphs, Expander graphs have previously led to new discoveries in theoretical computer science; group theory and other areas of math., Harald Helfgott of The University of Göttingen [Georg-August-Universität Göttingen](DE), Helfgott and Radziwiłł have expander graphs available for problems in number theory as well., Helfgott and Radziwiłł’s solution to the logarithmic Chowla conjecture marked a significant quantitative improvement on Tao’s result., Linking two arithmetic operations that usually live independently of one another., Liouville function, Maksym Radziwiłł of The California Institute of Technology (US), Many of number theory’s most important problems arise when mathematicians think about how multiplication and addition relate in terms of the prime numbers., , Problems about primes that involve addition have plagued mathematicians for centuries., Proving the Chowla conjecture is a “sort of warmup or steppingstone” to answering those more intractable problems., Quanta Magazine (US), Tao proved an easier version of the problem called the logarithmic Chowla conjecture., Terence Tao of The University of California-Los Angeles (US), The primes themselves are defined in terms of multiplication: They’re divisible by no numbers other than themselves and 1., The twin primes conjecture asserts that there are infinitely many primes that differ by only 2 (like 11 and 13)., There could be this vast conspiracy that every time a number n decides to be prime it has some secret agreement with its neighbor n + 2 saying you’re not allowed to be prime anymore., This work has given mathematicians another set of tools for understanding arithmetic’s fundamental building blocks-the prime numbers., When multiplied together they construct the rest of the integers.   

    From Quanta Magazine (US): “Mathematicians Outwit Hidden Number Conspiracy” 

    From Quanta Magazine (US)

    January 3, 2022
    Jordana Cepelewicz

    A new proof has debunked a conspiracy that mathematicians feared might haunt the number line. In doing so, it has given them another set of tools for understanding arithmetic’s fundamental building blocks: the prime numbers.

    In a paper posted last March, Harald Helfgott of The University of Göttingen [Georg-August-Universität Göttingen](DE) and Maksym Radziwiłł of The California Institute of Technology (US) presented an improved solution to a particular formulation of the Chowla conjecture, a question about the relationships between integers.

    The conjecture predicts that whether one integer has an even or odd number of prime factors does not influence whether the next or previous integer also has an even or odd number of prime factors. That is, nearby numbers do not collude about some of their most basic arithmetic properties.

    That seemingly straightforward inquiry is intertwined with some of math’s deepest unsolved questions about the primes themselves. Proving the Chowla conjecture is a “sort of warmup or steppingstone” to answering those more intractable problems, said Terence Tao of The University of California-Los Angeles (US).

    Terence Tao developed a strategy for using expander graphs to answer a version of the Chowla conjecture but couldn’t quite make it work. Courtesy of UCLA.

    And yet for decades, that warmup was a nearly impossible task itself. It was only a few years ago that mathematicians made any progress, when Tao proved an easier version of the problem called the logarithmic Chowla conjecture. But while the technique he used was heralded as innovative and exciting, it yielded a result that was not precise enough to help make additional headway on related problems, including ones about the primes. Mathematicians hoped for a stronger and more widely applicable proof instead.

    Now, Helfgott and Radziwiłł have provided just that. Their solution, which pushes techniques from graph theory squarely into the heart of number theory, has reignited hope that the Chowla conjecture will deliver on its promise — ultimately leading mathematicians to the ideas they’ll need to confront some of their most elusive questions.

    Conspiracy Theories

    Many of number theory’s most important problems arise when mathematicians think about how multiplication and addition relate in terms of the prime numbers.

    The primes themselves are defined in terms of multiplication: They’re divisible by no numbers other than themselves and 1, and when multiplied together they construct the rest of the integers. But problems about primes that involve addition have plagued mathematicians for centuries. For instance, the twin primes conjecture asserts that there are infinitely many primes that differ by only 2 (like 11 and 13). The question is challenging because it links two arithmetic operations that usually live independently of one another. “It’s difficult because we are mixing two worlds,” said Oleksiy Klurman of The University of Bristol (UK).

    Maksym Radziwiłł. Credit: Caltech.

    Harald Helfgott. Credit: University of Göttingen.

    Intuition tells mathematicians that adding 2 to a number should completely change its multiplicative structure — meaning there should be no correlation between whether a number is prime (a multiplicative property) and whether the number two units away is prime (an additive property). Number theorists have found no evidence to suggest that such a correlation exists, but without a proof, they can’t exclude the possibility that one might emerge eventually.

    “For all we know, there could be this vast conspiracy that every time a number n decides to be prime, it has some secret agreement with its neighbor n + 2 saying you’re not allowed to be prime anymore,” said Tao.

    No one has come close to ruling out such a conspiracy. That’s why, in 1965, Sarvadaman Chowla formulated a slightly easier way to think about the relationship between nearby numbers. He wanted to show that whether an integer has an even or odd number of prime factors — a condition known as the “parity” of its number of prime factors — should not in any way bias the number of prime factors of its neighbors.

    This statement is often understood in terms of the Liouville function, which assigns integers a value of −1 if they have an odd number of prime factors (like 12, which is equal to 2 × 2 × 3) and +1 if they have an even number (like 10, which is equal to 2 × 5). The conjecture predicts that there should be no correlation between the values that the Liouville function takes for consecutive numbers.
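
    For readers who want to play with the Liouville function, the sketch below computes it by trial division and estimates the consecutive-value correlation that Chowla’s conjecture says should vanish. The cutoff is an arbitrary choice for illustration; a finite computation like this can suggest, but never prove, the conjecture.

```python
# The Liouville function and a crude check of the Chowla-style correlation.
def big_omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

def liouville(n):
    """+1 if n has an even number of prime factors, -1 if odd."""
    return -1 if big_omega(n) % 2 else 1

assert liouville(12) == -1      # 12 = 2 * 2 * 3, an odd number of prime factors
assert liouville(10) == +1      # 10 = 2 * 5, an even number

N = 200_000
lam = [liouville(n) for n in range(1, N + 2)]
print("mean of lambda(n):             %+.4f" % (sum(lam[:-1]) / N))
print("mean of lambda(n)*lambda(n+1): %+.4f" % (sum(lam[i] * lam[i + 1] for i in range(N)) / N))
```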

    Many state-of-the-art methods for studying prime numbers break down when it comes to measuring parity, which is precisely what Chowla’s conjecture is all about. Mathematicians hoped that by solving it, they’d develop ideas they could apply to problems like the twin primes conjecture.

    For years, though, it remained no more than that: a fanciful hope. Then, in 2015, everything changed.

    Dispersing Clusters

    Radziwiłł and Kaisa Matomäki of The University of Turku [Turun yliopisto](FI) didn’t set out to solve the Chowla conjecture. Instead, they wanted to study the behavior of the Liouville function over short intervals. They already knew that, on average, the function is +1 half the time and −1 half the time. But it was still possible that its values might cluster, cropping up in long concentrations of either all +1s or all −1s.

    In 2015, Matomäki and Radziwiłł proved that those clusters almost never occur [Annals of Mathematics]. Their work, published the following year, established that if you choose a random number and look at, say, its hundred or thousand nearest neighbors, roughly half have an even number of prime factors and half an odd number.

    “That was the big piece that was missing from the puzzle,” said Andrew Granville of The University of Montreal [Université de Montréal](CA). “They made this unbelievable breakthrough that revolutionized the whole subject.”

    It was strong evidence that numbers aren’t complicit in a large-scale conspiracy — but the Chowla conjecture is about conspiracies at the finest level. That’s where Tao came in. Within months, he saw a way to build on Matomäki and Radziwiłł’s work to attack a version of the problem that’s easier to study, the logarithmic Chowla conjecture. In this formulation, smaller numbers are given larger weights so that they are just as likely to be sampled as larger integers.

    Tao had a vision for how a proof of the logarithmic Chowla conjecture might go. First, he would assume that the logarithmic Chowla conjecture is false — that there is in fact a conspiracy between the number of prime factors of consecutive integers. Then he’d try to demonstrate that such a conspiracy could be amplified: An exception to the Chowla conjecture would mean not just a conspiracy among consecutive integers, but a much larger conspiracy along entire swaths of the number line.

    He would then be able to take advantage of Radziwiłł and Matomäki’s earlier result, which had ruled out larger conspiracies of exactly this kind. A counterexample to the Chowla conjecture would imply a logical contradiction — meaning it could not exist, and the conjecture had to be true.

    But before Tao could do any of that, he had to come up with a new way of linking numbers.

    A Web of Lies

    Tao started by capitalizing on a defining feature of the Liouville function. Consider the numbers 2 and 3. Both have an odd number of prime factors and therefore share a Liouville value of −1. But because the Liouville function is multiplicative, multiples of 2 and 3 also have the same sign pattern as each other.

    That simple fact carries an important implication. If 2 and 3 both have an odd number of prime factors due to some secret conspiracy, then there’s also a conspiracy between 4 and 6 — numbers that differ not by 1 but by 2. And it gets worse from there: A conspiracy between adjacent integers would also imply conspiracies between all pairs of their multiples.

    “For any prime, these conspiracies will propagate,” Tao said.

    To better understand this widening conspiracy, Tao thought about it in terms of a graph — a collection of vertices connected by edges. In this graph, each vertex represents an integer. If two numbers differ by a prime and are also divisible by that prime, they’re connected by an edge.

    For example, consider the number 1001, which is divisible by the primes 7, 11 and 13. In Tao’s graph, it shares edges with 1,008, 1,012 and 1,014 (by addition), as well as with 994, 990 and 988 (by subtraction). Each of these numbers is in turn connected to many other vertices.
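
    The construction is easy to reproduce for small numbers. The sketch below builds the neighbor rule directly from a trial-division factorization and recovers the six neighbors of 1,001 listed above; it is only a finite toy, since the real difficulty lies with integers far too large to factor this way.

```python
# Neighbors of an integer in the graph described above.
def prime_factors(n):
    """Distinct prime factors of n, found by trial division."""
    out, d = set(), 2
    while d * d <= n:
        if n % d == 0:
            out.add(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

def tao_neighbors(n):
    """Integers differing from n by a prime that divides both."""
    nbrs = set()
    for p in prime_factors(n):
        nbrs.add(n + p)
        if n - p > 0:
            nbrs.add(n - p)
    return sorted(nbrs)

print(tao_neighbors(1001))      # [988, 990, 994, 1008, 1012, 1014]
```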

    Samuel Velasco/Quanta Magazine

    Taken together, those edges encode broader networks of influence: Connected numbers represent exceptions to Chowla’s conjecture in which the factorization of one integer actually does bias that of another.

    To prove his logarithmic version of the Chowla conjecture, Tao needed to show that this graph has too many connections to be a realistic representation of values of the Liouville function. In the language of graph theory, that meant showing that his graph of interconnected numbers had a specific property — that it was an “expander” graph.

    Expander Walks

    An expander is an ideal yardstick for measuring the scope of a conspiracy. It’s a highly connected graph, even though it has relatively few edges compared to its number of vertices. That makes it difficult to create a cluster of interconnected vertices that don’t interact much with other parts of the graph.

    If Tao could show that his graph was a local expander — that any given neighborhood on the graph had this property — he’d prove that a single breach of the Chowla conjecture would spread across the number line, a clear violation of Matomäki and Radziwiłł’s 2015 result.

    “The only way to have correlations is if the entire population sort of shares that correlation,” said Tao.

    Proving that a graph is an expander often translates to studying random walks along its edges. In a random walk, each successive step is determined by chance, as if you were wandering through a city and flipping a coin at each intersection to decide whether to turn left or right. If the streets of that city form an expander, it’s possible to get pretty much anywhere by taking random walks of relatively few steps.

    But walks on Tao’s graph are strange and circuitous. It’s impossible, for instance, to jump directly from 1,001 to 1,002; that requires at least three steps. A random walk along this graph starts at an integer, adds or subtracts a random prime that divides it, and moves to another integer.
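
    Such a walk is simple to simulate as long as the integers stay small enough to factor; the sketch below does so by trial division, which is exactly the step that becomes infeasible for the enormous integers the proof has to worry about.

```python
# Simulating the random walk described above (small integers only).
import random

def prime_divisors(n):
    """Distinct prime divisors of n, found by trial division."""
    out, d = [], 2
    while d * d <= n:
        if n % d == 0:
            out.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

def walk(start, steps, seed=0):
    rng = random.Random(seed)
    n, path = start, [start]
    for _ in range(steps):
        p = rng.choice(prime_divisors(n))
        # add or subtract the chosen prime; avoid walking down to 1 or below
        n = n + p if (n - p < 2 or rng.random() < 0.5) else n - p
        path.append(n)
    return path

print(walk(1001, 12))
```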

    It’s not obvious that repeating this process only a few times can lead to any point in a given neighborhood, which should be the case if the graph really is an expander. In fact, when the integers on the graph get big enough, it’s no longer clear how to even create random paths: Breaking numbers down into their prime factors — and therefore defining the graph’s edges — becomes prohibitively difficult.

    “It’s a scary thing, counting all these walks,” Helfgott said.

    When Tao tried to show that his graph was an expander, “it was a little too hard,” he said. He developed a new approach instead, based on a measure of randomness called entropy. This allowed him to circumvent the need to show the expander property — but at a cost.

    He could solve the logarithmic Chowla conjecture [Forum of Mathematics, Pi], but less precisely than he’d wanted to. In an ideal proof of the conjecture, independence between integers should always be evident, even along small sections of the number line. But with Tao’s proof, that independence doesn’t become visible until you sample over an astronomical number of integers.

    “It’s not quantitatively very strong,” said Joni Teräväinen of the University of Turku.

    Moreover, it wasn’t clear how to extend his entropy method to other problems.

    “Tao’s work was a complete breakthrough,” said James Maynard of The University of Oxford (UK), but because of those limitations, “it couldn’t possibly give those things that would lead to the natural next steps in the direction of problems more like the twin primes conjecture.”

    Five years later, Helfgott and Radziwiłł managed to do what Tao couldn’t — by extending the conspiracy he’d identified even further.

    Enhancing the Conspiracy

    Tao had built a graph that connected two integers if they differed by a prime and were divisible by that prime. Helfgott and Radziwiłł considered a new, “naïve” graph that did away with that second condition, connecting numbers merely if subtracting one from the other yielded a prime.

    The effect was an explosion of edges. On this naïve graph, 1,001 didn’t have just six connections with other vertices, it had hundreds. But the graph was also much simpler than Tao’s in a key way: Taking random walks along its edges didn’t require knowledge of the prime divisors of very large integers. That, along with the greater density of edges, made it much easier to demonstrate that any neighborhood in the naïve graph had the expander property — that you’re likely to get from any vertex to any other in a small number of random steps.
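
    The jump in connectivity is easy to see numerically. Within a window of integers around 1,001 (an arbitrary choice for illustration), the naive rule, differing by a prime, produces hundreds of neighbors where Tao’s stricter rule, differing by a prime that also divides both numbers, produced six.

```python
# Counting neighbors of 1,001 under the naive rule within an arbitrary window.
def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

n = 1001
window = range(1, 2 * n)
naive_neighbors = [m for m in window if m != n and is_prime(abs(m - n))]
print(len(naive_neighbors), "naive-graph neighbors vs. 6 in Tao's graph")
```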

    Helfgott and Radziwiłł needed to show that this naïve graph approximated Tao’s graph. If they could show that the two graphs were similar, they would be able to infer properties of Tao’s graph by looking at theirs instead. And because they already knew their graph was a local expander, they’d be able to conclude that Tao’s was, too (and therefore that the logarithmic Chowla conjecture was true).

    But given that the naïve graph had so many more edges than Tao’s, the resemblance was buried, if it existed at all.

    “What does it even mean when you’re saying these graphs look like each other?” Helfgott said.

    Hidden Resemblance

    While the graphs don’t look like each other on the surface, Helfgott and Radziwiłł set out to prove that they approximate each other by translating between two perspectives. In one, they looked at the graphs as graphs; in the other, they looked at them as objects called matrices.

    First they represented each graph as a matrix, which is an array of values that in this case encoded connections between vertices. Then they subtracted the matrix that represented the naïve graph from the matrix that represented Tao’s graph. The result was a matrix that represented the difference between the two.

    Helfgott and Radziwiłł needed to prove that certain parameters associated with this matrix, called eigenvalues, were all small. This is because a defining characteristic of an expander graph is that its associated matrix has one large eigenvalue while the rest are significantly smaller. If Tao’s graph, like the naïve one, was an expander, then it too would have one large eigenvalue — and those two large eigenvalues would nearly cancel out when one matrix was subtracted from the other, leaving a set of eigenvalues that were all small.
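
    Mechanically, the comparison looks like the toy below: build both adjacency matrices on a small window of integers, subtract them, and inspect the eigenvalues with a numerical linear algebra routine. The window, the unweighted adjacency matrices and the lack of any normalization are all simplifications; the actual proof deals with carefully weighted versions of these graphs in the limit of large integers, so the finite-window spectra here only illustrate the mechanics, not the result.

```python
# A toy version of the matrix comparison: adjacency matrices, their difference,
# and its eigenvalues (illustrative only; not the weighted graphs of the proof).
import numpy as np

def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

lo, hi = 1000, 1200                      # a small, arbitrary window of integers
verts = list(range(lo, hi))
size = len(verts)
A_naive = np.zeros((size, size))
A_tao = np.zeros((size, size))

for i, m in enumerate(verts):
    for j in range(i + 1, size):
        gap = verts[j] - m
        if is_prime(gap):
            A_naive[i, j] = A_naive[j, i] = 1
            if m % gap == 0:             # the extra divisibility condition in Tao's graph
                A_tao[i, j] = A_tao[j, i] = 1

diff_eigs = np.linalg.eigvalsh(A_tao - A_naive)
print("top eigenvalues, naive graph:", np.round(np.sort(np.linalg.eigvalsh(A_naive))[-3:], 2))
print("top eigenvalues, Tao graph:  ", np.round(np.sort(np.linalg.eigvalsh(A_tao))[-3:], 2))
print("largest |eigenvalue| of the difference:", round(float(np.max(np.abs(diff_eigs))), 2))
```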

    But eigenvalues are tricky to study by themselves. Instead, an equivalent way to prove that all the eigenvalues of this matrix were small involved a return to graph theory. And so, Helfgott and Radziwiłł converted this matrix (the difference between the matrices representing their naïve graph and Tao’s more complicated one) back into a graph itself.

    They then proved that this graph contained few random walks — of a certain length and in compliance with a handful of other properties — that looped back to their starting points. This implied that most random walks on Tao’s graph had essentially canceled out random walks on the naïve expander graph — meaning that the former could be approximated by the latter, and both were therefore expanders.

    A Way Forward

    Helfgott and Radziwiłł’s solution to the logarithmic Chowla conjecture marked a significant quantitative improvement on Tao’s result. They could sample over far fewer integers to arrive at the same outcome: The parity of the number of prime factors of an integer is not correlated with that of its neighbors.

    “That’s a very strong statement about how prime numbers and divisibility look random,” said Ben Green of Oxford.

    But the work is perhaps even more exciting because it provides “a natural way to attack the problem,” Matomäki said — exactly the intuitive approach that Tao first hoped for six years ago.

    Expander graphs have previously led to new discoveries in theoretical computer science, group theory and other areas of math. Now, Helfgott and Radziwiłł have made them available for problems in number theory as well. Their work demonstrates that expander graphs have the power to reveal some of the most basic properties of arithmetic — dispelling potential conspiracies and starting to disentangle the complex interplay between addition and multiplication.

    “Suddenly, when you’re using the graph language, it’s seeing all this structure in the problem that you couldn’t really see beforehand,” Maynard said. “That’s the magic.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 12:53 pm on December 10, 2021 Permalink | Reply
    Tags: "Gravitational Waves Should Permanently Distort Space-Time", , , , , , , , , Quanta Magazine (US)   

    From Quanta Magazine (US): “Gravitational Waves Should Permanently Distort Space-Time” 

    From Quanta Magazine (US)

    December 8, 2021
    Katie McCormick

    A black hole collision should forever scar space-time. Credit: Alfred Pasieka / Science Source.

    The first detection of gravitational waves in 2016 provided decisive confirmation of Einstein’s general theory of relativity. But another astounding prediction remains unconfirmed: According to general relativity, every gravitational wave should leave an indelible imprint on the structure of space-time. It should permanently strain space, displacing the mirrors of a gravitational wave detector even after the wave has passed.

    Caltech/MIT Advanced LIGO (aLIGO) detector installation, Livingston, LA, USA.

    Since that first detection almost six years ago, physicists have been trying to figure out how to measure this so-called “memory effect.”

    “The memory effect is absolutely a strange, strange phenomenon,” said Paul Lasky, an astrophysicist at Monash University (AU). “It’s really deep stuff.”

    Their goals are broader than just glimpsing the permanent space-time scars left by a passing gravitational wave. By exploring the links between matter, energy and space-time, physicists hope to come to a better understanding of Stephen Hawking’s black hole information paradox, which has been a major focus of theoretical research for going on five decades. “There’s an intimate connection between the memory effect and the symmetry of space-time,” said Kip Thorne, a physicist at The California Institute of Technology (US) whose work on gravitational waves earned him part of the 2017 Nobel Prize in Physics. “It is connected ultimately to the loss of information in black holes, a very deep issue in the structure of space and time.”

    A Scar in Space-Time

    Why would a gravitational wave permanently change space-time’s structure? It comes down to general relativity’s intimate linking of space-time and energy.

    First consider what happens when a gravitational wave passes by a gravitational wave detector. The Laser Interferometer Gravitational-Wave Observatory (LIGO) has two arms positioned in an L shape [see Livingston, LA installation above]. If you imagine a circle circumscribing the arms, with the center of the circle at the arms’ intersection, a gravitational wave will periodically distort the circle, squeezing it vertically, then horizontally, alternating until the wave has passed. The difference in length between the two arms will oscillate — behavior that reveals the distortion of the circle, and the passing of the gravitational wave.

    According to the memory effect, after the passing of the wave, the circle should remain permanently deformed by a tiny amount. The reason why has to do with the particularities of gravity as described by general relativity.

    The objects that LIGO detects are so far away, their gravitational pull is negligibly weak. But a gravitational wave has a longer reach than the force of gravity. So, too, does the property responsible for the memory effect: the gravitational potential.

    In simple Newtonian terms, a gravitational potential measures how much energy an object would gain if it fell from a certain height. Drop an anvil off a cliff, and the speed of the anvil at the bottom can be used to reconstruct the “potential” energy that falling off the cliff can impart.

    But in general relativity, where space-time is stretched and squashed in different directions depending on the motions of bodies, a potential dictates more than just the potential energy at a location — it dictates the shape of space-time.

    “The memory is nothing but the change in the gravitational potential,” said Thorne, “but it’s a relativistic gravitational potential.” The energy of a passing gravitational wave creates a change in the gravitational potential; that change in potential distorts space-time, even after the wave has passed.

    How, exactly, will a passing wave distort space-time? The possibilities are literally infinite, and, puzzlingly, these possibilities are also equivalent to one another. In this manner, space-time is like an infinite game of Boggle. The classic Boggle game has 16 six-sided dice arranged in a four-by-four grid, with a letter on each side of each die. Each time a player shakes the grid, the dice clatter around and settle into a new arrangement of letters. Most configurations are distinguishable from one another, but all are equivalent in a larger sense. They are all at rest in the lowest-energy state that the dice could possibly be in. When a gravitational wave passes through, it shakes the cosmic Boggle board, changing space-time from one wonky configuration to another. But space-time remains in its lowest-energy state.

    Super Symmetries

    That characteristic — that you can change the board, but in the end things fundamentally stay the same — suggests the presence of hidden symmetries in the structure of space-time. Within the past decade, physicists have explicitly made this connection.

    The story starts back in the 1960s, when four physicists wanted to better understand general relativity. They wondered what would happen in a hypothetical region infinitely far from all mass and energy in the universe, where gravity’s pull can be neglected, but gravitational radiation cannot. They started by looking at the symmetries this region obeyed.

    They already knew the symmetries of the world according to special relativity, where space-time is flat and featureless. In such a smooth world, everything looks the same regardless of where you are, which direction you’re facing, and the speed at which you’re moving. These properties correspond to the translational, rotational and boost symmetries, respectively. The physicists expected that infinitely far from all the matter in the universe, in a region referred to as “asymptotically flat,” these simple symmetries would reemerge.

    To their surprise, they found an infinite set of symmetries in addition to the expected ones. The new “supertranslation” symmetries indicated that individual sections of space-time could be stretched, squeezed and sheared, and the behavior in this infinitely distant region would remain the same.

    In the 1980s, Abhay Ashtekar, a physicist at The Pennsylvania State University (US), discovered that the memory effect was the physical manifestation of these symmetries. In other words, a supertranslation was exactly what would cause the Boggle universe to pick a new but equivalent way to warp space-time.

    His work connected these abstract symmetries in a hypothetical region of the universe to real effects. “To me that’s the exciting thing about measuring the memory effect — it’s just proving these symmetries are really physical,” said Laura Donnay, a physicist at The Vienna University of Technology (TU Wien)[Technische Universität Wien](AT). “Even very good physicists don’t quite grasp that they act in a nontrivial way and give you physical effects. And the memory effect is one of them.”

    Probing a Paradox

    The point of the Boggle game is to search the seemingly random arrangement of letters on the grid to find words. Each new configuration hides new words, and hence new information.

    Like Boggle, space-time has the potential to store information, which could be the key to solving the infamous black hole information paradox. Briefly, the paradox is this: Information cannot be created or destroyed. So where does the information about particles go after they fall into a black hole and are re-emitted as information-less Hawking radiation?

    In 2016, Andrew Strominger, a physicist at Harvard University (US), along with Stephen Hawking [The University of Cambridge (UK)] and Malcolm Perry [The University of Cambridge (UK) and Queen Mary University of London (UK)] realized that the horizon of a black hole has the same supertranslation symmetries as those in asymptotically flat space. And by the same logic as before, there would be an accompanying memory effect. This meant the infalling particles could alter space-time near the black hole, thereby changing its information content. This offered a possible solution to the information paradox. Knowledge of the particles’ properties wasn’t lost — it was permanently encoded in the fabric of space-time.

    “The fact that you can say something interesting about black hole evaporation is pretty cool,” said Sabrina Pasterski, a theoretical physicist at Princeton University (US). “The starting point of the framework has already had interesting results. And now we’re pushing the framework even further.”

    Pasterski and others have launched a new research program relating statements about gravity and other areas of physics to these infinite symmetries. In chasing the connections, they’ve discovered new, exotic memory effects. Pasterski established a connection between a different set of symmetries and a spin memory effect, where space-time becomes gnarled and twisted from gravitational waves that carry angular momentum.

    A Ghost in the Machine

    Alas, LIGO scientists haven’t yet seen evidence of the memory effect. The change in the distance between LIGO’s mirrors from a gravitational wave is minuscule — about one-thousandth the width of a proton — and the memory effect is predicted to be 20 times smaller.
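
    Those scales follow from simple arithmetic. The sketch below uses round numbers (a 4-kilometer arm, a peak strain of about 1e-21, a proton width of roughly 1.7e-15 meters); none of them are taken from a specific detection:

```python
# Back-of-envelope arithmetic for the scales quoted above (all values rough).
arm_length = 4000.0        # LIGO arm length, meters
strain = 1e-21             # rough peak strain of a detected wave
proton_width = 1.7e-15     # approximate proton diameter, meters

wave_displacement = strain * arm_length            # about 4e-18 meters
memory_displacement = wave_displacement / 20.0     # memory effect roughly 20x smaller

print(wave_displacement / proton_width)   # of order a thousandth of a proton width
print(memory_displacement)                # about 2e-19 meters of permanent offset
```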

    LIGO’s placement on our noisy planet worsens matters. Low-frequency seismic noise mimics the memory effect’s long-term changes in the mirror positions, so disentangling the signal from noise is tricky business.

    Earth’s gravitational pull also tends to restore LIGO’s mirrors to their original position, erasing its memory. So even though the kinks in space-time are permanent, the changes in the mirror positions — which enable us to measure the kinks — are not. Researchers will need to measure the displacement of the mirrors caused by the memory effect before gravity has time to pull them back down.

    While detecting the memory effect caused by a single gravitational wave is infeasible with current technology, astrophysicists like Lasky and Patricia Schmidt of The University of Birmingham (UK) have thought up clever workarounds. “What you can do is effectively stack up the signal from multiple mergers,” said Lasky, “accumulating evidence in a very statistically rigorous way.”

    Lasky and Schmidt have independently predicted that they’ll need over 1,000 gravitational wave events to accumulate enough statistics to confirm they’ve seen the memory effect. With ongoing improvements to LIGO, as well as contributions from the VIRGO detector in Italy and KAGRA in Japan, Lasky thinks reaching 1,000 detections is a few short years away.
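
    The logic of stacking can be sketched roughly: if each merger contributes a small amount of statistical evidence for the memory effect, the combined significance grows as the square root of the number of events. The per-event signal-to-noise value below is an assumed placeholder, not a figure from Lasky's or Schmidt's analyses:

```python
# Toy estimate of "stacking" mergers: combined evidence grows roughly as the
# square root of the number of events.
import math

snr_per_event = 0.16        # assumed memory-effect signal-to-noise of one merger
threshold = 5.0             # a conventional threshold for a confident detection

events_needed = math.ceil((threshold / snr_per_event) ** 2)
print(events_needed)        # roughly 1,000 events for this choice of per-event SNR
```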

    _____________________________________________________________________________________
    LIGO | Virgo | KAGRA

    [Detector images: the Caltech/MIT Advanced LIGO installations at Livingston, LA and Hanford, WA; the Virgo gravitational-wave interferometer near Pisa, Italy; and the KAGRA Large-scale Cryogenic Gravitational Wave Telescope in Japan.]
    _____________________________________________________________________________________

    LIGO-Virgo-KAGRA masses in the stellar graveyard. Credit: Frank Elavsky and Aaron Geller, Northwestern University (US).

    “It is such a special prediction,” said Schmidt. “It’s quite exciting to see if it’s actually true.”

    Correction: December 9, 2021
    The original version of this article attributed the original discovery of the connection between supertranslation symmetries and the memory effect to Andrew Strominger in 2014. In fact, that connection had previously been known. The 2014 discovery by Strominger was between supertranslation symmetries, the memory effect and a third topic.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine (US) is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:16 pm on November 12, 2021 Permalink | Reply
    Tags: "A New Theory for Systems That Defy Newton’s Third Law", , , , Quanta Magazine (US)   

    From Quanta Magazine (US) : “A New Theory for Systems That Defy Newton’s Third Law” 

    From Quanta Magazine (US)

    November 11, 2021
    Stephen Ornes

    By programming a fleet of robots to behave nonreciprocally — blue cars react to red cars differently than red cars react to blue cars — a team of researchers elicited spontaneous phase transitions. Credit: Kristen Norman for Quanta Magazine.

    Newton’s third law tells us that for every action, there’s an equal reaction going the opposite way. It’s been reassuring us for more than 300 years, explaining why we don’t fall through the floor (the floor pushes up on us too), and why paddling a boat makes it glide through water. When a system is in equilibrium, no energy goes in or out and such reciprocity is the rule. Mathematically, these systems are elegantly described with statistical mechanics, the branch of physics that explains how collections of objects behave. This allows researchers to fully model the conditions that give rise to phase transitions in matter, when one state of matter transforms into another, such as when water freezes.

    But many systems exist and persist far from equilibrium. Perhaps the most glaring example is life itself. We’re kept out of equilibrium by our metabolism, which converts matter into energy. A human body that settles into equilibrium is a dead body.

    In such systems, Newton’s third law becomes moot. Equal-and-opposite falls apart. “Imagine two particles,” said Vincenzo Vitelli, a condensed matter theorist at The University of Chicago (US), “where A interacts with B in a different way than how B interacts with A.” Such nonreciprocal relationships show up in systems like neuron networks and particles in fluids and even, on a larger scale, in social groups. Predators eat prey, for example, but prey doesn’t eat its predators.

    For these unruly systems, statistical mechanics falls short in representing phase transitions. Out of equilibrium, nonreciprocity dominates. Flocking birds show how easily the law is broken: Because they can’t see behind them, individuals change their flight patterns in response to the birds ahead of them. So bird A doesn’t interact with bird B in the same way that bird B interacts with bird A; it’s not reciprocal. Cars barreling down a highway or stuck in traffic are similarly nonreciprocal. Engineers and physicists who work with metamaterials — which get their properties from structure, rather than substance — have harnessed nonreciprocal elements to design acoustic, quantum and mechanical devices.

    Many of these systems are kept out of equilibrium because individual constituents have their own power source — ATP for cells, gas for cars. But all these extra energy sources and mismatched reactions make for a complex dynamical system beyond the reach of statistical mechanics. How can we analyze phases in such ever-changing systems?

    Vitelli and his colleagues see an answer in mathematical objects called exceptional points. Generally, an exceptional point in a system is a singularity, a spot where two or more characteristic properties become indistinguishable and mathematically collapse into one. At an exceptional point, the mathematical behavior of a system differs dramatically from its behavior at nearby points, and exceptional points often describe curious phenomena in systems — like lasers — in which energy is gained and lost continuously.
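
    A standard textbook illustration of an exceptional point (not the model studied in the new paper) is a two-by-two non-Hermitian matrix describing two coupled modes, one gaining and one losing energy at a rate g. As g is tuned, the two eigenvalues and their eigenvectors merge into one at g = 1, then split into a complex pair:

```python
# Textbook example of an exceptional point in a 2x2 non-Hermitian matrix.
import numpy as np

def eigenvalues(g):
    # two modes coupled with strength 1; one gains and one loses at rate g
    H = np.array([[1j * g, 1.0],
                  [1.0, -1j * g]])
    return np.linalg.eigvals(H)

for g in (0.5, 1.0, 1.5):
    print(f"g = {g}: eigenvalues = {np.round(eigenvalues(g), 3)}")

# Below g = 1 the eigenvalues are real and distinct; at g = 1 they (and their
# eigenvectors) coalesce into one, the exceptional point; above g = 1 they
# split into a complex-conjugate pair.
```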

    Now the team has found [Nature] that these exceptional points also control phase transitions in nonreciprocal systems. Exceptional points aren’t new; physicists and mathematicians have studied them for decades in a variety of settings. But they’ve never been associated so generally with this type of phase transition. “That’s what no one has thought about before, using these in the context of nonequilibrium systems,” said the physicist Cynthia Reichhardt of DOE’s Los Alamos National Laboratory (US). “So you can bring all the machinery that we already have about exceptional points to study these systems.”

    The new work also draws connections among a range of areas and phenomena that, for years, haven’t seemed to have anything to say to each other. “I believe their work represents rich territory for mathematical development,” said Robert Kohn of the Courant Institute of Mathematical Sciences at New York University (US).

    When Symmetry Breaks

    The work began not with birds or neurons, but with quantum weirdness. A few years ago, two of the authors of the new paper — Ryo Hanai, a postdoctoral researcher at The University of Chicago (US), and Peter Littlewood, Hanai’s adviser — were investigating a kind of quasiparticle called a polariton. (Littlewood is on the scientific advisory board of the Flatiron Institute Center for Computational Astrophysics (US), a research division of The Simons Foundation (US), which also funds this editorially independent publication.)

    A quasiparticle isn’t a particle per se. It’s a collection of quantum behaviors that, en masse, look as if they should be connected to a particle. A polariton shows up when photons (the particles responsible for light) couple with excitons (which themselves are quasiparticles). Polaritons have exceptionally low mass, which means they can move very fast and can form a state of matter called a Bose-Einstein condensate (BEC) — in which separate atoms all collapse into a single quantum state — at higher temperatures than other particles.

    However, using polaritons to create a BEC is complicated, because the system is leaky. Some photons continuously escape, which means light must be pumped continuously into the system to make up the difference. That means it’s out of equilibrium. “From the theory side, that’s what was interesting to us,” said Hanai.

    To Hanai and Littlewood, it was analogous to creating lasers. “Photons are leaking out all the time, but nonetheless you maintain some coherent state,” said Littlewood. This is because of the constant addition of new energy powering the laser. They wanted to know: How does being out of equilibrium affect the transition into BEC or other exotic quantum states of matter? And, in particular, how does that change affect the system’s symmetry?

    The concept of symmetry is at the heart of phase transitions. Liquids and gases are considered highly symmetric because if you found yourself hurtling through them in a molecule-size jet, the spray of particles would look the same in every direction. Fly your ship through a crystal or other solid, though, and you’ll see that molecules occupy straight rows, with the patterns you see determined by where you are. When a material changes from a liquid or gas to a solid, researchers say its symmetry “breaks.”

    In physics, one of the most well-studied phase transitions shows up in magnetic materials. The atoms in a magnetic material like iron or nickel each have something called a magnetic moment, which is basically a tiny individual magnetic field. In magnets, these magnetic moments all point in the same direction and collectively produce a magnetic field. But if you heat the material enough — even with a candle, in high school science demonstrations — those magnetic moments become jumbled. Some point one way, and others a different way. The overall magnetic field is lost, and symmetry is restored. When it cools, the moments again align, breaking that free-form symmetry, and magnetism is restored.
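
    The simplest mathematical cartoon of that transition is the mean-field magnet, sketched below in units where the critical temperature equals 1. It is an illustrative toy, not a model of any particular material:

```python
# Mean-field sketch of the magnet example: below a critical temperature the
# moments align (broken symmetry); above it the net magnetization vanishes.
import numpy as np

def magnetization(T, iterations=2000):
    m = 1.0                   # start with the moments fully aligned
    for _ in range(iterations):
        m = np.tanh(m / T)    # mean-field self-consistency condition
    return m

for T in (0.5, 0.9, 1.1, 1.5):
    print(f"T = {T}: magnetization ~ {magnetization(T):.3f}")

# Below the critical temperature (T = 1 in these units) the moments lock into
# a common direction; above it the net magnetization melts away to zero.
```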

    The flocking of birds can also be viewed as a breaking of symmetry: Instead of flying in random directions, they align like the spins in a magnet. But there is an important difference: A ferromagnetic phase transition is easily explained using statistical mechanics because it’s a system in equilibrium.

    But birds — and cells, bacteria and cars in traffic — add new energy to the system. “Because they have a source of internal energy, they behave differently,” said Reichhardt. “And because they don’t conserve energy, it appears out of nowhere, as far as the system is concerned.”

    Beyond Quantum

    Hanai and Littlewood started their investigation into BEC phase transitions by thinking about ordinary, well-known phase transitions. Consider water: Even though liquid water and steam look different, Littlewood said, there’s basically no symmetry distinction between them. Mathematically, at the point of the transition, the two states are indistinguishable. In a system in equilibrium, that point is called a critical point.

    Critical phenomena show up all over the place — in cosmology, high-energy physics, even biological systems. But in all these examples, researchers couldn’t find a good model for the condensates that form when quantum mechanical systems are coupled to the environment, undergoing constant damping and pumping.

    Hanai and Littlewood suspected that critical points and exceptional points had to share some important properties, even if they clearly arose from different mechanisms. “Critical points are sort of an interesting mathematical abstraction,” said Littlewood, “where you can’t tell the difference between these two phases. Exactly the same thing happens in these polariton systems.”

    They also knew that under the mathematical hood, a laser — technically a state of matter — and a polariton-exciton BEC had the same underlying equations. In a paper published in 2019 [Physical Review Letters], the researchers connected the dots, proposing a new and, crucially, universal mechanism by which exceptional points give rise to phase transitions in quantum dynamical systems.

    “We believe that was the first explanation for those transitions,” said Hanai.

    At about the same time, Hanai said, they realized that even though they were studying a quantum state of matter, their equations weren’t dependent on quantum mechanics. Did the phenomenon they were studying apply to even bigger and more general phenomena? “We started to suspect that this idea [connecting a phase transition to an exceptional point] could be applied to classical systems as well.”

    But to chase that idea, they’d need help. They approached Vitelli and Michel Fruchart, a postdoctoral researcher in Vitelli’s lab, who study unusual symmetries in the classical realm. Their work extends to metamaterials, which are rich in nonreciprocal interactions; they may, for example, exhibit different reactions to being pressed on one side or another and can also display exceptional points.

    Vitelli and Fruchart were immediately intrigued. Was some universal principle playing out in the polariton condensate, some fundamental law about systems where energy isn’t conserved?

    Getting in Sync

    Now a quartet, the researchers began looking for general principles underpinning the connection between nonreciprocity and phase transitions. For Vitelli, that meant thinking with his hands. He has a habit of building physical mechanical systems to illustrate difficult, abstract phenomena. In the past, for example, he’s used Legos to build lattices that become topological materials that move differently on the edges than in the interior.

    “Even though what we’re talking about is theoretical, you can demonstrate it with toys,” he said.

    But for exceptional points, he said, “Legos aren’t enough.” He realized that it would be easier to model nonreciprocal systems using building blocks that could move on their own but were governed by nonreciprocal rules of interaction.

    So the team whipped up a fleet of two-wheeled robots programmed to behave nonreciprocally. These robot assistants are small, cute and simple. The team programmed them all with certain color-coded behaviors. Red ones would align with other reds, and the blues with other blues. But here’s the nonreciprocity: The red ones would also orient themselves in the same directions as the blues, while the blues would point in the opposite direction of reds. This arrangement guarantees that no agent will ever get what it wants.


    Each robot is programmed to align with others of the same color, but they’re also programmed to behave nonreciprocally: Red ones want to align with blue ones, while blue ones want to point in the opposite direction. The result is a spontaneous phase transition, as they all begin rotating in place.

    The group scattered the robots across the floor and turned them all on at the same time. Almost immediately, a pattern emerged. The robots began to move, turning slowly but simultaneously, until they were all rotating, basically in place, in the same direction. Rotation wasn’t built into the robots, Vitelli said. “It’s due to all these frustrated interactions. They’re perpetually frustrated in their motions.”
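
    The flavor of those frustrated interactions can be captured in a toy simulation, inspired by the rules described above but not taken from the team's actual robot code: each agent carries a heading angle, reds turn toward both reds and blues, and blues turn toward other blues while turning away from reds:

```python
# Toy nonreciprocal-alignment simulation (illustrative, not the team's code).
import numpy as np

rng = np.random.default_rng(0)
n = 20                                            # agents per color
red = 0.0 + 0.3 * rng.standard_normal(n)          # red headings start roughly aligned
blue = np.pi / 2 + 0.3 * rng.standard_normal(n)   # blues start a quarter-turn away
k, dt = 1.0, 0.05                                 # coupling strength and time step

def pull(angles, targets):
    # average torque turning each angle toward the directions of `targets`
    return np.mean(np.sin(targets[None, :] - angles[:, None]), axis=1)

for _ in range(2000):
    d_red = k * (pull(red, red) + pull(red, blue))     # reds align with reds and blues
    d_blue = k * (pull(blue, blue) - pull(blue, red))  # blues align with blues, flee reds
    red += dt * d_red
    blue += dt * d_blue

# Neither group ever settles: both end up turning together at a steady rate.
print(f"red group turning at {np.mean(d_red):.2f} rad/s, "
      f"blue group at {np.mean(d_blue):.2f} rad/s")
```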

    It’s tempting to let the charm of a fleet of spinning, frustrated robots overshadow the underlying theory, but those rotations exactly demonstrated a phase transition for a system out of equilibrium. And the symmetry-breaking that they demonstrated lines up mathematically with the same phenomenon Hanai and Littlewood found when looking at exotic quantum condensates.

    To better explore that comparison, the researchers turned to the mathematical field of bifurcation theory. A bifurcation is a qualitative change in the behavior of a dynamical system, often taking the form of one state splitting into two.


    The researchers also created simulations of two groups of agents moving at constant speed with different relationships to each other. At left, the two groups move randomly. In the next frame, blue and red agents fly in the same direction, spontaneously breaking symmetry and displaying flocking behavior. When the two groups fly in opposite directions, there’s a similar antiflocking phase. In a nonreciprocal situation, at right, a new phase appears where they run in circles — another case of spontaneous symmetry breaking.

    Mathematicians draw bifurcation diagrams (the simplest look like pitchforks) to analyze how the states of a system respond to changes in their parameters. Often, a bifurcation divides stability from instability; it may also divide different types of stable states. It’s useful in studying systems associated with mathematical chaos, where small changes in the starting point (one parameter at the outset) can trigger outsize changes in the outcomes. The system shifts from non-chaotic to chaotic behaviors through a cascade of bifurcation points. Bifurcations have a long-standing connection to phase transitions, and the four researchers built on that link to better understand nonreciprocal systems.
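
    The simplest pitchfork can be written down directly. For the flow dx/dt = r*x - x^3, a single stable state at x = 0 splits into two symmetric stable states at x = ±√r as the control parameter r crosses zero. A short numerical sketch:

```python
# Pitchfork bifurcation of the flow dx/dt = r*x - x**3.
def settle(r, x0, dt=0.01, steps=20000):
    # integrate the flow until the state stops changing
    x = x0
    for _ in range(steps):
        x += dt * (r * x - x**3)
    return x

for r in (-0.5, 0.5):
    up = settle(r, x0=+0.1)
    down = settle(r, x0=-0.1)
    print(f"r = {r}: steady states near {up:+.3f} and {down:+.3f}")

# Below the bifurcation (r < 0) both starting points collapse onto the single
# state x = 0; above it (r > 0) they split into the pair x = +/- sqrt(r).
```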

    That meant they also had to think about the energy landscape. In statistical mechanics, the energy landscape of a system shows how energy changes form (such as from potential to kinetic) in space. At equilibrium, phases of matter correspond to the minima — the valleys — of the energy landscape. But this interpretation of phases of matter requires the system to end up at those minima, said Fruchart.

    Vitelli said perhaps the most important aspect of the new work is that it reveals the limitations of the existing language that physicists and mathematicians use to describe systems in flux. When equilibrium is a given, he said, statistical mechanics frames the behavior and phenomena in terms of minimizing the energy — since no energy is added or lost. But when a system is out of equilibrium, “by necessity, you can no longer describe it with our familiar energy language, but you still have a transition between collective states,” he said. The new approach relaxes the fundamental assumption that to describe a phase transition you must minimize energy.

    “When we assume there is no reciprocity, we can no longer define our energy,” Vitelli said, “and we have to recast the language of these transitions into the language of dynamics.”

    Looking for Exotic Phenomena

    The work has wide implications. To demonstrate how their ideas work together, the researchers analyzed a range of nonreciprocal systems. Because the kinds of phase transitions they’ve connected to exceptional points can’t be described by energy considerations, these exceptional-point symmetry shifts can only occur in nonreciprocal systems. That suggests that beyond reciprocity lie a range of phenomena in dynamical systems that could be described with the new framework.

    And now that they’ve laid the foundation, Littlewood said, they’ve begun to investigate just how widely it can be applied. “We’re beginning to generalize this to other dynamical systems we didn’t think had the same properties,” he said.

    Vitelli said almost any dynamical system with nonreciprocal behaviors would be worth probing with this new approach. “It’s really a step towards a general theory of collective phenomena in systems whose dynamics is not governed by an optimization principle.”

    Littlewood said he’s most excited about looking for phase transitions in one of the most complicated dynamical systems of all — the human brain. “Where we’re going next is neuroscience,” he said. He points out that neurons have been shown to come in “many flavors,” sometimes excited, sometimes inhibited. “That is nonreciprocal, pretty clearly.” That means their connections and interactions might be accurately modeled using bifurcations, and by looking for phase transitions in which the neurons synchronize and show cycles. “It’s a really exciting direction we’re exploring,” he said, “and the mathematics works.”

    Mathematicians are excited too. Kohn, at the Courant Institute, said the work may have connections to other mathematical topics — like turbulent transport or fluid flow — that researchers haven’t yet recognized. Nonreciprocal systems may turn out to exhibit phase transitions or other spatial patterns for which an appropriate mathematical language is currently lacking.

    “This work may be full of new opportunities, and maybe we’ll need new math,” Kohn said. “That’s sort of the heart of how mathematics and physics connect, to the benefit of both. Here’s a sandbox that we haven’t noticed so far, and here’s a list of things we might do.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine (US) is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     