Tagged: Quanta Magazine

  • richardmitnick 3:21 pm on November 26, 2017 Permalink | Reply
    Tags: Einstein, ER - for Einstein-Rosen bridges, ER = EPR - (the EPR paradox named for its authors, Einstein, Boris Podolsky and Nathan Rosen), Eventually Susskind — in a discovery that shocked even him — realized (with Gerard ’t Hooft) that all the information that fell down the hole was actually trapped on the black hole’s two-dimensional event horizon, Quanta Magazine, The particles still inside the hole would be directly connected to particles that left long ago

    From Quanta: “Wormholes Untangle a Black Hole Paradox” [from 2015, but worth it]

    Quanta Magazine

    April 24, 2015
    K.C. Cole

    Hannes Hummel for Quanta Magazine

    One hundred years after Albert Einstein developed his general theory of relativity, physicists are still stuck with perhaps the biggest incompatibility problem in the universe. The smoothly warped space-time landscape that Einstein described is like a painting by Salvador Dalí — seamless, unbroken, geometric. But the quantum particles that occupy this space are more like something from Georges Seurat: pointillist, discrete, described by probabilities. At their core, the two descriptions contradict each other. Yet a bold new strain of thinking suggests that quantum correlations between specks of impressionist paint actually create not just Dalí’s landscape, but the canvases that both sit on, as well as the three-dimensional space around them. And Einstein, as he so often does, sits right in the center of it all, still turning things upside-down from beyond the grave.

    Like initials carved in a tree, ER = EPR, as the new idea is known, is a shorthand that joins two ideas proposed by Einstein in 1935. One involved the paradox implied by what he called “spooky action at a distance” between quantum particles (the EPR paradox, named for its authors, Einstein, Boris Podolsky and Nathan Rosen). The other showed how two black holes could be connected through far reaches of space through “wormholes” (ER, for Einstein-Rosen bridges). At the time that Einstein put forth these ideas — and for most of the eight decades since — they were thought to be entirely unrelated.

    When Einstein, Podolsky and Rosen published their seminal paper pointing out puzzling features of what we now call entanglement, The New York Times treated it as front-page news. The New York Times

    But if ER = EPR is correct, the ideas aren’t disconnected — they’re two manifestations of the same thing. And this underlying connectedness would form the foundation of all space-time. Quantum entanglement — the action at a distance that so troubled Einstein — could be creating the “spatial connectivity” that “sews space together,” according to Leonard Susskind, a physicist at Stanford University and one of the idea’s main architects. Without these connections, all of space would “atomize,” according to Juan Maldacena, a physicist at the Institute for Advanced Study in Princeton, N.J., who developed the idea together with Susskind. “In other words, the solid and reliable structure of space-time is due to the ghostly features of entanglement,” he said. What’s more, ER = EPR has the potential to address how gravity fits together with quantum mechanics.

    Not everyone’s buying it, of course (nor should they; the idea is in “its infancy,” said Susskind). Joe Polchinski, a researcher at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, whose own stunning paradox about firewalls in the throats of black holes triggered the latest advances, is cautious, but intrigued. “I don’t know where it’s going,” he said, “but it’s a fun time right now.”

    The Black Hole Wars

    Juan Maldacena at the Institute for Advanced Study in Princeton, N.J. Andrea Kane/Institute for Advanced Study

    The road that led to ER = EPR is a Möbius strip of tangled twists and turns that folds back on itself, like a drawing by M.C. Escher.

    A fair place to start might be quantum entanglement. If two quantum particles are entangled, they become, in effect, two parts of a single unit. What happens to one entangled particle happens to the other, no matter how far apart they are.

    Maldacena sometimes uses a pair of gloves as an analogy: If you come upon the right-handed glove, you instantaneously know the other is left-handed. There’s nothing spooky about that. But in the quantum version, both gloves are actually left- and right-handed (and everything in between) up until the moment you observe them. Spookier still, the left-handed glove doesn’t become left until you observe the right-handed one — at which moment both instantly gain a definite handedness.
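
    In quantum notation, the glove analogy corresponds to a maximally entangled two-particle (Bell) state. The expression below is a standard textbook illustration rather than a formula from the article:

    |\Psi\rangle = ( |L\rangle_A |R\rangle_B + |R\rangle_A |L\rangle_B ) / \sqrt{2}

    Neither glove has a definite handedness on its own, but finding glove A right-handed instantly leaves glove B in the left-handed state, however far apart the two are.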

    Entanglement played a key role in Stephen Hawking’s 1974 discovery that black holes could evaporate. This, too, involved entangled pairs of particles. Throughout space, short-lived “virtual” particles of matter and anti-matter continually pop into and out of existence. Hawking realized that if one particle fell into a black hole and the other escaped, the hole would emit radiation, glowing like a dying ember. Given enough time, the hole would evaporate into nothing, raising the question of what happened to the information content of the stuff that fell into it.
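
    For scale, the glow Hawking predicted has a temperature given by a standard formula (quoted here for context, not taken from the article):

    T_H = \hbar c^{3} / (8 \pi G M k_B)

    Because T_H is inversely proportional to the mass M, a solar-mass black hole glows at only about 60 nanokelvin, and its complete evaporation would take vastly longer than the current age of the universe.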

    But the rules of quantum mechanics forbid the complete destruction of information. (Hopelessly scrambling information is another story, which is why documents can be burned and hard drives smashed. There’s nothing in the laws of physics that prevents the information lost in a book’s smoke and ashes from being reconstructed, at least in principle.) So the question became: Would the information that originally went into the black hole just get scrambled? Or would it be truly lost? The arguments set off what Susskind called the “black hole wars,” which have generated enough stories to fill many books. (Susskind’s was subtitled My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics.)

    Leonard Susskind at home in Palo Alto, Calif. Jeff Singer

    Stephen Hawking. No image credit

    Eventually Susskind — in a discovery that shocked even him — realized (with Gerard ’t Hooft) that all the information that fell down the hole was actually trapped on the black hole’s two-dimensional event horizon, the surface that marks the point of no return. The horizon encoded everything inside, like a hologram. It was as if the bits needed to re-create your house and everything in it could fit on the walls. The information wasn’t lost — it was scrambled and stored out of reach.
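
    The idea that a black hole’s information capacity is set by its horizon can be made quantitative with the Bekenstein-Hawking entropy, a standard result quoted here for context:

    S_{BH} = k_B c^{3} A / (4 G \hbar)

    The entropy grows with the horizon area A rather than with the volume behind it, which is the sense in which everything that fell in can be accounted for on the two-dimensional surface.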

    Susskind continued to work on the idea with Maldacena, whom Susskind calls “the master,” and others. Holography began to be used not just to understand black holes, but any region of space that can be described by its boundary. Over the past decade or so, the seemingly crazy idea that space is a kind of hologram has become rather humdrum, a tool of modern physics used in everything from cosmology to condensed matter. “One of the things that happen to scientific ideas is they often go from wild conjecture to reasonable conjecture to working tools,” Susskind said. “It’s gotten routine.”

    Holography was concerned with what happens on boundaries, including black hole horizons. That left open the question of what goes on in the interiors, said Susskind, and answers to that “were all over the map.” After all, since no information could ever escape from inside a black hole’s horizon, the laws of physics prevented scientists from ever directly testing what was going on inside.

    Then in 2012 Polchinski, along with Ahmed Almheiri, Donald Marolf and James Sully, all of them at the time at Santa Barbara, came up with an insight so startling it basically said to physicists: Hold everything. We know nothing.

    The so-called AMPS paper (after its authors’ initials) presented a doozy of an entanglement paradox — one so stark it implied that black holes might not, in effect, even have insides, for a “firewall” just inside the horizon would fry anyone or anything attempting to find out its secrets.

    Scaling the Firewall

    Here’s the heart of their argument: If a black hole’s event horizon is a smooth, seemingly ordinary place, as relativity predicts (the authors call this the “no drama” condition), the particles coming out of the black hole must be entangled with particles falling into the black hole. Yet for information not to be lost, the particles coming out of the black hole must also be entangled with particles that left long ago and are now scattered about in a fog of Hawking radiation. That’s one too many kinds of entanglements, the AMPS authors realized. One of them would have to go.

    The reason is that maximum entanglements have to be monogamous, existing between just two particles. Two maximum entanglements at once — quantum polygamy — simply cannot happen, which suggests that the smooth, continuous space-time inside the throats of black holes can’t exist. A break in the entanglement at the horizon would imply a discontinuity in space, a pileup of energy: the “firewall.”
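
    The monogamy constraint can be stated precisely. For three qubits A, B and C, the Coffman-Kundu-Wootters inequality (a standard quantum-information result, included here for illustration) limits how entanglement can be shared:

    C^{2}_{AB} + C^{2}_{AC} \le C^{2}_{A(BC)} \le 1

    where C is the concurrence, a measure of two-party entanglement. If A is maximally entangled with B, the bound is already saturated, so A cannot simultaneously be maximally entangled with C. That is exactly the clash AMPS identified for a particle of Hawking radiation.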


    Video: David Kaplan explores one of the biggest mysteries in physics: the apparent contradiction between general relativity and quantum mechanics. Filming by Petr Stepanek. Editing and motion graphics by MK12. Music by Steven Gutheinz.

    The AMPS paper became a “real trigger,” said Stephen Shenker, a physicist at Stanford, and “cast in sharp relief” just how much was not understood. Of course, physicists love such paradoxes, because they’re fertile ground for discovery.

    Both Susskind and Maldacena got on it immediately. They’d been thinking about entanglement and wormholes, and both were inspired by the work of Mark Van Raamsdonk, a physicist at the University of British Columbia in Vancouver, who had conducted a pivotal thought experiment suggesting that entanglement and space-time are intimately related.

    “Then one day,” said Susskind, “Juan sent me a very cryptic message that contained the equation ER = EPR. I instantly saw what he was getting at, and from there we went back and forth expanding the idea.”

    Their investigations, which they presented in a 2013 paper, “Cool Horizons for Entangled Black Holes,” argued for a kind of entanglement they said the AMPS authors had overlooked — the one that “hooks space together,” according to Susskind. AMPS assumed that the parts of space inside and outside of the event horizon were independent. But Susskind and Maldacena suggest that, in fact, particles on either side of the border could be connected by a wormhole. The ER = EPR entanglement could “kind of get around the apparent paradox,” said Van Raamsdonk. The paper contained a graphic that some refer to half-jokingly as the “octopus picture” — with multiple wormholes leading from the inside of a black hole to Hawking radiation on the outside.

    The ER = EPR idea posits that entangled particles inside and outside of a black hole’s event horizon are connected via wormholes. Olena Shmahalo/Quanta Magazine.

    In other words, there was no need for an entanglement that would create a kink in the smooth surface of the black hole’s throat. The particles still inside the hole would be directly connected to particles that left long ago. No need to pass through the horizon, no need to pass Go. The particles on the inside and the far-out ones could be considered one and the same, Maldacena explained — like me, myself and I. The complex “octopus” wormhole would link the interior of the black hole directly to particles in the long-departed cloud of Hawking radiation.

    Holes in the Wormhole

    No one is sure yet whether ER = EPR will solve the firewall problem. John Preskill, a physicist at the California Institute of Technology in Pasadena, reminded readers of Quantum Frontiers, the blog for Caltech’s Institute for Quantum Information and Matter, that sometimes physicists rely on their “sense of smell” to sniff out which theories have promise. “At first whiff,” he wrote, “ER = EPR may smell fresh and sweet, but it will have to ripen on the shelf for a while.”

    Whatever happens, the correspondence between entangled quantum particles and the geometry of smoothly warped space-time is a “big new insight,” said Shenker. It’s allowed him and his collaborator Douglas Stanford, a researcher at the Institute for Advanced Study, to tackle complex problems in quantum chaos through what Shenker calls “simple geometry that even I can understand.”

    To be sure, ER = EPR does not yet apply to just any kind of space, or any kind of entanglement. It takes a special type of entanglement and a special type of wormhole. “Lenny and Juan are completely aware of this,” said Marolf, who recently co-authored a paper describing wormholes with more than two ends. ER = EPR works in very specific situations, he said, but AMPS argues that the firewall presents a much broader challenge.

    Like Polchinski and others, Marolf worries that ER = EPR modifies standard quantum mechanics. “A lot of people are really interested in the ER = EPR conjecture,” said Marolf. “But there’s a sense that no one but Lenny and Juan really understand what it is.” Still, “it’s an interesting time to be in the field.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 7:02 am on October 25, 2017 Permalink | Reply
    Tags: A new theory that cast the origin of life as an inevitable outcome of thermodynamics, First Support for a Physics Theory of Life, Jeremy England, Quanta Magazine

    From Quanta: “First Support for a Physics Theory of Life” 

    Quanta Magazine

    July 26, 2017 [Where has this been hiding?]
    Natalie Wolchover

    Take chemistry, add energy, get life. The first tests of Jeremy England’s provocative origin-of-life hypothesis are in, and they appear to show how order can arise from nothing.

    Shayla Fish for Quanta Magazine

    The biophysicist Jeremy England made waves in 2013 with a new theory that cast the origin of life as an inevitable outcome of thermodynamics. His equations suggested that under certain conditions, groups of atoms will naturally restructure themselves so as to burn more and more energy, facilitating the incessant dispersal of energy and the rise of “entropy” or disorder in the universe. England said this restructuring effect, which he calls dissipation-driven adaptation, fosters the growth of complex structures, including living things. The existence of life is no mystery or lucky break, he told Quanta in 2014, but rather follows from general physical principles and “should be as unsurprising as rocks rolling downhill.”

    Since then, England, a 35-year-old associate professor at the Massachusetts Institute of Technology, has been testing aspects of his idea in computer simulations. The two most significant of these studies were published this month — the more striking result in the Proceedings of the National Academy of Sciences (PNAS) and the other in Physical Review Letters (PRL). The outcomes of both computer experiments appear to back England’s general thesis about dissipation-driven adaptation, though the implications for real life remain speculative.

    “This is obviously a pioneering study,” Michael Lässig, a statistical physicist and quantitative biologist at the University of Cologne in Germany, said of the PNAS paper written by England and an MIT postdoctoral fellow, Jordan Horowitz. It’s “a case study about a given set of rules on a relatively small system, so it’s maybe a bit early to say whether it generalizes,” Lässig said. “But the obvious interest is to ask what this means for life.”

    The paper strips away the nitty-gritty details of cells and biology and describes a simpler, simulated system of chemicals in which it is nonetheless possible for exceptional structure to spontaneously arise — the phenomenon that England sees as the driving force behind the origin of life. “That doesn’t mean you’re guaranteed to acquire that structure,” England explained. The dynamics of the system are too complicated and nonlinear to predict what will happen.

    The simulation involved a soup of 25 chemicals that react with one another in myriad ways. Energy sources in the soup’s environment facilitate or “force” some of these chemical reactions, just as sunlight triggers the production of ozone in the atmosphere and the chemical fuel ATP drives processes in the cell. Starting with random initial chemical concentrations, reaction rates and “forcing landscapes” — rules that dictate which reactions get a boost from outside forces and by how much — the simulated chemical reaction network evolves until it reaches its final, steady state, or “fixed point.”
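
    As a rough illustration of the kind of simulation described here, the sketch below integrates a small driven reaction network until it stops changing. It is not England and Horowitz’s model or code; the rate law, the forcing rule and all parameters are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    n_species, n_reactions = 6, 10          # toy sizes; the real study used 25 chemicals

    # Random network: each reaction converts one species into another.
    reactants = rng.integers(0, n_species, n_reactions)
    products  = rng.integers(0, n_species, n_reactions)
    rates     = rng.uniform(0.1, 1.0, n_reactions)

    # "Forcing landscape": a random subset of reactions gets an extra boost
    # from an external energy source (a stand-in for sunlight or chemical fuel).
    forced = rng.random(n_reactions) < 0.3
    boost  = 5.0

    x = rng.uniform(0.5, 1.5, n_species)    # random initial concentrations
    dt = 0.01

    for step in range(200_000):
        flux = rates * x[reactants]         # mass-action rate of each reaction
        flux[forced] *= boost               # driven reactions run faster
        dx = np.zeros(n_species)
        np.add.at(dx, reactants, -flux)     # reactants are consumed
        np.add.at(dx, products, +flux)      # products are created
        x = np.maximum(x + dt * dx, 0.0)
        if np.linalg.norm(dx) < 1e-9:       # settled into a fixed point
            break

    print("fixed-point concentrations:", np.round(x, 3))
    print("driven flux at the fixed point:", float(flux[forced].sum()))

    Different random seeds stand in for the study’s random initial concentrations, reaction rates and forcing landscapes; comparing the driven flux at the fixed point across seeds is the toy analogue of asking how strongly each network ends up coupled to its energy source.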

    Jeremy England, an associate professor of physics at the Massachusetts Institute of Technology, thinks he has found the physical mechanism underlying the origin of life. Katherine Taylor for Quanta Magazine

    Often, the system settles into an equilibrium state, where it has a balanced concentration of chemicals and reactions that just as often go one way as the reverse. This tendency to equilibrate, like a cup of coffee cooling to room temperature, is the most familiar outcome of the second law of thermodynamics, which says that energy constantly spreads and the entropy of the universe always increases. (The second law is true because there are more ways for energy to be spread out among particles than to be concentrated, so as particles move around and interact, the odds favor their energy becoming increasingly shared.)
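
    The counting argument in parentheses is Boltzmann’s. The entropy of a macroscopic state is (standard formula, added here for context)

    S = k_B \ln W

    where W is the number of microscopic arrangements consistent with that state. Spread-out energy corresponds to vastly more arrangements than concentrated energy, so a system wandering at random overwhelmingly ends up in the spread-out, equilibrium condition.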

    But for some initial settings, the chemical reaction network in the simulation goes in a wildly different direction: In these cases, it evolves to fixed points far from equilibrium, where it vigorously cycles through reactions by harvesting the maximum energy possible from the environment. These cases “might be recognized as examples of apparent fine-tuning” between the system and its environment, Horowitz and England write, in which the system finds “rare states of extremal thermodynamic forcing.”

    Living creatures also maintain steady states of extreme forcing: We are super-consumers who burn through enormous amounts of chemical energy, degrading it and increasing the entropy of the universe, as we power the reactions in our cells. The simulation emulates this steady-state behavior in a simpler, more abstract chemical system and shows that it can arise “basically right away, without enormous wait times,” Lässig said — indicating that such fixed points can be easily reached in practice.

    Many biophysicists think something like what England is suggesting may well be at least part of life’s story. But whether England has identified the most crucial step in the origin of life depends to some extent on the question: What’s the essence of life? Opinions differ.

    Form and Function

    England, a prodigy by many accounts who spent time at Harvard, Oxford, Stanford and Princeton universities before landing on the faculty at MIT at 29, sees the essence of living things as the exceptional arrangement of their component atoms. “If I imagine randomly rearranging the atoms of the bacterium — so I just take them, I label them all, I permute them in space — I’m presumably going to get something that is garbage,” he said earlier this month. “Most arrangements [of atomic building blocks] are not going to be the metabolic powerhouses that a bacterium is.”

    It’s not easy for a group of atoms to unlock and burn chemical energy. To perform this function, the atoms must be arranged in a highly unusual form. According to England, the very existence of a form-function relationship “implies that there’s a challenge presented by the environment that we see the structure of the system as meeting.”

    But how and why do atoms acquire the particular form and function of a bacterium, with its optimal configuration for consuming chemical energy? England hypothesizes that it’s a natural outcome of thermodynamics in far-from-equilibrium systems.

    The Nobel-Prize-winning physical chemist Ilya Prigogine pursued similar ideas in the 1960s, but his methods were limited. Traditional thermodynamic equations work well only for studying near-equilibrium systems like a gas that is slowly being heated or cooled. Systems driven by powerful external energy sources have much more complicated dynamics and are far harder to study.

    The situation changed in the late 1990s, when the physicists Gavin Crooks and Chris Jarzynski derived “fluctuation theorems” that can be used to quantify how much more often certain physical processes happen than reverse processes. These theorems allow researchers to study how systems evolve — even far from equilibrium. England’s “novel angle,” said Sara Walker, a theoretical physicist and origins-of-life specialist at Arizona State University, has been to apply the fluctuation theorems “to problems relevant to the origins of life. I think he’s probably the only person doing that in any kind of rigorous way.”
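
    For reference, the two results alluded to here are usually written as follows; these are the standard forms of the Jarzynski equality and the Crooks fluctuation theorem, quoted for context rather than taken from the article:

    \langle e^{-\beta W} \rangle = e^{-\beta \Delta F}

    P_F(W) / P_R(-W) = e^{\beta (W - \Delta F)}

    Here W is the work done on the system during a driven process, \Delta F is the free-energy difference between the end states, \beta = 1/(k_B T), and P_F and P_R are the probabilities of observing a given amount of work in the forward and time-reversed processes. The second relation makes precise the idea that trajectories dissipating more energy are exponentially more likely than their reverses.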

    Coffee cools down because nothing is heating it up, but England’s calculations [PhysRevX] suggested that groups of atoms that are driven by external energy sources can behave differently: They tend to start tapping into those energy sources, aligning and rearranging so as to better absorb the energy and dissipate it as heat. He further showed that this statistical tendency to dissipate energy might foster self-replication [The Journal of Chemical Physics]. (As he explained it in 2014, “A great way of dissipating more is to make more copies of yourself.”) England sees life, and its extraordinary confluence of form and function, as the ultimate outcome of dissipation-driven adaptation and self-replication.

    However, even with the fluctuation theorems in hand, the conditions on early Earth or inside a cell are far too complex to predict from first principles. That’s why the ideas have to be tested in simplified, computer-simulated environments that aim to capture the flavor of reality.

    In the PRL paper, England and his coauthors Tal Kachman and Jeremy Owen of MIT simulated a system of interacting particles. They found that the system increases its energy absorption over time by forming and breaking bonds in order to better resonate with a driving frequency. “This is in some sense a little bit more basic as a result” than the PNAS findings involving the chemical reaction network, England said.

    Crucially, in the latter work, he and Horowitz created a challenging environment where special configurations would be required to tap into the available energy sources, just as the special atomic arrangement of a bacterium enables it to metabolize energy. In the simulated environment, external energy sources boosted (or “forced”) certain chemical reactions in the reaction network. The extent of this forcing depended on the concentrations of the different chemical species. As the reactions progressed and the concentrations evolved, the amount of forcing would change abruptly. Such a rugged forcing landscape made it difficult for the system “to find combinations of reactions which are capable of extracting free energy optimally,” explained Jeremy Gunawardena, a mathematician and systems biologist at Harvard Medical School.

    Yet when the researchers let the chemical reaction networks play out in such an environment, the networks seemed to become fine-tuned to the landscape. A randomized set of starting points went on to achieve rare states of vigorous chemical activity and extreme forcing four times more often than would be expected. And when these outcomes happened, they happened dramatically: These chemical networks ended up in the 99th percentile in terms of how much forcing they experienced compared with all possible outcomes. As these systems churned through reaction cycles and dissipated energy in the process, the basic form-function relationship that England sees as essential to life set in.

    Information Processors

    Experts said an important next step for England and his collaborators would be to scale up their chemical reaction network and to see if it still dynamically evolves to rare fixed points of extreme forcing. They might also try to make the simulation less abstract by basing the chemical concentrations, reaction rates and forcing landscapes on conditions that might have existed in tidal pools or near volcanic vents in early Earth’s primordial soup (but replicating the conditions that actually gave rise to life is guesswork). Rahul Sarpeshkar, a professor of engineering, physics and microbiology at Dartmouth College, said, “It would be nice to have some concrete physical instantiation of these abstract constructs.” He hopes to see the simulations re-created in real experiments, perhaps using biologically relevant chemicals and energy sources such as glucose.

    But even if the fine-tuned fixed points can be observed in settings that are increasingly evocative of life and its putative beginnings, some researchers see England’s overarching thesis as “necessary but not sufficient” to explain life, as Walker put it, because it cannot account for what many see as the true hallmark of biological systems: their information-processing capacity. From simple chemotaxis (the ability of bacteria to move toward nutrient concentrations or away from poisons) to human communication, life-forms take in and respond to information about their environment.

    To Walker’s mind, this distinguishes us from other systems that fall under the umbrella of England’s dissipation-driven adaptation theory, such as Jupiter’s Great Red Spot. “That’s a highly non-equilibrium dissipative structure that’s existed for at least 300 years, and it’s quite different from the non-equilibrium dissipative structures that are existing on Earth right now that have been evolving for billions of years,” she said. Understanding what distinguishes life, she added, “requires some explicit notion of information that takes it beyond the non-equilibrium dissipative structures-type process.” In her view, the ability to respond to information is key: “We need chemical reaction networks that can get up and walk away from the environment where they originated.”

    Gunawardena noted that aside from the thermodynamic properties and information-processing abilities of life-forms, they also store and pass down genetic information about themselves to their progeny. The origin of life, Gunawardena said, “is not just emergence of structure, it’s the emergence of a particular kind of dynamics, which is Darwinian. It’s the emergence of structures that reproduce. And the ability for the properties of those objects to influence their reproductive rates. Once you have those two conditions, you’re basically in a situation where Darwinian evolution kicks in, and to biologists, that’s what it’s all about.”

    Eugene Shakhnovich, a professor of chemistry and chemical biology at Harvard who supervised England’s undergraduate research, sharply emphasized the divide between his former student’s work and questions in biology. “He started his scientific career in my lab and I really know how capable he is,” Shakhnovich said, but “Jeremy’s work represents potentially interesting exercises in non-equilibrium statistical mechanics of simple abstract systems.” Any claims that it has to do with biology or the origins of life, he added, are “pure and shameless speculations.”

    Even if England is on the right track about the physics, biologists want more particulars — such as a theory of what the primitive “protocells” were that evolved into the first living cells, and how the genetic code arose. England completely agrees that his findings are mute on such topics. “In the short term, I’m not saying this tells me a lot about what’s going on in a biological system, nor even claiming that this is necessarily telling us where life as we know it came from,” he said. Both questions are “a fraught mess” based on “fragmentary evidence” that, he said, “I am inclined to steer clear of for now.” He is rather suggesting that in the tool kit of the first life- or proto-life-forms, “maybe there’s more that you can get for free, and then you can optimize it using the Darwinian mechanism.”

    Sarpeshkar seemed to see dissipation-driven adaptation as the opening act of life’s origin story. “What Jeremy is showing is that as long as you can harvest energy from your environment, order will spontaneously arise and self-tune,” he said. Living things have gone on to do a lot more than England and Horowitz’s chemical reaction network does, he noted. “But this is about how did life first arise, perhaps — how do you get order from nothing.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
    • stewarthoughblog 12:29 am on October 27, 2017 Permalink | Reply

      This is intellectually insulting as none of this “new physics” resolves the intractable problems of homochirality, homopolymerization, cell membranes, and nucleotide coding, nor does it give any viability to correct the myths of chemical evolution.


    • richardmitnick 12:47 pm on October 27, 2017 Permalink | Reply

      I only approved this comment so as to not deny freedom of speech. This does not mean that I agree or disagree, only that you have freedom of speech for your opinions.


  • richardmitnick 2:44 pm on October 23, 2017 Permalink | Reply
    Tags: A new species of traversable wormhole has emerged, Black holes and wormholes, Carl Sagan and Kip Thorne, Many theorists believe in black hole interiors but in order to understand them they must discover the fate of information that falls inside, Quanta Magazine, The paradox has loomed since 1974 when the British physicist Stephen Hawking determined that black holes evaporate, The repulsive negative energy in the wormhole’s throat can be generated from the outside by a special quantum connection between the pair of black holes that form the wormhole’s two mouths, The wormhole also safeguards unitarity — the principle that information is never lost, While traversable wormholes won’t revolutionize space travel according to Preskill the new wormhole discovery provides “a promising resolution” to the black hole firewall question by suggesting that there is no firewall at black hole horizons

    From Quanta: “Newfound Wormhole Allows Information to Escape Black Holes” 

    Quanta Magazine

    October 23, 2017
    Natalie Wolchover

    Tomáš Müller for Quanta Magazine

    In 1985, when Carl Sagan was writing the novel Contact, he needed to quickly transport his protagonist Dr. Ellie Arroway from Earth to the star Vega. He had her enter a black hole and exit light-years away, but he didn’t know if this made any sense. The Cornell University astrophysicist and television star consulted his friend Kip Thorne, a black hole expert at the California Institute of Technology (who won a Nobel Prize earlier this month). Thorne knew that Arroway couldn’t get to Vega via a black hole, which is thought to trap and destroy anything that falls in. But it occurred to him that she might make use of another kind of hole consistent with Albert Einstein’s general theory of relativity: a tunnel or “wormhole” connecting distant locations in space-time.

    While the simplest theoretical wormholes immediately collapse and disappear before anything can get through, Thorne wondered whether it might be possible for an “infinitely advanced” sci-fi civilization to stabilize a wormhole long enough for something or someone to traverse it. He figured out that such a civilization could in fact line the throat of a wormhole with “exotic material” that counteracts its tendency to collapse. The material would possess negative energy, which would deflect radiation and repulse space-time apart from itself. Sagan used the trick in Contact, attributing the invention of the exotic material to an earlier, lost civilization to avoid getting into particulars. Meanwhile, those particulars enthralled Thorne, his students and many other physicists, who spent years exploring traversable wormholes and their theoretical implications. They discovered that these wormholes can serve as time machines, invoking time-travel paradoxes — evidence that exotic material is forbidden in nature.

    Now, decades later, a new species of traversable wormhole has emerged, free of exotic material and full of potential for helping physicists resolve a baffling paradox about black holes. This paradox is the very problem that plagued the early draft of Contact and led Thorne to contemplate traversable wormholes in the first place; namely, that things that fall into black holes seem to vanish without a trace. This total erasure of information breaks the rules of quantum mechanics, and it so puzzles experts that in recent years, some have argued that black hole interiors don’t really exist — that space and time strangely end at their horizons.

    The flurry of findings started last year with a paper [Journal not named] that reported the first traversable wormhole that doesn’t require the insertion of exotic material to stay open. Instead, according to Ping Gao and Daniel Jafferis of Harvard University and Aron Wall of Stanford University, the repulsive negative energy in the wormhole’s throat can be generated from the outside by a special quantum connection between the pair of black holes that form the wormhole’s two mouths. When the black holes are connected in the right way, something tossed into one will shimmy along the wormhole and, following certain events in the outside universe, exit the second. Remarkably, Gao, Jafferis and Wall noticed that their scenario is mathematically equivalent to a process called quantum teleportation, which is key to quantum cryptography and can be demonstrated in laboratory experiments.

    John Preskill, a black hole and quantum gravity expert at Caltech, says the new traversable wormhole comes as a surprise, with implications for the black hole information paradox and black hole interiors. “What I really like,” he said, “is that an observer can enter the black hole and then escape to tell about what she saw.” This suggests that black hole interiors really exist, he explained, and that what goes in must come out.

    Lucy Reading-Ikkanda/Quanta Magazine

    The new wormhole work began in 2013, when Jafferis attended an intriguing talk at the Strings conference in South Korea. The speaker, Juan Maldacena, a professor of physics at the Institute for Advanced Study in Princeton, New Jersey, had recently concluded, based on various hints and arguments, that “ER = EPR.” That is, wormholes between distant points in space-time, the simplest of which are called Einstein-Rosen or “ER” bridges, are equivalent (albeit in some ill-defined way) to entangled quantum particles, also known as Einstein-Podolsky-Rosen or “EPR” pairs. The ER = EPR conjecture, posed by Maldacena and Leonard Susskind of Stanford, was an attempt to solve the modern incarnation of the infamous black hole information paradox by tying space-time geometry, governed by general relativity, to the instantaneous quantum connections between far-apart particles that Einstein called “spooky action at a distance.”

    The paradox has loomed since 1974, when the British physicist Stephen Hawking determined that black holes evaporate — slowly giving off heat in the form of particles now known as “Hawking radiation.” Hawking calculated that this heat is completely random; it contains no information about the black hole’s contents. As the black hole blinks out of existence, so does the universe’s record of everything that went inside. This violates a principle called “unitarity,” the backbone of quantum theory, which holds that as particles interact, information about them is never lost, only scrambled, so that if you reversed the arrow of time in the universe’s quantum evolution, you’d see things unscramble into an exact re-creation of the past.
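
    Unitarity can be phrased in one line. The state of a closed quantum system evolves as (a textbook statement, added here for context)

    |\psi(t)\rangle = U(t) |\psi(0)\rangle,  with  U^{\dagger} U = 1

    Because U is invertible, |\psi(0)\rangle = U^{\dagger}(t) |\psi(t)\rangle: the past can always be reconstructed from the present, at least in principle. Purely thermal Hawking radiation would admit no such inverse, which is the heart of the paradox.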

    Almost everyone believes in unitarity, which means information must escape black holes — but how? In the last five years, some theorists, most notably Joseph Polchinski of the University of California, Santa Barbara, have argued that black holes are empty shells with no interiors at all — that Ellie Arroway, upon hitting a black hole’s event horizon, would fizzle on a “firewall” and radiate out again.

    Many theorists believe in black hole interiors (and gentler transitions across their horizons), but in order to understand them, they must discover the fate of information that falls inside. This is critical to building a working quantum theory of gravity, the long-sought union of the quantum and space-time descriptions of nature that comes into sharpest relief in black hole interiors, where extreme gravity acts on a quantum scale.

    The quantum gravity connection is what drew Maldacena, and later Jafferis, to the ER = EPR idea, and to wormholes. The implied relationship between tunnels in space-time and quantum entanglement posed by ER = EPR resonated with a popular recent belief that space is essentially stitched into existence by quantum entanglement. It seemed that wormholes had a role to play in stitching together space-time and in letting black hole information worm its way out of black holes — but how might this work? When Jafferis heard Maldacena talk about his cryptic equation and the evidence for it, he was aware that a standard ER wormhole is unstable and non-traversable. But he wondered what Maldacena’s duality would mean for a traversable wormhole like the ones Thorne and others played around with decades ago. Three years after the South Korea talk, Jafferis and his collaborators Gao and Wall presented their answer. The work extends the ER = EPR idea by equating, not a standard wormhole and a pair of entangled particles, but a traversable wormhole and quantum teleportation: a protocol discovered in 1993 [Physical Review Letters] that allows a quantum system to disappear and reappear unscathed somewhere else.

    When Maldacena read Gao, Jafferis and Wall’s paper, “I viewed it as a really nice idea, one of these ideas that after someone tells you, it’s obvious,” he said. Maldacena and two collaborators, Douglas Stanford and Zhenbin Yang, immediately began exploring the new wormhole’s ramifications for the black hole information paradox; their paper appeared in April. Susskind and Ying Zhao of Stanford followed this with a paper about wormhole teleportation in July. The wormhole “gives an interesting geometric picture for how teleportation happens,” Maldacena said. “The message actually goes through the wormhole.”


    Video: David Kaplan explores one of the biggest mysteries in physics: the apparent contradiction between general relativity and quantum mechanics. Filming by Petr Stepanek. Editing and motion graphics by MK12. Music by Steven Gutheinz.

    Diving Into Wormholes

    In their paper, “Diving Into Traversable Wormholes,” published in Fortschritte der Physik, Maldacena, Stanford and Yang consider a wormhole of the new kind that connects two black holes: a parent black hole and a daughter one formed from half of the Hawking radiation given off by the parent as it evaporates. The two systems are as entangled as they can be. Here, the fate of the older black hole’s information is clear: It worms its way out of the daughter black hole.

    During an interview this month in his tranquil office at the IAS, Maldacena, a reserved Argentinian-American with a track record of influential insights, described his radical musings. On the right side of a chalk-dusty blackboard, Maldacena drew a faint picture of two black holes connected by the new traversable wormhole. On the left, he sketched a quantum teleportation experiment, performed by the famous fictional experimenters Alice and Bob, who are in possession of entangled quantum particles a and b, respectively. Say Alice wants to teleport a qubit q to Bob. She prepares a combined state of q and a, measures that combined state (reducing it to a pair of classical bits, 1 or 0), and sends the result of this measurement to Bob. He can then use this as a key for operating on b in a way that re-creates the state q. Voila, a unit of quantum information has teleported from one place to the other.
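
    The Alice-and-Bob protocol just described can be simulated directly. The following is a generic single-qubit teleportation toy model in Python with NumPy; the amplitudes and gate conventions are illustrative assumptions, not anything specific to Maldacena’s blackboard sketch.

    import numpy as np

    zero = np.array([1, 0], dtype=complex)
    one  = np.array([0, 1], dtype=complex)
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit flip
    Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase flip
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    def kron(*ops):
        out = np.array([1.0 + 0j])
        for op in ops:
            out = np.kron(out, op)
        return out

    # Qubit q that Alice wants to send (example amplitudes, assumed for the demo)
    q = 0.6 * zero + 0.8 * one

    # Shared entangled pair (a, b): the Bell state (|00> + |11>) / sqrt(2)
    bell = (kron(zero, zero) + kron(one, one)) / np.sqrt(2)
    state = kron(q, bell)                           # qubit order: (q, a, b)

    # Alice entangles q with a (CNOT, q control) and applies H to q
    P0, P1 = np.outer(zero, zero), np.outer(one, one)
    state = (kron(P0, I, I) + kron(P1, X, I)) @ state
    state = kron(H, I, I) @ state

    # Alice measures q and a, getting two classical bits (m1, m2)
    amps = state.reshape(2, 2, 2)
    probs = [np.linalg.norm(amps[m1, m2, :])**2 for m1 in (0, 1) for m2 in (0, 1)]
    m1, m2 = divmod(np.random.choice(4, p=probs), 2)

    # Bob's qubit after the measurement, then his corrections keyed to (m1, m2)
    b_state = amps[m1, m2, :] / np.linalg.norm(amps[m1, m2, :])
    if m2 == 1:
        b_state = X @ b_state
    if m1 == 1:
        b_state = Z @ b_state

    print("Alice's qubit:", q)
    print("Bob's qubit:  ", b_state)                # matches q (up to a global phase)

    Whatever two classical bits Alice obtains, Bob’s corrected qubit reproduces q; in the wormhole picture, that classical message is what must travel through ordinary space while the quantum state itself “goes through the wormhole.”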

    Maldacena turned to the right side of the blackboard. “You can do operations with a pair of black holes that are morally equivalent to what I discussed [about quantum teleportation]. And in that picture, this message really goes through the wormhole.”

    Juan Maldacena, a professor of physics at the Institute for Advanced Study. Sasha Maslov for Quanta Magazine

    Say Alice throws qubit q into black hole A. She then measures a particle of its Hawking radiation, a, and transmits the result of the measurement through the external universe to Bob, who can use this knowledge to operate on b, a Hawking particle coming out of black hole B. Bob’s operation reconstructs q, which appears to pop out of B, a perfect match for the particle that fell into A. This is why some physicists are excited: Gao, Jafferis and Wall’s wormhole allows information to be recovered from black holes. In their paper, they set up their wormhole in a negatively curved space-time geometry that often serves as a useful, if unrealistic, playground for quantum gravity theorists. However, their wormhole idea seems to extend to the real world as long as two black holes are coupled in the right way: “They have to be causally connected and then the nature of the interaction that we took is the simplest thing you can imagine,” Jafferis explained. If you allow the Hawking radiation from one of the black holes to fall into the other, the two black holes become entangled, and the quantum information that falls into one can exit the other.

    The quantum-teleportation format precludes using these traversable wormholes as time machines. Anything that goes through the wormhole has to wait for Alice’s message to travel to Bob in the outside universe before it can exit Bob’s black hole, so the wormhole doesn’t offer any superluminal boost that could be exploited for time travel. It seems traversable wormholes might be permitted in nature as long as they offer no speed advantage. “Traversable wormholes are like getting a bank loan,” Gao, Jafferis and Wall wrote in their paper: “You can only get one if you are rich enough not to need it.”

    A Naive Octopus

    While traversable wormholes won’t revolutionize space travel, according to Preskill the new wormhole discovery provides “a promising resolution” to the black hole firewall question by suggesting that there is no firewall at black hole horizons. Preskill said the discovery rescues “what we call ‘black hole complementarity,’ which means that the interior and exterior of the black hole are not really two different systems but rather two very different, complementary ways of looking at the same system.” If complementarity holds, as is widely assumed, then in passing across a black hole horizon from one realm to the other, Contact’s Ellie Arroway wouldn’t notice anything strange. This seems more likely if, under certain conditions, she could even slide all the way through a Gao-Jafferis-Wall wormhole.

    The wormhole also safeguards unitarity — the principle that information is never lost — at least for the entangled black holes being studied. Whatever falls into one black hole eventually exits the other as Hawking radiation, Preskill said, which “can be thought of as in some sense a very scrambled copy of the black hole interior.”

    Taking the findings to their logical conclusion, Preskill thinks it ought to be possible (at least for an infinitely advanced civilization) to influence the interior of one of these black holes by manipulating its radiation. This “sounds crazy,” he wrote in an email, but it “might make sense if we can think of the radiation, which is entangled with the black hole — EPR — as being connected to the black hole interior by wormholes — ER. Then tickling the radiation can send a message which can be read from inside the black hole!” He added, “We still have a ways to go, though, before we can flesh out this picture in more detail.”

    Indeed, obstacles remain in the quest to generalize the new wormhole findings to a statement about the fate of all quantum information, or the meaning of ER = EPR.

    A sketch known as the “octopus” that expresses the ER = EPR idea.

    https://arxiv.org/abs/1306.0533 [hep-th]

    In Maldacena and Susskind’s paper proposing ER = EPR, they included a sketch that’s become known as the “octopus”: a black hole with tentacle-like wormholes leading to distant Hawking particles that have evaporated out of it. The authors explained that the sketch illustrates “the entanglement pattern between the black hole and the Hawking radiation. We expect that this entanglement leads to the interior geometry of the black hole.”

    But according to Matt Visser, a mathematician and general-relativity expert at Victoria University of Wellington in New Zealand who has studied wormholes since the 1990s, the most literal reading of the octopus picture doesn’t work. The throats of wormholes formed from single Hawking particles would be so thin that qubits could never fit through. “A traversable wormhole throat is ‘transparent’ only to wave packets with size smaller than the throat radius,” Visser explained. “Big wave packets will simply bounce off any small wormhole throat without crossing to the other side.”

    Stanford, who co-wrote the recent paper with Maldacena and Yang, acknowledged that this is a problem with the simplest interpretation of the ER = EPR idea, in which each particle of Hawking radiation has its own tentacle-like wormhole. However, a more speculative interpretation of ER = EPR that he and others have in mind does not suffer from this failing. “The idea is that in order to recover the information from the Hawking radiation using this traversable wormhole,” Stanford said, one has to “gather the Hawking radiation together and act on it in a complicated way.” This complicated collective measurement reveals information about the particles that fell in; it has the effect, he said, of “creating a large, traversable wormhole out of the small and unhelpful octopus tentacles. The information would then propagate through this large wormhole.” Maldacena added that, simply put, the theory of quantum gravity might have a new, generalized notion of geometry for which ER equals EPR. “We think quantum gravity should obey this principle,” he said. “We view it more as a guide to the theory.”

    In his 1994 popular science book, Black Holes and Time Warps, Kip Thorne celebrated the style of reasoning involved in wormhole research. “No type of thought experiment pushes the laws of physics harder than the type triggered by Carl Sagan’s phone call to me,” he wrote; “thought experiments that ask, ‘What things do the laws of physics permit an infinitely advanced civilization to do, and what things do the laws forbid?’”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 9:38 am on October 22, 2017 Permalink | Reply
    Tags: New Life Found That Lives Off Electricity, Quanta Magazine, The electricity-eating microbes that the researchers were hunting for belong to a larger class of organisms that scientists are only beginning to understand

    From Quanta: “New Life Found That Lives Off Electricity” 

    Quanta Magazine

    June 21, 2016 [Just found in social media. Where has it been?]
    Emily Singer

    Yamini Jangir and Moh El-Naggar

    Last year, biophysicist Moh El-Naggar and his graduate student Yamini Jangir plunged beneath South Dakota’s Black Hills into an old gold mine that is now more famous as a home to a dark matter detector.

    A bottom-up view inside the Large Underground Xenon dark matter experiment, which is located a mile beneath the surface in the Black Hills of South Dakota. LUX Dark Matter.

    Unlike most scientists who make pilgrimages to the Black Hills these days, El-Naggar and Jangir weren’t there to hunt for subatomic particles. They came in search of life.

    In the darkness found a mile underground, the pair traversed the mine’s network of passages in search of a rusty metal pipe. They siphoned some of the pipe’s ancient water, directed it into a vessel, and inserted a variety of electrodes. They hoped the current would lure their prey, a little-studied microbe that can live off pure electricity.

    The electricity-eating microbes that the researchers were hunting for belong to a larger class of organisms that scientists are only beginning to understand. They inhabit largely uncharted worlds: the bubbling cauldrons of deep sea vents; mineral-rich veins deep beneath the planet’s surface; ocean sediments just a few inches below the deep seafloor. The microbes represent a segment of life that has been largely ignored, in part because their strange habitats make them incredibly difficult to grow in the lab.

    Yet early surveys suggest a potential microbial bounty. A recent sampling of microbes collected from the seafloor near Catalina Island, off the coast of Southern California, uncovered a surprising variety of microbes that consume or shed electrons by eating or breathing minerals or metals. El-Naggar’s team is still analyzing their gold mine data, but he says that their initial results echo the Catalina findings. Thus far, whenever scientists search for these electron eaters in the right locations — places that have lots of minerals but not a lot of oxygen — they find them.

    As the tally of electron eaters grows, scientists are beginning to figure out just how they work. How does a microbe consume electrons out of a piece of metal, or deposit them back into the environment when it is finished with them? A study published last year revealed the way that one of these microbes catches and consumes its electrical prey. And not-yet-published work suggests that some metal eaters transport electrons directly across their membranes — a feat once thought impossible.

    The Rock Eaters

    Though eating electricity seems bizarre, the flow of current is central to life. All organisms require a source of electrons to make and store energy. They must also be able to shed electrons once their job is done. In describing this bare-bones view of life, Nobel Prize-winning physiologist Albert Szent-Györgyi once said, “Life is nothing but an electron looking for a place to rest.”

    Humans and many other organisms get electrons from food and expel them with our breath. The microbes that El-Naggar and others are trying to grow belong to a group called lithoautotrophs, or rock eaters, which harvest energy from inorganic substances such as iron, sulfur or manganese. Under the right conditions, they can survive solely on electricity.

    The microbes’ apparent ability to ingest electrons — known as direct electron transfer — is particularly intriguing because it seems to defy the basic rules of biophysics. The fatty membranes that enclose cells act as an insulator, creating an electrically neutral zone once thought impossible for an electron to cross. “No one wanted to believe that a bacterium would take an electron from inside of the cell and move it to the outside,” said Kenneth Nealson, a geobiologist at the University of Southern California, in a lecture to the Society for Applied Microbiology in London last year.


    Ken Nealson – Environmental Microbiology Annual Lecture 2015: Extracellular electron transport (EET): opening new windows of metabolic opportunity for microbes.
    For more information about Environmental Microbiology, visit http://goo.gl/7ZJOc6. For more information about Environmental Microbiology Reports, visit http://goo.gl/NBdORV.

    Lucy Reading-Ikkanda/Quanta Magazine

    In the 1980s, Nealson and others discovered a surprising group of bacteria that can expel electrons directly onto solid minerals. It took until 2006 to discover the molecular mechanism behind this feat: A trio of specialized proteins [PubMed] sits in the cell membrane, forming a conductive bridge that transfers electrons to the outside of the cell. (Scientists still debate whether the electrons traverse the entire distance of the membrane unescorted.)

    Inspired by these electron donors, scientists began to wonder whether microbes could also do the reverse and directly ingest electrons as a source of energy. Researchers focused their search on a group of microbes called methanogens, which are known for making methane. Most methanogens aren’t strict metal eaters. But in 2009, Bruce Logan, an environmental engineer at Pennsylvania State University, and collaborators showed for the first time that a methanogen could survive using only energy from an electrode [PubMed]. The researchers proposed that the microbes were directly sucking up electrons, perhaps via a molecular bridge similar to the ones the electron-producers use to shuttle electrons across the cell wall. But they lacked direct proof.

    Then last year, Alfred Spormann, a microbiologist at Stanford University, and collaborators poked a hole in Logan’s theory. They uncovered a way [PubMed] that these organisms can survive on electrodes without eating naked electrons.

    The microbe Spormann studied, Methanococcus maripaludis, excretes an enzyme that sits on the electrode’s surface. The enzyme pairs an electron from the electrode with a proton from water to create a hydrogen atom, which is a well-established food source among methanogens. “Rather than having a conductive pathway, they use an enzyme,” said Daniel Bond, a microbiologist at the University of Minnesota Twin Cities. “They don’t need to build a bridge out of conductive materials.”
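
    In chemical terms, the enzyme is presumably catalyzing the familiar hydrogen-evolution half-reaction at the electrode surface (a textbook reaction added for context, not a detail reported in the study):

    2H⁺ + 2e⁻ → H₂

    The methanogen then consumes the molecular hydrogen as it would any other hydrogen source, so no electron ever has to cross the cell membrane directly.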

    Though the microbes aren’t eating naked electrons, the results are surprising in their own right. Most enzymes work best inside the cell and rapidly degrade outside. “What’s unique is how stable the enzymes are when they [gather on] the surface of the electrode,” Spormann said. Past experiments suggest these enzymes are active outside the cell for only a few hours, “but we showed they are active for six weeks.”

    Spormann and others still believe that methanogens and other microbes can directly suck up electricity, however. “This is an alternative mechanism to direct electron transfer, it doesn’t mean direct electron transfer can’t exist,” said Largus Angenent, an environmental engineer at Cornell University, and president of the International Society for Microbial Electrochemistry and Technology. Spormann said his team has already found a microbe capable of taking in naked electrons. But they haven’t yet published the details.

    Microbes on Mars

    Only a tiny fraction — perhaps 2 percent — of all the planet’s microorganisms can be grown in the lab. Scientists hope that these new approaches — growing microbes on electrodes rather than in traditional culture systems — will provide a way to study many of the microbes that have been so far impossible to cultivate.

    “Using electrodes as proxies for minerals has helped us open and expand this field,” said Annette Rowe, a postdoctoral researcher at USC working with El-Naggar. “Now we have a way to grow the bacteria and monitor their respiration and really have a look at their physiology.”

    Rowe has already had some success.

    In 2013, she went on a microbe prospecting trip to the iron-rich sediments that surround California’s Catalina Island. She identified at least 30 new varieties [PubMed] of electric microbes in a study published last year. “They are from very diverse groups of microbes that are quite common in marine systems,” Rowe said. Before her experiment, no one knew these microbes could take up electrons from an inorganic substrate, she said. “That’s something we weren’t expecting.”

    Just as fishermen use different lures to attract different fish, Rowe set the electrodes to different voltages to draw out a rich diversity of microbes. She knew when she had a catch because the current changed — metal eaters generate a negative current, as the microbes suck electrons from the negative electrode.
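
    To make the fishing analogy concrete, here is a minimal sketch in Python of the kind of test Rowe describes: at an electrode poised at a fixed potential, a sustained negative (cathodic) current beyond the instrument’s noise floor is the sign that cells are pulling electrons out of the electrode. The potentials, currents and threshold below are entirely hypothetical, and this is an illustration of the logic rather than her group’s actual analysis.

        import numpy as np

        # Hypothetical chronoamperometry traces: current (microamps) recorded over time
        # at electrodes poised at different potentials (volts vs. a reference electrode).
        # A negative current means electrons leave the electrode, i.e. possible microbial uptake.
        rng = np.random.default_rng(1)
        traces = {
            -0.40: rng.normal(-0.8, 0.05, 500),   # sustained cathodic current
            -0.25: rng.normal(-0.3, 0.05, 500),
            -0.10: rng.normal(0.0, 0.05, 500),    # noise only
            +0.05: rng.normal(0.1, 0.05, 500),
        }

        NOISE_FLOOR = 0.15  # microamps; assumed instrument noise level

        def shows_electron_uptake(trace, noise_floor=NOISE_FLOOR):
            """Flag a trace whose mean current is negative and larger in magnitude than the noise floor."""
            return trace.mean() < -noise_floor

        for potential, trace in traces.items():
            verdict = "electron uptake" if shows_electron_uptake(trace) else "no catch"
            print(f"electrode at {potential:+.2f} V: mean current {trace.mean():+.2f} uA -> {verdict}")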

    3
    Yamini Jangir, then a graduate student in Moh El-Naggar’s lab at the University of Southern California, collects water from a pipe at the Sanford Underground Research Facility nearly a mile underground. Connie A. Walter and Matt Kapust

    SURF – Sanford Underground Research Facility, Lead, South Dakota. [Image gallery: excavation and outfitting of the underground laboratory, including the Davis Cavern and the LUX dark matter, Majorana Demonstrator and LBNF/DUNE experiments housed there.]

    The different varieties of bacteria that Rowe collected thrive under different electrical conditions, suggesting they employ different strategies for eating electrons. “Each bacteria had a different energy level where electron uptake would happen,” Rowe said. “We think that is indicative of different pathways.”

    Rowe is now searching new environments for additional microbes, focusing on fluids from a deep spring with low acidity. She’s also helping with El-Naggar’s gold mine expedition. “We are trying to understand how life works under these conditions,” said El-Naggar. “We now know that life goes far deeper than we thought, and there’s a lot more than we thought, but we don’t have a good idea for how they are surviving.”

    El-Naggar emphasizes that the field is still in its infancy, likening the current state to the early days of neuroscience, when researchers poked at frogs with electrodes to make their muscles twitch. “It took a long time for the basic mechanistic stuff to come out,” he said. “It’s only been 30 years since we discovered that microbes can interact with solid surfaces.”

    Given the bounty from these early experiments, it seems that scientists have only scratched the surface of the microbial diversity that thrives beneath the planet’s shallow exterior. The results could give clues to the origins of life on Earth and beyond. One theory for the emergence of life suggests it originated on mineral surfaces, which could have concentrated biological molecules and catalyzed reactions. New research could fill in one of the theory’s gaps — a mechanism for transporting electrons from mineral surfaces into cells.

    Moreover, subsurface metal eaters may provide a blueprint for life on other worlds, where alien microbes might be hiding underground. “For me, one of the most exciting possibilities is finding life-forms that might survive in extreme environments like Mars,” said El-Naggar, whose gold mine experiment is funded by NASA’s Astrobiology Institute. Mars, for example, is iron-rich and has water flowing beneath its surface. “If you have a system that can pick up electrons from iron and have some water, then you have all the ingredients for a conceivable metabolism,” said El-Naggar. Perhaps a former mine a mile underneath South Dakota won’t be the most surprising place that researchers find electron-eating life.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 10:09 am on October 18, 2017 Permalink | Reply
    Tags: , , , Nigel Goldenfeld, , Quanta Magazine   

    From Quanta: “Seeing Emergent Physics Behind Evolution” 

    Quanta Magazine
    Quanta Magazine

    August 31, 2017 [Previously hidden.]
    Jordana Cepelewicz

    1
    Nigel Goldenfeld, director of the NASA Astrobiology Institute for Universal Biology, spends little time in his office in the physics department at the University of Illinois, Urbana-Champaign. But even when working on biology studies, he applies to them the informative principles of condensed matter physics and emergent states. Seth Lowe for Quanta Magazine

    The physicist Nigel Goldenfeld hates biology — “at least the way it was presented to me” when he was in school, he said. “It seemed to be a disconnected collection of facts. There was very little quantitation.” That sentiment may come as a surprise to anyone who glances over the myriad projects Goldenfeld’s lab is working on. He and his colleagues monitor the individual and swarm behaviors of honeybees, analyze biofilms, watch genes jump, assess diversity in ecosystems and probe the ecology of microbiomes. Goldenfeld himself is director of the NASA Astrobiology Institute for Universal Biology, and he spends most of his time not in the physics department at the University of Illinois but in his biology lab on the Urbana-Champaign campus.

    Goldenfeld is one in a long list of physicists who have sought to make headway on questions in biology: In the 1930s Max Delbrück transformed the understanding of viruses; later, Erwin Schrödinger published What is Life? The Physical Aspect of the Living Cell; Francis Crick, a pioneer of X-ray crystallography, helped discover the structure of DNA. Goldenfeld wants to make use of his expertise in condensed matter theory, in which he models how patterns in dynamic physical systems evolve over time, to better understand diverse phenomena including turbulence, phase transitions, geological formations and financial markets. His interest in emergent states of matter has compelled him to explore one of biology’s greatest mysteries: the origins of life itself. And he’s only branched out from there. “Physicists can ask questions in a different way,” Goldenfeld said. “My motivation has always been to look for areas in biology where that kind of approach would be valued. But to be successful, you have to work with biologists and essentially become one yourself. You need both physics and biology.”

    Quanta Magazine recently spoke with Goldenfeld about collective phenomena, expanding the Modern Synthesis model of evolution, and using quantitative and theoretical tools from physics to gain insights into mysteries surrounding early life on Earth and the interactions between cyanobacteria and predatory viruses. A condensed and edited version of that conversation follows.

    Physics has an underlying conceptual framework, while biology does not. Are you trying to get at a universal theory of biology?

    God, no. There’s no unified theory of biology. Evolution is the nearest thing you’re going to get to that. Biology is a product of evolution; there aren’t exceptions to the fact that life and its diversity came from evolution. You really have to understand evolution as a process to understand biology.

    So how can collective effects in physics inform our understanding of evolution?

    When you think about evolution, you typically tend to think about population genetics, the frequency of genes in a population. But if you look to the Last Universal Common Ancestor — the organism ancestral to all others, which we can trace through phylogenetics [the study of evolutionary relationships] — that’s not the beginning of life. There was definitely simpler life before that — life that didn’t even have genes, when there were no species. So we know that evolution is a much broader phenomenon than just population genetics.

    The Last Universal Common Ancestor is dated to be about 3.8 billion years ago. The earth is 4.6 billion years old. Life went from zero to essentially the complexity of the modern cell in less than a billion years. In fact, probably a lot less: Since then, relatively little has happened in terms of the evolution of cellular architecture. So evolution was slow for the last 3.5 billion years, but very fast initially.

    Why did life evolve so fast?

    [The late biophysicist] Carl Woese and I felt that it was because it evolved in a different way. The way life evolves in the present era is through vertical descent: You give your genes to your children, they give their genes to your grandchildren, and so on. Horizontal gene transfer gives genes to an organism that’s not related to you. It happens today in bacteria and other organisms, with genes that aren’t really so essential to the structure of the cell. Genes that give you resistance to antibiotics, for example — that’s why bacteria evolve defenses against drugs so quickly. But in the earlier phase of life, even the core machinery of the cell was transmitted horizontally. Life early on would have been a collective state, more of a community held together by gene exchange than simply the sum of a collection of individuals. There are many other well-known examples of collective states: for example, a bee colony or a flock of birds, where the collective seems to have its own identity and behavior, arising from the constituents and the ways that they communicate and respond to each other. Early life communicated through gene transfer.

    How do you know?

    Life could only have evolved as rapidly and optimally as it did if we assume this early network effect, rather than a [family] tree. We discovered about 10 years ago that this was the case with the genetic code, the rules that tell the cell which amino acids to use to make protein. Every organism on the planet has the same genetic code, with very minor perturbations. In the 1960s Carl was the first to have the idea that the genetic code we have is about as good as it could possibly be for minimizing errors. Even if you get the wrong amino acid — through a mutation, or because the cell’s translational machinery made a mistake — the genetic code specifies an amino acid that’s probably similar to the one you should have gotten. In that way, you’ve still got a chance that the protein you make will function, so the organism won’t die. David Haig [at Harvard University] and Laurence Hurst [at the University of Bath] were the first to show that this idea could be made quantitative through Monte Carlo simulation — they looked for which genetic code is most resilient against these kinds of errors. And the answer is: the one that we have. It’s really amazing, and not as well known as it should be.
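
    The Monte Carlo idea can be sketched in a few lines. The Python toy below generates random assignments of 20 abstract amino acids to the 64 codons and scores each assignment by how much a randomly chosen amino-acid “property” changes, on average, under every possible single-nucleotide mutation; the lower the score, the more error-resilient the code. Haig and Hurst’s actual analysis compared the real genetic code, with measured amino-acid properties, against such random alternatives, so this is a schematic of the method rather than a reproduction of their result.

        import itertools
        import random

        BASES = "ACGU"
        CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]  # all 64 codons
        AMINO_ACIDS = list(range(20))  # 20 abstract amino acids

        def random_code():
            """Assign every codon a random amino acid (a crude stand-in for a genetic code)."""
            return {codon: random.choice(AMINO_ACIDS) for codon in CODONS}

        def error_cost(code, prop):
            """Average change in an amino-acid property over all single-nucleotide mutations."""
            total, count = 0.0, 0
            for codon in CODONS:
                for pos in range(3):
                    for base in BASES:
                        if base == codon[pos]:
                            continue
                        mutant = codon[:pos] + base + codon[pos + 1:]
                        total += abs(prop[code[codon]] - prop[code[mutant]])
                        count += 1
            return total / count

        random.seed(0)
        # A made-up numerical "property" per amino acid (real analyses use e.g. hydrophobicity).
        prop = {aa: random.uniform(0.0, 1.0) for aa in AMINO_ACIDS}

        costs = [error_cost(random_code(), prop) for _ in range(200)]  # Monte Carlo sample of codes
        print(f"average error cost of random codes: {sum(costs) / len(costs):.3f}")
        print(f"best (most error-resilient) sampled code: {min(costs):.3f}")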

    Later, Carl and I, together with Kalin Vetsigian [at the University of Wisconsin-Madison], did a digital life simulation of communities of organisms with many synthetic, hypothetical genetic codes. We made computer virus models that mimicked living systems: They had a genome, expressed proteins, could replicate, experienced selection, and their fitness was a function of the proteins that they had. We found that it was not just their genomes that evolved. Their genetic code evolved, too. If you just have vertical evolution [between generations], the genetic code never becomes unique or optimal. But if you have this collective network effect, then the genetic code evolves rapidly and to a unique, optimal state, as we observe today.

    So those findings, and the questions about how life could get this error-minimizing genetic code so quickly, suggest that we should see signatures of horizontal gene transfer earlier than the Last Universal Common Ancestor, for example. Sure enough, some of the enzymes that are associated with the cell’s translation machineries and gene expression show strong evidence of early horizontal gene transfers.

    How have you been able to build on those findings?

    Tommaso Biancalani [now at the Massachusetts Institute of Technology] and I discovered in the last year or so — and our paper on this has been accepted for publication — that life automatically shuts off the horizontal gene transfer once it has evolved enough complexity. When we simulate it, it basically shuts itself off on its own. It’s still trying to do horizontal gene transfer, but almost nothing sticks. Then the only evolutionary mechanism that dominates is vertical evolution, which was always present. We’re now trying to do experiments to see whether all the core cellular machinery has gone through this transition from horizontal to vertical transmission.

    Is this understanding of early evolution why you’ve said that we need a new way to talk about biology?

    People tend to think about evolution as being synonymous with population genetics. I think that’s fine, as far as it goes. But it doesn’t go far enough. Evolution was going on before genes even existed, and that can’t possibly be explained by the statistical models of population genetics alone. There are collective modes of evolution that one needs to take seriously, too. Processes like horizontal gene transfer, for example.

    It’s in that sense that I think our view of evolution as a process needs to be expanded — by thinking about dynamical systems, and how it is possible that systems capable of evolving and reproducing can exist at all. If you think about the physical world, it is not at all obvious why you don’t just make more dead stuff. Why does a planet have the capability to sustain life? Why does life even occur? The dynamics of evolution should be able to address that question. Remarkably, we don’t have an idea even in principle of how to address that question — which, given that life started as something physical and not biological, is fundamentally a physics question.

    How does your work on cyanobacteria fit into these applications of condensed matter theory?

    My graduate student Hong-Yan Shih and I modeled the ecosystem of an organism called Prochlorococcus, a type of cyanobacteria that lives in the ocean and gets its energy from photosynthesis. I think it may well be the most numerous cellular organism on the planet. There are viruses, called phages, that prey on the bacteria. Ten years or so ago, it was discovered that these phages have photosynthesis genes, too. Now, you normally wouldn’t think of a virus as needing to do photosynthesis. So why are they carrying these genes around?

    It seems that the bacteria and phages don’t quite behave as the dynamics of a predator-prey ecosystem would predict. The bacteria actually benefit from the phages. In fact, the bacteria could prevent the phages from attacking them in many ways, but they don’t, not entirely. The phages’ photosynthesis genes originally came from the bacteria — and, amazingly, the phages then transferred them back to the bacteria. Photosynthesis genes have shuttled back and forth between the bacteria and the phages several times over the last 150 million years.

    It turns out that genes evolve much more rapidly in the viruses than they do in the bacteria, because the replication process for the viruses is much shorter and more likely to make mistakes. As a side effect of the phages’ predation on the bacteria, bacterial genes sometimes get transferred into the viruses, where they can spread, evolve quickly and then be given back to the bacteria, which can then reap the benefits. So the phages have been useful to the bacteria. For example, there are two strains of Prochlorococcus, which live at different depths. One of those ecotypes adapted to live closer to the surface, where the light is much more intense and has a different frequency. That adaptation could occur because the viruses made rapid evolution available.

    And the viruses benefit from the genes, too. When a virus infects its host and replicates, the number of new viruses it makes depends on how long the hijacked cell can survive. If the virus carries with it a life-support system — the photosynthesis genes — it can keep the cell alive longer to make more copies of the virus. The virus that carries the photosynthesis genes has a competitive advantage over one that doesn’t. There’s a selection pressure on the viruses to carry genes that benefit the host. You’d expect that because the viruses have such a high mutation rate, their genes would deteriorate rapidly. But in the calculations that we’ve done, we’ve found that the bacteria filter the good genes and transfer them to the viruses.

    So there’s a nice story here: a collective behavior between the bacteria and the viruses that mimics the kind of things that happen in condensed matter systems — and that we can model, so that we can predict features of the system.

    2
    Seth Lowe for Quanta Magazine

    We’ve been talking about a physics-based approach to biology. Have you encountered the reverse, where the biology has informed the physics?

    Yes. I work on turbulence. When I go home at night, that’s what I lie awake thinking about. In a paper published last year in Nature Physics, Hong-Yan Shih, Tsung-Lin Hsieh and I wanted to better understand how a fluid in a pipe goes from being laminar, where it flows smoothly and predictably, to turbulent, where its behavior is unpredictable, irregular and stochastic. We discovered that very close to the transition, turbulence behaves kind of like an ecosystem. There’s a particular dynamical mode of the fluid flow that’s like a predator: It tries to “eat” the turbulence, and the interplay between this mode and the emerging turbulence gives rise to some of the phenomena that you see as the fluid becomes turbulent. Ultimately, our work predicts that a certain type of phase transition happens in fluids, and indeed that’s what the experiments show. Because the physics problem turned out to be mappable onto this biology problem — the ecology of predator and prey — Hong-Yan and I knew how to simulate and model the system and reproduce what people see in experiments. Knowing the biology actually helped us understand the physics.
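
    The predator-prey language comes from ecology’s textbook Lotka-Volterra equations, and the short Python sketch below integrates that classic deterministic model as a reminder of what such dynamics look like: prey and predator populations chase each other in out-of-phase oscillations. The actual Nature Physics work maps the laminar-turbulent transition onto stochastic predator-prey dynamics in the directed-percolation universality class, so the rate constants here are illustrative and the sketch is not a model of pipe flow.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Classic Lotka-Volterra predator-prey model with illustrative rate constants:
        # prey grow and get eaten; predators grow by eating prey and otherwise die off.
        a, b, c, d = 1.0, 0.5, 0.5, 0.8

        def lotka_volterra(t, y):
            prey, predator = y
            return [a * prey - b * prey * predator,
                    c * prey * predator - d * predator]

        sol = solve_ivp(lotka_volterra, (0, 50), [2.0, 1.0], t_eval=np.linspace(0, 50, 500))

        # The two populations oscillate out of phase: predator booms lag the prey booms.
        print("final prey level:    ", round(sol.y[0][-1], 3))
        print("final predator level:", round(sol.y[1][-1], 3))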

    What are the limitations to a physics-based approach to biology?

    On one hand, there is a danger of replicating only what is known, so that you can’t make any new predictions. On the other, sometimes your abstraction or minimal representation is oversimplified, and then you’ve lost something in the process.

    You can’t think too theoretically. You have to roll up your sleeves and learn the biology, be closely tied with real experimental phenomena and real data. That’s why our work is done in collaboration with experimentalists: With experimentalist colleagues, I’ve collected microbes from the hot springs of Yellowstone National Park, watched jumping genes in real time in living cells, sequenced the gastrointestinal microbiome of vertebrates. Every day you’ll find me working in the Institute for Genomic Biology, even though my home department is physics.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 7:02 am on October 12, 2017 Permalink | Reply
    Tags: , , , , , , , Quanta Magazine, The Math That’s Too Difficult for Physics   

    From Quanta: “The Math That’s Too Difficult for Physics” 

    Quanta Magazine
    Quanta Magazine

    November 18, 2016 [Wow!!]
    Kevin Hartnett

    1
    Christian Gwiozda

    CERN CMS Higgs Event


    CERN ATLAS Higgs Event


    Higgs. Always the last place you look.

    How do physicists reconstruct what really happened in a particle collision? Through calculations that are so challenging that, in some cases, they simply can’t be done. Yet.

    It’s one thing to smash protons together. It’s another to make scientific sense of the debris that’s left behind.

    This is the situation at CERN, the laboratory that houses the Large Hadron Collider, the largest and most powerful particle accelerator in the world.

    LHC

    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles

    In order to understand all the data produced by the collisions there, experimental physicists and theoretical physicists engage in a continual back and forth. Experimentalists come up with increasingly intricate experimental goals, such as measuring the precise properties of the Higgs boson. Ambitious goals tend to require elaborate theoretical calculations, which the theorists are responsible for. The experimental physicists’ “wish list is always too full of many complicated processes,” said Pierpaolo Mastrolia, a theoretical physicist at the University of Padua in Italy. “Therefore we identify some processes that can be computed in a reasonable amount of time.”

    By “processes,” Mastrolia is referring to the chain of events that unfolds after particles collide. For example, a pair of gluons might combine through a series of intermediate steps — particles morphing into other particles — to form a Higgs boson, which then decays into still more particles. In general, physicists prefer to study processes involving larger numbers of particles, since the added complexity assists in searches for physical effects that aren’t described by today’s best theories. But each additional particle requires more math.

    To do this math, physicists use a tool called a Feynman diagram, which is essentially an accounting device that has the look of a stick-figure drawing: Particles are represented by lines that collide at vertices to produce new particles.

    3
    Feynman Diagrams Depicting Possible Formations of the Higgs Boson. Image Credit: scienceblogs.com. astrobites

    Physicists then take the integral of every possible path an experiment could follow from beginning to end and add those integrals together. As the number of possible paths goes up, the number of integrals that theorists must compute — and the difficulty of calculating each individual integral — rises precipitously.

    When deciding on the kinds of collisions they want to study, physicists have two main choices to make. First, they decide on the number of particles they want to consider in the initial state (coming in) and the final state (going out). In most experiments, it’s two incoming particles and anywhere from one to a dozen outgoing particles (referred to as “legs” of the Feynman diagram). Then they decide on the number of “loops” they’ll take into account. Loops represent all the intermediate collisions that could take place between the initial and final states. Adding more loops increases the precision of the measurement. They also significantly add to the burden of calculating Feynman diagrams. Generally speaking, there’s a trade-off between loops and legs: If you want to take into account more loops, you need to consider fewer legs. If you want to consider more legs, you’re limited to just a few loops.

    “If you go to two loops, the largest number [of legs] going out is two. People are pushing toward three particles going out at two loops — that’s the boundary that’s really beyond the state of the art,” said Gavin Salam, a theoretical physicist at CERN.

    Physicists already have the tools to calculate probabilities for tree-level (zero loop) and one-loop diagrams featuring any number of particles going in and out. But accounting for more loops than that is still a major challenge and could ultimately be a limiting factor in the discoveries that can be achieved at the LHC.
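
    To get a feel for what a one-loop calculation involves, the Python sketch below numerically evaluates a Feynman-parameter integral of the schematic form: the integral over x from 0 to 1 of ln(m^2 - x(1-x)s). Integrals of this shape appear in the finite part of a one-loop two-point diagram in a toy scalar theory once the divergences have been regularized. Real LHC processes involve many such integrals in more variables, with divergences handled carefully, so this is a caricature of the bookkeeping rather than a production calculation.

        import numpy as np
        from scipy.integrate import quad

        def one_loop_bubble(s, m):
            """Toy Feynman-parameter integral: integrate ln(m^2 - x(1-x)s) for x in [0, 1].
            Kept below threshold (s < 4 m^2) so the integrand stays real."""
            integrand = lambda x: np.log(m**2 - x * (1.0 - x) * s)
            value, quadrature_error = quad(integrand, 0.0, 1.0)
            return value, quadrature_error

        m = 1.0                          # particle mass, arbitrary units
        for s in [0.5, 1.0, 2.0, 3.5]:   # squared momentum, kept below the 4 m^2 threshold
            val, err = one_loop_bubble(s, m)
            print(f"s = {s:3.1f}: integral = {val:+.6f} (estimated error {err:.1e})")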

    “Once we discover a particle and want to determine its properties, its spin, mass, angular momentum or couplings with other particles, then higher-order calculations” with loops become necessary, said Mastrolia.

    And that’s why many are excited about the emerging connections between Feynman diagrams and number theory that I describe in the recent article “Strange Numbers Found in Particle Collisions.” If mathematicians and physicists can identify patterns in the values generated from diagrams of two or more loops, their calculations would become much simpler — and experimentalists would have the mathematics they need to study the kinds of collisions they’re most interested in.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:20 pm on October 8, 2017 Permalink | Reply
    Tags: , , , , , , , , Perimeter Institute of Theoretical Physics, , Quanta Magazine,   

    From Quanta: Women in STEM: “Mining Black Hole Collisions for New Physics” Asimina Arvanitaki 

    Quanta Magazine
    Quanta Magazine

    July 21, 2016
    Joshua Sokol

    The physicist Asimina Arvanitaki is thinking up ways to search gravitational wave data for evidence of dark matter particles orbiting black holes.

    1
    Asimina Arvanitaki during a July visit to the CERN particle physics laboratory in Geneva, Switzerland.
    Samuel Rubio for Quanta Magazine

    When physicists announced in February that they had detected gravitational waves firsthand, the foundations of physics scarcely rattled.


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    1
    Skymap showing how adding Virgo to LIGO helps in reducing the size of the likely source region in the sky. Credit: Giuseppe Greco (Virgo Urbino group)

    The signal exactly matched the expectations physicists had arrived at after a century of tinkering with Einstein’s theory of general relativity. “There is a question: Can you do fundamental physics with it? Can you do things beyond the standard model with it?” said Savas Dimopoulos, a theoretical physicist at Stanford University. “And most people think the answer to that is no.”

    Asimina Arvanitaki is not one of those people. A theoretical physicist at Ontario’s Perimeter Institute for Theoretical Physics,


    Perimeter Institute in Waterloo, Canada

    Arvanitaki has been dreaming up ways to use black holes to explore nature’s fundamental particles and forces since 2010, when she published a paper with Dimopoulos, her mentor from graduate school, and others. Together, they sketched out a “string axiverse,” a pantheon of as yet undiscovered, weakly interacting particles. Axions such as these have long been a favored candidate to explain dark matter and other mysteries.

    In the intervening years, Arvanitaki and her colleagues have developed the idea through successive papers. But February’s announcement marked a turning point, where it all started to seem possible to test these ideas. Studying gravitational waves from the newfound population of merging black holes would allow physicists to search for those axions, since the axions would bind to black holes in what Arvanitaki describes as a “black hole atom.”

    “When it came up, we were like, ‘Oh my god, we’re going to do it now, we’re going to look for this,’” she said. “It’s a whole different ball game if you actually have data.”

    That’s Arvanitaki’s knack: matching what she calls “well-motivated,” field-hopping theoretical ideas with the precise experiment that could probe them. “By thinking away from what people are used to thinking about, you see that there is low-hanging fruit that lie in the interfaces,” she said. At the end of April, she was named the Stavros Niarchos Foundation’s Aristarchus Chair at the Perimeter Institute, the first woman to hold a research chair there.

    It’s a long way to come for someone raised in the small Grecian village of Koklas, where the graduating class at her high school — at which both of her parents taught — consisted of nine students. Quanta Magazine spoke with Arvanitaki about her plan to use black holes as particle detectors. An edited and condensed version of those discussions follows.

    QUANTA MAGAZINE: When did you start to think that black holes might be good places to look for axions?

    ASIMINA ARVANITAKI: When we were writing the axiverse paper, Nemanja Kaloper, a physicist who is very good in general relativity, came and told us, “Hey, did you know there is this effect in general relativity called superradiance?” And we’re like, “No, this cannot be, I don’t think this happens. This cannot happen for a realistic system. You must be wrong.” And then he eventually convinced us that this could be possible, and then we spent like a year figuring out the dynamics.

    What is superradiance, and how does it work?

    An astrophysical black hole can rotate. There is a region around it called the “ergo region” where even light has to rotate. Imagine I take a piece of matter and throw it in a trajectory that goes through the ergo region. Now imagine you have some explosives in the matter, and it breaks apart into pieces. Part of it falls into the black hole and part escapes into infinity. The piece that is coming out has more total energy than the piece that went in the black hole.

    You can perform the same experiment by scattering radiation from a black hole. Take an electromagnetic wave pulse, scatter it from the black hole, and you see that the pulse you got back has a higher amplitude.

    So you can send a pulse of light near a black hole in such a way that it would take some energy and angular momentum from the black hole’s spin?

    This is old news, by the way, this is very old news. In ’72 Press and Teukolsky wrote a Nature paper that suggested the following cute thing. Let’s imagine you performed the same experiment as the light, but now imagine that you have the black hole surrounded by a giant mirror. What will happen in that case is the light will bounce on the mirror many times, the amplitude [of the light] grows exponentially, and the mirror eventually explodes due to radiation pressure. They called it the black hole bomb.

    The property that allows light to do this is that light is made of photons, and photons are bosons — particles that can sit in the same space at the same time with the same wave function. Now imagine that you have another boson that has a mass. It can [orbit] the black hole. The particle’s mass acts like a mirror, because it confines the particle in the vicinity of the black hole.

    In this way, axions might get stuck around a black hole?

    This process requires that the size of the particle is comparable to the black hole size. Turns out that [axion] mass can be anywhere from Hubble scale — with a quantum wavelength as big as the universe — or you could have a particle that’s tiny in size.

    So if they exist, axions can bind to black holes with a similar size and mass. What’s next?

    What happens is the number of particles in this bound orbit starts growing exponentially. At the same time the black hole spins down. If you solve for the wave functions of the bound orbits, what you find is that they look like hydrogen wave functions. Instead of electromagnetism binding your atom, what’s binding it is gravity. There are three quantum numbers you can describe, just the same. You can use the exact terminology that you can use in the hydrogen atom.
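
    The hydrogen analogy can be made quantitative with the standard “gravitational atom” estimate used in the superradiance literature: the role of the fine-structure constant is played by alpha = G M m / (hbar c), the ratio of the black hole’s gravitational radius to the boson’s reduced Compton wavelength, and the bound levels sit at roughly E_n = m c^2 (1 - alpha^2 / 2n^2). The Python sketch below plugs in an assumed 10-solar-mass black hole and an assumed 10^-12 eV boson purely for illustration; the effect is strongest when alpha is of order unity, i.e., when the particle’s Compton wavelength is comparable to the black hole’s size.

        G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
        c     = 2.998e8     # speed of light, m/s
        hbar  = 1.055e-34   # reduced Planck constant, J s
        eV    = 1.602e-19   # joules per electronvolt
        M_sun = 1.989e30    # solar mass, kg

        def gravitational_atom(black_hole_solar_masses, boson_mass_eV):
            """Order-of-magnitude numbers for a boson cloud bound to a black hole."""
            M = black_hole_solar_masses * M_sun
            m_boson = boson_mass_eV * eV / c**2          # boson mass in kg
            r_g = G * M / c**2                           # gravitational radius of the hole, m
            compton = hbar / (m_boson * c)               # reduced Compton wavelength, m
            alpha = r_g / compton                        # gravitational "fine-structure constant"
            binding_n1 = boson_mass_eV * alpha**2 / 2.0  # binding energy of the n=1 level, eV
            return r_g, compton, alpha, binding_n1

        # Assumed example: a 10-solar-mass black hole and a 1e-12 eV boson.
        r_g, compton, alpha, E1 = gravitational_atom(10, 1e-12)
        print(f"gravitational radius: {r_g:.2e} m")
        print(f"Compton wavelength:   {compton:.2e} m")
        print(f"alpha:                {alpha:.3f}")
        print(f"n = 1 binding energy: {E1:.2e} eV")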

    How could we check to see if any of the black holes LIGO finds have axion clouds orbiting around black hole nuclei?

    This is a process that extracts energy and angular momentum from the black hole. If you were to measure spin versus mass of black holes, you should see that in a certain mass range for black holes you see no quickly rotating black holes.

    This is where Advanced LIGO comes in. You saw the event they saw. [Their measurements] allowed them to measure the masses of the merging objects, the mass of the final object, the spin of the final object, and to have some information about the spins of the initial objects.

    If I were to take the spins of the black holes before they merged, they could have been affected by superradiance. Now imagine a graph of black hole spin versus mass. Advanced LIGO could maybe get, if the things that we hear are correct, a thousand events per year. Now you have a thousand data points on this plot. So you may trace out the region that is affected by this particle just by those measurements.

    That would be supercool.

    That’s of course indirect. So the other cool thing is that it turns out there are signatures that have to do with the cloud of particles themselves. And essentially what they do is turn the black hole into a gravitational wave laser.

    Awesome. OK, what does that mean?

    2
    Samuel Rubio for Quanta Magazine

    Yeah, what that means is important. Just like you have transitions of electrons in an excited atom, you can have transitions of particles in the gravitational wave atom. The rate of emission of gravitational waves from these transitions is enhanced by the 10^80 particles that you have. It would look like a very monochromatic line. It wouldn’t look like a transient. Imagine something now that emits a signal at a very fixed frequency.

    Where could LIGO expect to see signals like this?

    In Advanced LIGO, you actually see the birth of a black hole. You know when and where a black hole was born with a certain mass and a certain spin. So if you know the particle masses that you’re looking for, you can predict when the black hole will start growing the [axion] cloud around it. It could be that you see a merger in that day, and one or 10 years down the line, they go back to the same position and they see this laser turning on, they see this monochromatic line coming out from the cloud.

    You can also do a blind search. Because you have black holes that are roaming the universe by themselves, and they could still have some leftover cloud around them, you can do a blind search for monochromatic gravitational waves.

    Were you surprised to find out that axions and black holes could combine to produce such a dramatic effect?

    Oh my god yes. What are you talking about? We had panic attacks. You know how many panic attacks we had saying that this effect, no, this cannot be true, this is too good to be true? So yes, it was a surprise.

    The experiments you suggest draw from a lot of different theoretical ideas — like how we could look for high-frequency gravitational waves with tabletop sensors, or test whether dark matter oscillates using atomic clocks. When you’re thinking about making risky bets on physics beyond the standard model, what sorts of theories seem worth the effort?

    What is well motivated? Things that are not: “What if you had this?” People imagine: “What if dark matter was this thing? What if dark matter was the other thing?” For example, supersymmetry makes predictions about what types of dark matter should be there. String theory makes predictions about what types of particles you should have. There is always an underlying reason why these particles are there; it’s not just the endless theoretical possibilities that we have.

    And axions fit that definition?

    This is a particle that was proposed 30 years ago to explain the smallness of the observed electric dipole moment of the neutron. There are several experiments around the world looking for it already, at different wavelengths. So this particle, we’ve been looking for it for 30 years. This can be the dark matter. That particle solves an outstanding problem of the standard model, so that makes it a good particle to look for.

    Now, whether or not the particle is there I cannot answer for nature. Nature will have to answer.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 8:48 am on October 8, 2017 Permalink | Reply
    Tags: , , , , , , Quanta Magazine, U Toronta Dragon Fly Telescope Array, UDGs-“ultra-diffuse galaxies”   

    From Quanta: “Strange Dark Galaxy Puzzles Astrophysicists” 

    Quanta Magazine
    Quanta Magazine

    September 27, 2016
    Joshua Sokol

    The surprising discovery of a massive, Milky Way-size galaxy that is made of 99.99 percent dark matter has astronomers dreaming up new ideas about how galaxies form.

    1
    Astronomers have long known of small dark-matter dominated galaxies. None were supposed to be as big as ordinary spiral galaxies such as NGC 3810, seen here in negative. Photo illustration by Olena Shmahalo/Quanta Magazine. Source: NASA/ESA Hubble.

    NASA/ESA Hubble Telescope

    Among the thousand-plus galaxies in the Coma cluster, a massive clump of matter some 300 million light-years away, is at least one — and maybe a few hundred — that shouldn’t exist.

    Coma cluster via NASA/ESA Hubble

    Dragonfly 44 is a dim galaxy, with one star for every hundred in our Milky Way.

    2
    The ultra-diffuse galaxy Dragonfly 44. Image credit: Pieter van Dokkum / Roberto Abraham

    But it spans roughly as much space as the Milky Way.

    Milky Way NASA/JPL-Caltech /ESO R. Hurt

    In addition, it’s heavy enough to rival our own galaxy in mass, according to results published in The Astrophysical Journal Letters at the end of August. That odd combination is crucial: Dragonfly 44 is so dark, so fluffy, and so heavy that some astronomers believe it will either force a revision of our theories of galaxy formation or help us understand the properties of dark matter, the mysterious stuff that interacts with normal matter via gravity and not much else. Or both.

    The discovery came almost by accident. The astronomers Pieter van Dokkum of Yale University and Roberto Abraham of the University of Toronto were interested in testing theories of how galaxies form by searching for objects that have been invisible to even the most advanced telescopes: faint, wispy and extended objects in the sky. So their team built the Dragonfly Telephoto Array, a collection of modified Canon lenses that focus light onto commercial camera sensors.

    U Toronto Dragonfly Telephoto Array

    This setup cut down on any scattered light inside the system that might hide a dim object.

    The plan was to study the faint fringes of nearby galaxies. But the famous Coma cluster — the collection of galaxies that long ago inspired astronomer Fritz Zwicky’s conjecture that such a thing as dark matter might exist — beckoned. “Partway through, we just could not resist looking at Coma,” Abraham said. “You could argue that this discovery emerged from a lack of discipline.” They planned to study the Coma cluster’s intracluster light — the faint glow of loose stars floating between the cluster’s galaxies.

    Instead, they found 47 faint smudges that wouldn’t go away. These smudges seemed to have diameters roughly the same size as the Milky Way. Yet according to the commonly accepted models of galaxy formation, anything that big shouldn’t be so dim.

    In these theories, clumps of dark matter seed the universe with light. First, clouds of dark matter coalesce into relatively dense dark-matter haloes.

    Dark matter halo Image credit: Virgo consortium / A. Amblard / ESA

    Then gas and fragments of other galaxies, drawn by the halo’s gravity, collect at the center. They spin out into a disk and collapse into luminous stars to form something we can see through telescopes. The whole process seems to be reasonably predictable for big galaxies such as our Milky Way. Having measured either a galaxy’s dark-matter halo or its assortment of stars, you should be able to predict the other to within a factor of two.

    3
    The dark galaxy Dragonfly 44. The scale bar represents a distance of 10 kiloparsecs, or about 33,000 light years. Pieter van Dokkum, Roberto Abraham, Gemini Observatory/AURA.

    “It’s not just dogma. It’s basically that there are no exceptions that we knew of,” said Jeremiah Ostriker, an astrophysicist at Columbia University.

    After Abraham and van Dokkum realized that they appeared to be looking at 47 exceptions, they did a search through the literature. They found that similar fuzzy blobs have been on the edge of discovery since the 1970s. Van Dokkum thinks astronomy’s transition from photographic plates — which were perhaps better suited to picking up extended, diffuse objects — to modern digital sensors may actually have hidden them from further attention.

    Abraham and van Dokkum first noticed their smudges in the spring of 2014. Since then, similar “ultra-diffuse galaxies,” or UDGs, have been discovered in other galaxy groupings like the Virgo and Fornax clusters. And in the Coma cluster, one study suggested [The Astrophysical Journal Letters], there may be a thousand more of them, including 332 that are about as large as the Milky Way.

    Meanwhile, the Dragonfly team has been advancing the case that these new dim galaxies really are oddballs that challenge current theory. They’re failed galaxies, this argument holds. Dark matter planted the seeds of a spiral disk and stars, but somehow the luminous structure didn’t sprout.

    That argument has convinced outside experts like Ostriker, who finds van Dokkum’s prior record highly credible. “There are many, many other people who could have ‘discovered’ this where I’d be much more skeptical,” Ostriker said. “The simplest way of putting it is: His papers aren’t wrong.”

    Not everyone is so convinced. While these UDGs may be large, they’re not necessarily massive, argue some astronomers. One idea is that UDGs might be lightweight galaxies that look puffy because they are in the process of being torn apart by gravitational tides from the rest of the Coma cluster.

    Michelle Collins, an astronomer at the University of Surrey, argues that “the only other place we’ve seen things that are that extreme or more extreme are a handful of galaxies around the Local Group,” referring to small, dim “dwarf galaxies” that frequently orbit larger galaxies such as our Milky Way. “They are all things that are currently being ripped apart.” That would make most UDGs just large dwarf galaxies in the process of being ripped to shreds.

    Another possibility hinges on the idea that galaxies can “breathe.” At the end of 2015, Kareem El-Badry, who was at the time an undergraduate student at Yale University, proposed that galaxies can swell out and then collapse in size by over a factor of two. In this process, gas first falls into the galaxy, forming massive stars — the breathing in. The stars quickly end their lives in supernova explosions that blast the gas outside the galaxy — the breathing out. The gas eventually cools, and gravity pulls it back toward the galactic center. In a lone galaxy, this rhythm can continue indefinitely. But in the harsh environment of the Coma cluster, where hot gas fills the space between galaxies, the gas after the galaxy exhales could be stripped away, leaving the whole galaxy stuck in a puffy state.

    3
    Lucy Reading-Ikkanda for Quanta Magazine

    Yet another interpretation, suggested in March 2016 by Harvard University astrophysicists Nicola Amorisco and Avi Loeb, is that UDGs are ordinary galaxies that are just spinning fast. “In our scenario, it’s very natural,” Loeb said.

    That idea piggybacks on standard theories of galaxy formation, in which gas pours into a dark-matter halo to build a galaxy. As the material falls, it begins to rotate. The amount of rotation determines the size of the final galaxy. Without much spin, gravity pulls the galaxy into a compact shape. But galaxies that get a big rotational push can spin themselves out into large, lightweight disks.

    It could be, according to this model, that the UDGs are natural examples of the very fastest spinners. If so, their stretched-out disks wouldn’t be dense enough to form as many stars as a slower rotator like the Milky Way, explaining why they look so faint.

    These ideas may well explain some of the UDG population, according to Abraham. “Probably this is going to evolve into a mixed bag of things,” he said. But according to his team’s latest data, obtained from observations that spanned a total of 33.5 hours on the 10-meter Keck II telescope in Hawaii, there is no evidence that the Dragonfly 44 galaxy is rotating.


    Keck Observatory, Maunakea, Hawaii, USA.4,207 m (13,802 ft) above sea level

    In addition, they argue that the total mass of the galaxy is around a trillion suns — massive enough to prevent it being ripped apart like a dwarf galaxy, and heavier than the galaxies thought to periodically puff up.

    That mass measurement is the real sticking point, said Philip Hopkins, a theoretical astrophysicist at the California Institute of Technology who is preparing several papers on UDGs. It comes from two observations of different parts of Dragonfly 44. First, the motions of stars in the galaxy’s inner regions suggest that the area is massive, filled with dark matter. Second, the outskirts of the galaxy are home to a number of globular clusters — tight, ancient balls of stars. Just as the number of stars in a galaxy is ordinarily linked to the amount of dark matter, observations show that the more globular clusters a galaxy has, the higher the mass of its dark-matter halo. Dragonfly 44 has Milky Way-level clusters. Other UDGs seem to have lots of globular clusters, too.
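
    As a rough illustration of how a globular-cluster census stands in for the halo, the Python sketch below uses the empirical rule that the total mass locked up in a galaxy’s globular clusters is a small, roughly constant fraction of its dark-matter halo mass, a ratio of a few times 10^-5 in the literature. The cluster count, mean cluster mass and exact ratio used here are assumed, Milky Way-like placeholder values, so the output should be read only as an order-of-magnitude version of the kind of halo the Dragonfly team infers.

        # Back-of-the-envelope halo mass from a globular cluster census.
        # Every number below is an illustrative assumption, not a measurement from the paper.
        n_clusters        = 90      # assumed globular cluster count (roughly Milky Way-like)
        mean_cluster_mass = 2.0e5   # assumed mean mass per cluster, solar masses
        eta               = 4.0e-5  # assumed ratio of total cluster mass to halo mass

        total_cluster_mass = n_clusters * mean_cluster_mass
        halo_mass = total_cluster_mass / eta

        print(f"total mass in globular clusters: {total_cluster_mass:.1e} solar masses")
        print(f"implied dark-matter halo mass:   {halo_mass:.1e} solar masses")
        # With these placeholder numbers the halo lands in the 10^11 to 10^12 solar-mass range,
        # the same ballpark as the roughly trillion-sun figure quoted for Dragonfly 44.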

    Because of this, even if these UDGs don’t have heavy dark-matter haloes, researchers will still be left to explain why they have far more globular clusters than the known relationship suggests they should. “Something is weird about these things,” Hopkins said. “Either way, it’s really cool.”

    The discovery has generated enough interest to earn the team precious time on the Hubble Space Telescope to study Dragonfly 44’s globular clusters. “The thing I find hilarious is we’re using humanity’s most powerful telescope in space to follow up a bunch of telephoto lenses,” Abraham said. To fully understand the relationship between dark matter and the globular clusters, though, they have to measure the motions of the clusters — for which they’ll need to wait until the James Webb Space Telescope launches in 2018 [revised to 2019].

    NASA/ESA/CSA Webb Telescope annotated

    In parallel, they’re looking to find and characterize more Dragonfly 44s, preferably a few located both outside of a cluster — and thus free of the harsh cluster environment — and closer to us. It’s an open question as to whether they exist elsewhere and, if so, what form they take. “The resolution of whether the UDGs are what we argue they are, or something else, would come from finding them outside of clusters of galaxies and seeing how they look there,” Loeb said. A few candidates have emerged, van Dokkum said, and they are now being followed up with Keck and Hubble.

    For theorists like Ostriker, that’s an exciting prospect. If the motion of stars in a galaxy like Dragonfly 44 can be studied up close, it would be a make-or-break test for current dark-matter theories, which make different predictions about how the missing mass should be distributed. The leading theory, called cold dark matter, suggests dark matter should surge at the heart of a galaxy. Right now, though, the dark-matter-dominated galaxies we have to study are nearby dwarf galaxies, and they don’t exhibit that characteristic. “Many of the properties that dark matter is supposed to have … these little galaxies don’t show,” Ostriker said. “But we say, ‘We don’t really know how these things were formed anyway,’ and we just change the subject.”

    By contrast, an otherwise normal-but-dark Milky Way would eliminate that loophole. In the universe’s other Milky Way-size galaxies, stars and gas can outweigh dark matter in the central regions by a factor of five to one. That makes disentangling the gravitational pull of dark matter alone tricky. But the center of Dragonfly 44’s disk is 98 percent dark matter, meaning a map of its central mass would give unprecedented insight into dark matter’s properties, Ostriker said.

    The way forward to understand UDGs isn’t clear yet, Abraham said, but hopefully at least some of the ideas now being proposed will persist through the next few years of observations. “In astronomy, it’s still valid to be just an explorer. In the case of Dragonfly, we’re like Leif Eriksson,” he said. “You’ve been on the ship for months, and suddenly somebody said, ‘Land ho!’ And it’s not on the map.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 4:21 pm on August 4, 2017 Permalink | Reply
    Tags: , , , , , , , , Quanta Magazine   

    From Quanta: “Scientists Unveil a New Inventory of the Universe’s Dark Contents” 

    Quanta Magazine
    Quanta Magazine

    August 3, 2017
    Natalie Wolchover

    In a much-anticipated analysis of its first year of data, the Dark Energy Survey (DES) telescope experiment has gauged the amount of dark energy and dark matter in the universe by measuring the clumpiness of galaxies — a rich and, so far, barely tapped source of information that many see as the future of cosmology.

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

    The analysis, posted on DES’s website today and based on observations of 26 million galaxies in a large swath of the southern sky, tweaks estimates only a little. It draws the pie chart of the universe as 74 percent dark energy and 21 percent dark matter, with galaxies and all other visible matter — everything currently known to physicists — filling the remaining 5 percent sliver.

    The results are based on data from the telescope’s first observing season, which began in August 2013 and lasted six months. Since then, three more rounds of data collection have passed; the experiment begins its fifth and final planned observing season this month. As the 400-person team analyzes more of this data in the coming years, they’ll begin to test theories about the nature of the two invisible substances that dominate the cosmos — particularly dark energy, “which is what we’re ultimately going after,” said Joshua Frieman, co-founder and director of DES and an astrophysicist at Fermi National Accelerator Laboratory (Fermilab) and the University of Chicago. Already, with their first-year data, the experimenters have incrementally improved the measurement of a key quantity that will reveal what dark energy is.

    Both terms — dark energy and dark matter — are mental place holders for unknown physics. “Dark energy” refers to whatever is causing the expansion of the universe to accelerate, as astronomers first discovered it to be doing in 1998. And great clouds of missing “dark matter” have been inferred from 80 years of observations of their apparent gravitational effect on visible matter (though whether dark matter consists of actual particles or something else, nobody knows).

    The balance of the two unknown substances sculpts the distribution of galaxies. “As the universe evolves, the gravity of dark matter is making it more clumpy, but dark energy makes it less clumpy because it’s pushing galaxies away from each other,” Frieman said. “So the present clumpiness of the universe is telling us about that cosmic tug-of-war between dark matter and dark energy.”

    2
    The Dark Energy Survey uses a 570-megapixel camera mounted on the Victor M. Blanco Telescope in Chile (left). The camera is made out of 74 individual light-gathering wafers.

    A Dark Map

    Until now, the best way to inventory the cosmos has been to look at the Cosmic Microwave Background [CMB]: pristine light from the infant universe that has long served as a wellspring of information for cosmologists, but which — after the Planck space telescope mapped it in breathtakingly high resolution in 2013 — has less and less to offer.

    CMB per ESA/Planck

    ESA/Planck

    Cosmic microwaves come from the farthest point that can be seen in every direction, providing a 2-D snapshot of the universe at a single moment in time, 380,000 years after the Big Bang (the cosmos was dark before that). Planck’s map of this light shows an extremely homogeneous young universe, with subtle density variations that grew into the galaxies and voids that fill the universe today.

    Galaxies, after undergoing billions of years of evolution, are more complex and harder to glean information from than the cosmic microwave background, but according to experts, they will ultimately offer a richer picture of the universe’s governing laws since they span the full three-dimensional volume of space. “There’s just a lot more information in a 3-D volume than on a 2-D surface,” said Scott Dodelson, co-chair of the DES science committee and an astrophysicist at Fermilab and the University of Chicago.

    To obtain that information, the DES team scrutinized a section of the universe covering 1,300 square degrees of sky (the combined area of roughly 6,500 full moons) and stretching back 8 billion years (the data were collected by the half-billion-pixel Dark Energy Camera mounted on the Victor M. Blanco Telescope in Chile). They statistically analyzed the separations between galaxies in this cosmic volume. They also examined the distortion in the galaxies’ apparent shapes, an effect known as “weak gravitational lensing” that indicates how much space-warping dark matter lies between the galaxies and Earth. These two probes, galaxy clustering and weak lensing, are two of the four approaches that DES will eventually use to inventory the cosmos. Already, the survey’s measurements are more precise than those of any previous galaxy survey, and for the first time they rival Planck’s.
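    The article doesn't detail the estimator behind "statistically analyzed the separations between galaxies," but the flavor of such a galaxy-clustering measurement can be sketched with the widely used Landy-Szalay two-point statistic. The catalog sizes, box, and separation bins below are illustrative assumptions, not DES data or the DES pipeline.

```python
# A minimal sketch (not DES's actual pipeline) of the two-point statistic behind
# "galaxy clustering": the Landy-Szalay estimator compares pair separations in
# the real catalog with those in a matching random, unclustered catalog.
import numpy as np
from scipy.spatial import cKDTree

def pair_counts(tree_a, tree_b, r_bins):
    """Pair counts between two point sets, binned by separation."""
    cumulative = tree_a.count_neighbors(tree_b, r_bins)
    return np.diff(cumulative)

def landy_szalay(data, randoms, r_bins):
    """xi(r) = (DD - 2*DR + RR) / RR, with each count normalized per pair."""
    dt, rt = cKDTree(data), cKDTree(randoms)
    nd, nr = len(data), len(randoms)
    dd = pair_counts(dt, dt, r_bins) / (nd * (nd - 1))
    rr = pair_counts(rt, rt, r_bins) / (nr * (nr - 1))
    dr = pair_counts(dt, rt, r_bins) / (nd * nr)
    return (dd - 2 * dr + rr) / rr

# Toy example: 3-D positions in an arbitrary unit box (illustrative only).
rng = np.random.default_rng(0)
data = rng.random((2000, 3))       # stand-in for galaxy positions
randoms = rng.random((10000, 3))   # unclustered reference catalog
r_bins = np.linspace(0.01, 0.2, 11)
print(landy_szalay(data, randoms, r_bins))  # ~0 here: the toy "galaxies" are themselves random
```

    An excess of close pairs (positive values of xi at small separations) is the "clumpiness" that the tug-of-war between dark matter and dark energy shapes.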


    “This is entering a new era of cosmology from galaxy surveys,” Frieman said. With DES’s first-year data, “galaxy surveys have now caught up to the cosmic microwave background in terms of probing cosmology. That’s really exciting because we’ve got four more years where we’re going to go deeper and cover a larger area of the sky, so we know our error bars are going to shrink.”

    For cosmologists, the key question was whether DES’s new cosmic pie chart based on galaxy surveys would differ from estimates of dark energy and dark matter inferred from Planck’s map of the cosmic microwave background. Comparing the two would reveal whether cosmologists correctly understand how the universe evolved from its early state to its present one. “Planck measures how much dark energy there should be” at present by extrapolating from its state at 380,000 years old, Dodelson said. “We measure how much there is.”

    The DES scientists spent six months processing their data without looking at the results along the way — a safeguard against bias — then “unblinded” the results during a July 7 video conference. After team leaders went through a final checklist, a member of the team ran a computer script to generate the long-awaited plot: DES’s measurement of the fraction of the universe that’s matter (dark and visible combined), displayed together with the older estimate from Planck. “We were all watching his computer screen at the same time; we all saw the answer at the same time. That’s about as dramatic as it gets,” said Gary Bernstein, an astrophysicist at the University of Pennsylvania and co-chair of the DES science committee.

    Planck pegged matter at 33 percent of the cosmos today, plus or minus two or three percentage points. When DES’s plots appeared, applause broke out as the bull’s-eye of the new matter measurement centered on 26 percent, with error bars that were similar to, but barely overlapped with, Planck’s range.

    “We saw they didn’t quite overlap,” Bernstein said. “But everybody was just excited to see that we got an answer, first, that wasn’t insane, and which was an accurate answer compared to before.”

    Statistically speaking, there’s only a slight tension between the two results: Considering their uncertainties, the 26 and 33 percent appraisals are between 1 and 1.5 standard deviations or “sigma” apart, whereas in modern physics you need a five-sigma discrepancy to claim a discovery. The mismatch stands out to the eye, but for now, Frieman and his team consider their galaxy results to be consistent with expectations based on the cosmic microwave background. Whether the hint of a discrepancy strengthens or vanishes as more data accumulate will be worth watching as the DES team embarks on its next analysis, expected to cover its first three years of data.
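    A rough way to see where that sigma figure comes from is to divide the gap between the two central values by the combined uncertainty. The error bars plugged in below are illustrative round numbers consistent with the ranges quoted above, not the collaborations' published uncertainties; Omega_m denotes the matter fraction.

$$
N_\sigma \;=\; \frac{\left|\,\Omega_m^{\mathrm{Planck}} - \Omega_m^{\mathrm{DES}}\,\right|}{\sqrt{\sigma_{\mathrm{Planck}}^{2} + \sigma_{\mathrm{DES}}^{2}}}
\;\approx\; \frac{0.33 - 0.26}{\sqrt{0.03^{2} + 0.04^{2}}} \;\approx\; 1.4,
$$

    squarely in the 1-to-1.5-sigma range quoted above and far short of the five sigma needed to claim a discovery.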

    If the possible discrepancy between the cosmic-microwave and galaxy measurements turns out to be real, it could create enough of a tension to lead to the downfall of the “Lambda-CDM model” of cosmology, the standard theory of the universe’s evolution. Lambda-CDM is in many ways a simple model that starts with Albert Einstein’s general theory of relativity, then bolts on dark energy and dark matter. A replacement for Lambda-CDM might help researchers uncover the quantum theory of gravity that presumably underlies everything else.

    What Is Dark Energy?

    According to Lambda-CDM, dark energy is the “cosmological constant,” represented by the Greek symbol lambda Λ in Einstein’s theory; it’s the energy that infuses space itself, when you get rid of everything else. This energy has negative pressure, which pushes space away and causes it to expand. New dark energy arises in the newly formed spatial fabric, so that the density of dark energy always remains constant, even as the total amount of it relative to dark matter increases over time, causing the expansion of the universe to speed up.
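    Why negative pressure speeds the expansion up is a one-line consequence of Einstein's equations; the acceleration equation below is the standard textbook statement, included as background rather than something spelled out in the article.

$$
\frac{\ddot a}{a} \;=\; -\,\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right),
$$

    where a is the cosmic scale factor, ρ the (mass-equivalent) density and p the pressure. For the cosmological constant, p = −ρc², the right-hand side turns positive and the expansion accelerates.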

    The universe’s expansion is indeed accelerating, as two teams of astronomers discovered in 1998 by observing light from distant supernovas. The discovery, which earned the leaders of the two teams the 2011 Nobel Prize in physics, suggested that the cosmological constant has a positive but “mystifyingly tiny” value, Bernstein said. “There’s no good theory that explains why it would be so tiny.” (This is the “cosmological constant problem” that has inspired anthropic reasoning and the dreaded multiverse hypothesis.)

    On the other hand, dark energy could be something else entirely. Frieman, whom colleagues jokingly refer to as a “fallen theorist,” studied alternative models of dark energy before co-founding DES in 2003 in hopes of testing his and other researchers’ ideas. The leading alternative theory envisions dark energy as a field that pervades space, similar to the “inflaton field” that most cosmologists think drove the explosive inflation of the universe during the Big Bang. The slowly diluting energy of the inflaton field would have exerted a negative pressure that expanded space, and Frieman and others have argued that dark energy might be a similar field that is dynamically evolving today.

    DES’s new analysis incrementally improves the measurement of a parameter that distinguishes between these two theories: the cosmological constant on the one hand, and a slowly changing energy field on the other. If dark energy is the cosmological constant, then the ratio of its (negative) pressure to its energy density is fixed at −1. Cosmologists call this ratio w. If dark energy is an evolving field, then its density would change over time relative to its pressure, and w would be different from −1.
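    The link between w and how the dark-energy density evolves is the standard fluid (continuity) equation of cosmology, again background rather than anything stated in the article. For a component with constant w,

$$
\dot\rho + 3\,\frac{\dot a}{a}\,(1+w)\,\rho = 0
\quad\Longrightarrow\quad
\rho_{\mathrm{DE}}(a) \;\propto\; a^{-3(1+w)},
$$

    so w = −1 gives a density that never dilutes as space expands (the cosmological constant), while any w different from −1 makes the dark-energy density change over time, as an evolving field would.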

    Remarkably, DES’s first-year data, when combined with previous measurements, pegs w’s value at −1, plus or minus roughly 0.04. However, the present level of accuracy still isn’t enough to tell if we’re dealing with a cosmological constant rather than a dynamic field, which could have w within a hair of −1. “That means we need to keep going,” Frieman said.

    The DES scientists will tighten the error bars around w in their next analysis, slated for release next year; they’ll also measure the change in w over time by probing its value at different cosmic distances. (Light takes time to reach us, so distant galaxies reveal the universe’s past.) If dark energy is the cosmological constant, the change in w will be zero. A nonzero measurement would suggest otherwise.
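    The article doesn't say how DES parametrizes a time-varying w; a common convention in surveys of this kind (an assumption here, not a statement about the DES pipeline) is to let w vary linearly with the cosmic scale factor a:

$$
w(a) \;=\; w_0 + w_a\,(1-a),
$$

    where a = 1 today. A cosmological constant corresponds to w_0 = −1 and w_a = 0; measuring a nonzero w_a would be the kind of "change in w over time" described above.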

    Larger galaxy surveys might be needed to definitively measure w and the other cosmological parameters. In the early 2020s, the ambitious Large Synoptic Survey Telescope (LSST) will start collecting light from 20 billion galaxies and other cosmological objects, creating a high-resolution map of the universe’s clumpiness that will yield a big jump in accuracy.

    LSST Camera, built at SLAC.

    LSST telescope, currently under construction on Cerro Pachón, a 2,682-meter-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    The data might confirm that we occupy a Lambda-CDM universe, infused with an inexplicably tiny cosmological constant and full of dark matter whose nature remains elusive. But Frieman doesn’t discount the possibility of discovering that dark energy is an evolving quantum field, which would invite a deeper understanding by going beyond Einstein’s theory and tying cosmology to quantum physics.

    “With these surveys — DES and LSST that comes after it — the prospects are quite bright,” Dodelson said. “It is more complicated to analyze these things because the cosmic microwave background is simpler, and that is good for young people in the field because there’s a lot of work to do.”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 11:07 am on June 11, 2017 Permalink | Reply
    Tags: Bell test, Cosmic Bell test, Experiment Reaffirms Quantum Weirdness, John Bell, Quanta Magazine, Superdeterminism

    From Quanta: “Experiment Reaffirms Quantum Weirdness” 

    Quanta Magazine
    Quanta Magazine

    February 7, 2017 [I wonder where this was hiding. It just appeared today in social media.]
    Natalie Wolchover

    Physicists are closing the door on an intriguing loophole around the quantum phenomenon Einstein called “spooky action at a distance.”

    Olena Shmahalo/Quanta Magazine

    There might be no getting around what Albert Einstein called “spooky action at a distance.” With an experiment described today in Physical Review Letters — a feat that involved harnessing starlight to control measurements of particles shot between buildings in Vienna — some of the world’s leading cosmologists and quantum physicists are closing the door on an intriguing alternative to “quantum entanglement.”

    “Technically, this experiment is truly impressive,” said Nicolas Gisin, a quantum physicist at the University of Geneva who has studied this loophole around entanglement.


    According to standard quantum theory, particles have no definite states, only relative probabilities of being one thing or another — at least, until they are measured, when they seem to suddenly roll the dice and jump into formation. Stranger still, when two particles interact, they can become “entangled,” shedding their individual probabilities and becoming components of a more complicated probability function that describes both particles together. This function might specify that two entangled photons are polarized in perpendicular directions, with some probability that photon A is vertically polarized and photon B is horizontally polarized, and some chance of the opposite. The two photons can travel light-years apart, but they remain linked: Measure photon A to be vertically polarized, and photon B instantaneously becomes horizontally polarized, even though B’s state was unspecified a moment earlier and no signal has had time to travel between them. This is the “spooky action” that Einstein was famously skeptical about in his arguments against the completeness of quantum mechanics in the 1930s and ’40s.
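    Written out, the two-photon state described in this paragraph (a standard example; the labels A and B follow the text) is

$$
|\psi\rangle \;=\; \frac{1}{\sqrt{2}}\Big( |V\rangle_A\,|H\rangle_B \;+\; |H\rangle_A\,|V\rangle_B \Big),
$$

    which assigns equal probability to "A vertical, B horizontal" and the reverse; finding photon A vertically polarized leaves photon B horizontally polarized, however far apart the two have traveled.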

    In 1964, the Northern Irish physicist John Bell found a way to put this paradoxical notion to the test. He showed that if particles have definite states even when no one is looking (a concept known as “realism”) and if indeed no signal travels faster than light (“locality”), then there is an upper limit to the amount of correlation that can be observed between the measured states of two particles. But experiments have shown time and again that entangled particles are more correlated than Bell’s upper limit, favoring the radical quantum worldview over local realism.
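    Bell's upper limit is most often stated in its CHSH form: any local-realistic model obeys |S| ≤ 2, while quantum mechanics predicts |S| = 2√2 ≈ 2.83 for entangled photons at the optimal polarizer angles. The short sketch below computes that quantum prediction; the correlation formula and angle choices are the standard textbook ones, not details taken from the article.

```python
# Quantum prediction for the CHSH combination S, for photon pairs entangled with
# perpendicular polarizations (as described above). Local realism requires |S| <= 2.
import numpy as np

def corr(a, b):
    """Expected product of the +/-1 outcomes at polarizer angles a and b (radians)."""
    return -np.cos(2 * (a - b))

a1, a2 = 0.0, np.pi / 4            # Alice's two settings: 0 and 45 degrees
b1, b2 = np.pi / 8, 3 * np.pi / 8  # Bob's two settings: 22.5 and 67.5 degrees

S = corr(a1, b1) - corr(a1, b2) + corr(a2, b1) + corr(a2, b2)
print(abs(S))  # ~2.828, i.e. 2*sqrt(2), exceeding the local-realist bound of 2
```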

    Only there’s a hitch: In addition to locality and realism, Bell made another, subtle assumption to derive his formula — one that went largely ignored for decades. “The three assumptions that go into Bell’s theorem that are relevant are locality, realism and freedom,” said Andrew Friedman of the Massachusetts Institute of Technology, a co-author of the new paper. “Recently it’s been discovered that you can keep locality and realism by giving up just a little bit of freedom.” This is known as the “freedom-of-choice” loophole.

    In a Bell test, entangled photons A and B are separated and sent to far-apart optical modulators — devices that either block photons or let them through to detectors, depending on whether the modulators are aligned with or against the photons’ polarization directions. Bell’s inequality puts an upper limit on how often, in a local-realistic universe, photons A and B will both pass through their modulators and be detected. (Researchers find that entangled photons are correlated more often than this, violating the limit.) Crucially, Bell’s formula assumes that the two modulators’ settings are independent of the states of the particles being tested. In experiments, researchers typically use random-number generators to set the devices’ angles of orientation. However, if the modulators are not actually independent — if nature somehow restricts the possible settings that can be chosen, correlating these settings with the states of the particles in the moments before an experiment occurs — this reduced freedom could explain the outcomes that are normally attributed to quantum entanglement.

    The universe might be like a restaurant with 10 menu items, Friedman said. “You think you can order any of the 10, but then they tell you, ‘We’re out of chicken,’ and it turns out only five of the things are really on the menu. You still have the freedom to choose from the remaining five, but you were overcounting your degrees of freedom.” Similarly, he said, “there might be unknowns, constraints, boundary conditions, conservation laws that could end up limiting your choices in a very subtle way” when setting up an experiment, leading to seeming violations of local realism.
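    A toy simulation makes the loophole concrete: if a hidden variable is allowed to fix both the "random" detector settings and the measurement outcomes, purely local bookkeeping can reproduce the quantum correlations and appear to break Bell's limit. Everything below is an illustrative construction under that assumption, not a model used by the experimenters.

```python
# Toy "no freedom of choice" run: the hidden variable pre-selects each trial's setting
# pair and pre-writes outcomes whose product matches the quantum correlation, so the
# recorded statistics violate the CHSH bound of 2 without any nonlocal influence.
import numpy as np

rng = np.random.default_rng(1)
settings_a = [0.0, np.pi / 4]            # Alice's nominal choices (radians)
settings_b = [np.pi / 8, 3 * np.pi / 8]  # Bob's nominal choices (radians)

def run_trials(n=100_000):
    sums = np.zeros((2, 2))
    counts = np.zeros((2, 2))
    for _ in range(n):
        i, j = rng.integers(2), rng.integers(2)   # "choices" fixed by the hidden variable
        target = -np.cos(2 * (settings_a[i] - settings_b[j]))
        product = 1 if rng.random() < (1 + target) / 2 else -1
        outcome_a = rng.choice((-1, 1))
        outcome_b = product * outcome_a           # pre-written to hit the target correlation
        sums[i, j] += outcome_a * outcome_b
        counts[i, j] += 1
    return sums / counts

E = run_trials()
S = E[0, 0] - E[0, 1] + E[1, 0] + E[1, 1]
print(S)  # ~ -2.8: looks like entanglement, but freedom of choice was secretly absent
```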

    This possible loophole gained traction in 2010, when Michael Hall, now of Griffith University in Australia, developed a quantitative way of reducing freedom of choice [Phys.Rev.Lett.]. In Bell tests, measuring devices have two possible settings (corresponding to one bit of information: either 1 or 0), and so it takes two bits of information to specify their settings when they are truly independent. But Hall showed that if the settings are not quite independent — if only one bit specifies them once in every 22 runs — this halves the number of possible measurement settings available in those 22 runs. This reduced freedom of choice correlates measurement outcomes enough to exceed Bell’s limit, creating the illusion of quantum entanglement.

    The idea that nature might restrict freedom while maintaining local realism has become more attractive in light of emerging connections between information and the geometry of space-time. Research on black holes, for instance, suggests that the stronger the gravity in a volume of space-time, the fewer bits can be stored in that region. Could gravity be reducing the number of possible measurement settings in Bell tests, secretly striking items from the universe’s menu?

    Members of the cosmic Bell test team calibrating the telescope used to choose the settings of one of their two detectors located in far-apart buildings in Vienna. Jason Gallicchio

    Friedman, Alan Guth and colleagues at MIT were entertaining such speculations a few years ago when Anton Zeilinger, a famous Bell test experimenter at the University of Vienna, came for a visit.

    Alan Guth of MIT, who first proposed cosmic inflation.

    Timeline of the universe under Lambda-CDM: Big Bang inflation followed by accelerated expansion. Credit: Alex Mittelmann, Coldcreation.

    Zeilinger also had his sights on the freedom-of-choice loophole. Together, they and their collaborators developed an idea for how to distinguish between a universe that lacks local realism and one that curbs freedom.

    In the first of a planned series of “cosmic Bell test” experiments, the team sent pairs of photons from the roof of Zeilinger’s lab in Vienna through the open windows of two other buildings and into optical modulators, tallying coincident detections as usual. But this time, they attempted to lower the chance that the modulator settings might somehow become correlated with the states of the photons in the moments before each measurement. They pointed a telescope out of each window, trained each telescope on a bright and conveniently located (but otherwise random) star, and, before each measurement, used the color of an incoming photon from each star to set the angle of the associated modulator. The colors of these photons were decided hundreds of years ago, when they left their stars, increasing the chance that they (and therefore the measurement settings) were independent of the states of the photons being measured.

    And yet, the scientists found that the measurement outcomes still violated Bell’s upper limit, boosting their confidence that the polarized photons in the experiment exhibit spooky action at a distance after all.

    Nature could still exploit the freedom-of-choice loophole, but the universe would have had to delete items from the menu of possible measurement settings at least 600 years before the measurements occurred (when the closer of the two stars sent its light toward Earth). “Now one needs the correlations to have been established even before Shakespeare wrote, ‘Until I know this sure uncertainty, I’ll entertain the offered fallacy,’” Hall said.

    Next, the team plans to use light from increasingly distant quasars to control their measurement settings, probing further back in time and giving the universe an even smaller window to cook up correlations between future device settings and restrict freedoms. It’s also possible (though extremely unlikely) that the team will find a transition point where measurement settings become uncorrelated and violations of Bell’s limit disappear — which would prove that Einstein was right to doubt spooky action.

    “For us it seems like kind of a win-win,” Friedman said. “Either we close the loophole more and more, and we’re more confident in quantum theory, or we see something that could point toward new physics.”

    There’s a final possibility that many physicists abhor. It could be that the universe restricted freedom of choice from the very beginning — that every measurement was predetermined by correlations established at the Big Bang. “Superdeterminism,” as this is called, is “unknowable,” said Jan-Åke Larsson, a physicist at Linköping University in Sweden; the cosmic Bell test crew will never be able to rule out correlations that existed before there were stars, quasars or any other light in the sky. That means the freedom-of-choice loophole can never be completely shut.

    But given the choice between quantum entanglement and superdeterminism, most scientists favor entanglement — and with it, freedom. “If the correlations are indeed set [at the Big Bang], everything is preordained,” Larsson said. “I find it a boring worldview. I cannot believe this would be true.”

    See the full article here.


     