Tagged: Quanta Magazine

  • richardmitnick 9:05 am on February 12, 2018 Permalink | Reply
    Tags: Gil Kalai, Quanta Magazine, The Argument Against Quantum Computers

    From Quanta: “The Argument Against Quantum Computers” 

    Quanta Magazine

    February 7, 2018
    Katia Moskvitch

    The mathematician Gil Kalai believes that quantum computers can’t possibly work, even in principle.

    David Vaaknin for Quanta Magazine.

    Sixteen years ago, on a cold February day at Yale University, a poster caught Gil Kalai’s eye. It advertised a series of lectures by Michel Devoret, a well-known expert on experimental efforts in quantum computing. The talks promised to explore the question “Quantum Computer: Miracle or Mirage?” Kalai expected a vigorous discussion of the pros and cons of quantum computing. Instead, he recalled, “the skeptical direction was a little bit neglected.” He set out to explore that skeptical view himself.

    Today, Kalai, a mathematician at Hebrew University in Jerusalem, is one of the most prominent of a loose group of mathematicians, physicists and computer scientists arguing that quantum computing, for all its theoretical promise, is something of a mirage. Some argue that there exist good theoretical reasons why the innards of a quantum computer — the “qubits” — will never be able to consistently perform the complex choreography asked of them. Others say that the machines will never work in practice, or that if they are built, their advantages won’t be great enough to make up for the expense.

    Kalai has approached the issue from the perspective of a mathematician and computer scientist. He has analyzed the issue by looking at computational complexity and, critically, the issue of noise. All physical systems are noisy, he argues, and qubits kept in highly sensitive “superpositions” will inevitably be corrupted by any interaction with the outside world. Getting the noise down isn’t just a matter of engineering, he says. Doing so would violate certain fundamental theorems of computation.

    Kalai knows that his is a minority view. Companies like IBM, Intel and Microsoft have invested heavily in quantum computing; venture capitalists are funding quantum computing startups (such as Quantum Circuits, a firm set up by Devoret and two of his Yale colleagues). Other nations — most notably China — are pouring billions of dollars into the sector.

    Quanta Magazine recently spoke with Kalai about quantum computing, noise and the possibility that a decade of work will be proven wrong within a matter of weeks. A condensed and edited version of that conversation follows.

    When did you first have doubts about quantum computers?

    At first, I was quite enthusiastic, like everybody else. But at a lecture in 2002 by Michel Devoret called “Quantum Computer: Miracle or Mirage,” I had a feeling that the skeptical direction was a little bit neglected. Unlike the title, the talk was very much the usual rhetoric about how wonderful quantum computing is. The side of the mirage was not well-presented.

    And so you began to research the mirage.

    Only in 2005 did I decide to work on it myself. I saw a scientific opportunity and some possible connection with my earlier work from 1999 with Itai Benjamini and Oded Schramm on concepts called noise sensitivity and noise stability.

    What do you mean by “noise”?

    By noise I mean the errors in a process, and sensitivity to noise is a measure of how likely the noise — the errors — will affect the outcome of this process. Quantum computing is like any similar process in nature — noisy, with random fluctuations and errors. When a quantum computer executes an action, in every computer cycle there is some probability that a qubit will get corrupted.

    And so this corruption is the key problem?

    We need what’s known as quantum error correction. But this will require 100 or even 500 “physical” qubits to represent a single “logical” qubit of very high quality. And then to build and use such quantum error-correcting codes, the amount of noise has to go below a certain level, or threshold.

    To determine the required threshold mathematically, we must effectively model the noise. I thought it would be an interesting challenge.
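
    The threshold idea can be illustrated with the simplest possible toy, a classical repetition code decoded by majority vote (a stand-in chosen for clarity, not one of the quantum codes under discussion): below a critical error rate, piling on redundancy drives the logical error down; above it, redundancy only makes things worse.

```python
# Toy error-correction threshold: a classical repetition code, not a quantum code.
# One logical bit is stored as n noisy copies and decoded by majority vote.
from math import comb

def logical_error(p, n):
    """Probability that a majority of the n copies flip, each independently with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for p in (0.01, 0.1, 0.4, 0.6):
    print(p, ["%.1e" % logical_error(p, n) for n in (1, 5, 21, 101)])
# For p < 0.5 the logical error plummets as n grows; for p > 0.5 it climbs toward 1.
# Quantum fault tolerance has a far lower threshold, and paying for it is the
# many-physical-qubits-per-logical-qubit overhead described above.
```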

    What exactly did you do?

    I tried to understand what happens if the errors due to noise are correlated — or connected. There is a Hebrew proverb that says that trouble comes in clusters. In English you would say: When it rains, it pours. In other words, interacting systems will have a tendency for errors to be correlated. There will be a probability that errors will affect many qubits all at once.

    So over the past decade or so, I’ve been studying what kind of correlations emerge from complicated quantum computations and what kind of correlations will cause a quantum computer to fail.

    In my earlier work on noise we used a mathematical approach called Fourier analysis, which says that it’s possible to break down complex waveforms into simpler components. We found that if the frequencies of these broken-up waves are low, the process is stable, and if they are high, the process is prone to error.

    That previous work brought me to my more recent paper that I wrote in 2014 with a Hebrew University computer scientist, Guy Kindler. Our calculations suggest that the noise in a quantum computer will kill all the high-frequency waves in the Fourier decomposition. If you think about the computational process as a Beethoven symphony, the noise will allow us to hear only the basses, but not the cellos, violas and violins.
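
    The Fourier picture can be sketched classically. In the toy below (illustrative functions and noise rate, not the model from the paper with Kindler), independent noise at rate eps damps a Boolean function’s Fourier weight at degree d by a factor of (1 - 2*eps)^d, so functions dominated by high “frequencies,” like parity, lose almost everything, while low-frequency functions, like majority, stay stable.

```python
# Classical sketch of noise sensitivity via Boolean Fourier analysis (the framework
# behind the Benjamini-Kalai-Schramm work mentioned above; the functions and noise
# rate here are illustrative, not taken from the paper).
from itertools import product

def chi(x, S):
    """Fourier character: product of the inputs indexed by S (an indicator vector)."""
    out = 1
    for xi, si in zip(x, S):
        if si:
            out *= xi
    return out

def fourier_weight_by_degree(f, n):
    """Brute-force Fourier weight of f: {-1,1}^n -> {-1,1}, grouped by degree |S|."""
    points = list(product([-1, 1], repeat=n))
    weight = [0.0] * (n + 1)
    for S in product([0, 1], repeat=n):
        coeff = sum(f(x) * chi(x, S) for x in points) / len(points)
        weight[sum(S)] += coeff ** 2
    return weight

n, eps = 5, 0.1
rho = 1 - 2 * eps                                     # per-degree damping factor under noise
majority = lambda x: 1 if sum(x) > 0 else -1          # weight concentrated at low degree
parity = lambda x: chi(x, (1,) * n)                   # all weight at the top degree

for name, f in (("majority", majority), ("parity", parity)):
    w = fourier_weight_by_degree(f, n)
    stability = sum(rho**d * wd for d, wd in enumerate(w))
    print(name, [round(x, 3) for x in w], "noise stability:", round(stability, 3))
# Majority, whose weight sits at low degree, stays far more stable under noise
# than parity, whose single high-frequency component is damped by rho**5.
```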

    These results also give good reasons to think that noise levels cannot be sufficiently reduced; they will still be much higher than what is needed to demonstrate quantum supremacy and quantum error correction.

    Why can’t we push the noise level below this threshold?

    Many researchers believe that we can go beyond the threshold, and that constructing a quantum computer is merely an engineering challenge of pushing the noise below it. However, our first result shows that the noise level cannot be reduced, because doing so will contradict an insight from the theory of computing about the power of primitive computational devices. Noisy quantum computers in the small and intermediate scale deliver primitive computational power. They are too primitive to reach “quantum supremacy” — and if quantum supremacy is not possible, then creating quantum error-correcting codes, which is harder, is also impossible.

    What do your critics say to that?

    Critics point out that my work with Kindler deals with a restricted form of quantum computing and argue that our model for noise is not physical, but a mathematical simplification of an actual physical situation. I’m quite certain that what we have demonstrated for our simplified model is a real and general phenomenon.

    My critics also point to two things that they find strange in my analysis: The first is my attempt to draw conclusions about engineering of physical devices from considerations about computation. The second is drawing conclusions about small-scale quantum systems from insights of the theory of computation that are usually applied to large systems. I agree that these are unusual and perhaps even strange lines of analysis.

    And finally, they argue that these engineering difficulties are not fundamental barriers, and that with sufficient hard work and resources, the noise can be driven down to as close to zero as needed. But I think that the effort required to obtain a low enough error level for any implementation of universal quantum circuits increases exponentially with the number of qubits, and thus, quantum computers are not possible.

    How can you be certain?

    I am pretty certain, while a little nervous to be proven wrong. Our results state that noise will corrupt the computation, and that the noisy outcomes will be very easy to simulate on a classical computer. This prediction can already be tested; you don’t even need 50 qubits for that; I believe that 10 to 20 qubits will suffice. For quantum computers of the kind Google and IBM are building, when you run, as they plan to do, certain computational processes, they expect robust outcomes that are increasingly hard to simulate on a classical computer. Well, I expect very different outcomes. So I don’t need to be certain; I can simply wait and see.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 10:43 am on February 7, 2018 Permalink | Reply
    Tags: Quanta Magazine

    From The Atlantic Magazine: “The Big Bang May Have Been One of Many” 

    The Atlantic Magazine

    Feb 6, 2018
    Natalie Wolchover

    davidope / Quanta Magazine

    Our universe could be expanding and contracting eternally.

    Humans have always entertained two basic theories about the origin of the universe. “In one of them, the universe emerges in a single instant of creation (as in the Jewish-Christian and the Brazilian Carajás cosmogonies),” the cosmologists Mario Novello and Santiago Perez Bergliaffa noted in 2008. In the other, “the universe is eternal, consisting of an infinite series of cycles (as in the cosmogonies of the Babylonians and Egyptians).” The division in modern cosmology “somehow parallels that of the cosmogonic myths,” Novello and Perez Bergliaffa wrote.

    In recent decades, it hasn’t seemed like much of a contest. The Big Bang theory, standard stuff of textbooks and television shows, enjoys strong support among today’s cosmologists. The rival eternal-universe picture had the edge a century ago, but it lost ground as astronomers observed that the cosmos is expanding and that it was small and simple about 14 billion years ago. In the most popular modern version of the theory, the Big Bang began with an episode called “cosmic inflation”—a burst of exponential expansion during which an infinitesimal speck of space-time ballooned into a smooth, flat, macroscopic cosmos, which expanded more gently thereafter.

    With a single initial ingredient (the “inflaton field”), inflationary models reproduce many broad-brush features of the cosmos today. But as an origin story, inflation is lacking; it raises questions about what preceded it and where that initial, inflaton-laden speck came from. Undeterred, many theorists think the inflaton field must fit naturally into a more complete, though still unknown, theory of time’s origin.

    But in the past few years, a growing number of cosmologists have cautiously revisited the alternative. They say the Big Bang might instead have been a Big Bounce. Some cosmologists favor a picture in which the universe expands and contracts cyclically like a lung, bouncing each time it shrinks to a certain size, while others propose that the cosmos only bounced once—that it had been contracting, before the bounce, since the infinite past, and that it will expand forever after. In either model, time continues into the past and future without end.

    With modern science, there’s hope of settling this ancient debate. In the years ahead, telescopes could find definitive evidence for cosmic inflation. During the primordial growth spurt—if it happened—quantum ripples in the fabric of space-time would have become stretched and later imprinted as subtle swirls in the polarization of ancient light called the cosmic microwave background [CMB].

    CMB per ESA/Planck

    Current and future telescope experiments are hunting for these swirls. If they aren’t seen in the next couple of decades, this won’t entirely disprove inflation (the telltale swirls could simply be too faint to make out), but it will strengthen the case for bounce cosmology, which doesn’t predict the swirl pattern.

    Already, several groups are making progress at once. Most significantly, in the last year, physicists have come up with two new ways that bounces could conceivably occur. One of the models, described in a paper that will appear in the Journal of Cosmology and Astroparticle Physics, comes from Anna Ijjas of Columbia University, extending earlier work with her former adviser, the Princeton University professor and high-profile bounce cosmologist Paul Steinhardt. More surprisingly, the other new bounce solution, accepted for publication in Physical Review D, was proposed by Peter Graham, David Kaplan, and Surjeet Rajendran, a well-known trio of collaborators who mainly focus on particle-physics questions and have no previous connection to the bounce-cosmology community. It’s a noteworthy development in a field that’s highly polarized on the bang-vs.-bounce question.

    The question gained renewed significance in 2001, when Steinhardt and three other cosmologists argued that a period of slow contraction in the history of the universe could explain its exceptional smoothness and flatness, as witnessed today, even after a bounce—with no need for a period of inflation.

    The universe’s impeccable plainness, the fact that no region of sky contains significantly more matter than any other and that space is breathtakingly flat as far as telescopes can see, is a mystery. To match its present uniformity, experts infer that the cosmos, when it was one centimeter across, must have had the same density everywhere to within one part in 100,000. But as it grew from an even smaller size, matter and energy ought to have immediately clumped together and contorted space-time. Why don’t our telescopes see a universe wrecked by gravity?

    “Inflation was motivated by the idea that that was crazy to have to assume the universe came out so smooth and not curved,” says the cosmologist Neil Turok, the director of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, and a coauthor of the 2001 paper [Physical Review D] on cosmic contraction with Steinhardt, Justin Khoury, and Burt Ovrut.

    In the inflation scenario, the centimeter-size region results from the exponential expansion of a much smaller region—an initial speck measuring no more than a trillionth of a trillionth of a centimeter across. As long as that speck was infused with an inflaton field that was smooth and flat, meaning its energy concentration didn’t fluctuate across time or space, the speck would have inflated into a huge, smooth universe like ours. Raman Sundrum, a theoretical physicist at the University of Maryland, says the thing he appreciates about inflation is that “it has a kind of fault tolerance built in.” If, during this explosive growth phase, there was a buildup of energy that bent space-time in a certain place, the concentration would have quickly inflated away. “You make small changes against what you see in the data and you see the return to the behavior that the data suggests,” Sundrum says.
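
    A back-of-the-envelope check (illustrative numbers, not figures from the article): stretching a speck of roughly 10^-24 centimeters up to a centimeter takes about ln(10^24), or roughly 55, doublings-by-e of expansion, in line with the several dozen “e-folds” usually attributed to inflation.

```python
# Back-of-the-envelope check (illustrative numbers, not from the article): how many
# e-folds of exponential expansion turn a ~1e-24 cm speck into a ~1 cm region?
from math import log

initial_cm, final_cm = 1e-24, 1.0
print(round(log(final_cm / initial_cm), 1))   # ~55.3 e-folds
```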

    However, where exactly that infinitesimal speck came from, and why it came out so smooth and flat itself to begin with, no one knows. Theorists have found many possible ways to embed the inflaton field into string theory, a candidate for the underlying quantum theory of gravity. So far, there’s no evidence for or against these ideas.

    Cosmic inflation also has a controversial consequence. The theory, which was pioneered in the 1980s by Alan Guth, Andrei Linde, Aleksei Starobinsky, and (of all people) Steinhardt, almost automatically leads to the hypothesis that our universe is a random bubble in an infinite, frothing multiverse sea. Once inflation starts, calculations suggest that it keeps going forever, only stopping in local pockets that then blossom into bubble universes like ours. The possibility of an eternally inflating multiverse suggests that our particular bubble might never be fully understandable on its own terms, since everything that can possibly happen in a multiverse happens infinitely many times. The subject evokes gut-level disagreement among experts. Many have reconciled themselves to the idea that our universe could be just one of many; Steinhardt calls the multiverse “hogwash.”

    This sentiment partly motivated his and other researchers’ about-face on bounces. “The bouncing models don’t have a period of inflation,” Turok says. Instead, they add a period of contraction before a Big Bounce to explain our uniform universe. “Just as the gas in the room you’re sitting in is completely uniform because the air molecules are banging around and equilibrating,” he says, “if the universe was quite big and contracting slowly, that gives plenty of time for the universe to smooth itself out.”

    Although the first contracting-universe models were convoluted and flawed, many researchers became convinced of the basic idea that slow contraction can explain many features of our expanding universe. “Then the bottleneck became literally the bottleneck—the bounce itself,” Steinhardt says. As Ijjas puts it, “The bounce has been the showstopper for these scenarios. People would agree that it’s very interesting if you can do a contraction phase, but not if you can’t get to an expansion phase.”

    Bouncing isn’t easy. In the 1960s, the British physicists Roger Penrose and Stephen Hawking proved a set of so-called “singularity theorems” showing that, under very general conditions, contracting matter and energy will unavoidably crunch into an immeasurably dense point called a singularity. These theorems make it hard to imagine how a contracting universe in which space-time, matter, and energy are all rushing inward could possibly avoid collapsing all the way down to a singularity—a point where Albert Einstein’s classical theory of gravity and space-time breaks down and the unknown quantum-gravity theory rules. Why shouldn’t a contracting universe share the same fate as a massive star, which dies by shrinking to the singular center of a black hole?

    Both of the newly proposed bounce models exploit loopholes in the singularity theorems—ones that, for many years, seemed like dead ends. Bounce cosmologists have long recognized that bounces might be possible if the universe contained a substance with negative energy (or other sources of negative pressure), which would counteract gravity and essentially push everything apart. They’ve been trying to exploit this loophole since the early 2000s, but they always found that adding negative-energy ingredients made their models of the universe unstable, because positive- and negative-energy quantum fluctuations could spontaneously arise together, unchecked, out of the zero-energy vacuum of space. In 2016, the Russian cosmologist Valery Rubakov and colleagues even proved a “no-go” [JCAP] theorem that seemed to rule out a huge class of bounce mechanisms on the grounds that they caused these so-called “ghost” instabilities.

    Then Ijjas found a bounce mechanism that evades the no-go theorem. The key ingredient in her model is a simple entity called a “scalar field,” which, according to the idea, would have kicked into gear as the universe contracted and energy became highly concentrated. The scalar field would have braided itself into the gravitational field in a way that exerted negative pressure on the universe, reversing the contraction and driving space-time apart—without destabilizing everything. Ijjas’ paper “is essentially the best attempt at getting rid of all possible instabilities and making a really stable model with this special type of matter,” says Jean-Luc Lehners, a theoretical cosmologist at the Max Planck Institute for Gravitational Physics in Germany who has also worked on bounce proposals.

    What’s especially interesting about the two new bounce models is that they are “non-singular,” meaning the contracting universe bounces and starts expanding again before ever shrinking to a point. These bounces can therefore be fully described by the classical laws of gravity, requiring no speculations about gravity’s quantum nature.

    Graham, Kaplan, and Rajendran, of Stanford University, Johns Hopkins University and UC Berkeley, respectively, reported their non-singular bounce idea on the scientific preprint site ArXiv.org in September 2017. They found their way to it after wondering whether a previous contraction phase in the history of the universe could be used to explain the value of the cosmological constant—a mystifyingly tiny number that defines the amount of dark energy infused in the space-time fabric, energy that drives the accelerating expansion of the universe.

    In working out the hardest part—the bounce—the trio exploited a second, largely forgotten loophole in the singularity theorems. They took inspiration from a characteristically strange model of the universe proposed by the logician Kurt Gödel in 1949, when he and Einstein were walking companions and colleagues at the Institute for Advanced Study in Princeton, New Jersey. Gödel used the laws of general relativity to construct the theory of a rotating universe, whose spinning keeps it from gravitationally collapsing in much the same way that Earth’s orbit prevents it from falling into the sun. Gödel especially liked the fact that his rotating universe permitted “closed time-like curves,” essentially loops in time, which raised all sorts of Gödelian riddles. To his dying day, he eagerly awaited evidence that the universe really is rotating in the manner of his model. Researchers now know it isn’t; otherwise, the cosmos would exhibit alignments and preferred directions. But Graham and company wondered about small, curled-up spatial dimensions that might exist in space, such as the six extra dimensions postulated by string theory. Could a contracting universe spin in those directions?

    Imagine there’s just one of these curled-up extra dimensions, a tiny circle found at every point in space. As Graham puts it, “At each point in space there’s an extra direction you can go in, a fourth spatial direction, but you can only go a tiny little distance and then you come back to where you started.” If there are at least three extra compact dimensions, then, as the universe contracts, matter and energy can start spinning inside them, and the dimensions themselves will spin with the matter and energy. The vorticity in the extra dimensions can suddenly initiate a bounce. “All that stuff that would have been crunching into a singularity, because it’s spinning in the extra dimensions, it misses—sort of like a gravitational slingshot,” Graham says. “All the stuff should have been coming to a single point, but instead it misses and flies back out again.”

    The paper has attracted attention beyond the usual circle of bounce cosmologists. Sean Carroll, a theoretical physicist at the California Institute of Technology, is skeptical but called the idea “very clever.” He says it’s important to develop alternatives to the conventional inflation story, if only to see how much better inflation appears by comparison—especially when next-generation telescopes come online in the early 2020s looking for the telltale swirl pattern in the sky caused by inflation. “Even though I think inflation has a good chance of being right, I wish there were more competitors,” Carroll says. Sundrum, the Maryland physicist, feels similarly. “There are some questions I consider so important that even if you have only a 5 percent chance of succeeding, you should throw everything you have at it and work on them,” he says. “And that’s how I feel about this paper.”

    As Graham, Kaplan, and Rajendran explore their bounce and its possible experimental signatures, the next step for Ijjas and Steinhardt, working with Frans Pretorius of Princeton, is to develop computer simulations. (Their collaboration is supported by the Simons Foundation, which also funds Quanta Magazine.) Both bounce mechanisms also need to be integrated into more complete, stable cosmological models that would describe the entire evolutionary history of the universe.

    Beyond these non-singular bounce solutions, other researchers are speculating about what kind of bounce might occur when a universe contracts all the way to a singularity—a bounce orchestrated by the unknown quantum laws of gravity, which replace the usual understanding of space and time at extremely high energies. In forthcoming work, Turok and collaborators plan to propose a model in which the universe expands symmetrically into the past and future away from a central, singular bounce. Turok contends that the existence of this two-lobed universe is equivalent to the spontaneous creation of electron-positron pairs, which constantly pop in and out of the vacuum. “Richard Feynman pointed out that you can look at the positron as an electron going backward in time,” he says. “They’re two particles, but they’re really the same; at a certain moment in time they merge and annihilate.” He added, “The idea is a very, very deep one, and most likely the Big Bang will turn out to be similar, where a universe and its anti-universe were drawn out of nothing, if you like, by the presence of matter.”

    It remains to be seen whether this universe/anti-universe bounce model can accommodate all observations of the cosmos, but Turok likes how simple it is. Most cosmological models are far too complicated in his view. The universe “looks extremely ordered and symmetrical and simple,” he says. “That’s very exciting for theorists, because it tells us there may be a simple—even if hard-to-discover—theory waiting to be discovered, which might explain the most paradoxical features of the universe.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 1:55 pm on February 3, 2018 Permalink | Reply
    Tags: Job One for Quantum Computers: Boost Artificial Intelligence, Quanta Magazine

    From Quanta: “Job One for Quantum Computers: Boost Artificial Intelligence” 

    Quanta Magazine

    January 29, 2018
    George Musser

    Josef Bsharah for Quanta Magazine.

    In the early ’90s, Elizabeth Behrman, a physics professor at Wichita State University, began working to combine quantum physics with artificial intelligence — in particular, the then-maverick technology of neural networks. Most people thought she was mixing oil and water. “I had a heck of a time getting published,” she recalled. “The neural-network journals would say, ‘What is this quantum mechanics?’ and the physics journals would say, ‘What is this neural-network garbage?’”

    Today the mashup of the two seems the most natural thing in the world. Neural networks and other machine-learning systems have become the most disruptive technology of the 21st century. They out-human humans, beating us not just at tasks most of us were never really good at, such as chess and data-mining, but also at the very types of things our brains evolved for, such as recognizing faces, translating languages and negotiating four-way stops. These systems have been made possible by vast computing power, so it was inevitable that tech companies would seek out computers that were not just bigger, but a new class of machine altogether.

    Quantum computers, after decades of research, have nearly enough oomph to perform calculations beyond any other computer on Earth. Their killer app is usually said to be factoring large numbers, which are the key to modern encryption. That’s still another decade off, at least. But even today’s rudimentary quantum processors are uncannily matched to the needs of machine learning. They manipulate vast arrays of data in a single step, pick out subtle patterns that classical computers are blind to, and don’t choke on incomplete or uncertain data. “There is a natural combination between the intrinsic statistical nature of quantum computing … and machine learning,” said Johannes Otterbach, a physicist at Rigetti Computing, a quantum-computer company in Berkeley, California.

    If anything, the pendulum has now swung to the other extreme. Google, Microsoft, IBM and other tech giants are pouring money into quantum machine learning, and a startup incubator at the University of Toronto is devoted to it. “‘Machine learning’ is becoming a buzzword,” said Jacob Biamonte, a quantum physicist at the Skolkovo Institute of Science and Technology in Moscow. “When you mix that with ‘quantum,’ it becomes a mega-buzzword.”

    Yet nothing with the word “quantum” in it is ever quite what it seems. Although you might think a quantum machine-learning system should be powerful, it suffers from a kind of locked-in syndrome. It operates on quantum states, not on human-readable data, and translating between the two can negate its apparent advantages. It’s like an iPhone X that, for all its impressive specs, ends up being just as slow as your old phone, because your network is as awful as ever. For a few special cases, physicists can overcome this input-output bottleneck, but whether those cases arise in practical machine-learning tasks is still unknown. “We don’t have clear answers yet,” said Scott Aaronson, a computer scientist at the University of Texas, Austin, who is always the voice of sobriety when it comes to quantum computing. “People have often been very cavalier about whether these algorithms give a speedup.”

    Quantum Neurons

    The main job of a neural network, be it classical or quantum, is to recognize patterns. Inspired by the human brain, it is a grid of basic computing units — the “neurons.” Each can be as simple as an on-off device. A neuron monitors the output of multiple other neurons, as if taking a vote, and switches on if enough of them are on. Typically, the neurons are arranged in layers. An initial layer accepts input (such as image pixels), intermediate layers create various combinations of the input (representing structures such as edges and geometric shapes) and a final layer produces output (a high-level description of the image content).

    Lucy Reading-Ikkanda/Quanta Magazine

    Crucially, the wiring is not fixed in advance, but adapts in a process of trial and error. The network might be fed images labeled “kitten” or “puppy.” For each image, it assigns a label, checks whether it was right, and tweaks the neuronal connections if not. Its guesses are random at first, but get better; after perhaps 10,000 examples, it knows its pets. A serious neural network can have a billion interconnections, all of which need to be tuned.
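
    A minimal classical sketch of the structure and trial-and-error tuning just described (toy data, made-up layer sizes): a two-layer network of “voting” neurons, with every connection nudged after each round of guesses.

```python
# Minimal classical neural network, illustrating the layered "voting" and the
# trial-and-error tuning of connections described above (toy data, not a real task).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # input layer: two "pixels" per example
y = (X[:, 0] * X[:, 1] > 0).astype(float)      # label: which pair of quadrants the point sits in

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)    # hidden layer of 8 neurons
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)    # single output neuron
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 0.5
for step in range(2000):
    h = np.tanh(X @ W1 + b1)                   # hidden neurons "vote" on the inputs
    p = sigmoid(h @ W2 + b2).ravel()           # output neuron votes on the hidden layer
    # check the guesses, then nudge every connection to reduce the error
    grad_out = (p - y)[:, None] / len(X)
    grad_h = (grad_out @ W2.T) * (1 - h**2)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(0)

print("training accuracy:", ((p > 0.5) == y).mean())   # climbs from ~0.5 toward 1.0 as the wiring adapts
```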

    On a classical computer, all these interconnections are represented by a ginormous matrix of numbers, and running the network means doing matrix algebra. Conventionally, these matrix operations are outsourced to a specialized chip such as a graphics processing unit. But nothing does matrices like a quantum computer. “Manipulation of large matrices and large vectors are exponentially faster on a quantum computer,” said Seth Lloyd, a physicist at the Massachusetts Institute of Technology and a quantum-computing pioneer.

    For this task, quantum computers are able to take advantage of the exponential nature of a quantum system. The vast bulk of a quantum system’s information storage capacity resides not in its individual data units — its qubits, the quantum counterpart of classical computer bits — but in the collective properties of those qubits. Two qubits have four joint states: both on, both off, on/off, and off/on. Each has a certain weighting, or “amplitude,” that can represent a neuron. If you add a third qubit, you can represent eight neurons; a fourth, 16. The capacity of the machine grows exponentially. In effect, the neurons are smeared out over the entire system. When you act on a state of four qubits, you are processing 16 numbers at a stroke, whereas a classical computer would have to go through those numbers one by one.
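
    The bookkeeping is easy to see in a small classical simulation, which is exactly the exponential cost a quantum device is supposed to sidestep: n qubits are described by 2**n amplitudes, and a single one-qubit gate update touches every one of them. The helper below is illustrative, not a real quantum-computing library.

```python
# Illustrative statevector bookkeeping: n qubits need 2**n amplitudes,
# and one single-qubit gate update changes every amplitude at once.
import numpy as np

n = 4
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                          # start in |0000>: 16 amplitudes

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)            # a single-qubit Hadamard gate
I = np.eye(2)

def apply_on_qubit(gate, qubit, n):
    """Build the full 2**n x 2**n operator that applies `gate` to one qubit."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == qubit else I)
    return op

for q in range(n):
    state = apply_on_qubit(H, q, n) @ state             # all 16 amplitudes change each time

print(len(state))                                       # 16 numbers for 4 qubits
print(np.allclose(np.abs(state)**2, 1 / 2**n))          # True: uniform superposition over 16 basis states
# For 300 qubits the same bookkeeping would need 2**300 amplitudes, more numbers
# than there are atoms in the observable universe.
```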

    Lloyd estimates that 60 qubits would be enough to encode an amount of data equivalent to that produced by humanity in a year, and 300 could carry the classical information content of the observable universe. (The biggest quantum computers at the moment, built by IBM, Intel and Google, have 50-ish qubits.) And that’s assuming each amplitude is just a single classical bit. In fact, amplitudes are continuous quantities (and, indeed, complex numbers) and, for a plausible experimental precision, one might store as many as 15 bits, Aaronson said.

    But a quantum computer’s ability to store information compactly doesn’t make it faster. You need to be able to use those qubits. In 2008, Lloyd, the physicist Aram Harrow of MIT and Avinatan Hassidim, a computer scientist at Bar-Ilan University in Israel, showed how to do the crucial algebraic operation of inverting a matrix. They broke it down into a sequence of logic operations that can be executed on a quantum computer. Their algorithm works for a huge variety of machine-learning techniques. And it doesn’t require nearly as many algorithmic steps as, say, factoring a large number does. A computer could zip through a classification task before noise — the big limiting factor with today’s technology — has a chance to foul it up. “You might have a quantum advantage before you have a fully universal, fault-tolerant quantum computer,” said Kristan Temme of IBM’s Thomas J. Watson Research Center.

    Let Nature Solve the Problem

    So far, though, machine learning based on quantum matrix algebra has been demonstrated only on machines with just four qubits. Most of the experimental successes of quantum machine learning to date have taken a different approach, in which the quantum system does not merely simulate the network; it is the network. Each qubit stands for one neuron. Though lacking the power of exponentiation, a device like this can avail itself of other features of quantum physics.

    The largest such device, with some 2,000 qubits, is the quantum processor manufactured by D-Wave Systems, based near Vancouver, British Columbia. It is not what most people think of as a computer. Instead of starting with some input data, executing a series of operations and displaying the output, it works by finding internal consistency. Each of its qubits is a superconducting electric loop that acts as a tiny electromagnet oriented up, down, or up and down — a superposition. Qubits are “wired” together by allowing them to interact magnetically.

    Processors made by D-Wave Systems are being used for machine learning applications. Mwjohnson0.

    To run the system, you first impose a horizontal magnetic field, which initializes the qubits to an equal superposition of up and down — the equivalent of a blank slate. There are a couple of ways to enter data. In some cases, you fix a layer of qubits to the desired input values; more often, you incorporate the input into the strength of the interactions. Then you let the qubits interact. Some seek to align in the same direction, some in the opposite direction, and under the influence of the horizontal field, they flip to their preferred orientation. In so doing, they might trigger other qubits to flip. Initially that happens a lot, since so many of them are misaligned. Over time, though, they settle down, and you can turn off the horizontal field to lock them in place. At that point, the qubits are in a pattern of up and down that ensures the output follows from the input.
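
    A purely classical caricature of that schedule, with thermal jiggling standing in for quantum effects and random couplings standing in for a programmed problem, looks like this:

```python
# Classical caricature of the annealing schedule described above: spins with
# programmed couplings settle into a low-energy pattern as the "jiggling"
# (thermal here, rather than quantum) is slowly dialed down. The couplings are
# random placeholders for a real problem.
import random
from math import exp

random.seed(1)
n = 20
J = {(i, j): random.choice([-1.0, 1.0]) for i in range(n) for j in range(i + 1, n)}
spins = [random.choice([-1, 1]) for _ in range(n)]       # the "blank slate" start

def energy(s):
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

for T in [t / 10 for t in range(20, 0, -1)]:             # gradually reduce the fluctuations
    for _ in range(500):
        i = random.randrange(n)
        field = sum(J[min(i, j), max(i, j)] * spins[j] for j in range(n) if j != i)
        dE = -2 * spins[i] * field                       # energy change if spin i flips
        if dE <= 0 or random.random() < exp(-dE / T):
            spins[i] *= -1                               # flip if it helps, or by thermal luck

print("settled pattern:", spins, "energy:", energy(spins))   # the "answer" read off the spins
```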

    It’s not at all obvious what the final arrangement of qubits will be, and that’s the point. The system, just by doing what comes naturally, is solving a problem that an ordinary computer would struggle with. “We don’t need an algorithm,” explained Hidetoshi Nishimori, a physicist at the Tokyo Institute of Technology who developed the principles on which D-Wave machines operate. “It’s completely different from conventional programming. Nature solves the problem.”

    The qubit-flipping is driven by quantum tunneling, a natural tendency that quantum systems have to seek out their optimal configuration, rather than settle for second best. You could build a classical network that worked on analogous principles, using random jiggling rather than tunneling to get bits to flip, and in some cases it would actually work better. But, interestingly, for the types of problems that arise in machine learning, the quantum network seems to reach the optimum faster.

    The D-Wave machine has had its detractors. It is extremely noisy and, in its current incarnation, can perform only a limited menu of operations. Machine-learning algorithms, though, are noise-tolerant by their very nature. They’re useful precisely because they can make sense of a messy reality, sorting kittens from puppies against a backdrop of red herrings. “Neural networks are famously robust to noise,” Behrman said.

    In 2009 a team led by Hartmut Neven, a computer scientist at Google who pioneered augmented reality — he co-founded the Google Glass project — and then took up quantum information processing, showed how an early D-Wave machine could do a respectable machine-learning task. They used it as, essentially, a single-layer neural network that sorted images into two classes: “car” or “no car” in a library of 20,000 street scenes. The machine had only 52 working qubits, far too few to take in a whole image. (Remember: the D-Wave machine is of a very different type than in the state-of-the-art 50-qubit systems coming online in 2018.) So Neven’s team combined the machine with a classical computer, which analyzed various statistical quantities of the images and calculated how sensitive these quantities were to the presence of a car — usually not very, but at least better than a coin flip. Some combination of these quantities could, together, spot a car reliably, but it wasn’t obvious which. It was the network’s job to find out.

    The team assigned a qubit to each quantity. If that qubit settled into a value of 1, it flagged the corresponding quantity as useful; 0 meant don’t bother. The qubits’ magnetic interactions encoded the demands of the problem, such as including only the most discriminating quantities, so as to keep the final selection as compact as possible. The result was able to spot a car.
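
    In toy form (made-up usefulness scores and redundancy penalties, not Google’s data), the selection problem handed to the qubits looks like the following, small enough here to solve by exhaustive search rather than annealing:

```python
# Toy version of the qubit-per-quantity selection described above (made-up numbers).
# Each bit says whether to keep a weak detector; the energy rewards individually
# useful detectors and penalizes keeping redundant (overlapping) pairs.
from itertools import product

usefulness = [0.6, 0.5, 0.4, 0.7, 0.3]                  # how well each quantity hints at "car"
redundancy = {(0, 1): 0.5, (0, 3): 0.4, (2, 4): 0.3}    # pairs that mostly repeat each other

def energy(bits):
    e = -sum(u * b for u, b in zip(usefulness, bits))                        # reward chosen useful quantities
    e += sum(w * bits[i] * bits[j] for (i, j), w in redundancy.items())      # penalize redundant pairs
    return e

best = min(product([0, 1], repeat=len(usefulness)), key=energy)
print(best, energy(best))   # the compact subset an annealer would be asked to find
```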

    Last year a group led by Maria Spiropulu, a particle physicist at the California Institute of Technology, and Daniel Lidar, a physicist at USC, applied the algorithm to a practical physics problem: classifying proton collisions as “Higgs boson” or “no Higgs boson.” Limiting their attention to collisions that spat out photons, they used basic particle theory to predict which photon properties might betray the fleeting existence of the Higgs, such as momentum in excess of some threshold. They considered eight such properties and 28 combinations thereof, for a total of 36 candidate signals, and let a late-model D-Wave at the University of Southern California find the optimal selection. It identified 16 of the variables as useful and three as the absolute best [Nature]. The quantum machine needed less data than standard procedures to perform an accurate identification. “Provided that the training set was small, then the quantum approach did provide an accuracy advantage over traditional methods used in the high-energy physics community,” Lidar said.

    Maria Spiropulu, a physicist at the California Institute of Technology, used quantum machine learning to find Higgs bosons. Courtesy of Maria Spiropulu

    In December, Rigetti demonstrated a way to automatically group objects using a general-purpose quantum computer with 19 qubits. The researchers did the equivalent of feeding the machine a list of cities and the distances between them, and asked it to sort the cities into two geographic regions. What makes this problem hard is that the designation of one city depends on the designation of all the others, so you have to solve the whole system at once.

    The Rigetti team effectively assigned each city a qubit, indicating which group it was assigned to. Through the interactions of the qubits (which, in Rigetti’s system, are electrical rather than magnetic), each pair of qubits sought to take on opposite values — their energy was minimized when they did so. Clearly, for any system with more than two qubits, some pairs of qubits had to consent to be assigned to the same group. Nearby cities assented more readily since the energetic cost for them to be in the same group was lower than for more-distant cities.

    To drive the system to its lowest energy, the Rigetti team took an approach similar in some ways to the D-Wave annealer. They initialized the qubits to a superposition of all possible cluster assignments. They allowed qubits to interact briefly, which biased them toward assuming the same or opposite values. Then they applied the analogue of a horizontal magnetic field, allowing the qubits to flip if they were so inclined, pushing the system a little way toward its lowest-energy state. They repeated this two-step process — interact then flip — until the system minimized its energy, thus sorting the cities into two distinct regions.
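
    A hypothetical miniature of the same task can be brute-forced classically; the made-up energy below mimics the setup described above, with same-group pairs paying a cost that grows with their separation:

```python
# Toy version of the two-region clustering task (made-up coordinates).
# As described above, two cities in the same group pay an energy cost that is
# larger the farther apart they are, so the lowest-energy split groups neighbors together.
from itertools import product
from math import dist

cities = {"A": (0, 0), "B": (1, 0), "C": (0, 1),      # one huddle of cities
          "D": (9, 9), "E": (10, 9), "F": (9, 10)}    # another, far away
names = list(cities)

def energy(assign):
    return sum(dist(cities[a], cities[b])
               for i, a in enumerate(names) for b in names[i + 1:]
               if assign[a] == assign[b])             # only same-group pairs cost energy

best = min((dict(zip(names, bits)) for bits in product([0, 1], repeat=len(names))),
           key=energy)
print(best)   # A, B, C end up in one region and D, E, F in the other (or the mirror labeling)
```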

    These classification tasks are useful but straightforward. The real frontier of machine learning is in generative models, which do not simply recognize puppies and kittens, but can generate novel archetypes — animals that never existed, but are every bit as cute as those that did. They might even figure out the categories of “kitten” and “puppy” on their own, or reconstruct images missing a tail or paw. “These techniques are very powerful and very useful in machine learning, but they are very hard,” said Mohammad Amin, the chief scientist at D-Wave. A quantum assist would be most welcome.

    D-Wave and other research teams have taken on this challenge. Training such a model means tuning the magnetic or electrical interactions among qubits so the network can reproduce some sample data. To do this, you combine the network with an ordinary computer. The network does the heavy lifting — figuring out what a given choice of interactions means for the final network configuration — and its partner computer uses this information to adjust the interactions. In one demonstration last year, Alejandro Perdomo-Ortiz, a researcher at NASA’s Quantum Artificial Intelligence Lab, and his team exposed a D-Wave system to images of handwritten digits. It discerned that there were 10 categories, matching the digits 0 through 9, and generated its own scrawled numbers.
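
    The division of labor can be sketched with a fully classical stand-in: exact enumeration of a tiny spin model plays the role of the quantum network, reporting the correlations the current couplings produce, while an ordinary gradient step plays the classical partner that nudges the couplings toward the sample data. All numbers below are made up.

```python
# Tiny, fully classical stand-in for the hybrid training loop described above.
# Brute-force enumeration of a 4-spin Boltzmann distribution plays the "network"
# role (what do the current couplings produce?), and a plain gradient step plays
# the classical partner (nudge the couplings toward the data statistics).
import numpy as np
from itertools import product

data = np.array([[1, 1, -1, -1], [1, -1, -1, 1], [-1, -1, 1, 1],
                 [-1, 1, 1, -1], [1, 1, 1, 1]])      # made-up +/-1 samples to imitate
n = data.shape[1]
states = np.array(list(product([-1, 1], repeat=n)))  # all 16 configurations of 4 spins
J = np.zeros((n, n))                                 # trainable couplings ("interactions")

for step in range(300):
    # the "network" part: correlations produced by the current couplings
    energies = -0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(-energies)
    p /= p.sum()
    model_corr = np.einsum('s,si,sj->ij', p, states, states)
    # the classical partner: compare with the data and adjust the couplings
    data_corr = data.T @ data / len(data)
    J += 0.1 * (data_corr - model_corr)
    np.fill_diagonal(J, 0.0)

print(np.round(model_corr, 2))   # pairwise correlations now track those of the sample data
```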

    Bottlenecks Into the Tunnels

    Well, that’s the good news. The bad is that it doesn’t much matter how awesome your processor is if you can’t get your data into it. In matrix-algebra algorithms, a single operation may manipulate a matrix of 16 numbers, but it still takes 16 operations to load the matrix. “State preparation — putting classical data into a quantum state — is completely shunned, and I think this is one of the most important parts,” said Maria Schuld, a researcher at the quantum-computing startup Xanadu and one of the first people to receive a doctorate in quantum machine learning. Machine-learning systems that are laid out in physical form face parallel difficulties of how to embed a problem in a network of qubits and get the qubits to interact as they should.

    Once you do manage to enter your data, you need to store it in such a way that a quantum system can interact with it without collapsing the ongoing calculation. Lloyd and his colleagues have proposed a quantum RAM that uses photons, but no one has an analogous contraption for superconducting qubits or trapped ions, the technologies found in the leading quantum computers. “That’s an additional huge technological problem beyond the problem of building a quantum computer itself,” Aaronson said. “The impression I get from the experimentalists I talk to is that they are frightened. They have no idea how to begin to build this.”

    And finally, how do you get your data out? That means measuring the quantum state of the machine, and not only does a measurement return only a single number at a time, drawn at random, it collapses the whole state, wiping out the rest of the data before you even have a chance to retrieve it. You’d have to run the algorithm over and over again to extract all the information.

    Yet all is not lost. For some types of problems, you can exploit quantum interference. That is, you can choreograph the operations so that wrong answers cancel themselves out and right ones reinforce themselves; that way, when you go to measure the quantum state, it won’t give you just any random value, but the desired answer. But only a few algorithms, such as brute-force search, can make good use of interference, and the speedup is usually modest.

    In some cases, researchers have found shortcuts to getting data in and out. In 2015 Lloyd, Silvano Garnerone of the University of Waterloo in Canada, and Paolo Zanardi at USC showed that, for some kinds of statistical analysis, you don’t need to enter or store the entire data set. Likewise, you don’t need to read out all the data when a few key values would suffice. For instance, tech companies use machine learning to suggest shows to watch or things to buy based on a humongous matrix of consumer habits. “If you’re Netflix or Amazon or whatever, you don’t actually need the matrix written down anywhere,” Aaronson said. “What you really need is just to generate recommendations for a user.”

    All this invites the question: If a quantum machine is powerful only in special cases, might a classical machine also be powerful in those cases? This is the major unresolved question of the field. Ordinary computers are, after all, extremely capable. The usual method of choice for handling large data sets — random sampling — is actually very similar in spirit to a quantum computer, which, whatever may go on inside it, ends up returning a random result. Schuld remarked: “I’ve done a lot of algorithms where I felt, ‘This is amazing. We’ve got this speedup,’ and then I actually, just for fun, write a sampling technique for a classical computer, and I realize you can do the same thing with sampling.”

    If you look back at the successes that quantum machine learning has had so far, they all come with asterisks. Take the D-Wave machine. When classifying car images and Higgs bosons, it was no faster than a classical machine. “One of the things we do not talk about in this paper is quantum speedup,” said Alex Mott, a computer scientist at Google DeepMind who was a member of the Higgs research team. Matrix-algebra approaches such as the Harrow-Hassidim-Lloyd algorithm show a speedup only if the matrices are sparse — mostly filled with zeroes. “No one ever asks, are sparse data sets actually interesting in machine learning?” Schuld noted.

    Quantum Intelligence

    On the other hand, even the occasional incremental improvement over existing techniques would make tech companies happy. “These advantages that you end up seeing, they’re modest; they’re not exponential, but they are quadratic,” said Nathan Wiebe, a quantum-computing researcher at Microsoft Research. “Given a big enough and fast enough quantum computer, we could revolutionize many areas of machine learning.” And in the course of using the systems, computer scientists might solve the theoretical puzzle of whether they are inherently faster, and for what.

    Schuld also sees scope for innovation on the software side. Machine learning is more than a bunch of calculations. It is a complex of problems that have their own particular structure. “The algorithms that people construct are removed from the things that make machine learning interesting and beautiful,” she said. “This is why I started to work the other way around and think: If I have this quantum computer already — these small-scale ones — what machine-learning model actually can it generally implement? Maybe it is a model that has not been invented yet.” If physicists want to impress machine-learning experts, they’ll need to do more than just make quantum versions of existing models.

    Just as many neuroscientists now think that the structure of human thought reflects the requirements of having a body, so, too, are machine-learning systems embodied. The images, language and most other data that flow through them come from the physical world and reflect its qualities. Quantum machine learning is similarly embodied — but in a richer world than ours. The one area where it will undoubtedly shine is in processing data that is already quantum. When the data is not an image, but the product of a physics or chemistry experiment, the quantum machine will be in its element. The input problem goes away, and classical computers are left in the dust.

    In a neatly self-referential loop, the first quantum machine-learning systems may help to design their successors. “One way we might actually want to use these systems is to build quantum computers themselves,” Wiebe said. “For some debugging tasks, it’s the only approach that we have.” Maybe they could even debug us. Leaving aside whether the human brain is a quantum computer — a highly contentious question — it sometimes acts as if it were one. Human behavior is notoriously contextual; our preferences are formed by the choices we are given, in ways that defy logic. In this, we are like quantum particles. “The way you ask questions and the ordering matters, and that is something that is very typical in quantum data sets,” Perdomo-Ortiz said. So a quantum machine-learning system might be a natural way to study human cognitive biases.

    Neural networks and quantum processors have one thing in common: It is amazing they work at all. It was never obvious that you could train a network, and for decades most people doubted it would ever be possible. Likewise, it is not obvious that quantum physics could ever be harnessed for computation, since the distinctive effects of quantum physics are so well hidden from us. And yet both work — not always, but more often than we had any right to expect. On this precedent, it seems likely that their union will also find its place.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 1:00 pm on January 14, 2018 Permalink | Reply
    Tags: Physicists Aim to Classify All Possible Phases of Matter, Quanta Magazine, The Haah Code

    From Quanta: “Physicists Aim to Classify All Possible Phases of Matter” 

    Quanta Magazine

    January 3, 2018
    Natalie Wolchover

    Olena Shmahalo/Quanta Magazine

    In the last three decades, condensed matter physicists have discovered a wonderland of exotic new phases of matter: emergent, collective states of interacting particles that are nothing like the solids, liquids and gases of common experience.

    The phases, some realized in the lab and others identified as theoretical possibilities, arise when matter is chilled almost to absolute-zero temperature, hundreds of degrees below the point at which water freezes into ice. In these frigid conditions, particles can interact in ways that cause them to shed all traces of their original identities. Experiments in the 1980s revealed that in some situations electrons split en masse into fractions of particles that make braidable trails through space-time; in other cases, they collectively whip up massless versions of themselves. A lattice of spinning atoms becomes a fluid of swirling loops or branching strings; crystals that began as insulators start conducting electricity over their surfaces. One phase that shocked experts when recognized as a mathematical possibility [Phys. Rev. A] in 2011 features strange, particle-like “fractons” that lock together in fractal patterns.

    Now, research groups at Microsoft and elsewhere are racing to encode quantum information in the braids and loops of some of these phases for the purpose of developing a quantum computer. Meanwhile, condensed matter theorists have recently made major strides in understanding the pattern behind the different collective behaviors that can arise, with the goal of enumerating and classifying all possible phases of matter. If a complete classification is achieved, it would not only account for all phases seen in nature so far, but also potentially point the way toward new materials and technologies.

    Led by dozens of top theorists, with input from mathematicians, researchers have already classified a huge swath of phases that can arise in one or two spatial dimensions by relating them to topology: the math that describes invariant properties of shapes like the sphere and the torus. They’ve also begun to explore the wilderness of phases that can arise near absolute zero in 3-D matter.

    Xie Chen, a condensed matter theorist at the California Institute of Technology, says the “grand goal” of the classification program is to enumerate all phases that can possibly arise from particles of any given type. Max Gerber, courtesy of Caltech Development and Institute Relations.

    “It’s not a particular law of physics” that these scientists seek, said Michael Zaletel, a condensed matter theorist at Princeton University. “It’s the space of all possibilities, which is a more beautiful or deeper idea in some ways.” Perhaps surprisingly, Zaletel said, the space of all consistent phases is itself a mathematical object that “has this incredibly rich structure that we think ends up, in 1-D and 2-D, in one-to-one correspondence with these beautiful topological structures.”

    In the landscape of phases, there is “an economy of options,” said Ashvin Vishwanath of Harvard University. “It all seems comprehensible” — a stroke of luck that mystifies him. Enumerating phases of matter could have been “like stamp collecting,” Vishwanath said, “each a little different, and with no connection between the different stamps.” Instead, the classification of phases is “more like a periodic table. There are many elements, but they fall into categories and we can understand the categories.”

    While classifying emergent particle behaviors might not seem fundamental, some experts, including Xiao-Gang Wen of the Massachusetts Institute of Technology, say the new rules of emergent phases show how the elementary particles themselves might arise from an underlying network of entangled bits of quantum information, which Wen calls the “qubit ocean.” For example, a phase called a “string-net liquid” that can emerge in a three-dimensional system of qubits has excitations that look like all the known elementary particles. “A real electron and a real photon are maybe just fluctuations of the string-net,” Wen said.

    A New Topological Order

    Before these zero-temperature phases cropped up, physicists thought they had phases all figured out. By the 1950s, they could explain what happens when, for example, water freezes into ice, by describing it as the breaking of a symmetry: Whereas liquid water has rotational symmetry at the atomic scale (it looks the same in every direction), the H2O molecules in ice are locked in crystalline rows and columns.

    Things changed in 1982 with the discovery of phases called fractional quantum Hall states in an ultracold, two-dimensional gas of electrons. These strange states of matter feature emergent particles with fractions of an electron’s charge that take fractions of steps in a one-way march around the perimeter of the system. “There was no way to use different symmetry to distinguish those phases,” Wen said.

    A new paradigm was needed. In 1989, Wen imagined phases like the fractional quantum Hall states arising not on a plane, but on different topological manifolds — connected spaces such as the surface of a sphere or a torus. Topology concerns global, invariant properties of such spaces that can’t be changed by local deformations. Famously, to a topologist, you can turn a doughnut into a coffee cup by simply deforming its surface, since both surfaces have one hole and are therefore equivalent topologically. You can stretch and squeeze all you like, but even the most malleable doughnut will refuse to become a pretzel.

    Wen found that new properties of the zero-temperature phases were revealed in the different topological settings, and he coined the term “topological order” to describe the essence of these phases. Other theorists were also uncovering links to topology. With the discovery of many more exotic phases — so many that researchers say they can barely keep up — it became clear that topology, together with symmetry, offers a good organizing schema.

    The topological phases only show up near absolute zero, because only at such low temperatures can systems of particles settle into their lowest-energy quantum “ground state.” In the ground state, the delicate interactions that correlate particles’ identities — effects that are destroyed at higher temperatures — link up particles in global patterns of quantum entanglement. Instead of having individual mathematical descriptions, particles become components of a more complicated function that describes all of them at once, often with entirely new particles emerging as the excitations of the global phase. The long-range entanglement patterns that arise are topological, or impervious to local changes, like the number of holes in a manifold.

    Lucy Reading-Ikkanda/Quanta Magazine

    Consider the simplest topological phase in a system — called a “quantum spin liquid” — that consists of a 2-D lattice of “spins,” or particles that can point up, down, or some probability of each simultaneously. At zero temperature, the spin liquid develops strings of spins that all point down, and these strings form closed loops. As the directions of spins fluctuate quantum-mechanically, the pattern of loops throughout the material also fluctuates: Loops of down spins merge into bigger loops and divide into smaller loops. In this quantum-spin-liquid phase, the system’s ground state is the quantum superposition of all possible loop patterns.
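
    In the language of wave functions, this can be written compactly (a standard textbook expression, not something spelled out in the article): the ground state is a superposition over every allowed loop configuration C,

        |\Psi_{\text{ground}}\rangle \;\propto\; \sum_{\text{loop configurations } C} |C\rangle ,

    with local quantum fluctuations shuffling weight among the |C\rangle without singling any one pattern out.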

    To understand this entanglement pattern as a type of topological order, imagine, as Wen did, that the quantum spin liquid is spilling around the surface of a torus, with some loops winding around the torus’s hole. Because of these hole windings, instead of having a single ground state associated with the superposition of all loop patterns, the spin liquid will now exist in one of four distinct ground states, tied to four different superpositions of loop patterns. One state consists of all possible loop patterns with an even number of loops winding around the torus’s hole and an even number winding through the hole. Another state has an even number of loops around the hole and an odd number through the hole; the third and fourth ground states correspond to odd and even, and odd and odd, numbers of hole windings, respectively.

    Which of these ground states the system is in stays fixed, even as the loop pattern fluctuates locally. If, for instance, the spin liquid has an even number of loops winding around the torus’s hole, two of these loops might touch and combine, suddenly becoming a loop that doesn’t wrap around the hole at all. Long-way loops decrease by two, but the number remains even. The system’s ground state is a topologically invariant property that withstands local changes.
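
    A minimal numerical sketch of that invariance (a toy model of my own, not code from the researchers): track only the number of loops winding the long way around the torus, and let random local events merge or split loops. Each local move changes that count by 0 or 2, so its parity, which labels the ground-state sector, never changes.

        import random

        # Toy model of topological protection: local loop moves change the number of
        # loops winding around the torus's hole by 0 or +/-2, so the parity is invariant.
        winding = 4                       # start in the "even" sector
        for _ in range(10_000):
            winding += random.choice([-2, 0, +2])
            winding = max(winding, 0)     # can't have a negative number of winding loops
        print(winding % 2)                # always 0: the system never leaves the even sector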

    Future quantum computers could take advantage of this invariant quality. Having four topological ground states that aren’t affected by local deformations or environmental error “gives you a way to store quantum information, because your bit could be what ground state it’s in,” explained Zaletel, who has studied the topological properties of spin liquids and other quantum phases. Systems like spin liquids don’t really need to wrap around a torus to have topologically protected ground states. A favorite playground of researchers is the toric code, a phase theoretically constructed by the condensed matter theorist Alexei Kitaev of the California Institute of Technology in 1997 and demonstrated in experiments over the past decade. The toric code can live on a plane and still maintain the multiple ground states of a torus. (Loops of spins are essentially able to move off the edge of the system and re-enter on the opposite side, allowing them to wind around the system like loops around a torus’s hole.) “We know how to translate between the ground-state properties on a torus and what the behavior of the particles would be,” Zaletel said.
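
    The count of four ground states follows from a standard stabilizer-counting argument (a textbook calculation, not one given in the article): on an L-by-L torus the toric code has 2L^2 qubits and 2L^2 vertex and plaquette check operators, but two of those checks are redundant, leaving exactly two unconstrained binary degrees of freedom.

        # Standard counting of toric-code ground states on an L x L torus.
        def toric_code_degeneracy(L: int) -> int:
            qubits = 2 * L * L                 # one qubit on each edge of the lattice
            checks = 2 * L * L                 # L^2 vertex (star) + L^2 plaquette operators
            independent_checks = checks - 2    # product of all stars = product of all plaquettes = identity
            return 2 ** (qubits - independent_checks)

        for L in (2, 3, 10):
            print(L, toric_code_degeneracy(L))  # prints 4 for every L: the degeneracy is independent of size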

    Spin liquids can also enter other phases, in which spins, instead of forming closed loops, sprout branching networks of strings. This is the string-net liquid phase [Phys. Rev. B] that, according to Wen, “can produce the Standard Model” of particle physics starting from a 3-D qubit ocean.

    The Universe of Phases

    Research by several groups in 2009 and 2010 completed the classification of “gapped” phases of matter in one dimension, such as in chains of particles. A gapped phase is one with a ground state: a lowest-energy configuration sufficiently removed or “gapped” from higher-energy states that the system stably settles into it. Only gapped quantum phases have well-defined excitations in the form of particles. Gapless phases are like swirling matter miasmas or quantum soups and remain largely unknown territory in the landscape of phases.

    For a 1-D chain of bosons — particles like photons that have integer values of quantum spin, which means they return to their initial quantum states after swapping positions — there is only one gapped topological phase. In this phase, first studied by the Princeton theorist Duncan Haldane, who, along with David Thouless and J. Michael Kosterlitz, won the 2016 Nobel Prize for decades of work on topological phases, the spin chain gives rise to half-spin particles on both ends. Two gapped topological phases exist for chains of fermions — particles like electrons and quarks that have half-integer values of spin, meaning their states become negative when they switch positions. The topological order in all these 1-D chains stems not from long-range quantum entanglement, but from local symmetries acting between neighboring particles. Called “symmetry-protected topological phases,” they correspond to “cocycles of the cohomology group,” mathematical objects related to invariants like the number of holes in a manifold.
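
    The boson/fermion distinction invoked here is usually stated as an exchange rule for the two-particle wave function (standard notation, not taken from the article):

        \psi(x_2, x_1) = +\,\psi(x_1, x_2) \quad \text{(bosons)}, \qquad \psi(x_2, x_1) = -\,\psi(x_1, x_2) \quad \text{(fermions)}.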

    Lucy Reading-Ikkanda/Quanta Magazine, adapted from figure by Xiao-Gang Wen

    Two-dimensional phases are more plentiful and more interesting. They can have what some experts consider “true” topological order: the kind associated with long-range patterns of quantum entanglement, like the fluctuating loop patterns in a spin liquid. In the last few years, researchers have shown that these entanglement patterns correspond to topological structures called tensor categories, which enumerate the different ways that objects can possibly fuse and braid around one another. “The tensor categories give you a way [to describe] particles that fuse and braid in a consistent way,” said David Pérez-García of Complutense University of Madrid.

    Researchers like Pérez-García are working to mathematically prove that the known classes of 2-D gapped topological phases are complete. He helped close the 1-D case in 2010 [Phys. Rev. B], at least under the widely-held assumption that these phases are always well-approximated by quantum field theories — mathematical descriptions that treat the particles’ environments as smooth. “These tensor categories are conjectured to cover all 2-D phases, but there is no mathematical proof yet,” Pérez-García said. “Of course, it would be much more interesting if one can prove that this is not all. Exotic things are always interesting because they have new physics, and they’re maybe useful.”

    Gapless quantum phases represent another kingdom of possibilities to explore, but these impenetrable fogs of matter resist most theoretical methods. “The language of particles is not useful, and there are supreme challenges that we are starting to confront,” said Senthil Todadri, a condensed matter theorist at MIT. Gapless phases present the main barrier in the quest to understand high-temperature superconductivity, for instance. And they hinder quantum gravity researchers in the “it from qubit” movement, who believe that not only elementary particles, but also space-time and gravity, arise from patterns of entanglement in some kind of underlying qubit ocean. “In it from qubit, we spend much of our time on gapless states because this is where one gets gravity, at least in our current understanding,” said Brian Swingle, a theoretical physicist at the University of Maryland. Some researchers try to use mathematical dualities to convert the quantum-soup picture into an equivalent particle description in one higher dimension. “It should be viewed in the spirit of exploring,” Todadri said.

    Even more enthusiastic exploration is happening in 3-D. What’s already clear is that, when spins and other particles spill from their chains and flatlands and fill the full three spatial dimensions of reality, unimaginably strange patterns of quantum entanglement can emerge. “In 3-D, there are things that escape, so far, this tensor-category picture,” said Pérez-García. “The excitations are very wild.”

    The Haah Code

    The very wildest of the 3-D phases appeared seven years ago. A talented Caltech graduate student named Jeongwan Haah discovered the phase in a computer search while looking for what’s known as the “dream code”: a quantum ground state so robust that it can be used to securely store quantum memory, even at room temperature.

    For this, Haah had to turn to 3-D matter. In 2-D topological phases like the toric code, a significant source of error is “stringlike operators”: perturbations to the system that cause new strings of spins to accidentally form. These strings will sometimes wind new loops around the torus’s hole, bumping the number of windings from even to odd or vice versa and converting the toric code to one of its three other quantum ground states. Because strings grow uncontrollably and wrap around things, experts say there cannot be good quantum memories in 2-D.

    Jeongwan Haah, a condensed matter theorist now working at Microsoft Research in Redmond, Washington, discovered a bizarre 3-D phase of matter with fractal properties. Jeremy Mashburn.

    Haah wrote an algorithm to search for 3-D phases that avoid the usual kinds of stringlike operators. The computer coughed up 17 exact solutions that he then studied by hand. Four of the phases were confirmed to be free of stringlike operators; the one with the highest symmetry was what’s now known as the Haah code.

    As well as being potentially useful for storing quantum memory, the Haah code was also profoundly weird. Xie Chen, a condensed matter theorist at Caltech, recalled hearing the news as a graduate student in 2011, within a month or two of Haah’s disorienting discovery. “Everyone was totally shocked,” she said. “We didn’t know anything we could do about it. And now, that’s been the situation for many years.”

    The Haah code is relatively simple on paper: It’s the solution of a two-term energy formula, describing spins that interact with their eight nearest neighbors in a cubic lattice. But the resulting phase “strains our imaginations,” Todadri said.

    The code features particle-like entities called fractons that, unlike the loopy patterns in, say, a quantum spin liquid, are nonliquid and locked in place; the fractons can only hop between positions in the lattice if those positions are operated upon in a fractal pattern. That is, you have to inject energy into the system at each corner of, say, a tetrahedron connecting four fractons in order to make them switch positions, but when you zoom in, you see that what you treated as a point-like corner was actually the four corners of a smaller tetrahedron, and you have to inject energy into the corners of that one as well. At a finer scale, you see an even smaller tetrahedron, and so on, all the way down to the finest scale of the lattice. This fractal behavior means that the Haah code never forgets the underlying lattice it comes from, and it can never be approximated by a smoothed-out description of the lattice, as in a quantum field theory. What’s more, the number of ground states in the Haah code grows with the size of the underlying lattice — a decidedly non-topological property. (Stretch a torus, and it’s still a torus.)
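
    To get a feel for what fractal support means, here is a loose analogy (an illustration of my own; it is not the Haah code): a mod-2 update rule in which each cell is the sum of its two upper neighbors generates a Sierpinski triangle, the kind of self-similar pattern, repeated at every scale, on which fracton operators act.

        # Pascal's triangle mod 2 draws a Sierpinski triangle, a simple stand-in for the
        # self-similar supports that appear in fracton models. Illustration only.
        n_rows = 16
        row = [1]
        for _ in range(n_rows):
            print("".join("#" if bit else " " for bit in row))
            row = [1] + [(a + b) % 2 for a, b in zip(row, row[1:])] + [1]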

    The quantum state of the Haah code is extraordinarily secure, since a “fractal operator” that perfectly hits all the marks is unlikely to come along at random. Experts say a realizable version of the code would be of great technological interest.

    Haah’s phase has also generated a surge of theoretical speculation. Haah helped matters along in 2015 when he and two collaborators at MIT discovered [Phys. Rev. B] many examples of a class of phases now known as “fracton models” that are simpler cousins of the Haah code. (The first model in this family was introduced [Physical Review Letters] by Claudio Chamon of Boston University in 2005.) Chen and others have since been studying the topology of these fracton systems, some of which permit particles to move along lines or sheets within a 3-D volume and might aid conceptual understanding or be easier to realize experimentally [Physical Review Letters]. “It’s opening the door to many more exotic things,” Chen said of the Haah code. “It’s an indication about how little we know about 3-D and higher dimensions. And because we don’t yet have a systematic picture of what is going on, there might be a lot of things lying out there waiting to be explored.”

    No one knows yet where the Haah code and its cousins belong in the landscape of possible phases, or how much bigger this space of possibilities might be. According to Todadri, the community has made progress in classifying the simplest gapped 3-D phases, but more exploration is needed in 3-D before a program of complete classification can begin there. What’s clear, he said, is that “when the classification of gapped phases of matter is taken up in 3-D, it will have to confront these weird possibilities that Haah first discovered.”

    Many researchers think new classifying concepts, and even whole new frameworks, might be necessary to capture the Haah code’s fractal nature and reveal the full scope of possibilities for 3-D quantum matter. Wen said, “You need a new type of theory, new thinking.” Perhaps, he said, we need a new picture of nonliquid patterns of long-range entanglement. “We have some vague ideas but don’t have a very systematic mathematics to do them,” he said. “We have some feeling what it looks like. The detailed systematics are still lacking. But that’s exciting.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:49 pm on January 13, 2018 Permalink | Reply
    Tags: , , Quanta Magazine, Quantum theory of gravity?   

    From Quanta Magazine: “Why an Old Theory of Everything Is Gaining New Life” 

    Quanta Magazine
    Quanta Magazine

    January 8, 2018
    Sabine Hossenfelder

    For decades, physicists have struggled to create a quantum theory of gravity. Now an approach that dates to the 1970s is attracting newfound attention.

    James O’Brien for Quanta Magazine

    Twenty-five particles and four forces. That description — the Standard Model of particle physics — constitutes physicists’ best current explanation for everything.

    Standard Model of Particle Physics from Symmetry Magazine

    It’s neat and it’s simple, but no one is entirely happy with it. What irritates physicists most is that one of the forces — gravity — sticks out like a sore thumb on a four-fingered hand. Gravity is different.

    Unlike the electromagnetic force and the strong and weak nuclear forces, gravity is not described by a quantum theory. This isn’t only aesthetically unpleasing; it’s also a mathematical headache. We know that particles have both quantum properties and gravitational fields, so the gravitational field should have quantum properties like the particles that cause it. But a theory of quantum gravity has been hard to come by.

    In the 1960s, Richard Feynman and Bryce DeWitt set out to quantize gravity using the same techniques that had successfully transformed electromagnetism into the quantum theory called quantum electrodynamics. Unfortunately, when applied to gravity, the known techniques resulted in a theory that, when extrapolated to high energies, was plagued by an infinite number of infinities. This quantization of gravity was thought incurably sick, an approximation useful only when gravity is weak.

    Since then, physicists have made several other attempts at quantizing gravity in the hope of finding a theory that would also work when gravity is strong. String theory, loop quantum gravity, causal dynamical triangulation and a few others have been aimed toward that goal. So far, none of these theories has experimental evidence speaking for it. Each has mathematical pros and cons, and no convergence seems in sight. But while these approaches were competing for attention, an old rival has caught up.

    The theory called asymptotically (as-em-TOT-ick-lee) safe gravity was proposed in 1978 by Steven Weinberg.

    Steven Weinberg, U Texas

    Weinberg, who would only a year later share the Nobel Prize with Sheldon Lee Glashow and Abdus Salam for unifying the electromagnetic and weak nuclear force, realized that the troubles with the naive quantization of gravity are not a death knell for the theory. Even though it looks like the theory breaks down when extrapolated to high energies, this breakdown might never come to pass. But to be able to tell just what happens, researchers had to wait for new mathematical methods that have only recently become available.

    In quantum theories, all interactions depend on the energy at which they take place, which means the theory changes as some interactions become more relevant, others less so. This change can be quantified by calculating how the numbers that enter the theory — collectively called “parameters” — depend on energy. The strong nuclear force, for example, becomes weak at high energies as a parameter known as the coupling constant approaches zero. This property is known as “asymptotic freedom,” and it was worth another Nobel Prize, in 2004, to Frank Wilczek, David Gross and David Politzer.
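
    For the strong force, that running is captured at leading order by the standard one-loop formula (textbook quantum chromodynamics, quoted here only to make “the coupling constant approaches zero” concrete):

        \alpha_s(\mu) \;=\; \frac{\alpha_s(\mu_0)}{1 + \frac{b_0\,\alpha_s(\mu_0)}{2\pi}\,\ln(\mu/\mu_0)}, \qquad b_0 = 11 - \tfrac{2}{3}\,n_f .

    With fewer than 17 quark flavors, b_0 is positive, the denominator grows with the energy \mu, and the coupling slowly drifts toward zero.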

    A theory that is asymptotically free is well behaved at high energies; it makes no trouble. The quantization of gravity is not of this type, but, as Weinberg observed, a weaker criterion would do: For quantum gravity to work, researchers must be able to describe the theory at high energies using only a finite number of parameters. This is opposed to the situation they face in the naive extrapolation, which requires an infinite number of unspecifiable parameters. Furthermore, none of the parameters should themselves become infinite. These two requirements — that the number of parameters be finite and the parameters themselves be finite — make a theory “asymptotically safe.”

    In other words, gravity would be asymptotically safe if the theory at high energies remains equally well behaved as the theory at low energies. In and of itself, this is not much of an insight. The insight comes from realizing that this good behavior does not necessarily contradict what we already know about the theory at low energies (from the early works of DeWitt and Feynman).
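
    The two behaviors can be contrasted with a toy renormalization-group flow (a sketch of the general idea with made-up numbers, not a real gravity calculation): an asymptotically free coupling runs to zero at high energies, while an asymptotically safe one runs to a finite fixed point.

        # Toy renormalization-group flows, integrated with naive Euler steps. Illustration only.
        def run_coupling(beta, g0, t_max=20.0, dt=1e-3):
            """Flow the coupling g up in t = ln(energy), following dg/dt = beta(g)."""
            g = g0
            for _ in range(int(t_max / dt)):
                g += beta(g) * dt
            return g

        b = 0.1
        print(run_coupling(lambda g: -b * g**3, g0=1.0))         # ~0.45 and still falling: asymptotic freedom
        a = 0.5
        print(run_coupling(lambda g: a * g - b * g**3, g0=1.0))  # ~2.24 = sqrt(a/b): a finite UV fixed point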

    While the idea that gravity may be asymptotically safe has been around for four decades, it was only in the late 1990s, through research by Christof Wetterich, a physicist at the University of Heidelberg, and Martin Reuter, a physicist at the University of Mainz, that asymptotically safe gravity caught on. The works of Wetterich and Reuter provided the mathematical formalism necessary to calculate what happens with the quantum theory of gravity at higher energies. The strategy of the asymptotic safety program, then, is to start with the theory at low energies and use the new mathematical methods to explore how to reach asymptotic safety.

    So, is gravity asymptotically safe? No one has proven it, but researchers use several independent arguments to support the idea. First, studies of gravitational theories in lower-dimensional space-times, which are much simpler to do, find that in these cases, gravity is asymptotically safe. Second, approximate calculations support the possibility. Third, researchers have applied the general method to studies of simpler, nongravitational theories and found it to be reliable.

    The major problem with the approach is that calculations in the full (infinite dimensional!) theory space are not possible. To make the calculations feasible, researchers study a small part of the space, but the results obtained then yield only a limited level of knowledge. Therefore, even though the existing calculations are consistent with asymptotic safety, the situation has remained inconclusive. And there is another question that has remained open. Even if the theory is asymptotically safe, it might become physically meaningless at high energies because it might break some essential elements of quantum theory.

    Even still, physicists can already put the ideas behind asymptotic safety to the test. If gravity is asymptotically safe — that is, if the theory is well behaved at high energies — then that restricts the number of fundamental particles that can exist. This constraint puts asymptotically safe gravity at odds with some of the pursued approaches to grand unification. For example, the simplest version of supersymmetry — a long-popular theory that predicts a sister particle for each known particle — is not asymptotically safe. The simplest version of supersymmetry has meanwhile been ruled out by experiments at the LHC, as have a few other proposed extensions of the Standard Model. But had physicists studied the asymptotic behavior in advance, they could have concluded that these ideas were not promising.

    Another study [Phys. Lett. B] recently showed that asymptotic safety also constrains the masses of particles. It implies that the difference in mass between the top and bottom quark must not be larger than a certain value. If we had not already measured the mass of the top quark, this could have been used as a prediction.

    These calculations rely on approximations that might turn out to be not entirely justified, but the results demonstrate the power of the method. The most important implication is that the physics at energies where the forces may be unified — usually thought to be hopelessly out of reach — is intricately related to the physics at low energies; the requirement of asymptotic safety connects them.

    Whenever I speak to colleagues who do not themselves work on asymptotically safe gravity, they refer to the approach as “disappointing.” This comment, I believe, is born out of the thought that asymptotic safety means there isn’t anything new to learn from quantum gravity, that it’s the same story all the way down, just more quantum field theory, business as usual.

    But not only does asymptotic safety provide a link between testable low energies and inaccessible high energies — as the above examples demonstrate — the approach is also not necessarily in conflict with other ways of quantizing gravity. That’s because the extrapolation central to asymptotic safety does not rule out that a more fundamental description of space-time — for example, with strings or networks — emerges at high energies. Far from being disappointing, asymptotic safety might allow us to finally connect the known universe to the quantum behavior of space-time.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 3:06 pm on January 6, 2018 Permalink | Reply
    Tags: , , , , Neutrinos Suggest Solution to Mystery of Universe’s Existence, , , Quanta Magazine, T2K Experiment/Super-Kamiokande Collaboration   

    From Quanta: “Neutrinos Suggest Solution to Mystery of Universe’s Existence” 

    Quanta Magazine
    Quanta Magazine

    December 12, 2017
    Katia Moskvitch

    A neutrino passing through the Super-Kamiokande experiment creates a telltale light pattern on the detector walls. T2K Experiment/Super-Kamiokande Collaboration, Institute for Cosmic Ray Research, University of Tokyo

    T2K Experiment, Tokai to Kamioka, Japan

    From above, you might mistake the hole in the ground for a gigantic elevator shaft. Instead, it leads to an experiment that might reveal why matter didn’t disappear in a puff of radiation shortly after the Big Bang.

    I’m at the Japan Proton Accelerator Research Complex, or J-PARC — a remote and well-guarded government facility in Tokai, about an hour’s train ride north of Tokyo.

    J-PARC Facility Japan Proton Accelerator Research Complex, located in Tokai village, Ibaraki prefecture, on the east coast of Japan

    The experiment here, called T2K (for Tokai-to-Kamioka), produces a beam of the subatomic particles called neutrinos. The beam travels through 295 kilometers of rock to the Super-Kamiokande (Super-K) detector, a gigantic pit buried 1 kilometer underground and filled with 50,000 tons (about 13 million gallons) of ultrapure water. During the journey, some of the neutrinos will morph from one “flavor” into another.

    In this ongoing experiment, the first results of which were reported last year, scientists at T2K are studying the way these neutrinos flip in an effort to explain the predominance of matter over antimatter in the universe. During my visit, physicists explained to me that an additional year’s worth of data was in, and that the results are encouraging.

    According to the Standard Model of particle physics, every particle has a mirror-image particle that carries the opposite electrical charge — an antimatter particle.

    Standard Model of Particle Physics from Symmetry Magazine

    When matter and antimatter particles collide, they annihilate in a flash of radiation. Yet scientists believe that the Big Bang should have produced equal amounts of matter and antimatter, which would imply that everything should have vanished fairly quickly. But it didn’t. A very small fraction of the original matter survived and went on to form the known universe.

    Researchers don’t know why. “There must be some particle reactions that happen differently for matter and antimatter,” said Morgan Wascko, a physicist at Imperial College London. Antimatter might decay in a way that differs from how matter decays, for example. If so, it would violate an idea called charge-parity (CP) symmetry, which states that the laws of physics shouldn’t change if matter particles swap places with their antiparticles (charge) while viewed in a mirror (parity). The symmetry holds for most particles, though not all. (The subatomic particles known as quarks violate CP symmetry, but the deviations are so small that they can’t explain why matter so dramatically outnumbers antimatter in the universe.)

    Last year, the T2K collaboration announced the first evidence that neutrinos might break CP symmetry, thus potentially explaining why the universe is filled with matter. “If there is CP violation in the neutrino sector, then this could easily account for the matter-antimatter difference,” said Adrian Bevan, a particle physicist at Queen Mary University of London.

    Researchers check for CP violations by studying differences between the behavior of matter and antimatter. In the case of neutrinos, the T2K scientists explore how neutrinos and antineutrinos oscillate, or change, as the particles make their way to the Super-K detector. In 2016, 32 muon neutrinos changed to electron neutrinos on their way to Super-K. When the researchers sent muon antineutrinos, only four became electron antineutrinos.

    That result got the community excited — although most physicists were quick to point out that with such a small sample size, there was still a 10 percent chance that the difference was merely a random fluctuation. (By comparison, the 2012 Higgs boson discovery had less than a 1-in-1 million probability that the signal was due to chance.)

    This year, researchers collected nearly twice the amount of neutrino data as last year. Super-K captured 89 electron neutrinos, significantly more than the 67 it should have found if there was no CP violation. And the experiment spotted only seven electron antineutrinos, two fewer than expected.
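
    A rough sense of why these counts are encouraging but not decisive comes from a back-of-envelope Poisson comparison (a toy estimate of my own; the collaboration’s actual result comes from a full likelihood fit with backgrounds and systematic uncertainties included):

        from scipy.stats import norm, poisson

        # Toy Poisson check of the counts quoted above; not the T2K statistical analysis.
        p_nu = poisson.sf(88, 67)     # chance of seeing 89 or more electron neutrinos if 67 were expected
        p_nubar = poisson.cdf(7, 9)   # chance of seeing 7 or fewer electron antineutrinos if about 9 were expected
        print(f"p(nu excess) ~ {p_nu:.3f}, p(nubar deficit) ~ {p_nubar:.2f}")
        print(f"rough one-sided significance of the neutrino excess: {norm.isf(p_nu):.1f} sigma")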

    Lucy Reading-Ikkanda for Quanta Magazine

    Researchers aren’t claiming a discovery just yet. Because there are still so few data points, “there’s still a 1-in-20 chance it’s just a statistical fluke and there isn’t even any violation of CP symmetry,” said Phillip Litchfield, a physicist at Imperial College London. For the results to become truly significant, he added, the experiment needs to get down to about a 3-in-1000 chance, which researchers hope to reach by the mid-2020s.

    But the improvement on last year’s data, while modest, is “in a very interesting direction,” said Tom Browder, a physicist at the University of Hawaii. The hints of new physics haven’t yet gone away, as we might expect them to do if the initial results were due to chance. Results are also trickling in from another experiment, the 810-kilometer-long NOvA at the Fermi National Accelerator Laboratory outside Chicago.

    FNAL/NOvA experiment map

    FNAL NOvA Near Detector

    Last year it released its first set of neutrino data, with antineutrino results expected next summer. And although these first CP-violation results will also not be statistically significant, if the NOvA and T2K experiments agree, “the consistency of all these early hints” will be intriguing, said Mark Messier, a physicist at Indiana University.

    A planned upgrade of the Super-K detector might give the researchers a boost. Next summer, the detector will be drained for the first time in over a decade, then filled again with ultrapure water. This water will be mixed with gadolinium sulfate, a type of salt that should make the instrument much more sensitive to electron antineutrinos. “The gadolinium doping will make the electron antineutrino interaction easily detectable,” said Browder. That is, the salt will help the researchers to separate antineutrino interactions from neutrino interactions, improving their ability to search for CP violations.

    “Right now, we are probably willing to bet that CP is violated in the neutrino sector, but we won’t be shocked if it is not,” said André de Gouvêa, a physicist at Northwestern University. Wascko is a bit more optimistic. “The 2017 T2K result has not yet clarified our understanding of CP violation, but it shows great promise for our ability to measure it precisely in the future,” he said. “And perhaps the future is not as far away as we might have thought last year.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:30 pm on December 31, 2017 Permalink | Reply
    Tags: “The universe is inevitable” he declared. “The universe is impossible.”Nima Arkani-Hamed, , Complications in Physics - "Is Nature Unnatural?", , , Nima Arkani-Hamed of the Institute for Advanced Study, , , Quanta Magazine, The universe might not make sense   

    From Quanta Magazine: Complications in Physics – “Is Nature Unnatural?” 2013 

    Quanta Magazine
    Quanta Magazine

    May 24, 2013 [Just brought forward in social media.]
    Natalie Wolchover

    Decades of confounding experiments have physicists considering a startling possibility: The universe might not make sense.

    Is the universe natural or do we live in an atypical bubble in a multiverse? Recent results at the Large Hadron Collider have forced many physicists to confront the latter possibility. Illustration by Giovanni Villadoro.

    On an overcast afternoon in late April, physics professors and students crowded into a wood-paneled lecture hall at Columbia University for a talk by Nima Arkani-Hamed, a high-profile theorist visiting from the Institute for Advanced Study in nearby Princeton, N.J.

    Nima Arkani-Hamed, Institute for Advanced Study Princeton, N.J., USA
    With his dark, shoulder-length hair shoved behind his ears, Arkani-Hamed laid out the dual, seemingly contradictory implications of recent experimental results at the Large Hadron Collider in Europe.

    “The universe is impossible,” said Nima Arkani-Hamed, 41, of the Institute for Advanced Study, during a recent talk at Columbia University. Natalie Wolchover/Quanta Magazine

    LHC

    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles

    “The universe is inevitable,” he declared. “The universe is impossible.”

    The spectacular discovery of the Higgs boson in July 2012 confirmed a nearly 50-year-old theory of how elementary particles acquire mass, which enables them to form big structures such as galaxies and humans.

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    “The fact that it was seen more or less where we expected to find it is a triumph for experiment, it’s a triumph for theory, and it’s an indication that physics works,” Arkani-Hamed told the crowd.

    However, in order for the Higgs boson to make sense with the mass (or equivalent energy) it was determined to have, the LHC needed to find a swarm of other particles, too. None turned up.

    With the discovery of only one particle, the LHC experiments deepened a profound problem in physics that had been brewing for decades. Modern equations seem to capture reality with breathtaking accuracy, correctly predicting the values of many constants of nature and the existence of particles like the Higgs. Yet a few constants — including the mass of the Higgs boson — are exponentially different from what these trusted laws indicate they should be, in ways that would rule out any chance of life, unless the universe is shaped by inexplicable fine-tunings and cancellations.

    In peril is the notion of “naturalness,” Albert Einstein’s dream that the laws of nature are sublimely beautiful, inevitable and self-contained. Without it, physicists face the harsh prospect that those laws are just an arbitrary, messy outcome of random fluctuations in the fabric of space and time.

    The LHC will resume smashing protons in 2015 in a last-ditch search for answers. But in papers, talks and interviews, Arkani-Hamed and many other top physicists are already confronting the possibility that the universe might be unnatural. (There is wide disagreement, however, about what it would take to prove it.)

    “Ten or 20 years ago, I was a firm believer in naturalness,” said Nathan Seiberg, a theoretical physicist at the Institute, where Einstein taught from 1933 until his death in 1955. “Now I’m not so sure. My hope is there’s still something we haven’t thought about, some other mechanism that would explain all these things. But I don’t see what it could be.”

    Physicists reason that if the universe is unnatural, with extremely unlikely fundamental constants that make life possible, then an enormous number of universes must exist for our improbable case to have been realized. Otherwise, why should we be so lucky? Unnaturalness would give a huge lift to the multiverse hypothesis, which holds that our universe is one bubble in an infinite and inaccessible foam. According to a popular but polarizing framework called string theory, the number of possible types of universes that can bubble up in a multiverse is around 10^500. In a few of them, chance cancellations would produce the strange constants we observe.

    In such a picture, not everything about this universe is inevitable, rendering it unpredictable. Edward Witten, a string theorist at the Institute, said by email, “I would be happy personally if the multiverse interpretation is not correct, in part because it potentially limits our ability to understand the laws of physics. But none of us were consulted when the universe was created.”

    “Some people hate it,” said Raphael Bousso, a physicist at the University of California at Berkeley who helped develop the multiverse scenario. “But I just don’t think we can analyze it on an emotional basis. It’s a logical possibility that is increasingly favored in the absence of naturalness at the LHC.”

    What the LHC does or doesn’t discover in its next run is likely to lend support to one of two possibilities: Either we live in an overcomplicated but stand-alone universe, or we inhabit an atypical bubble in a multiverse.

    Multiverse. Image credit: public domain, retrieved from https://pixabay.com/

    “We will be a lot smarter five or 10 years from today because of the LHC,” Seiberg said. “So that’s exciting. This is within reach.”

    Cosmic Coincidence

    Einstein once wrote that for a scientist, “religious feeling takes the form of a rapturous amazement at the harmony of natural law” and that “this feeling is the guiding principle of his life and work.” Indeed, throughout the 20th century, the deep-seated belief that the laws of nature are harmonious — a belief in “naturalness” — has proven a reliable guide for discovering truth.

    “Naturalness has a track record,” Arkani-Hamed said in an interview. In practice, it is the requirement that the physical constants (particle masses and other fixed properties of the universe) emerge directly from the laws of physics, rather than resulting from improbable cancellations. Time and again, whenever a constant appeared fine-tuned, as if its initial value had been magically dialed to offset other effects, physicists suspected they were missing something. They would seek and inevitably find some particle or feature that materially dialed the constant, obviating a fine-tuned cancellation.

    This time, the self-healing powers of the universe seem to be failing. The Higgs boson has a mass of 126 giga-electron-volts, but interactions with the other known particles should add about 10,000,000,000,000,000,000 giga-electron-volts to its mass. This implies that the Higgs’ “bare mass,” or starting value before other particles affect it, just so happens to be the negative of that astronomical number, resulting in a near-perfect cancellation that leaves just a hint of Higgs behind: 126 giga-electron-volts.
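
    Following the article’s simplified framing, the implied tuning is easy to quantify (a back-of-envelope illustration; the careful statement is made in terms of the mass squared):

        # Back-of-envelope tuning estimate using the two numbers quoted in the text. Illustration only.
        observed_mass = 126.0    # GeV, the measured Higgs mass
        quantum_shift = 1e19     # GeV, the scale of the quantum contributions quoted above
        bare_mass = observed_mass - quantum_shift    # the "bare" value that must cancel the shift
        print(f"bare mass ~ {bare_mass:.3e} GeV")
        print(f"cancellation tuned to ~1 part in {quantum_shift / observed_mass:.0e}")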

    Physicists have gone through three generations of particle accelerators searching for new particles, posited by a theory called supersymmetry, that would drive the Higgs mass down exactly as much as the known particles drive it up. But so far they’ve come up empty-handed.

    The upgraded LHC will explore ever-higher energy scales in its next run, but even if new particles are found, they will almost definitely be too heavy to influence the Higgs mass in quite the right way. The Higgs will still seem at least 10 or 100 times too light. Physicists disagree about whether this is acceptable in a natural, stand-alone universe. “Fine-tuned a little — maybe it just happens,” said Lisa Randall, a professor at Harvard University. But in Arkani-Hamed’s opinion, being “a little bit tuned is like being a little bit pregnant. It just doesn’t exist.”

    If no new particles appear and the Higgs remains astronomically fine-tuned, then the multiverse hypothesis will stride into the limelight. “It doesn’t mean it’s right,” said Bousso, a longtime supporter of the multiverse picture, “but it does mean it’s the only game in town.”

    A few physicists — notably Joe Lykken of Fermi National Accelerator Laboratory in Batavia, Ill., and Alessandro Strumia of the University of Pisa in Italy — see a third option. They say that physicists might be misgauging the effects of other particles on the Higgs mass and that when calculated differently, its mass appears natural. This “modified naturalness” falters when additional particles, such as the unknown constituents of dark matter, are included in calculations — but the same unorthodox path could yield other ideas. “I don’t want to advocate, but just to discuss the consequences,” Strumia said during a talk earlier this month at Brookhaven National Laboratory.


    Brookhaven Forum 2013: David Curtin, left, a postdoctoral researcher at Stony Brook University, and Alessandro Strumia, a physicist at the National Institute for Nuclear Physics in Italy, discussing Strumia’s “modified naturalness” idea, which questions longstanding assumptions about how to calculate the natural value of the Higgs boson mass. Thomas Lin/Quanta Magazine.

    However, modified naturalness cannot fix an even bigger naturalness problem that exists in physics: The fact that the cosmos wasn’t instantly annihilated by its own energy the moment after the Big Bang.

    Dark Dilemma

    The energy built into the vacuum of space (known as vacuum energy, dark energy or the cosmological constant) is a baffling trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion times smaller than what is calculated to be its natural, albeit self-destructive, value. No theory exists about what could naturally fix this gargantuan disparity. But it’s clear that the cosmological constant has to be enormously fine-tuned to prevent the universe from rapidly exploding or collapsing to a point. It has to be fine-tuned in order for life to have a chance.
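
    Written out, that string of ten trillions is a factor of

        \left(10^{12}\right)^{10} \;=\; 10^{120},

    the mismatch usually quoted between the naive quantum-field-theory estimate of the vacuum energy and its observed value (a standard figure, not a new estimate).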

    To explain this absurd bit of luck, the multiverse idea has been growing mainstream in cosmology circles over the past few decades. It got a credibility boost in 1987 when the Nobel Prize-winning physicist Steven Weinberg, now a professor at the University of Texas at Austin, calculated that the cosmological constant of our universe is expected in the multiverse scenario [Physical Review Letters].

    Steven Weinberg, University of Texas at Austin

    Of the possible universes capable of supporting life — the only ones that can be observed and contemplated in the first place — ours is among the least fine-tuned. “If the cosmological constant were much larger than the observed value, say by a factor of 10, then we would have no galaxies,” explained Alexander Vilenkin, a cosmologist and multiverse theorist at Tufts University. “It’s hard to imagine how life might exist in such a universe.”

    Most particle physicists hoped that a more testable explanation for the cosmological constant problem would be found. None has. Now, physicists say, the unnaturalness of the Higgs makes the unnaturalness of the cosmological constant more significant. Arkani-Hamed thinks the issues may even be related. “We don’t have an understanding of a basic extraordinary fact about our universe,” he said. “It is big and has big things in it.”

    The multiverse turned into slightly more than just a hand-waving argument in 2000, when Bousso and Joe Polchinski, a professor of theoretical physics at the University of California at Santa Barbara, found a mechanism that could give rise to a panorama of parallel universes. String theory, a hypothetical “theory of everything” that regards particles as invisibly small vibrating lines, posits that space-time is 10-dimensional. At the human scale, we experience just three dimensions of space and one of time, but string theorists argue that six extra dimensions are tightly knotted at every point in the fabric of our 4-D reality. Bousso and Polchinski calculated that there are around 10^500 different ways for those six dimensions to be knotted (all tying up varying amounts of energy), making an inconceivably vast and diverse array of universes possible. In other words, naturalness is not required. There isn’t a single, inevitable, perfect universe.

    “It was definitely an aha-moment for me,” Bousso said. But the paper sparked outrage.

    “Particle physicists, especially string theorists, had this dream of predicting uniquely all the constants of nature,” Bousso explained. “Everything would just come out of math and pi and twos. And we came in and said, ‘Look, it’s not going to happen, and there’s a reason it’s not going to happen. We’re thinking about this in totally the wrong way.’ ”

    Life in a Multiverse

    The Big Bang, in the Bousso-Polchinski multiverse scenario, is a fluctuation. A compact, six-dimensional knot that makes up one stitch in the fabric of reality suddenly shape-shifts, releasing energy that forms a bubble of space and time. The properties of this new universe are determined by chance: the amount of energy unleashed during the fluctuation. The vast majority of universes that burst into being in this way are thick with vacuum energy; they either expand or collapse so quickly that life cannot arise in them. But some atypical universes, in which an improbable cancellation yields a tiny value for the cosmological constant, are much like ours.

    In a paper posted last month to the physics preprint website arXiv.org, Bousso and a Berkeley colleague, Lawrence Hall, argue that the Higgs mass makes sense in the multiverse scenario, too. They found that bubble universes that contain enough visible matter (compared to dark matter) to support life most often have supersymmetric particles beyond the energy range of the LHC, and a fine-tuned Higgs boson. Similarly, other physicists showed in 1997 that if the Higgs boson were five times heavier than it is, this would suppress the formation of atoms other than hydrogen, resulting, by yet another means, in a lifeless universe.

    Despite these seemingly successful explanations, many physicists worry that there is little to be gained by adopting the multiverse worldview. Parallel universes cannot be tested for; worse, an unnatural universe resists understanding. “Without naturalness, we will lose the motivation to look for new physics,” said Kfir Blum, a physicist at the Institute for Advanced Study. “We know it’s there, but there is no robust argument for why we should find it.” That sentiment is echoed again and again: “I would prefer the universe to be natural,” Randall said.

    But theories can grow on physicists. After spending more than a decade acclimating himself to the multiverse, Arkani-Hamed now finds it plausible — and a viable route to understanding the ways of our world. “The wonderful point, as far as I’m concerned, is basically any result at the LHC will steer us with different degrees of force down one of these divergent paths,” he said. “This kind of choice is a very, very big deal.”

    Naturalness could pull through. Or it could be a false hope in a strange but comfortable pocket of the multiverse.

    As Arkani-Hamed told the audience at Columbia, “stay tuned.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:27 pm on December 19, 2017 Permalink | Reply
    Tags: 20 “loading” molecules called aminoacyl-tRNA synthetases, , Charles Carter, , Kurt Gödel's Theorem and the Chemistry of Life, Peter Wills, protein-like molecules rather than RNA may have been the planet’s first self-replicators, Quanta Magazine,   

    From Quanta: “The End of the RNA World Is Near, Biochemists Argue” 

    Quanta Magazine
    Quanta Magazine

    December 19, 2017
    Jordana Cepelewicz

    A popular theory holds that life emerged from a rich chemical soup in which RNA was the original self-replicator. But a combination of peptides and RNA might have been more effective.
    Novikov Aleksey

    Four billion years ago, the first molecular precursors to life emerged, swirling about in Earth’s primordial soup of chemicals. Although the identity of these molecules remains a subject of fractious debate, scientists agree that the molecules would have had to perform two major functions: storing information and catalyzing chemical reactions. The modern cell assigns these responsibilities to its DNA and its proteins, respectively — but according to the narrative that dominates origin-of-life research and biology-textbook descriptions today, RNA was the first to play that role, paving the way for DNA and proteins to take over later.

    This hypothesis, proposed in the 1960s and dubbed the “RNA world” two decades later, is usually viewed as the most likely explanation for how life got its start. Alternative “worlds” abound, but they’re often seen as fallback theories, flights of fancy or whimsical thought experiments.

    That’s mainly because, theorizing aside, the RNA world is fortified by much more experimental evidence than any of its competitors have accumulated. Last month, Quanta Magazine reported on an alternative theory suggesting that protein-like molecules, rather than RNA, may have been the planet’s first self-replicators. But its findings were purely computational; the researchers have only just begun experiments to seek support for their claims.

    Now, a pair of researchers has put forth another theory — this time involving the coevolution of RNA and peptides — that they hope will shake the RNA world’s hold.

    Recent papers published in Biosystems and Molecular Biology and Evolution delineated why the RNA world hypothesis does not provide a sufficient foundation for the evolutionary events that followed. Instead, said Charles Carter, a structural biologist at the University of North Carolina, Chapel Hill, who co-authored the papers, the model represents “an expedient proposal.” “There’s no way that a single polymer could carry out all of the necessary processes we now characterize as part of life,” he added.

    And that single polymer certainly couldn’t be RNA, according to his team’s studies. The main objection to the molecule concerns catalysis: Some research has shown that for life to take hold, the mystery polymer would have had to coordinate the rates of chemical reactions that could differ in speed by as much as 20 orders of magnitude. Even if RNA could somehow do this in the prebiotic world, its capabilities as a catalyst would have been adapted to the searing temperatures — around 100 degrees Celsius — that abounded on early Earth. Once the planet started to cool, Carter claims, RNA wouldn’t have been able to evolve and keep up the work of synchronization. Before long, the symphony of chemical reactions would have fallen into disarray.

    Perhaps most importantly, an RNA-only world could not explain the emergence of the genetic code, which nearly all living organisms today use to translate genetic information into proteins. The code takes each of the 64 possible three-nucleotide RNA sequences and maps them to one of the 20 amino acids used to build proteins. Finding a set of rules robust enough to do that would take far too long with RNA alone, said Peter Wills, Carter’s co-author at the University of Auckland in New Zealand — if the RNA world could even reach that point, which he deemed highly unlikely. In Wills’ view, RNA might have been able to catalyze its own formation, making it “chemically reflexive,” but it lacked what he called “computational reflexivity.”
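
    The combinatorics behind that mapping is easy to check (standard biology, not a result of the papers discussed here): four RNA bases taken three at a time give 64 codons, which the modern code compresses onto 20 amino acids plus stop signals.

        from itertools import product

        # Count the possible three-nucleotide RNA codons.
        bases = "ACGU"
        codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
        print(len(codons))   # 64 codons, mapped by the genetic code onto 20 amino acids plus stop signals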

    “A system that uses information the way organisms use genetic information — to synthesize their own components — must contain reflexive information,” Wills said. He defined reflexive information as information that, “when decoded by the system, makes the components that perform exactly that particular decoding.” The RNA of the RNA world hypothesis, he added, is just chemistry because it has no means of controlling its chemistry. “The RNA world doesn’t tell you anything about genetics,” he said.

    Nature had to find a different route, a better shortcut to the genetic code. Carter and Wills think they’ve uncovered that shortcut. It depends on a tight feedback loop — one that would not have developed from RNA alone but instead from a peptide-RNA complex.

    Bringing Peptides Into the Mix

    Carter found hints of that complex in the mid-1970s, when he learned in graduate school that certain structures seen in most proteins are “right-handed.” That is, the atoms in the structures could have two equivalent mirror-image arrangements, but the structures all use just one. Most of the nucleic acids and sugars that make up DNA and RNA are right-handed, too. Carter began to think of RNA and polypeptides as complementary structures, and he modeled a complex in which “they were made for each other, like a hand in a glove.”

    This implied an elementary kind of coding, a basis for the exchange of information between the RNA and the polypeptide. He was on his way to sketching what that might have looked like, working backward from the far more sophisticated modern genetic code. When the RNA world hypothesis, a term coined in 1986, rose to prominence, Carter admitted, “I was pretty ticked off.” He felt that his peptide-RNA world, proposed a decade earlier, had been totally ignored.

    Since then, he, Wills and others have collaborated on a theory that circles back to that research. Their main goal was to figure out the very simple genetic code that preceded today’s more specific and complicated one. And so they turned not just to computation but also to genetics.

    At the center of their theory are 20 “loading” molecules called aminoacyl-tRNA synthetases. These catalytic enzymes allow RNA to bond with specific amino acids in keeping with the rules of the genetic code. “In a sense, the genetic code is ‘written’ in the specificity of the active sites” of those enzymes, said Jannie Hofmeyr, a biochemist at Stellenbosch University in South Africa, who was not involved in the study.

    Lucy Reading-Ikkanda/Quanta Magazine

    Previous research showed that the 20 enzymes could be divided evenly into two groups of 10 based on their structure and sequence. These two enzyme classes, it turned out, have certain sequences that code for mutually exclusive amino acids — meaning that the enzymes had to have arisen from complementary strands of the same ancient gene. Carter, Wills and their colleagues found that in this scenario, RNA coded for peptides using a set of just two rules (or, in other words, using just two types of amino acids). The resulting peptide products ended up enforcing the very rules that governed the translation process, thus forming the tight feedback loop the researchers knew would be the linchpin of the theory.

    Gödel’s Theorem and the Chemistry of Life

    Carter sees strong parallels between this kind of loop and the mathematical one described by the philosopher and mathematician Kurt Gödel, whose “incompleteness” theorem states that in any logical system that can represent itself, statements will inevitably arise that cannot be shown to be true or false within that system. “I believe that the analogy to Gödel’s theorem furnishes a quite strong argument for inevitability,” Carter said.

    In their recent papers, Carter and Wills show that their peptide-RNA world solves gaps in origin-of-life history that RNA alone can’t explain. “They provide solid theoretical and experimental evidence that peptides and RNA were jointly involved in the origin of the genetic code right from the start,” Hofmeyr said, “and that metabolism, construction through transcription and translation, and replication must have coevolved.”

    Of course, the Carter-Wills model begins with the genetic code, the existence of which presupposes complex chemical reactions involving molecules like transfer RNA and the loading enzymes. The researchers claim that the events leading up to their proposed scenario involved RNA and peptides interacting (in the complex that Carter described in the 1970s, for example). Yet that suggestion still leaves many open questions about how that chemistry began and what it looked like.

    To answer these questions, theories abound that move far beyond the RNA world. In fact, some scientists take an approach precisely opposite to that of Carter and Wills: They think instead that the earliest stages of life did not need to begin with anything resembling the kind of chemistry seen today. Doron Lancet, a genomics researcher at the Weizmann Institute of Science in Israel, posits an alternative theory that rests on assemblies of lipids that catalyze the entrance and exit of various molecules. Information is carried not by genetic sequences, but rather by the lipid composition of such assemblies.

    Just like the model proposed by Carter and Wills, Lancet’s ideas involve not one type of molecule but a huge variety of them. “More and more bits of evidence are accumulating,” Lancet said, “that can make an alternative hypothesis be right.” The jury is still out on what actually transpired at life’s origins, but the tide seems to be turning away from a story dedicated solely to RNA.

    “We should put only a few of our eggs in the RNA world basket,” Hofmeyr said.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
    • stewarthoughblog 2:02 am on December 20, 2017 Permalink | Reply

      It is about time that the RNA nonsense comes to an end. There is admittedly considerable scientific knowledge that has been and can still be gained by studying RNA macromolecules, but the desperation of naturalists in proposing it as the source of first life is intellectually insulting. RNA’s complexity may be assemblable in intelligently designed, highly managed lab environments, but it is ridiculous to consider it possible in any geochemically relevant primordial environment. RNA is easily mutated, highly reactive, and only an intermediate macromolecule restricted by the protein catch-22.

      Frustratingly, the desperation continues with propositions of lipid collective assembly, but at least science appears to be coming to its senses about RNA.

  • richardmitnick 8:40 pm on December 17, 2017 Permalink | Reply
    Tags: Atacama Desert of Chile so important for Optical Astronomy, Carnegie Institution for Science Las Campanas Observatory, Earliest Black Hole Gives Rare Glimpse of Ancient Universe, Quanta Magazine

    From Quanta: “Earliest Black Hole Gives Rare Glimpse of Ancient Universe” 

    Quanta Magazine
    Quanta Magazine

    December 6, 2017 [Today in social media]
    Joshua Sokol

    Olena Shmahalo/Quanta Magazine

    The two Carnegie Magellan telescopes: Baade (left) and Clay (right).

    Astronomers have at least two gnawing questions about the first billion years of the universe, an era steeped in literal fog and figurative mystery. They want to know what burned the fog away: stars, supermassive black holes, or both in tandem? And how did those behemoth black holes grow so big in so little time?

    Now the discovery of a supermassive black hole smack in the middle of this period is helping astronomers resolve both questions. “It’s a dream come true that all of these data are coming along,” said Avi Loeb, the chair of the astronomy department at Harvard University.

    The black hole, announced today in the journal Nature, is the most distant ever found. It dates back to 690 million years after the Big Bang. Analysis of this object reveals that reionization, the process that defogged the universe like a hair dryer on a steamy bathroom mirror, was about half complete at that time.

    First Stars and Reionization Era, Caltech

    The researchers also show that the black hole already weighed a hard-to-explain 780 million times the mass of the sun.

    A team led by Eduardo Bañados, an astronomer at the Carnegie Institution for Science in Pasadena, found the new black hole by searching through old data for objects with the right color to be ultradistant quasars — the visible signatures of supermassive black holes swallowing gas. The team went through a preliminary list of candidates, observing each in turn with a powerful telescope at Las Campanas Observatory in Chile.

    Carnegie Institution for Science Las Campanas Observatory telescopes in the southern Atacama Desert of Chile, approximately 100 kilometers (62 mi) northeast of the city of La Serena, near the desert’s southern end and at an elevation of more than 2,500 m (8,200 ft).

    On March 9, Bañados observed a faint dot in the southern sky for just 10 minutes. A glance at the raw, unprocessed data confirmed it was a quasar — not a nearer object masquerading as one — and that it was perhaps the oldest ever found. “That night I couldn’t even sleep,” he said.

    Eduardo Bañados at the Las Campanas Observatory in Chile, where the new quasar was discovered, with the Baade and Clay telescopes in the background. Courtesy of Eduardo Bañados.

    The new black hole’s mass, calculated after more observations, adds to an existing problem. Black holes grow when cosmic matter falls into them. But this process generates light and heat. At some point, the radiation released by material as it falls into the black hole carries out so much momentum that it blocks new gas from falling in and disrupts the flow. This tug-of-war creates an effective speed limit for black hole growth called the Eddington rate. If this black hole began as a star-size object and grew as fast as theoretically possible, it couldn’t have reached its estimated mass in time.
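    The reasoning behind that speed limit can be sketched with a standard back-of-the-envelope argument (a textbook estimate, not a calculation from the paper itself). A black hole accreting at the Eddington rate gains mass exponentially, with an e-folding (“Salpeter”) time of roughly 50 million years for the commonly assumed radiative efficiency of about 10 percent:

    $$
    L_{\rm Edd} = \frac{4\pi G M m_p c}{\sigma_T}, \qquad
    \dot M \simeq \frac{1-\epsilon}{\epsilon}\,\frac{L_{\rm Edd}}{c^{2}}
    \;\Rightarrow\;
    M(t) = M_{\rm seed}\,e^{\,t/t_{\rm Sal}}, \qquad
    t_{\rm Sal} = \frac{\epsilon}{1-\epsilon}\,\frac{\sigma_T c}{4\pi G m_p} \approx 50\ {\rm Myr}\ \ (\epsilon \approx 0.1).
    $$

    Even growing nonstop at that maximum rate from the Big Bang onward, a seed has only about 14 e-foldings available in the first 690 million years, so a 100-solar-mass starting object would reach only about 100 million solar masses, well short of the measured 780 million.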

    Other quasars share this kind of precocious heaviness, too. The second-farthest one known, reported on in 2011, tipped the scales at an estimated 2 billion solar masses after 770 million years of cosmic time.

    These objects are too young to be so massive. “They’re rare, but they’re very much there, and we need to figure out how they form,” said Priyamvada Natarajan, an astrophysicist at Yale University who was not part of the research team. Theorists have spent years learning how to bulk up a black hole in computer models, she said. Recent work suggests that these black holes could have gone through episodic growth spurts during which they devoured gas well over the Eddington rate.

    Bañados and colleagues explored another possibility: If you start at the new black hole’s current mass and rewind the tape, sucking away matter at the Eddington rate until you approach the Big Bang, you see it must have initially formed as an object heavier than 1,000 times the mass of the sun. In this approach, collapsing clouds in the early universe gave birth to overgrown baby black holes that weighed thousands or tens of thousands of solar masses. Yet this scenario requires exceptional conditions that would have allowed gas clouds to condense all together into a single object instead of splintering into many stars, as is typically the case.
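    As a rough worked example (with illustrative assumptions, not the paper’s detailed modeling): if the seed could not begin growing until the first stars appeared, say about 100 million years after the Big Bang, then at most roughly 590 million years of Eddington-limited growth were available, and rewinding the observed mass through that many e-foldings gives

    $$
    M_{\rm seed} \;\gtrsim\; \frac{7.8\times10^{8}\,M_\odot}{\exp\!\left[(690-100)\ {\rm Myr}/t_{\rm Sal}\right]}
    \;\sim\; \text{a few thousand}\ M_\odot
    \qquad (t_{\rm Sal}\approx 45\ \text{to}\ 50\ {\rm Myr}),
    $$

    comfortably above the 1,000-solar-mass threshold quoted above. Changing the assumed start time or efficiency shifts the number, but not down to anything resembling the mass of an ordinary star.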

    Cosmic Dark Ages

    Cosmic Dark Ages. ESO.

    Earlier still, before any stars or black holes existed, the chaotic scramble of naked protons and electrons came together to make hydrogen atoms. These neutral atoms then absorbed the bright ultraviolet light coming from the first stars. After hundreds of millions of years, young stars or quasars emitted enough light to strip the electrons back off these atoms, dissipating the cosmic fog like mist at dawn.

    Lucy Reading-Ikkanda/Quanta Magazine

    Astronomers have known that reionization was largely complete by around a billion years after the Big Bang.

    Lambda-Cold Dark Matter, accelerated expansion of the universe: Big Bang-inflation timeline of the universe. Credit: Alex Mittelmann, Coldcreation, 2010.

    At that time, only traces of neutral hydrogen remained. But the gas around the newly discovered quasar is about half neutral, half ionized, which indicates that, at least in this part of the universe, reionization was only half finished. “This is super interesting, to really map the epoch of reionization,” said Volker Bromm, an astrophysicist at the University of Texas.

    When the light sources that powered reionization first switched on, they must have carved out the opaque cosmos like Swiss cheese.

    Inflationary Universe. NASA/WMAP

    But what these sources were, when it happened, and how patchy or homogeneous the process was are all debated. The new quasar shows that reionization took place relatively late. That scenario squares with what the known population of early galaxies and their stars could have done, without requiring astronomers to hunt for even earlier sources to accomplish it quicker, said study coauthor Bram Venemans of the Max Planck Institute for Astronomy in Heidelberg.

    More data points may be on the way. For radio astronomers, who are gearing up to search for emissions from the neutral hydrogen itself, this discovery shows that they are looking in the right time period. “The good news is that there will be neutral hydrogen for them to see,” said Loeb. “We were not sure about that.”

    The team also hopes to identify more quasars that date back to the same time period but in different parts of the early universe. Bañados believes that there are between 20 and 100 such very distant, very bright objects across the entire sky. The current discovery comes from his team’s searches in the southern sky; next year, they plan to begin searching in the northern sky as well.

    “Let’s hope that pans out,” said Bromm. For years, he said, the baton has been handed off between different classes of objects that seem to give the best glimpses at early cosmic time, with recent attention often going to faraway galaxies or fleeting gamma-ray bursts. “People had almost given up on quasars,” he said.

    See the full article here.


     
  • richardmitnick 5:52 pm on December 17, 2017 Permalink | Reply
    Tags: Dheeraj Roy, Existing theories about memory formation and storage are wrong or at least incomplete, Light-Triggered Genes Reveal the Hidden Workings of Memory, Nobel laureate Susumu Tonegawa, Quanta Magazine, The brain creates multiple copies of memories at once — even though it hides the long-term copy from our awareness at first, Tracking Memories Cell by Cell

    From Quanta: “Light-Triggered Genes Reveal the Hidden Workings of Memory” 

    Quanta Magazine
    Quanta Magazine

    December 14, 2017
    Elizabeth Svoboda

    Eero Lampinen for Quanta Magazine

    Neuroscientists gained several surprising insights into memory this year, including the discovery that the brain creates multiple copies of memories at once — even though it hides the long-term copy from our awareness at first.

    Nobel laureate Susumu Tonegawa’s lab is overturning old assumptions about how memories form, how recall works and whether lost memories might be restored from “silent engrams.”

    Susumu Tonegawa’s presence announces itself as soon as you walk through the door of the Massachusetts Institute of Technology’s Picower Institute for Learning and Memory. A three-foot-high framed photograph of Tonegawa stands front and center in the high-ceilinged lobby, flanked by a screen playing a looping rainbow-hued clip of recent research highlights.

    The man in the portrait, however, is anything but a spotlight-seeker. Most days, he’s ensconced in the impenetrable warren of labs and offices that make up Picower’s fifth floor. His hair, thick and dark in the photo, is now a subdued silver, and today, a loosely draped blue cardigan replaces the impeccable suit jacket. His accommodating, soft-spoken manner belies his reputation as a smasher of established dogma, or at least as a poker of deep and abiding holes.

    Along with his MIT neuroscientist colleague Dheeraj Roy and others, Tonegawa is upending basic assumptions in brain science. Early this year, he reported that memory storage and retrieval happen on two different brain circuits, not on the same one as was long thought. His team also showed that memories of an event form at the same time in the brain’s short-term and long-term storage areas, rather than moving to long-term storage later on. Most recently (and tantalizingly), his lab demonstrated what could someday be a way to bring currently irretrievable memories back into conscious awareness.

    Tonegawa, now MIT’s Picower Professor of Biology and Neuroscience, first carved out his maverick identity back in the 1980s. While at the Basel Institute for Immunology in Switzerland, he published a theory — first seen as heretical, then brilliant — that immune cells reshuffle their DNA to create millions of different antibodies from a small number of genes. His discovery won him the Nobel Prize in 1987, which explains the oversized lobby portrait. Most researchers would have stayed in the field and basked in the attention, but Tonegawa left immunology behind entirely. He spent the next couple of decades reinventing himself as a master of memory’s workings at the cellular level.

    Despite his professional stature, Tonegawa is no TED-circuit regular or fount of startup concepts. Instead of selling his ideas or his persona, he prefers to let his data speak for themselves. And they do, perhaps more loudly than some of his colleagues would like. “The way he continues to disrupt and innovate is really striking,” said Sheena Josselyn, a neuroscientist at Toronto’s Hospital for Sick Children who also studies memory formation. “He tackles the tough questions. He doesn’t do something that is easy and expected.”

    Tracking Memories Cell by Cell

    Upon meeting Tonegawa, I sensed that he considers his fame a slightly cumbersome side effect of his vocation. The day I visited his office, he was immersed in research banter with a colleague, breaking away only reluctantly to revisit his own journey. The whole immunology sideline, he told me, was something of an accident — his real love has always been molecular biology, and immunology was a fascinating expression of that. He ended up at Basel mostly because his U.S. work permit had run out. “Immunology was a transient interest for me,” he said. “I wanted to do something new.”

    After making Nobel Prize-winning contributions to immunology, Susumu Tonegawa, now a professor of biology and neuroscience at the Massachusetts Institute of Technology, focused his passion for molecular biology on the brain. Tonegawa Lab.

    That “something” turned out to be neuroscience, which Francis Crick and other well-known biologists were touting as the wave of the future. In the late 1980s and early ’90s, researchers knew relatively little about how the cellular and molecular workings of the brain underpin its capabilities, and nothing excited Tonegawa more than mapping unexplored territory.

    Tonegawa’s venture into brain science wasn’t a complete turnabout, though, because he brought some of his investigative techniques with him. He had been using transgenic (genetically modified) mice in his immunology studies, knocking out particular genes and observing the physical effects, and he used a similar approach to uncover the biological basis of learning and memory. In an early MIT study, he bred mice that did not produce a particular enzyme thought to be important in cementing long-term memories. Although the behavior of the mutant mice seemed mostly normal, further testing showed that they had deficiencies in spatial learning, confirming the enzyme’s key role in that process.

    With that high-profile result, Tonegawa was off and running. About 10 years ago, he was able to take his work to a new level of precision in part by employing a technique called optogenetics. Developed by the Stanford University bioengineer Karl Deisseroth and others, the technique involves modifying the genes of lab animals so that their cells express a light-sensitive protein called channelrhodopsin, derived from green algae. Researchers can then activate these cells by shining light on them through optical fibers. Tonegawa and his colleagues use optogenetics to generate neural activity on command in specified regions of the brain.

    This method has allowed Tonegawa to show that existing theories about memory formation and storage are wrong, or at least incomplete. This past summer, along with Roy and other colleagues, he reported that — contrary to neuroscience dogma — the neural circuit in the brain structure called the hippocampus that makes a particular memory is not the same circuit [Cell] that recalls the memory later. Instead, retrieving a memory requires what the scientists call a “detour circuit” in the hippocampus’s subiculum, located just off the main memory-formation circuit.

    To illustrate the discovery for me, Roy called up an image of a magnified brain slice in the lab. “What you’re looking at is the hippocampus section of a mouse,” he said. He gestured to a dense cloud of glowing green neurons in the upper right — the subiculum itself — and explained that his team had genetically engineered the mouse to produce channelrhodopsin only in the subiculum’s neurons. He and his team could then activate or deactivate these subiculum neurons with piped-in laser light, leaving the surrounding neurons unaffected.

    Studies have shown that the hippocampus (red) is essential for creating new memories. But short-term recall of those memories depends on a “detour circuit” involving a specialized area called the subiculum (green). Dheeraj Roy/Tonegawa Lab, MIT.

    Armed with this biological switch, the researchers turned the subiculum neurons on and off to see what would happen. To their surprise, they saw that mice trained to be afraid when inside a certain cage stopped showing that fear when the subiculum neurons were turned off. The mice were unable to dredge up the fearful memory, which meant that the subiculum was needed for recall. But if the researchers turned off the subiculum neurons only while teaching the fearful association, the mice later recalled the memory with ease. A separate part of the hippocampus must therefore have encoded the memory. Similarly, when the team turned the main hippocampal circuit on and off, they found that it was responsible for memory formation, but not for recall.

    To explain why the brain would form and recall memories using different circuits, Roy framed it in part as a matter of expediency. “We think these parallel circuits help us quickly update memories,” he said. If the same hippocampal circuit were used for both storage and retrieval, encoding a new memory would take hundreds of milliseconds. But if one circuit adds new information while the detour circuit simultaneously calls up similar memories, it’s possible to apply past knowledge to your current situation much more quickly. “Now you can update on the order of tens of milliseconds,” Roy said.

    That difference might prove crucial to creatures in danger, for whom a few hundred milliseconds could mean the difference between getting away from a predator scot-free and becoming its dinner. The parallel circuits may also help us integrate present information with older memories just as speedily: Memories of a new conversation with your friend Shannon, for instance, can be added seamlessly to your existing memories of Shannon.

    Reassessing How Memories Form

    In addition to revealing that different mechanisms control memory formation and recall, Tonegawa, Roy and their colleague Takashi Kitamura (who recently moved from MIT to the University of Texas Southwestern Medical Center) have shown that memory formation itself is unexpectedly complex. Their work concerned the brain changes involved in the transformation of short-term memories to long-term memories. (In mouse experiments, short-term memory refers to recollections of events from within the past few days — what is sometimes called recent memory to distinguish it from more transient neural impressions that flicker out after only minutes or hours. Long-term memory holds events that happened on the order of two weeks or more ago.)

    For decades in neuroscience, the most widely accepted model posited that short-term memories form rapidly in the hippocampus and are later transferred to the prefrontal cortex near the brain’s surface for long-term storage. But Tonegawa’s team recently reported in Science that new memories form at both locations at the same time.

    The road to that discovery started back in 2012, when Tonegawa’s lab came up with a way to highlight brain cells known as engram cells, which hold a unique memory. He knew that when mice take in new surroundings, certain genes activate in their brains. His team therefore linked the expression of these “experiential-learning” genes in the mice to a channelrhodopsin gene, so that the precise cells that activated during a learning event would glow. “You can demonstrate those are the cells really holding this memory,” Tonegawa said, “because if you reactivate only those neurons with laser light, the animal behaves as if recalling that memory.”

    In this magnified slice of brain tissue enhanced with an optogenetic protein, the green glow shows which engram cells in the hippocampus stored a short-term memory. Dheeraj Roy, Tonegawa Lab/MIT.

    In the new Science study, the team used this technique to create mice whose learning cells would respond to light. They herded each mouse into a special cage and delivered a mild electric shock to its foot, leading the mouse to form a fearful memory of the cage. A day later, they returned each mouse to the cage and illuminated its brain to activate the brain cells storing the memory.

    As expected, hippocampal cells involved in short-term memory responded to the laser light. But surprisingly, a handful of cells in the prefrontal cortex responded as well. Cortical cells had formed memories of the foot shock almost right away, well ahead of the anticipated schedule.

    Yet the researchers noticed that even though the cortical cells could be activated early on with laser light, they did not fire spontaneously when the mice returned to the cage where the foot shock happened. The researchers called these cortical cells “silent engrams” because they contained the memory but did not respond to a natural recall cue. Over the next couple of weeks, however, these cells seemingly matured and became integral for recalling the memory.

    “The dynamic is, the hippocampal engram is active [at first] and goes down, and the prefrontal-cortex engram is silent at the beginning and slowly becomes active,” Tonegawa said. This detailed understanding of how memories are laid down and stored could inform the development of drugs that aid formation of new memories.

    Lucy Reading-Ikkanda/Quanta Magazine

    Some in the neuroscience community, however, think it’s prudent to be cautious in interpreting the significance of findings like these. Last year, Tonegawa’s MIT colleagues Andrii Rudenko and Li-Huei Tsai emphasized that engram science is still so new that we don’t know exactly how engram cells might work together, nor which cells contain which parts of memories. “In these early days of functional memory engram investigation,” they wrote in BMC Biology, “we still do not have satisfactory answers to many important questions.”

    Tonegawa has asserted that brains contain silent engrams that could potentially be externally activated — an idea that strikes a few neuroscientists as overblown even as it excites others, according to Josselyn. “It really forces the scientific community to either update our thinking or try experiments to challenge that,” she said.

    Bringing Silent Memories to Life

    Despite the uncertainty that surrounds it, the silent-engram concept offers us the fascinating prospect of gaining access to hidden memories — a prospect that Roy, in particular, continues to explore. In October, he published a paper with Tonegawa [PNAS] that generated a flurry of excited emails from scientists and nonscientists alike. One of the paper’s blockbuster findings was that, at least in mice, it was possible to awaken silent engrams without using a laser light or optical fibers.

    Dheeraj Roy, a postdoctoral associate at MIT, has collaborated with Tonegawa on several recent studies that have overturned old ideas about how memory works. Vicky Roy.

    The question the team asked themselves, Roy said, was whether they could make hidden memories permanently active with a noninvasive treatment. A cellular protein called PAK1 stimulates the growth of dendritic spines, or protrusions, that allow communication between neurons, and Roy had a hunch that this protein — when transported into brain cells — might help bring silent engrams back into direct awareness. “Can we artificially put [in] more of one gene that would make more protrusions?” he asked, excitedly noting that this approach might be simpler than optogenetics.

    To test this possibility, the researchers first gave mild shocks to mice in a cage while also suppressing their ability to make the proteins that normally cement long-term memories. When these mice returned to the same cage later on, they showed no fear, indicating that they did not naturally recall the shock in response to a cue. Yet laser light could still switch on the mice’s fearful response, which meant the memory was still there in silent-engram form.

    When the team injected these mice with the PAK1 gene to make them overproduce the protein, the animals froze up spontaneously when entering the dreaded cage. They were recalling the memory of the cage all on their own: The silent engram was coming to life. When PAK1 is administered, “you just wait four days, [and] they recover it with natural cues,” Roy said. In the future, he added, a therapeutic injection of PAK1 molecules that enter the brain’s memory cells could awaken people’s silent memories as well.

    “So it would just be an injected protein?” I asked.

    “That’s right — one molecular transporter that has one protein. People already have ways to put proteins into brain cells. I don’t think we’re that far [away] anymore.”

    It’s amazing to think that all of our minds hold hundreds or thousands of silent memories that are just waiting for the right activation to re-emerge into conscious awareness. If Roy’s findings hold true in humans, the retrieval of hidden memories might someday be as easy to initiate as getting a flu shot. “What would happen if you did that to a normal person? What would come flooding back?” I asked. “What would that experience be like?”

    “Very sci-fi, even for me,” Roy said. “My family says, ‘Is this all real?’ I say, ‘Yeah, I’m not lying to you!’”

    A few minutes later, back in Tonegawa’s office, I posed more or less the same question to him. Reactivating silent engrams could allow people with memory issues — like Alzheimer’s sufferers, soldiers who have survived explosive blasts and concussed athletes in contact sports — to regain memories that have become inaccessible. (To be sure, these people would often need to get such treatments early, before their conditions progressed and too many brain cells died.) Roy and Tonegawa’s past research [PubMed] suggests that people with cognitive difficulties have many stored memories that they simply can’t recall. But what about the rest of us who just want to mine our memories, to excavate what’s buried deep within?

    Tonegawa paused to consider. “It could be these silent memories could come out,” he said. “If you artificially increase the spine density, inject enzymes which promote spine formation, then the silent engram can be converted to active engram.”

    When I pressed him further, though, he exuded caution. It was as if he was used to hearing people like me run away with the possibilities and wanted to tamp down my expectations. Even though his lab successfully reactivated mice’s silent engrams after a few days, that’s no guarantee that silent engrams last very long, he said. And once the cells that encode particular memories die off from old age or dementia, it might be game over, no matter what kind of proteins you inject. Tonegawa pointed to Roy, who was sitting across from him. “I won’t remember his name.”

    His patience seemed to be running out. The contrarian in him, I could tell, wanted to assert that he was a student of the essential nature of things, not a pursuer of drug patents or quick cures or even the ideal of perfect recall. “I know a joke,” he said cryptically. “Not injecting protein or genes, but I keep an external brain. I hold the information in that brain.” He pointed to Roy again — the person he counts on to remember things he can’t. “The only thing I have to do is have a relationship with that person,” he explained. It’s comforting, in a way, to know that the wizard of tracing and unlocking memories also believes that no brain is an island. “It’s better,” he said, “not to memorize everything.”

    See the full article here.


     