Tagged: Quanta Magazine

  • richardmitnick 8:22 pm on March 18, 2018 Permalink | Reply
    Tags: Albert Einstein's theory of general relativity, , Mathematics vs Physics, , , Quanta Magazine, Shake a Black Hole, , The black hole stability conjecture   

    From Quanta: “To Test Einstein’s Equations, Poke a Black Hole” 

    Quanta Magazine

    mathematical physics

    March 8, 2018
    Kevin Hartnett

    Fantastic animation. Olena Shmahalo/Quanta Magazine

    In November 1915, in a lecture before the Prussian Academy of Sciences, Albert Einstein described an idea that upended humanity’s view of the universe. Rather than accepting the geometry of space and time as fixed, Einstein explained that we actually inhabit a four-dimensional reality called space-time whose form fluctuates in response to matter and energy.

    Einstein elaborated this dramatic insight in several equations, referred to as his “field equations,” that form the core of his theory of general relativity. That theory has been vindicated by every experimental test thrown at it in the century since.

    Yet even as Einstein’s theory seems to describe the world we observe, the mathematics underpinning it remain largely mysterious. Mathematicians have been able to prove very little about the equations themselves. We know they work, but we can’t say exactly why. Even Einstein had to fall back on approximations, rather than exact solutions, to see the universe through the lens he’d created.

    Over the last year, however, mathematicians have brought the mathematics of general relativity into sharper focus. Two groups have come up with proofs related to an important problem in general relativity called the black hole stability conjecture. Their work proves that Einstein’s equations match a physical intuition for how space-time should behave: If you jolt it, it shakes like Jell-O, then settles down into a stable form like the one it began with.

    “If these solutions were unstable, that would imply they’re not physical. They’d be a mathematical ghost that exists mathematically and has no significance from a physical point of view,” said Sergiu Klainerman, a mathematician at Princeton University and co-author, with Jérémie Szeftel, of one of the two new results [https://arxiv.org/abs/1711.07597].

    To complete the proofs, the mathematicians had to resolve a central difficulty with Einstein’s equations. To describe how the shape of space-time evolves, you need a coordinate system — like lines of latitude and longitude — that tells you which points are where. And in space-time, as on Earth, it’s hard to find a coordinate system that works everywhere.

    Shake a Black Hole

    General relativity famously describes space-time as something like a rubber sheet. Absent any matter, the sheet is flat. But start dropping balls onto it — stars and planets — and the sheet deforms. The balls roll toward one another. And as the objects move around, the shape of the rubber sheet changes in response.

    Einstein’s field equations describe the evolution of the shape of space-time. You give the equations information about curvature and energy at each point, and the equations tell you the shape of space-time in the future. In this way, Einstein’s equations are like equations that model any physical phenomenon: This is where the ball is at time zero, this is where it is five seconds later.
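
    For reference, the “field equations” the article alludes to can be written quite compactly. In standard notation (not spelled out in the article), they read

        G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}, \qquad G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu},

    where g_{\mu\nu} is the space-time metric, R_{\mu\nu} and R measure its curvature, T_{\mu\nu} encodes the matter and energy content, and \Lambda is the cosmological constant. The black hole solutions discussed below are vacuum solutions, with T_{\mu\nu} = 0.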

    “They’re a mathematically precise quantitative version of the statement that space-time curves in the presence of matter,” said Peter Hintz, a Clay research fellow at the University of California, Berkeley, and co-author, with András Vasy, of the other recent result [https://arxiv.org/abs/1606.04014].

    In 1916, almost immediately after Einstein released his theory of general relativity, the German physicist Karl Schwarzschild found an exact solution to the equations that describes what we now know as a black hole (the term wouldn’t be invented for another five decades). Later, physicists found exact solutions that describe a rotating black hole and one with an electrical charge.
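
    In the coordinates Schwarzschild used, his solution takes the form (a standard result, quoted here for context)

        ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right) c^2\, dt^2 + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2 + r^2\, d\Omega^2,

    where M is the black hole’s mass and d\Omega^2 is the usual metric on a sphere. The rotating and electrically charged generalizations mentioned above are the Kerr and Reissner-Nordström solutions (and, with both spin and charge, Kerr-Newman).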

    These remain the only exact solutions that describe a black hole. If you add even a second black hole, the interplay of forces becomes too complicated for present-day mathematical techniques to handle in all but the most special situations.

    Yet you can still ask important questions about this limited group of solutions. One such question developed out of work in 1952 by the French mathematician Yvonne Choquet-Bruhat. It asks, in effect: What happens when you shake a black hole?

    Lucy Reading-Ikkanda/Quanta Magazine

    This problem is now known as the black hole stability conjecture. The conjecture predicts that solutions to Einstein’s equations will be “stable under perturbation.” Informally, this means that if you wiggle a black hole, space-time will shake at first, before eventually settling down into a form that looks a lot like the form you started with. “Roughly, stability means if I take special solutions and perturb them a little bit, change data a little bit, then the resulting dynamics will be very close to the original solution,” Klainerman said.
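
    Stated a little more formally (this is a paraphrase, not either team’s exact formulation): if the initial data for Einstein’s vacuum equations are a sufficiently small perturbation of the data of a Kerr black hole with mass M and angular momentum a, the resulting space-time should exist for all time outside the event horizon and converge to a nearby Kerr solution, generally with slightly different final parameters (M_final, a_final).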

    So-called “stability” results are an important test of any physical theory. To understand why, it’s useful to consider an example that’s more familiar than a black hole.

    Imagine a pond. Now imagine that you perturb the pond by tossing in a stone. The pond will slosh around for a bit and then become still again. Mathematically, the solutions to whatever equations you use to describe the pond (in this case, the Navier-Stokes equations) should describe that basic physical picture. If the initial and long-term solutions don’t match, you might question the validity of your equations.
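
    For the pond, the equations in question are the incompressible Navier-Stokes equations (body forces such as gravity omitted here),

        \partial_t \mathbf{u} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho} \nabla p + \nu\, \nabla^2 \mathbf{u}, \qquad \nabla \cdot \mathbf{u} = 0,

    and the physical expectation is that the velocity field \mathbf{u} of the perturbed pond decays back toward \mathbf{u} = 0 as viscosity dissipates the energy of the ripples.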

    “This equation might have whatever properties, it might be perfectly fine mathematically, but if it goes against what you expect physically, it can’t be the right equation,” Vasy said.

    For mathematicians working on Einstein’s equations, stability proofs have been even harder to find than solutions to the equations themselves. Consider the case of flat, empty Minkowski space — the simplest of all space-time configurations. This solution to Einstein’s equations was found in 1908 in the context of Einstein’s earlier theory of special relativity. Yet it wasn’t until 1993 that mathematicians managed to prove that if you wiggle flat, empty space-time, you eventually get back flat, empty space-time. That result, by Klainerman and Demetrios Christodoulou, is a celebrated work in the field.

    One of the main difficulties with stability proofs has to do with keeping track of what is going on in four-dimensional space-time as the solution evolves. You need a coordinate system that allows you to measure distances and identify points in space-time, just as lines of latitude and longitude allow us to define locations on Earth. But it’s not easy to find a coordinate system that works at every point in space-time and then continues to work as the shape of space-time evolves.

    “We don’t know of a one-size-fits-all way to do this,” Hintz wrote in an email. “After all, the universe does not hand you a preferred coordinate system.”

    The Measurement Problem

    The first thing to recognize about coordinate systems is that they’re a human invention. The second is that not every coordinate system works to identify every point in a space.

    Take lines of latitude and longitude: They’re arbitrary. Cartographers could have anointed any number of imaginary lines to be 0 degrees longitude.


    And while latitude and longitude work to identify just about every location on Earth, they stop making sense at the North and South poles. If you knew nothing about Earth itself, and only had access to latitude and longitude readings, you might wrongly conclude there’s something topologically strange going on at those points.

    This possibility — of drawing wrong conclusions about the properties of physical space because the coordinate system used to describe it is inadequate — is at the heart of why it’s hard to prove the stability of space-time.

    “It could be the case that stability is true, but you’re using coordinates that are not stable and thus you miss the fact that stability is true,” said Mihalis Dafermos, a mathematician at the University of Cambridge and a leading figure in the study of Einstein’s equations.

    In the context of the black hole stability conjecture, whatever coordinate system you’re using has to evolve as the shape of space-time evolves — like a snugly fitting glove adjusting as the hand it encloses changes shape. The fit between the coordinate system and space-time has to be good at the start and remain good throughout. If it doesn’t, there are two things that can happen that would defeat efforts to prove stability.

    First, your coordinate system might change shape in a way that makes it break down at certain points, just as latitude and longitude fail at the poles. Such points are called “coordinate singularities” (to distinguish them from physical singularities, like an actual black hole). They are undefined points in your coordinate system that make it impossible to follow an evolving solution all the way through.
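
    A classic example, not spelled out in the article, is the Schwarzschild solution itself: in Schwarzschild’s original coordinates, the factor (1 - 2GM/c^2 r)^{-1} blows up at the horizon radius r = 2GM/c^2, even though nothing physically singular happens there. Passing to better-adapted coordinates, such as Eddington-Finkelstein coordinates, removes this apparent singularity, while the singularity at r = 0 is physical and survives any choice of coordinates.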

    Second, a poorly fitting coordinate system might disguise the underlying physical phenomena it’s meant to measure. To prove that solutions to Einstein’s equations settle down into a stable state after being perturbed, mathematicians must keep careful track of the ripples in space-time that are set in motion by the perturbation. To see why, it’s worth considering the pond again. A rock thrown into a pond generates waves. The long-term stability of the pond results from the fact that those waves decay over time — they grow smaller and smaller until there’s no sign they were ever there.

    The situation is similar for space-time. A perturbation will set off a cascade of gravitational waves, and proving stability requires proving that those gravitational waves decay. And proving decay requires a coordinate system — referred to as a “gauge” — that allows you to measure the size of the waves. The right gauge allows mathematicians to see the waves flatten and eventually disappear altogether.

    “The decay has to be measured relative to something, and it’s here where the gauge issue shows up,” Klainerman said. “If I’m not in the right gauge, even though in principle I have stability, I can’t prove it because the gauge will just not allow me to see that decay. If I don’t have decay rates of waves, I can’t prove stability.”

    The trouble is, while the coordinate system is crucial, it’s not obvious which one to choose. “You have a lot of freedom about what this gauge condition can be,” Hintz said. “Most of these choices are going to be bad.”

    Partway There

    A full proof of the black hole stability conjecture requires proving that all known black hole solutions to Einstein’s equations (with the spin of the black hole below a certain threshold) are stable after being perturbed. These known solutions include the Schwarzschild solution, which describes space-time with a nonrotating black hole, and the Kerr family of solutions, which describe configurations of space-time empty of everything save a single rotating black hole (where the properties of that rotating black hole — its mass and angular momentum — vary within the family of solutions).

    Both of the new results make partial progress toward a proof of the full conjecture.

    Hintz and Vasy, in a paper posted to the scientific preprint site arxiv.org in 2016 [see above 1606.04014], proved that slowly rotating black holes are stable. But their work did not cover black holes rotating above a certain threshold.

    Their proof also makes some assumptions about the nature of space-time. The original conjecture is in Minkowski space, which is not just flat and empty but also fixed in size. Hintz and Vasy’s proof takes place in what’s called de Sitter space, where space-time is accelerating outward, just like in the actual universe. This change of setting makes the problem simpler from a technical point of view, which is easy enough to appreciate at a conceptual level: If you drop a rock into an expanding pond, the expansion is going to stretch the waves and cause them to decay faster than they would have if the pond were not expanding.

    “You’re looking at a universe undergoing an accelerated expansion,” Hintz said. “This makes the problem a little easier as it appears to dilute the gravitational waves.”

    Klainerman and Szeftel’s work has a slightly different flavor. Their proof, the first part of which was posted online last November [see above 1711.07597], takes place in Schwarzschild space-time — closer to the original, more difficult setting for the problem. They prove the stability of a nonrotating black hole, but they do not address solutions in which the black hole is spinning. Moreover, they only prove the stability of black hole solutions for a narrow class of perturbations — where the gravitational waves generated by those perturbations are symmetric in a certain way.

    Both results involve new techniques for finding the right coordinate system for the problem. Hintz and Vasy start with an approximate solution to the equations, based on an approximate coordinate system, and gradually increase the precision of their answer until they arrive at exact solutions and well-behaved coordinates. Klainerman and Szeftel take a more geometric approach to the challenge.

    The two teams are now trying to build on their respective methods to find a proof of the full conjecture. Some expert observers think the day might not be far off.

    “I really think things are now at the stage that the remaining difficulties are just technical,” Dafermos said. “Somehow one doesn’t need new ideas to solve this problem.” He emphasized that a final proof could come from any one of the large number of mathematicians currently working on the problem.

    For 100 years Einstein’s equations have served as a reliable experimental guide to the universe. Now mathematicians may be getting closer to demonstrating exactly why they work so well.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 12:30 pm on March 4, 2018 Permalink | Reply
    Tags: Quanta Magazine

    From Quanta Magazine: “Elusive Higgs-Like State Created in Exotic Materials” 

    Quanta Magazine

    February 28, 2018
    Sophia Chen

    Two teams of physicists have created the “Higgs mode” – a link between particle physics and the physics of matter. The work could help researchers understand the strange behavior of deeply quantum systems.

    Camille Chew for Quanta Magazine

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    If you want to understand the personality of a material, study its electrons. Table salt forms cubic crystals because its atoms share electrons in that configuration; silver shines because its electrons absorb visible light and reradiate it back. Electron behavior causes nearly all material properties: hardness, conductivity, melting temperature.

    Of late, physicists are intrigued by the way huge numbers of electrons can display collective quantum-mechanical behavior. In some materials, a trillion trillion electrons within a crystal can act as a unit, like fire ants clumping into a single mass to survive a flood. Physicists want to understand this collective behavior because of the potential link to exotic properties such as superconductivity, in which electricity can flow without any resistance.

    Last year, two independent research groups designed crystals, known as two-dimensional antiferromagnets, whose electrons can collectively imitate the Higgs boson. By precisely studying this behavior, the researchers think they can better understand the physical laws that govern materials — and potentially discover new states of matter. It was the first time that researchers have been able to induce such “Higgs modes” in these materials. “You’re creating a little mini universe,” said David Alan Tennant, a physicist at Oak Ridge National Laboratory who led one of the groups along with Tao Hong, his colleague there.

    Both groups induced electrons into Higgs-like activity by pelting their material with neutrons. During these tiny collisions, the electrons’ magnetic fields begin to fluctuate in a patterned way that mathematically resembles the Higgs boson.

    A crystal made of copper bromide was used to construct the Oak Ridge team’s two-dimensional antiferromagnet. Genevieve Martin/Oak Ridge National Laboratory, U.S. Dept. of Energy.

    The Higgs mode is not simply a mathematical curiosity. When a crystal’s structure permits its electrons to behave this way, the material most likely has other interesting properties, said Bernhard Keimer, a physicist at the Max Planck Institute for Solid State Research who coleads the other group.

    That’s because when you get the Higgs mode to appear, the material should be on the brink of a so-called quantum phase transition. Its properties are about to change drastically, like a snowball on a sunny spring day. The Higgs can help you understand the character of the quantum phase transition, says Subir Sachdev, a physicist at Harvard University. These quantum effects often portend bizarre new material properties.

    For example, physicists think that quantum phase transitions play a role in certain materials, known as topological insulators, that conduct electricity only on their surface and not in their interior. Researchers have also observed quantum phase transitions in high-temperature superconductors, although the significance of the phase transitions is still unclear. Whereas conventional superconductors need to be cooled to near absolute zero to observe such effects, high-temperature superconductors work at the relatively balmy conditions of liquid nitrogen, which is dozens of degrees higher.

    Over the past few years, physicists have created the Higgs mode in other superconductors, but they can’t always understand exactly what’s going on. The typical materials used to study the Higgs mode have a complicated crystal structure that increases the difficulty of understanding the physics at work.

    So both Keimer’s and Tennant’s groups set out to induce the Higgs mode in simpler systems. Their antiferromagnets were so-called two-dimensional materials: While each crystal exists as a 3-D chunk, those chunks are built out of stacked two-dimensional layers of atoms that act more or less independently. Somewhat paradoxically, it’s a harder experimental challenge to induce the Higgs mode in these two-dimensional materials. Physicists were unsure if it could be done.

    Yet the successful experiments showed that it was possible to use existing theoretical tools to explain the evolution of the Higgs mode. Keimer’s group found that the Higgs mode parallels the behavior of the Higgs boson. Inside a particle accelerator like the Large Hadron Collider, a Higgs boson will quickly decay into other particles, such as photons. In Keimer’s antiferromagnet, the Higgs mode morphs into different collective-electron motion that resembles particles called Goldstone bosons. The group experimentally confirmed that the Higgs mode evolves according to their theoretical predictions.
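
    A minimal way to picture the two kinds of modes (a textbook caricature, not the groups’ specific model; real antiferromagnets have a vector order parameter, but the geometry is the same) is to write the order parameter as a complex field \psi = (\rho_0 + h)\, e^{i\theta} sitting in a “Mexican hat” potential

        V(\psi) = -\mu^2 |\psi|^2 + \lambda |\psi|^4.

    Oscillations of the amplitude h about the rim radius \rho_0 = \mu/\sqrt{2\lambda} form the gapped, Higgs-like mode; oscillations of the phase \theta along the flat rim of the hat cost no energy and are the Goldstone modes into which the Higgs mode can decay.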

    Tennant’s group discovered how to make their material produce a Higgs mode that doesn’t die out. That knowledge could help them determine how to turn on other quantum properties, like superconductivity, in other materials. “What we want to understand is how to keep quantum behavior in systems,” said Tennant.

    Both groups hope to go beyond the Higgs mode. Keimer aims to actually observe a quantum phase transition in his antiferromagnet, which may be accompanied by additional weird phenomena. “That happens quite a lot,” he said. “You want to study a particular quantum phase transition, and then something else pops up.”

    They also just want to explore. They expect that more weird properties of matter are associated with the Higgs mode — potentially ones not yet envisioned. “Our brains don’t have a natural intuition for quantum systems,” said Tennant. “Exploring nature is full of surprises because it’s full of things we never imagined.”

    No science papers cited in this article.

    See the full article here.
    Re-released at Wired (Science), Sophia Chen, 3.4.18.

  • richardmitnick 7:24 am on March 4, 2018 Permalink | Reply
    Tags: Barbara Engelhardt, GTEx (Genotype-Tissue Expression) Consortium, Quanta Magazine

    From Quanta Magazine: “A Statistical Search for Genomic Truths” 

    Quanta Magazine

    February 27, 2018
    Jordana Cepelewicz

    Barbara Engelhardt, a Princeton University computer scientist, wants to strengthen the foundation of biological knowledge in machine-learning approaches to genomic analysis. Sarah Blesener for Quanta Magazine.

    “We don’t have much ground truth in biology.” According to Barbara Engelhardt, a computer scientist at Princeton University, that’s just one of the many challenges that researchers face when trying to prime traditional machine-learning methods to analyze genomic data. Techniques in artificial intelligence and machine learning are dramatically altering the landscape of biological research, but Engelhardt doesn’t think those “black box” approaches are enough to provide the insights necessary for understanding, diagnosing and treating disease. Instead, she’s been developing new statistical tools that search for expected biological patterns to map out the genome’s real but elusive “ground truth.”

    Engelhardt likens the effort to detective work, as it involves combing through constellations of genetic variation, and even discarded data, for hidden gems. In research published last October [Nature], for example, she used one of her models to determine how mutations relate to the regulation of genes on other chromosomes (referred to as distal genes) in 44 human tissues. Among other findings, the results pointed to a potential genetic target for thyroid cancer therapies. Her work has similarly linked mutations and gene expression to specific features found in pathology images.

    The applications of Engelhardt’s research extend beyond genomic studies. She built a different kind of machine-learning model, for instance, that makes recommendations to doctors about when to remove their patients from a ventilator and allow them to breathe on their own.

    She hopes her statistical approaches will help clinicians catch certain conditions early, unpack their underlying mechanisms, and treat their causes rather than their symptoms. “We’re talking about solving diseases,” she said.

    To this end, she works as a principal investigator with the Genotype-Tissue Expression (GTEx) Consortium, an international research collaboration studying how gene regulation, expression and variation contribute to both healthy phenotypes and disease.


    Right now, she’s particularly interested in working on neuropsychiatric and neurodegenerative diseases, which are difficult to diagnose and treat.

    Quanta Magazine recently spoke with Engelhardt about the shortcomings of black-box machine learning when applied to biological data, the methods she’s developed to address those shortcomings, and the need to sift through “noise” in the data to uncover interesting information. The interview has been condensed and edited for clarity.

    What motivated you to focus your machine-learning work on questions in biology?

    I’ve always been excited about statistics and machine learning. In graduate school, my adviser, Michael Jordan [at the University of California, Berkeley], said something to the effect of: “You can’t just develop these methods in a vacuum. You need to think about some motivating applications.” I very quickly turned to biology, and ever since, most of the questions that drive my research are not statistical, but rather biological: understanding the genetics and underlying mechanisms of disease, hopefully leading to better diagnostics and therapeutics. But when I think about the field I am in — what papers I read, conferences I attend, classes I teach and students I mentor — my academic focus is on machine learning and applied statistics.

    We’ve been finding many associations between genomic markers and disease risk, but except in a few cases, those associations are not predictive and have not allowed us to understand how to diagnose, target and treat diseases. A genetic marker associated with disease risk is often not the true causal marker of the disease — one disease can have many possible genetic causes, and a complex disease might be caused by many, many genetic markers possibly interacting with the environment. These are all challenges that someone with a background in statistical genetics and machine learning, working together with wet-lab scientists and medical doctors, can begin to address and solve. Which would mean we could actually treat genetic diseases — their causes, not just their symptoms.

    You’ve spoken before about how traditional statistical approaches won’t suffice for applications in genomics and health care. Why not?

    First, because of a lack of interpretability. In machine learning, we often use “black-box” methods — [classification algorithms called] random forests, or deeper learning approaches. But those don’t really allow us to “open” the box, to understand which genes are differentially regulated in particular cell types or which mutations lead to a higher risk of a disease. I’m interested in understanding what’s going on biologically. I can’t just have something that gives an answer without explaining why.

    The goal of these methods is often prediction, but given a person’s genotype, it is not particularly useful to estimate the probability that they’ll get Type 2 diabetes. I want to know how they’re going to get Type 2 diabetes: which mutation causes the dysregulation of which gene to lead to the development of the condition. Prediction is not sufficient for the questions I’m asking.

    A second reason has to do with sample size. Most of the driving applications of statistics assume that you’re working with a large and growing number of data samples — say, the number of Netflix users or emails coming into your inbox — with a limited number of features or observations that have interesting structure. But when it comes to biomedical data, we don’t have that at all. Instead, we have a limited number of patients in the hospital, a limited number of genotypes we can sequence — but a gigantic set of features or observations for any one person, including all the mutations in their genome. Consequently, many theoretical and applied approaches from statistics can’t be used for genomic data.

    What makes the genomic data so challenging to analyze?

    The most important signals in biomedical data are often incredibly small and completely swamped by technical noise. It’s not just about how you model the real, biological signal — the questions you’re trying to ask about the data — but also how you model that in the presence of this incredibly heavy-handed noise that’s driven by things you don’t care about, like which population the individuals came from or which technician ran the samples in the lab. You have to get rid of that noise carefully. And we often have a lot of questions that we would like to answer using the data, and we need to run an incredibly large number of statistical tests — literally trillions — to figure out the answers. For example, to identify an association between a mutation in a genome and some trait of interest, where that trait might be the expression levels of a specific gene in a tissue. So how can we develop rigorous, robust testing mechanisms where the signals are really, really small and sometimes very hard to distinguish from noise? How do we correct for all this structure and noise that we know is going to exist?
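
    As a rough illustration of what one of those many tests looks like, here is a generic sketch (not the GTEx pipeline; the data, sizes and threshold are invented for the example) in which each test regresses one gene’s expression on the genotype at one variant, and the p-values are then corrected for the number of tests:

    ```python
    # Minimal sketch of mass-univariate association testing (not the GTEx pipeline).
    # Each test asks: does genotype dosage at one variant predict one gene's expression?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_samples, n_variants, n_genes = 200, 500, 100   # toy sizes; real studies are vastly larger

    genotypes = rng.integers(0, 3, size=(n_samples, n_variants)).astype(float)  # 0/1/2 allele counts
    expression = rng.normal(size=(n_samples, n_genes))                          # normalized expression

    p_values = np.empty((n_variants, n_genes))
    for i in range(n_variants):
        for j in range(n_genes):
            # Simple linear regression: expression ~ genotype dosage.
            slope, intercept, r, p, se = stats.linregress(genotypes[:, i], expression[:, j])
            p_values[i, j] = p

    # Bonferroni correction for the total number of tests; real analyses use
    # permutation-based or hierarchical false-discovery-rate procedures instead.
    n_tests = p_values.size
    significant = p_values < 0.05 / n_tests
    print(f"{significant.sum()} associations pass the corrected threshold out of {n_tests} tests")
    ```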

    So what approach do we need to take instead?

    My group relies heavily on what we call sparse latent factor models, which can sound quite mathematically complicated. The fundamental idea is that these models partition all the variation we observed in the samples, with respect to only a very small number of features. One of these partitions might include 10 genes, for example, or 20 mutations. And then as a scientist, I can look at those 10 genes and figure out what they have in common, determine what this given partition represents in terms of a biological signal that affects sample variance.

    So I think of it as a two-step process: First, build a model that separates all the sources of variation as carefully as possible. Then go in as a scientist to understand what all those partitions represent in terms of a biological signal. After this, we can validate those conclusions in other data sets and think about what else we know about these samples (for instance, whether everyone of the same age is included in one of these partitions).
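
    A toy version of that two-step workflow, using a generic sparse matrix factorization as a stand-in for the structured models Engelhardt describes (the data, sizes and parameters below are invented for illustration):

    ```python
    # Minimal sketch of the "sparse latent factor" idea: decompose an expression matrix
    # into a small number of factors, each loading on only a handful of genes.
    import numpy as np
    from sklearn.decomposition import SparsePCA

    rng = np.random.default_rng(1)
    n_samples, n_genes, n_factors = 150, 1000, 10

    # Toy data: a few planted sparse signals plus noise.
    loadings = np.zeros((n_factors, n_genes))
    for k in range(n_factors):
        genes_in_factor = rng.choice(n_genes, size=15, replace=False)  # ~15 genes per factor
        loadings[k, genes_in_factor] = rng.normal(size=15)
    scores = rng.normal(size=(n_samples, n_factors))
    expression = scores @ loadings + 0.5 * rng.normal(size=(n_samples, n_genes))

    # Step 1: partition the observed variation into sparse factors.
    model = SparsePCA(n_components=n_factors, alpha=1.0, random_state=0)
    sample_scores = model.fit_transform(expression)

    # Step 2: the "go in as a scientist" part -- inspect which genes each factor loads on
    # and ask what they share (pathway membership, cell type, batch, age, ...).
    for k, component in enumerate(model.components_):
        top_genes = np.argsort(np.abs(component))[::-1][:10]
        print(f"factor {k}: top gene indices {top_genes.tolist()}")
    ```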

    When you say “go in as a scientist,” what do you mean?

    I’m trying to find particular biological patterns, so I build these models with a lot of structure and include a lot about what kinds of signals I’m expecting. I establish a scaffold, a set of parameters that will tell me what the data say, and what patterns may or may not be there. The model itself has only a certain amount of expressivity, so I’ll only be able to find certain types of patterns. From what I’ve seen, existing general models don’t do a great job of finding signals we can interpret biologically: They often just determine the biggest influencers of variance in the data, as opposed to the most biologically impactful sources of variance. The scaffold I build instead represents a very structured, very complex family of possible patterns to describe the data. The data then fill in that scaffold to tell me which parts of that structure are represented and which are not.

    So instead of using general models, my group and I carefully look at the data, try to understand what’s going on from the biological perspective, and tailor our models based on what types of patterns we see.

    How does the latent factor model work in practice?

    We applied one of these latent factor models to pathology images [pictures of tissue slices under a microscope], which are often used to diagnose cancer. For every image, we also had data about the set of genes expressed in those tissues. We wanted to see how the images and the corresponding gene expression levels were coordinated.

    We developed a set of features describing each of the images, using a deep-learning method to identify not just pixel-level values but also patterns in the image. We pulled out over a thousand features from each image, give or take, and then applied a latent factor model and found some pretty exciting things.

    For example, we found sets of genes and features in one of these partitions that described the presence of immune cells in the brain. You don’t necessarily see these cells on the pathology images, but when we looked at our model, we saw a component there that represented only genes and features associated with immune cells, not brain cells. As far as I know, no one’s seen this kind of signal before. But it becomes incredibly clear when we look at these latent factor components.

    Video: Barbara Engelhardt, a computer scientist at Princeton University, explains why traditional machine-learning techniques have often fallen short for genomic analysis, and how researchers are overcoming that challenge. Sarah Blesener for Quanta Magazine

    You’ve worked with dozens of human tissue types to unpack how specific genetic variations help shape complex traits. What insights have your methods provided?

    We had 44 tissues, donated from 449 human cadavers, and their genotypes (sequences of their whole genomes). We wanted to understand more about the differences in how those genotypes expressed their genes in all those tissues, so we did more than 3 trillion tests, one by one, comparing every mutation in the genome with every gene expressed in each tissue. (Running that many tests on the computing clusters we’re using now takes about two weeks; when we move this iteration of GTEx to the cloud as planned, we expect it to take around two hours.) We were trying to figure out whether the [mutant] genotype was driving distal gene expression. In other words, we were looking for mutations that weren’t located on the same chromosome as the genes they were regulating. We didn’t find very much: a little over 600 of these distal associations. Their signals were very low.

    But one of the signals was strong: an exciting thyroid association, in which a mutation appeared to distally regulate two different genes. We asked ourselves: How is this mutation affecting expression levels in a completely different part of the genome? In collaboration with Alexis Battle’s lab at Johns Hopkins University, we looked near the mutation on the genome and found a gene called FOXE1, for a transcription factor that regulates the transcription of genes all over the genome. The FOXE1 gene is only expressed in thyroid tissues, which was interesting. But we saw no association between the mutant genotype and the expression levels of FOXE1. So we had to look at the components of the original signal we’d removed before — everything that had appeared to be a technical artifact — to see if we could detect the effects of the FOXE1 protein broadly on the genome.

    We found a huge impact of FOXE1 in the technical artifacts we’d removed. FOXE1, it seems, regulates a large number of genes only in the thyroid. Its variation is driven by the mutant genotype we found. And that genotype is also associated with thyroid cancer risk. We went back to the thyroid cancer samples — we had about 500 from the Cancer Genome Atlas — and replicated the distal association signal. These things tell a compelling story, but we wouldn’t have learned it unless we had tried to understand the signal that we’d removed.

    What are the implications of such an association?

    Now we have a particular mechanism for the development of thyroid cancer and the dysregulation of thyroid cells. If FOXE1 is a druggable target — if we can go back and think about designing drugs to enhance or suppress the expression of FOXE1 — then we can hope to prevent people at high thyroid cancer risk from getting it, or to treat people with thyroid cancer more effectively.

    The signal from broad-effect transcription factors like FOXE1 actually looks a lot like the effects we typically remove as part of the noise: population structure, or the batches the samples were run in, or the effects of age or sex. A lot of those technical influences are going to affect approximately similar numbers of genes — around 10 percent — in a similar way. That’s why we usually remove signals that have that pattern. In this case, though, we had to understand the domain we were working in. As scientists, we looked through all the signals we’d gotten rid of, and this allowed us to find the effects of FOXE1 showing up so strongly in there. It involved manual labor and insights from a biological background, but we’re thinking about how to develop methods to do it in a more automated way.

    So with traditional modeling techniques, we’re missing a lot of real biological effects because they look too similar to noise?

    Yes. There are a ton of cases in which the interesting pattern and the noise look similar. Take these distal effects: Pretty much all of them, if they are broad effects, are going to look like the noise signal we systematically get rid of. It’s methodologically challenging. We have to think carefully about how to characterize when a signal is biologically relevant or just noise, and how to distinguish the two. My group is working fairly aggressively on figuring that out.

    Why are those relationships so difficult to map, and why look for them?

    There are so many tests we have to do; the threshold for the statistical significance of a discovery has to be really, really high. That creates problems for finding these signals, which are often incredibly small; if our threshold is that high, we’re going to miss a lot of them. And biologically, it’s not clear that there are many of these really broad-effect distal signals. You can imagine that natural selection would eliminate the kinds of mutations that affect 10 percent of genes — that we wouldn’t want that kind of variability in the population for so many genes.

    But I think there’s no doubt that these distal associations play an enormous role in disease, and that they may be considered as druggable targets. Understanding their role broadly is incredibly important for human health.

    See the full article here.

  • richardmitnick 8:23 am on February 28, 2018 Permalink | Reply
    Tags: Quanta Magazine

    From Quanta: “Deathblow Dealt to Dark Matter Disks” 

    Quanta Magazine

    November 17, 2017 [Just found it.]
    Natalie Wolchover

    A projection showing how the positions of some 2 million stars measured by the Gaia satellite are expected to evolve in the future. Copyright: ESA/Gaia/DPAC, CC BY-SA 3.0 IGO

    Eighty years after the discovery of dark matter, physicists remain totally stumped about the nature of this nonreflective stuff that, judging by its gravitational effects, pervades the cosmos in far greater abundance than all the matter we can see. From axions to WIMPs (or “weakly interacting massive particles”), many candidates have been proposed as dark matter’s identity — and sought to no avail in dozens of experiments. One enticing speculation to have emerged in recent years imagines that dark matter isn’t a single, monolithic thing. Rather, there might exist a whole family of dark particles that interact with one another via unknown dark forces, much as ordinary matter consists of quarks, electrons and a bustling zoo of other light-sensitive quanta.

    The existence of a rich “dark sector” of particles could have consequences on galactic scales. Whereas dark matter of a single, inert type such as an axion would enshroud galaxies in spherical clouds called halos, particles in a dark sector might interact with one another in ways that release energy, enabling them to cool and settle into a lower-energy configuration. Namely, these cooling dark matter particles would collapse into rotating disks, just as stars and gas settle into pancake-shaped rotating galaxies and planets orbit their stars in a plane. In recent years, Lisa Randall, a theoretical physicist at Harvard University, has championed the idea that there might be just such a disk of dark matter coursing through the plane of the Milky Way.

    Randall and collaborators say this hypothetical dark disk could explain several observations, including a possible uptick of asteroid and comet impacts and associated mass extinctions on Earth every 35-or-so million years. As Randall discusses in her 2015 book, Dark Matter and the Dinosaurs, the subtle periodicity might happen because space objects get destabilized each time our solar system passes through the dark disk while bobbing up and down on its way around the galaxy.

    However, when I reported on the dark disk hypothesis in April 2016, the disk and all it would imply about the nature of dark matter were already in trouble. Inventories of the Milky Way showed that the mass of stars and gas in the galactic plane and the bobbing motions of stars circumnavigating the galaxy match up gravitationally, leaving only a smidgen of wiggle room in which to fit an invisible dark disk. At that time, only an extremely thin disk could exist, accounting for no more than 2 percent of the total amount of dark matter in the galaxy, with the rest being of the inert, halo-forming kind.

    Still, the presence of any dark disk at all, even one made of a minority of dark matter particles, would revolutionize physics. It would prove that there are multiple kinds of interacting dark matter particles and enable physicists to learn the properties of these particles from the features of the disk. And so researchers have awaited a more precise inventory of the Milky Way to see if a thin disk is needed to exactly match the mass of stuff in the galactic plane to the motions of stars around it. Now, with the numbers in, some say the disk is dead.

    Katelin Schutz, a graduate student at the University of California, Berkeley, led a new analysis that disfavors the presence of a dark matter disk in the galaxy. Chris Akers

    Katelin Schutz, a cosmology graduate student at the University of California, Berkeley, and coauthors have checked for a dark disk using the first data release from the Gaia satellite, a European spacecraft endeavoring to measure the speeds and locations of one billion Milky Way stars.

    ESA/GAIA satellite

    In a paper posted online Nov. 9 and soon to be submitted to Physical Review Letters, Schutz and collaborators analyzed a subset of the stars measured by Gaia (representing 20 times more stars than had been previously analyzed). Their findings excluded the presence of any dark disk denser than about four-thousandths the mass of the sun per cubic light-year at the midplane of the galaxy. A disk roughly twice as dense would be needed to explain the comet impact periodicity and other original justifications for the dark disk idea. “Our new limits disfavor the presence of a thin dark matter disk,” Schutz and coauthors wrote — and that’s the case, she added by email, even though “we have tried to be quite generous and conservative with our estimations of systematic uncertainty.”
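
    For a sense of scale, the quoted limit can be converted to units more commonly used for local dark matter densities; this is a back-of-the-envelope conversion, not a number taken from the paper:

    ```python
    # Back-of-the-envelope conversion of the quoted midplane density limit,
    # ~0.004 solar masses per cubic light-year, into other common units.
    M_SUN_KG = 1.989e30      # solar mass in kg
    LY_M = 9.461e15          # light-year in m
    PC_M = 3.086e16          # parsec in m
    GEV_PER_KG = 5.61e26     # 1 kg expressed in GeV/c^2

    limit_msun_per_ly3 = 0.004
    kg_per_m3 = limit_msun_per_ly3 * M_SUN_KG / LY_M**3

    msun_per_pc3 = kg_per_m3 * PC_M**3 / M_SUN_KG
    gev_per_cm3 = kg_per_m3 * GEV_PER_KG / 1e6   # 1 m^3 = 1e6 cm^3

    print(f"{msun_per_pc3:.2f} M_sun / pc^3")    # roughly 0.14
    print(f"{gev_per_cm3:.1f} GeV / cm^3")       # roughly 5
    ```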

    “I think it really is dead!” said David Hogg, an astrophysicist at New York University and the Flatiron Institute (which, like Quanta, is funded by the Simons Foundation), and a leading expert in astronomical data analysis. “It is sad, of course.”

    However, Randall and Eric Kramer, her student-collaborator on a 2016 paper in The Astrophysical Journal that found room for a thin dark disk, aren’t prepared to admit defeat. “It’s a good solid analysis, but I don’t think it rules out a dark disk,” Randall said. In particular, she and Kramer question the authors’ assumption that the stars they analyzed were in equilibrium rather than oscillating or ringing. According to Randall, there is some evidence that “the more straightforward equilibrium analysis might not be adequate.”

    “I think you’re never going to convince everyone, and science is always a conversation rather than a declaration,” Schutz said. Still, the Milky Way inventory will become even more precise, and any nonequilibrium effects can be teased out, as more Gaia data become available.

    Even if there’s no dark disk, it’s still possible that there might be a dark sector. It would have to consist of particles that — unlike particles in the light sector — don’t interact and combine in ways that give off significant amounts of energy. The possibilities for dark matter are virtually endless, given the stunning absence of experimental hints about its nature. That dearth of clues is why the dark disk “would have been awesome,” Hogg said.

    See the full article here.

  • richardmitnick 2:46 pm on February 19, 2018 Permalink | Reply
    Tags: Edward Witten, Quanta Magazine

    From Quanta Magazine: “A Physicist’s Physicist Ponders the Nature of Reality” Edward Witten 

    Quanta Magazine


    November 28, 2017 [Just found this. Did I miss it in November? I would not have skipped it.]
    Natalie Wolchover

    Edward Witten in his office at the Institute for Advanced Study in Princeton, New Jersey.

    Among the brilliant theorists cloistered in the quiet woodside campus of the Institute for Advanced Study in Princeton, New Jersey, Edward Witten stands out as a kind of high priest. The sole physicist ever to win the Fields Medal, mathematics’ premier prize, Witten is also known for discovering M-theory, the leading candidate for a unified physical “theory of everything.” A genius’s genius, Witten is tall and rectangular, with hazy eyes and an air of being only one-quarter tuned in to reality until someone draws him back from more abstract thoughts.

    During a visit this fall, I spotted Witten on the Institute’s central lawn and requested an interview; in his quick, alto voice, he said he couldn’t promise to be able to answer my questions but would try. Later, when I passed him on the stone paths, he often didn’t seem to see me.

    Physics luminaries since Albert Einstein, who lived out his days in the same intellectual haven, have sought to unify gravity with the other forces of nature by finding a more fundamental quantum theory to replace Einstein’s approximate picture of gravity as curves in the geometry of space-time. M-theory, which Witten proposed in 1995, could conceivably offer this deeper description, but only some aspects of the theory are known. M-theory incorporates within a single mathematical structure all five versions of string theory, which renders the elements of nature as minuscule vibrating strings. These five string theories connect to each other through “dualities,” or mathematical equivalences. Over the past 30 years, Witten and others have learned that the string theories are also mathematically dual to quantum field theories — descriptions of particles moving through electromagnetic and other fields that serve as the language of the reigning “Standard Model” of particle physics. While he’s best known as a string theorist, Witten has discovered many new quantum field theories and explored how all these different descriptions are connected. His physical insights have led time and again to deep mathematical discoveries.

    Researchers pore over his work and hope he’ll take an interest in theirs. But for all his scholarly influence, Witten, who is 66, does not often broadcast his views on the implications of modern theoretical discoveries. Even his close colleagues eagerly suggested questions they wanted me to ask him.

    When I arrived at his office at the appointed hour on a summery Thursday last month, Witten wasn’t there. His door was ajar. Papers covered his coffee table and desk — not stacks, but floods: text oriented every which way, some pages close to spilling onto the floor. (Research papers get lost in the maelstrom as he finishes with them, he later explained, and every so often he throws the heaps away.) Two girls smiled out from a framed photo on a shelf; children’s artwork decorated the walls, one celebrating Grandparents’ Day. When Witten arrived minutes later, we spoke for an hour and a half about the meaning of dualities in physics and math, the current prospects of M-theory, what he’s reading, what he’s looking for, and the nature of reality. The interview has been condensed and edited for clarity.

    Physicists are talking more than ever lately about dualities, but you’ve been studying them for decades. Why does the subject interest you?

    People keep finding new facets of dualities. Dualities are interesting because they frequently answer questions that are otherwise out of reach. For example, you might have spent years pondering a quantum theory and you understand what happens when the quantum effects are small, but textbooks don’t tell you what you do if the quantum effects are big; you’re generally in trouble if you want to know that. Frequently dualities answer such questions. They give you another description, and the questions you can answer in one description are different than the questions you can answer in a different description.

    What are some of these newfound facets of dualities?

    It’s open-ended because there are so many different kinds of dualities. There are dualities between a gauge theory [a theory, such as a quantum field theory, that respects certain symmetries] and another gauge theory, or between a string theory for weak coupling [describing strings that move almost independently from one another] and a string theory for strong coupling. Then there’s AdS/CFT duality, between a gauge theory and a gravitational description. That duality was discovered 20 years ago, and it’s amazing to what extent it’s still fruitful. And that’s largely because around 10 years ago, new ideas were introduced that rejuvenated it. People had new insights about entropy in quantum field theory — the whole story about “it from qubit.”

    That’s the idea that space-time and everything in it emerges like a hologram out of information stored in the entangled quantum states of particles.

    Yes. Then there are dualities in math, which can sometimes be interpreted physically as consequences of dualities between two quantum field theories. There are so many ways these things are interconnected that any simple statement I try to make on the fly, as soon as I’ve said it I realize it didn’t capture the whole reality. You have to imagine a web of different relationships, where the same physics has different descriptions, revealing different properties. In the simplest case, there are only two important descriptions, and that might be enough. If you ask me about a more complicated example, there might be many, many different ones.

    Given this web of relationships and how hard it is to characterize all the dualities, do you feel that this reflects a lack of understanding of the structure, or is it that we’re seeing the structure, only it’s very complicated?

    I’m not certain what we should hope for. Traditionally, quantum field theory was constructed by starting with the classical picture [of a smooth field] and then quantizing it. Now we’ve learned that there are a lot of things that happen that that description doesn’t do justice to. And the same quantum theory can come from different classical theories. Now, Nati Seiberg [a theoretical physicist who works down the hall] would possibly tell you that he has faith that there’s a better formulation of quantum field theory that we don’t know about that would make everything clearer. I’m not sure how much you should expect that to exist. That would be a dream, but it might be too much to hope for; I really don’t know.

    There’s another curious fact that you might want to consider, which is that quantum field theory is very central to physics, and it’s actually also clearly very important for math. But it’s extremely difficult for mathematicians to study; the way physicists define it is very hard for mathematicians to follow with a rigorous theory. That’s extremely strange, that the world is based so much on a mathematical structure that’s so difficult.

    Jean Sweep for Quanta Magazine

    What do you see as the relationship between math and physics?

    I prefer not to give you a cosmic answer but to comment on where we are now. Physics in quantum field theory and string theory somehow has a lot of mathematical secrets in it, which we don’t know how to extract in a systematic way. Physicists are able to come up with things that surprise the mathematicians. Because it’s hard to describe mathematically in the known formulation, the things you learn about quantum field theory you have to learn from physics.

    I find it hard to believe there’s a new formulation that’s universal. I think it’s too much to hope for. I could point to theories where the standard approach really seems inadequate, so at least for those classes of quantum field theories, you could hope for a new formulation. But I really can’t imagine what it would be.

    You can’t imagine it at all?

    No, I can’t. Traditionally it was thought that interacting quantum field theory couldn’t exist above four dimensions, and there was the interesting fact that that’s the dimension we live in. But one of the offshoots of the string dualities of the 1990s was that it was discovered that quantum field theories actually exist in five and six dimensions. And it’s amazing how much is known about their properties.

    I’ve heard about the mysterious (2,0) theory, a quantum field theory describing particles in six dimensions, which is dual to M-theory describing strings and gravity in seven-dimensional AdS space. Does this (2,0) theory play an important role in the web of dualities?

    Yes, that’s the pinnacle. In terms of conventional quantum field theory without gravity, there is nothing quite like it above six dimensions. From the (2,0) theory’s existence and main properties, you can deduce an incredible amount about what happens in lower dimensions. An awful lot of important dualities in four and fewer dimensions follow from this six-dimensional theory and its properties. However, whereas what we know about quantum field theory is normally from quantizing a classical field theory, there’s no reasonable classical starting point of the (2,0) theory. The (2,0) theory has properties [such as combinations of symmetries] that sound impossible when you first hear about them. So you can ask why dualities exist, but you can also ask why is there a 6-D theory with such and such properties? This seems to me a more fundamental restatement.

    Dualities sometimes make it hard to maintain a sense of what’s real in the world, given that there are radically different ways you can describe a single system. How would you describe what’s real or fundamental?

    What aspect of what’s real are you interested in? What does it mean that we exist? Or how do we fit into our mathematical descriptions?

    The latter.

    Well, one thing I’ll tell you is that in general, when you have dualities, things that are easy to see in one description can be hard to see in the other description. So you and I, for example, are fairly simple to describe in the usual approach to physics as developed by Newton and his successors. But if there’s a radically different dual description of the real world, maybe some things physicists worry about would be clearer, but the dual description might be one in which everyday life would be hard to describe.

    What would you say about the prospect of an even more optimistic idea that there could be one single quantum gravity description that really does help you in every case in the real world?

    Well, unfortunately, even if it’s correct I can’t guarantee it would help. Part of what makes it difficult to help is that the description we have now, even though it’s not complete, does explain an awful lot. And so it’s a little hard to say, even if you had a truly better description or a more complete description, whether it would help in practice.

    Are you speaking of M-theory?

    M-theory is the candidate for the better description.

    You proposed M-theory 22 years ago. What are its prospects today?

    Personally, I thought it was extremely clear it existed 22 years ago, but the level of confidence has got to be much higher today because AdS/CFT has given us precise definitions, at least in AdS space-time geometries. I think our understanding of what it is, though, is still very hazy. AdS/CFT and whatever’s come from it is the main new perspective compared to 22 years ago, but I think it’s perfectly possible that AdS/CFT is only one side of a multifaceted story. There might be other equally important facets.

    Jean Sweep for Quanta Magazine

    What’s an example of something else we might need?

    Maybe a bulk description of the quantum properties of space-time itself, rather than a holographic boundary description. There hasn’t been much progress in a long time in getting a better bulk description. And I think that might be because the answer is of a different kind than anything we’re used to. That would be my guess.

    Are you willing to speculate about how it would be different?

    I really doubt I can say anything useful. I guess I suspect that there’s an extra layer of abstractness compared to what we’re used to. I tend to think that there isn’t a precise quantum description of space-time — except in the types of situations where we know that there is, such as in AdS space. I tend to think, otherwise, things are a little bit murkier than an exact quantum description. But I can’t say anything useful.

    The other night I was reading an old essay by the 20th-century Princeton physicist John Wheeler. He was a visionary, certainly. If you take what he says literally, it’s hopelessly vague. And therefore, if I had read this essay when it came out 30 years ago, which I may have done, I would have rejected it as being so vague that you couldn’t work on it, even if he was on the right track.

    You’re referring to “Information, Physics, Quantum,” Wheeler’s 1989 essay propounding the idea that the physical universe arises from information, which he dubbed “it from bit.” Why were you reading it?

    I’m trying to learn about what people are trying to say with the phrase “it from qubit.” Wheeler talked about “it from bit,” but you have to remember that this essay was written probably before the term “qubit” was coined and certainly before it was in wide currency. Reading it, I really think he was talking about qubits, not bits, so “it from qubit” is actually just a modern translation.

    Don’t expect me to be able to tell you anything useful about it — about whether he was right. When I was a beginning grad student, they had a series of lectures by faculty members to the new students about theoretical research, and one of the people who gave such a lecture was Wheeler. He drew a picture on the blackboard of the universe visualized as an eye looking at itself. I had no idea what he was talking about. It’s obvious to me in hindsight that he was explaining what it meant to talk about quantum mechanics when the observer is part of the quantum system. I imagine there is something we don’t understand about that.

    Observing a quantum system irreversibly changes it, creating a distinction between past and future. So the observer issue seems possibly related to the question of time, which we also don’t understand. With the AdS/CFT duality, we’ve learned that new spatial dimensions can pop up like a hologram from quantum information on the boundary. Do you think time is also emergent — that it arises from a timeless complete description?

    I tend to assume that space-time and everything in it are in some sense emergent. By the way, you’ll certainly find that that’s what Wheeler expected in his essay. As you’ll read, he thought the continuum was wrong in both physics and math. He did not think one’s microscopic description of space-time should use a continuum of any kind — neither a continuum of space nor a continuum of time, nor even a continuum of real numbers. On the space and time, I’m sympathetic to that. On the real numbers, I’ve got to plead ignorance or agnosticism. It is something I wonder about, but I’ve tried to imagine what it could mean to not use the continuum of real numbers, and the one logician I tried discussing it with didn’t help me.

    Do you consider Wheeler a hero?

    I wouldn’t call him a hero, necessarily, no. Really I just became curious what he meant by “it from bit,” and what he was saying. He definitely had visionary ideas, but they were too far ahead of their time. I think I was more patient in reading a vague but inspirational essay than I might have been 20 years ago. He’s also got roughly 100 interesting-sounding references in that essay. If you decided to read them all, you’d have to spend weeks doing it. I might decide to look at a few of them.

    Why do you have more patience for such things now?

    I think when I was younger I always thought the next thing I did might be the best thing in my life. But at this point in life I’m less persuaded of that. If I waste a little time reading somebody’s essay, it doesn’t seem that bad.

    Do you ever take your mind off physics and math?

    My favorite pastime is tennis. I am a very average but enthusiastic tennis player.

    In contrast to Wheeler, it seems like your working style is to come to the insights through the calculations, rather than chasing a vague vision.

    In my career I’ve only been able to take small jumps. Relatively small jumps. What Wheeler was talking about was an enormous jump. And he does say at the beginning of the essay that he has no idea if this will take 10, 100 or 1,000 years.

    And he was talking about explaining how physics arises from information.

    Yes. The way he phrases it is broader: He wants to explain the meaning of existence. That was actually why I thought you were asking if I wanted to explain the meaning of existence.

    I see. Does he have any hypotheses?

    No. He only talks about things you shouldn’t do and things you should do in trying to arrive at a more fundamental description of physics.

    Do you have any ideas about the meaning of existence?

    No. [Laughs.]

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 12:07 pm on February 16, 2018 Permalink | Reply
    Tags: Discoveries Fuel Fight Over Universe’s First Light, Quanta Magazine

    From Quanta: “Discoveries Fuel Fight Over Universe’s First Light” 

    Quanta Magazine
    Quanta Magazine

    Light from the first galaxies clears the universe. ESO/L. Calçada.

    May 19, 2017 [Just put up in social media.]
    Ashley Yeager

    Not long after the Big Bang, all went dark. The hydrogen gas that pervaded the early universe would have snuffed out the light of the universe’s first stars and galaxies. For hundreds of millions of years, even a galaxy’s worth of stars — or unthinkably bright beacons such as those created by supermassive black holes — would have been rendered all but invisible.

    Eventually this fog burned off as high-energy ultraviolet light broke the atoms apart in a process called reionization. But the questions of exactly how this happened — which celestial objects powered the process and how many of them were needed — have consumed astronomers for decades.

    Now, in a series of studies, researchers have looked further into the early universe than ever before. They’ve used galaxies and dark matter as a giant cosmic lens to see some of the earliest galaxies known, illuminating how these galaxies could have dissipated the cosmic fog. In addition, an international team of astronomers has found dozens of supermassive black holes — each with the mass of millions of suns — lighting up the early universe. Another team has found evidence that supermassive black holes existed hundreds of millions of years before anyone thought possible. The new discoveries should make clear just how much black holes contributed to the reionization of the universe, even as they’ve opened up questions as to how such supermassive black holes were able to form so early in the universe’s history.

    First Light

    In the first years after the Big Bang, the universe was too hot to allow atoms to form. Protons and electrons flew about, scattering any light. Then after about 380,000 years, these protons and electrons cooled enough to form hydrogen atoms, which coalesced into stars and galaxies over the next few hundred million years.

    Starlight from these galaxies would have been bright and energetic, with lots of it falling in the ultraviolet part of the spectrum. As this light flew out into the universe, it ran into more hydrogen gas. These photons of light would break apart the hydrogen gas, contributing to reionization, but as they did so, the gas snuffed out the light.


    To find these stars, astronomers have to look for the non-ultraviolet part of their light and extrapolate from there. But this non-ultraviolet light is relatively dim and hard to see without help.

    A team led by Rachael Livermore, an astrophysicist at the University of Texas at Austin, found just the help needed in the form of a giant cosmic lens.

    Gravitational Lensing NASA/ESA

    These so-called gravitational lenses form when a galaxy cluster, filled with massive dark matter, bends space-time to focus and magnify any object on the other side of it. Livermore used this technique with images from the Hubble Space Telescope to spot extremely faint galaxies from as far back as 600 million years after the Big Bang — right in the thick of reionization.

    In a recent paper that appeared in The Astrophysical Journal, Livermore and colleagues also calculated that if you add galaxies like these to the previously known galaxies, then stars should be able to generate enough intense ultraviolet light to reionize the universe.

    Yet there’s a catch. Astronomers doing this work have to estimate how much of a star’s ultraviolet light escaped its home galaxy (which is full of light-blocking hydrogen gas) to go out into the wider universe and contribute to reionization writ large. That estimate — called the escape fraction — creates a huge uncertainty that Livermore is quick to acknowledge.

    In addition, not everyone believes Livermore’s results. Rychard Bouwens, an astrophysicist at Leiden University in the Netherlands, argues in a paper submitted to The Astrophysical Journal that Livermore didn’t properly subtract the light from the galaxy clusters that make up the gravitational lens. As a result, he said, the distant galaxies aren’t as faint as Livermore and colleagues claim, and astronomers have not found enough galaxies to conclude that stars ionized the universe.

    Supremacy of Supermassive Black Holes

    If stars couldn’t get the job done, perhaps supermassive black holes could. Beastly in size, up to a billion times the mass of the sun, supermassive black holes devour matter. They tug it toward them and heat it up, a process that emits lots of light and creates luminous objects that we call quasars. Because quasars emit way more ionizing radiation than stars do, they could in theory reionize the universe.

    The trick is finding enough quasars to do it. In a paper posted to the scientific preprint site arxiv.org last month, astronomers working with the Subaru Telescope announced the discovery of 33 quasars that are about a 10th as bright as ones identified before.

    NAOJ/Subaru Telescope at Mauna Kea, Hawaii, USA, 4,207 m (13,802 ft) above sea level

    With such faint quasars, the astronomers should be able to calculate just how much ultraviolet light these supermassive black holes emit, said Michael Strauss, an astrophysicist at Princeton University and a member of the team. The researchers haven’t done the analysis yet, but they expect to publish the results in the coming months.

    The oldest of these quasars dates back to around a billion years after the Big Bang, which seems about how long it would take ordinary black holes to devour enough matter to bulk up to supermassive status.

    This is why another recent discovery [The Astrophysical Journal] is so puzzling. A team of researchers led by Richard Ellis, an astronomer at the European Southern Observatory, was observing a bright, star-forming galaxy seen as it was just 600 million years after the Big Bang.

    ESO/NRAO/NAOJ ALMA Array on the Chajnantor plateau in the Atacama Desert of Chile, at 5,000 metres

    The galaxy’s spectrum — a catalog of light by wavelength — appeared to contain a signature of ionized nitrogen. It’s hard to ionize ordinary hydrogen, and even harder to ionize nitrogen. It requires higher-energy ultraviolet light than stars emit. So another strong source of ionizing radiation, possibly a supermassive black hole, had to exist at this time, Ellis said.

    One supermassive black hole at the center of an early star-forming galaxy might be an outlier. It doesn’t mean there were enough of them around to reionize the universe. So Ellis has started to look at other early galaxies. His team now has tentative evidence that supermassive black holes sat at the centers of other massive, star-forming galaxies in the early universe. Studying these objects could help clarify what reionized the universe and illuminate how supermassive black holes formed at all. “That is a very exciting possibility,” Ellis said.

    All this work is beginning to converge on a relatively straightforward explanation for what reionized the universe. The first population of young, hot stars probably started the process, then drove it forward for hundreds of millions of years. Over time, these stars died; the stars that replaced them weren’t quite so bright and hot. But by this point in cosmic history, supermassive black holes had enough time to grow and could start to take over. Researchers such as Steve Finkelstein, an astrophysicist at the University of Texas at Austin, are using the latest observational data and simulations of early galactic activity to test out the details of this scenario, such as how much stars and black holes contribute to the process at different times.

    His work — and all work involving the universe’s first billion years — will get a boost in the coming years after the 2018 launch of the James Webb Space Telescope, Hubble’s successor, which has been explicitly designed to find the first objects in the universe. Its findings will probably provoke many more questions, too.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 9:05 am on February 12, 2018 Permalink | Reply
    Tags: Gil Kalai, Quanta Magazine, The Argument Against Quantum Computers

    From Quanta: “The Argument Against Quantum Computers” 

    Quanta Magazine
    Quanta Magazine

    February 7, 2018
    Katia Moskvitch

    The mathematician Gil Kalai believes that quantum computers can’t possibly work, even in principle.

    David Vaaknin for Quanta Magazine.

    Sixteen years ago, on a cold February day at Yale University, a poster caught Gil Kalai’s eye. It advertised a series of lectures by Michel Devoret, a well-known expert on experimental efforts in quantum computing. The talks promised to explore the question “Quantum Computer: Miracle or Mirage?” Kalai expected a vigorous discussion of the pros and cons of quantum computing. Instead, he recalled, “the skeptical direction was a little bit neglected.” He set out to explore that skeptical view himself.

    Today, Kalai, a mathematician at Hebrew University in Jerusalem, is one of the most prominent of a loose group of mathematicians, physicists and computer scientists arguing that quantum computing, for all its theoretical promise, is something of a mirage. Some argue that there exist good theoretical reasons why the innards of a quantum computer — the “qubits” — will never be able to consistently perform the complex choreography asked of them. Others say that the machines will never work in practice, or that if they are built, their advantages won’t be great enough to make up for the expense.

    Kalai has approached the issue from the perspective of a mathematician and computer scientist. He has analyzed the issue by looking at computational complexity and, critically, the issue of noise. All physical systems are noisy, he argues, and qubits kept in highly sensitive “superpositions” will inevitably be corrupted by any interaction with the outside world. Getting the noise down isn’t just a matter of engineering, he says. Doing so would violate certain fundamental theorems of computation.

    Kalai knows that his is a minority view. Companies like IBM, Intel and Microsoft have invested heavily in quantum computing; venture capitalists are funding quantum computing startups (such as Quantum Circuits, a firm set up by Devoret and two of his Yale colleagues). Other nations — most notably China — are pouring billions of dollars into the sector.

    Quanta Magazine recently spoke with Kalai about quantum computing, noise and the possibility that a decade of work will be proven wrong within a matter of weeks. A condensed and edited version of that conversation follows.

    When did you first have doubts about quantum computers?

    At first, I was quite enthusiastic, like everybody else. But at a lecture in 2002 by Michel Devoret called “Quantum Computer: Miracle or Mirage,” I had a feeling that the skeptical direction was a little bit neglected. Unlike the title, the talk was very much the usual rhetoric about how wonderful quantum computing is. The side of the mirage was not well-presented.

    And so you began to research the mirage.

    Only in 2005 did I decide to work on it myself. I saw a scientific opportunity and some possible connection with my earlier work from 1999 with Itai Benjamini and Oded Schramm on concepts called noise sensitivity and noise stability.

    What do you mean by “noise”?

    By noise I mean the errors in a process, and sensitivity to noise is a measure of how likely the noise — the errors — will affect the outcome of this process. Quantum computing is like any similar process in nature — noisy, with random fluctuations and errors. When a quantum computer executes an action, in every computer cycle there is some probability that a qubit will get corrupted.

    And so this corruption is the key problem?

    We need what’s known as quantum error correction. But this will require 100 or even 500 “physical” qubits to represent a single “logical” qubit of very high quality. And then to build and use such quantum error-correcting codes, the amount of noise has to go below a certain level, or threshold.

    To determine the required threshold mathematically, we must effectively model the noise. I thought it would be an interesting challenge.
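
    [As a rough intuition for why a threshold matters, here is a minimal classical sketch in Python, not a quantum error-correcting code and not Kalai’s model: a repetition code with majority voting, using invented error rates. Below a threshold, adding more physical bits per logical bit drives the logical error rate down; above it, redundancy makes things worse. Real quantum codes work very differently and their thresholds are far smaller.]

        from math import comb

        def logical_error(p, n):
            # A majority vote over n noisy copies fails when more than half are flipped.
            return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

        for p in (0.01, 0.2, 0.6):   # invented physical error rates
            print(p, [round(logical_error(p, n), 5) for n in (3, 5, 7)])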

    What exactly did you do?

    I tried to understand what happens if the errors due to noise are correlated — or connected. There is a Hebrew proverb that says that trouble comes in clusters. In English you would say: When it rains, it pours. In other words, interacting systems will have a tendency for errors to be correlated. There will be a probability that errors will affect many qubits all at once.

    So over the past decade or so, I’ve been studying what kind of correlations emerge from complicated quantum computations and what kind of correlations will cause a quantum computer to fail.

    In my earlier work on noise we used a mathematical approach called Fourier analysis, which says that it’s possible to break down complex waveforms into simpler components. We found that if the frequencies of these broken-up waves are low, the process is stable, and if they are high, the process is prone to error.

    That previous work brought me to my more recent paper that I wrote in 2014 with a Hebrew University computer scientist, Guy Kindler. Our calculations suggest that the noise in a quantum computer will kill all the high-frequency waves in the Fourier decomposition. If you think about the computational process as a Beethoven symphony, the noise will allow us to hear only the basses, but not the cellos, violas and violins.

    These results also give good reasons to think that noise levels cannot be sufficiently reduced; they will still be much higher than what is needed to demonstrate quantum supremacy and quantum error correction.
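
    [To give a flavor of the classical mathematics behind the Fourier picture described above, here is a minimal Python sketch of noise stability for Boolean functions, the setting of the earlier Benjamini–Kalai–Schramm work; it illustrates the general principle, not the Kalai–Kindler quantum calculation, and the functions and noise levels are toy choices. A function whose Fourier weight sits at high degree (parity) loses its signal under noise much faster than one concentrated at low degree (majority).]

        from itertools import product
        from math import prod

        def fourier_coefficients(f, n):
            # Exact coefficients: f_hat(S) is the average of f(x) * prod_{i in S} x_i over x in {-1,+1}^n.
            inputs = list(product([-1, 1], repeat=n))
            return {S: sum(f(x) * prod(x[i] for i in range(n) if S[i]) for x in inputs) / len(inputs)
                    for S in product([0, 1], repeat=n)}

        def noise_stability(f, n, rho):
            # Stab_rho[f] = sum over S of rho**|S| * f_hat(S)**2; high-degree terms are damped most.
            return sum(rho ** sum(S) * c * c for S, c in fourier_coefficients(f, n).items())

        majority = lambda x: 1 if sum(x) > 0 else -1   # Fourier weight concentrated at low degree
        parity = lambda x: x[0] * x[1] * x[2]          # all Fourier weight at the top degree

        for rho in (0.9, 0.7, 0.5):                    # rho close to 1 means little noise
            print(rho, round(noise_stability(majority, 3, rho), 3), round(noise_stability(parity, 3, rho), 3))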

    Why can’t we push the noise level below this threshold?

    Many researchers believe that we can go beyond the threshold, and that constructing a quantum computer is merely an engineering challenge of lowering the noise below that threshold. However, our first result shows that the noise level cannot be reduced, because doing so will contradict an insight from the theory of computing about the power of primitive computational devices. Noisy quantum computers in the small and intermediate scale deliver primitive computational power. They are too primitive to reach “quantum supremacy” — and if quantum supremacy is not possible, then creating quantum error-correcting codes, which is harder, is also impossible.

    What do your critics say to that?

    Critics point out that my work with Kindler deals with a restricted form of quantum computing and argue that our model for noise is not physical, but a mathematical simplification of an actual physical situation. I’m quite certain that what we have demonstrated for our simplified model is a real and general phenomenon.

    My critics also point to two things that they find strange in my analysis: The first is my attempt to draw conclusions about engineering of physical devices from considerations about computation. The second is drawing conclusions about small-scale quantum systems from insights of the theory of computation that are usually applied to large systems. I agree that these are unusual and perhaps even strange lines of analysis.

    And finally, they argue that these engineering difficulties are not fundamental barriers, and that with sufficient hard work and resources, the noise can be driven down to as close to zero as needed. But I think that the effort required to obtain a low enough error level for any implementation of universal quantum circuits increases exponentially with the number of qubits, and thus, quantum computers are not possible.

    How can you be certain?

    I am pretty certain, while a little nervous to be proven wrong. Our results state that noise will corrupt the computation, and that the noisy outcomes will be very easy to simulate on a classical computer. This prediction can already be tested; you don’t even need 50 qubits for that. I believe that 10 to 20 qubits will suffice. For quantum computers of the kind Google and IBM are building, when you run, as they plan to do, certain computational processes, they expect robust outcomes that are increasingly hard to simulate on a classical computer. Well, I expect very different outcomes. So I don’t need to be certain; I can simply wait and see.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 10:43 am on February 7, 2018 Permalink | Reply
    Tags: Quanta Magazine

    From The Atlantic Magazine: “The Big Bang May Have Been One of Many” 

    Atlantic Magazine

    The Atlantic Magazine

    Feb 6, 2018
    Natalie Wolchover

    davidope / Quanta Magazine

    Our universe could be expanding and contracting eternally.

    Humans have always entertained two basic theories about the origin of the universe. “In one of them, the universe emerges in a single instant of creation (as in the Jewish-Christian and the Brazilian Carajás cosmogonies),” the cosmologists Mario Novello and Santiago Perez Bergliaffa noted in 2008. In the other, “the universe is eternal, consisting of an infinite series of cycles (as in the cosmogonies of the Babylonians and Egyptians).” The division in modern cosmology “somehow parallels that of the cosmogonic myths,” Novello and Perez Bergliaffa wrote.

    In recent decades, it hasn’t seemed like much of a contest. The Big Bang theory, standard stuff of textbooks and television shows, enjoys strong support among today’s cosmologists. The rival eternal-universe picture had the edge a century ago, but it lost ground as astronomers observed that the cosmos is expanding and that it was small and simple about 14 billion years ago. In the most popular modern version of the theory, the Big Bang began with an episode called “cosmic inflation”—a burst of exponential expansion during which an infinitesimal speck of space-time ballooned into a smooth, flat, macroscopic cosmos, which expanded more gently thereafter.

    With a single initial ingredient (the “inflaton field”), inflationary models reproduce many broad-brush features of the cosmos today. But as an origin story, inflation is lacking; it raises questions about what preceded it and where that initial, inflaton-laden speck came from. Undeterred, many theorists think the inflaton field must fit naturally into a more complete, though still unknown, theory of time’s origin.

    But in the past few years, a growing number of cosmologists have cautiously revisited the alternative. They say the Big Bang might instead have been a Big Bounce. Some cosmologists favor a picture in which the universe expands and contracts cyclically like a lung, bouncing each time it shrinks to a certain size, while others propose that the cosmos only bounced once—that it had been contracting, before the bounce, since the infinite past, and that it will expand forever after. In either model, time continues into the past and future without end.

    With modern science, there’s hope of settling this ancient debate. In the years ahead, telescopes could find definitive evidence for cosmic inflation. During the primordial growth spurt—if it happened—quantum ripples in the fabric of space-time would have become stretched and later imprinted as subtle swirls in the polarization of ancient light called the cosmic microwave background [CMB].

    CMB per ESA/Planck


    Current and future telescope experiments are hunting for these swirls. If they aren’t seen in the next couple of decades, this won’t entirely disprove inflation (the telltale swirls could simply be too faint to make out), but it will strengthen the case for bounce cosmology, which doesn’t predict the swirl pattern.

    Already, several groups are making progress at once. Most significantly, in the last year, physicists have come up with two new ways that bounces could conceivably occur. One of the models, described in a paper that will appear in the Journal of Cosmology and Astroparticle Physics, comes from Anna Ijjas of Columbia University, extending earlier work with her former adviser, the Princeton University professor and high-profile bounce cosmologist Paul Steinhardt. More surprisingly, the other new bounce solution, accepted for publication in Physical Review D, was proposed by Peter Graham, David Kaplan, and Surjeet Rajendran, a well-known trio of collaborators who mainly focus on particle-physics questions and have no previous connection to the bounce-cosmology community. It’s a noteworthy development in a field that’s highly polarized on the bang-vs.-bounce question.

    The question gained renewed significance in 2001, when Steinhardt and three other cosmologists argued that a period of slow contraction in the history of the universe could explain its exceptional smoothness and flatness, as witnessed today, even after a bounce—with no need for a period of inflation.

    The universe’s impeccable plainness, the fact that no region of sky contains significantly more matter than any other and that space is breathtakingly flat as far as telescopes can see, is a mystery. To match its present uniformity, experts infer that the cosmos, when it was one centimeter across, must have had the same density everywhere to within one part in 100,000. But as it grew from an even smaller size, matter and energy ought to have immediately clumped together and contorted space-time. Why don’t our telescopes see a universe wrecked by gravity?

    “Inflation was motivated by the idea that that was crazy to have to assume the universe came out so smooth and not curved,” says the cosmologist Neil Turok, the director of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, and a coauthor of the 2001 paper [Physical Review D] on cosmic contraction with Steinhardt, Justin Khoury, and Burt Ovrut.

    In the inflation scenario, the centimeter-size region results from the exponential expansion of a much smaller region—an initial speck measuring no more than a trillionth of a trillionth of a centimeter across. As long as that speck was infused with an inflaton field that was smooth and flat, meaning its energy concentration didn’t fluctuate across time or space, the speck would have inflated into a huge, smooth universe like ours. Raman Sundrum, a theoretical physicist at the University of Maryland, says the thing he appreciates about inflation is that “it has a kind of fault tolerance built in.” If, during this explosive growth phase, there was a buildup of energy that bent space-time in a certain place, the concentration would have quickly inflated away. “You make small changes against what you see in the data and you see the return to the behavior that the data suggests,” Sundrum says.

    However, where exactly that infinitesimal speck came from, and why it came out so smooth and flat itself to begin with, no one knows. Theorists have found many possible ways to embed the inflaton field into string theory, a candidate for the underlying quantum theory of gravity. So far, there’s no evidence for or against these ideas.

    Cosmic inflation also has a controversial consequence. The theory, which was pioneered in the 1980s by Alan Guth, Andrei Linde, Aleksei Starobinsky, and (of all people) Steinhardt, almost automatically leads to the hypothesis that our universe is a random bubble in an infinite, frothing multiverse sea. Once inflation starts, calculations suggest that it keeps going forever, only stopping in local pockets that then blossom into bubble universes like ours. The possibility of an eternally inflating multiverse suggests that our particular bubble might never be fully understandable on its own terms, since everything that can possibly happen in a multiverse happens infinitely many times. The subject evokes gut-level disagreement among experts. Many have reconciled themselves to the idea that our universe could be just one of many; Steinhardt calls the multiverse “hogwash.”

    This sentiment partly motivated his and other researchers’ about-face on bounces. “The bouncing models don’t have a period of inflation,” Turok says. Instead, they add a period of contraction before a Big Bounce to explain our uniform universe. “Just as the gas in the room you’re sitting in is completely uniform because the air molecules are banging around and equilibrating,” he says, “if the universe was quite big and contracting slowly, that gives plenty of time for the universe to smooth itself out.”

    Although the first contracting-universe models were convoluted and flawed, many researchers became convinced of the basic idea that slow contraction can explain many features of our expanding universe. “Then the bottleneck became literally the bottleneck—the bounce itself,” Steinhardt says. As Ijjas puts it, “The bounce has been the showstopper for these scenarios. People would agree that it’s very interesting if you can do a contraction phase, but not if you can’t get to an expansion phase.”

    Bouncing isn’t easy. In the 1960s, the British physicists Roger Penrose and Stephen Hawking proved a set of so-called “singularity theorems” showing that, under very general conditions, contracting matter and energy will unavoidably crunch into an immeasurably dense point called a singularity. These theorems make it hard to imagine how a contracting universe in which space-time, matter, and energy are all rushing inward could possibly avoid collapsing all the way down to a singularity—a point where Albert Einstein’s classical theory of gravity and space-time breaks down and the unknown quantum-gravity theory rules. Why shouldn’t a contracting universe share the same fate as a massive star, which dies by shrinking to the singular center of a black hole?

    Both of the newly proposed bounce models exploit loopholes in the singularity theorems—ones that, for many years, seemed like dead ends. Bounce cosmologists have long recognized that bounces might be possible if the universe contained a substance with negative energy (or other sources of negative pressure), which would counteract gravity and essentially push everything apart. They’ve been trying to exploit this loophole since the early 2000s, but they always found that adding negative-energy ingredients made their models of the universe unstable, because positive- and negative-energy quantum fluctuations could spontaneously arise together, unchecked, out of the zero-energy vacuum of space. In 2016, the Russian cosmologist Valery Rubakov and colleagues even proved a “no-go” [JCAP] theorem that seemed to rule out a huge class of bounce mechanisms on the grounds that they caused these so-called “ghost” instabilities.

    Then Ijjas found a bounce mechanism that evades the no-go theorem. The key ingredient in her model is a simple entity called a “scalar field,” which, according to the idea, would have kicked into gear as the universe contracted and energy became highly concentrated. The scalar field would have braided itself into the gravitational field in a way that exerted negative pressure on the universe, reversing the contraction and driving space-time apart—without destabilizing everything. Ijjas’ paper “is essentially the best attempt at getting rid of all possible instabilities and making a really stable model with this special type of matter,” says Jean-Luc Lehners, a theoretical cosmologist at the Max Planck Institute for Gravitational Physics in Germany who has also worked on bounce proposals.

    What’s especially interesting about the two new bounce models is that they are “non-singular,” meaning the contracting universe bounces and starts expanding again before ever shrinking to a point. These bounces can therefore be fully described by the classical laws of gravity, requiring no speculations about gravity’s quantum nature.

    Graham, Kaplan, and Rajendran, of Stanford University, Johns Hopkins University and UC Berkeley, respectively, reported their non-singular bounce idea on the scientific preprint site ArXiv.org in September 2017. They found their way to it after wondering whether a previous contraction phase in the history of the universe could be used to explain the value of the cosmological constant—a mystifyingly tiny number that defines the amount of dark energy infused in the space-time fabric, energy that drives the accelerating expansion of the universe.

    In working out the hardest part—the bounce—the trio exploited a second, largely forgotten loophole in the singularity theorems. They took inspiration from a characteristically strange model of the universe proposed by the logician Kurt Gödel in 1949, when he and Einstein were walking companions and colleagues at the Institute for Advanced Study in Princeton, New Jersey. Gödel used the laws of general relativity to construct the theory of a rotating universe, whose spinning keeps it from gravitationally collapsing in much the same way that Earth’s orbit prevents it from falling into the sun. Gödel especially liked the fact that his rotating universe permitted “closed time-like curves,” essentially loops in time, which raised all sorts of Gödelian riddles. To his dying day, he eagerly awaited evidence that the universe really is rotating in the manner of his model. Researchers now know it isn’t; otherwise, the cosmos would exhibit alignments and preferred directions. But Graham and company wondered about small, curled-up spatial dimensions that might exist in space, such as the six extra dimensions postulated by string theory. Could a contracting universe spin in those directions?

    Imagine there’s just one of these curled-up extra dimensions, a tiny circle found at every point in space. As Graham puts it, “At each point in space there’s an extra direction you can go in, a fourth spatial direction, but you can only go a tiny little distance and then you come back to where you started.” If there are at least three extra compact dimensions, then, as the universe contracts, matter and energy can start spinning inside them, and the dimensions themselves will spin with the matter and energy. The vorticity in the extra dimensions can suddenly initiate a bounce. “All that stuff that would have been crunching into a singularity, because it’s spinning in the extra dimensions, it misses — sort of like a gravitational slingshot,” Graham says. “All the stuff should have been coming to a single point, but instead it misses and flies back out again.”

    The paper has attracted attention beyond the usual circle of bounce cosmologists. Sean Carroll, a theoretical physicist at the California Institute of Technology, is skeptical but called the idea “very clever.” He says it’s important to develop alternatives to the conventional inflation story, if only to see how much better inflation appears by comparison—especially when next-generation telescopes come online in the early 2020s looking for the telltale swirl pattern in the sky caused by inflation. “Even though I think inflation has a good chance of being right, I wish there were more competitors,” Carroll says. Sundrum, the Maryland physicist, feels similarly. “There are some questions I consider so important that even if you have only a 5 percent chance of succeeding, you should throw everything you have at it and work on them,” he says. “And that’s how I feel about this paper.”

    As Graham, Kaplan, and Rajendran explore their bounce and its possible experimental signatures, the next step for Ijjas and Steinhardt, working with Frans Pretorius of Princeton, is to develop computer simulations. (Their collaboration is supported by the Simons Foundation, which also funds Quanta Magazine.) Both bounce mechanisms also need to be integrated into more complete, stable cosmological models that would describe the entire evolutionary history of the universe.

    Beyond these non-singular bounce solutions, other researchers are speculating about what kind of bounce might occur when a universe contracts all the way to a singularity—a bounce orchestrated by the unknown quantum laws of gravity, which replace the usual understanding of space and time at extremely high energies. In forthcoming work, Turok and collaborators plan to propose a model in which the universe expands symmetrically into the past and future away from a central, singular bounce. Turok contends that the existence of this two-lobed universe is equivalent to the spontaneous creation of electron-positron pairs, which constantly pop in and out of the vacuum. “Richard Feynman pointed out that you can look at the positron as an electron going backward in time,” he says. “They’re two particles, but they’re really the same; at a certain moment in time they merge and annihilate.” He added, “The idea is a very, very deep one, and most likely the Big Bang will turn out to be similar, where a universe and its anti-universe were drawn out of nothing, if you like, by the presence of matter.”

    It remains to be seen whether this universe/anti-universe bounce model can accommodate all observations of the cosmos, but Turok likes how simple it is. Most cosmological models are far too complicated in his view. The universe “looks extremely ordered and symmetrical and simple,” he says. “That’s very exciting for theorists, because it tells us there may be a simple—even if hard-to-discover—theory waiting to be discovered, which might explain the most paradoxical features of the universe.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

  • richardmitnick 1:55 pm on February 3, 2018 Permalink | Reply
    Tags: Job One for Quantum Computers: Boost Artificial Intelligence, Quanta Magazine

    From Quanta: “Job One for Quantum Computers: Boost Artificial Intelligence” 

    Quanta Magazine
    Quanta Magazine

    January 29, 2018
    George Musser

    Josef Bsharah for Quanta Magazine.

    In the early ’90s, Elizabeth Behrman, a physics professor at Wichita State University, began working to combine quantum physics with artificial intelligence — in particular, the then-maverick technology of neural networks. Most people thought she was mixing oil and water. “I had a heck of a time getting published,” she recalled. “The neural-network journals would say, ‘What is this quantum mechanics?’ and the physics journals would say, ‘What is this neural-network garbage?’”

    Today the mashup of the two seems the most natural thing in the world. Neural networks and other machine-learning systems have become the most disruptive technology of the 21st century. They out-human humans, beating us not just at tasks most of us were never really good at, such as chess and data-mining, but also at the very types of things our brains evolved for, such as recognizing faces, translating languages and negotiating four-way stops. These systems have been made possible by vast computing power, so it was inevitable that tech companies would seek out computers that were not just bigger, but a new class of machine altogether.

    Quantum computers, after decades of research, have nearly enough oomph to perform calculations beyond any other computer on Earth. Their killer app is usually said to be factoring large numbers, which are the key to modern encryption. That’s still another decade off, at least. But even today’s rudimentary quantum processors are uncannily matched to the needs of machine learning. They manipulate vast arrays of data in a single step, pick out subtle patterns that classical computers are blind to, and don’t choke on incomplete or uncertain data. “There is a natural combination between the intrinsic statistical nature of quantum computing … and machine learning,” said Johannes Otterbach, a physicist at Rigetti Computing, a quantum-computer company in Berkeley, California.

    If anything, the pendulum has now swung to the other extreme. Google, Microsoft, IBM and other tech giants are pouring money into quantum machine learning, and a startup incubator at the University of Toronto is devoted to it. “‘Machine learning’ is becoming a buzzword,” said Jacob Biamonte, a quantum physicist at the Skolkovo Institute of Science and Technology in Moscow. “When you mix that with ‘quantum,’ it becomes a mega-buzzword.”

    Yet nothing with the word “quantum” in it is ever quite what it seems. Although you might think a quantum machine-learning system should be powerful, it suffers from a kind of locked-in syndrome. It operates on quantum states, not on human-readable data, and translating between the two can negate its apparent advantages. It’s like an iPhone X that, for all its impressive specs, ends up being just as slow as your old phone, because your network is as awful as ever. For a few special cases, physicists can overcome this input-output bottleneck, but whether those cases arise in practical machine-learning tasks is still unknown. “We don’t have clear answers yet,” said Scott Aaronson, a computer scientist at the University of Texas, Austin, who is always the voice of sobriety when it comes to quantum computing. “People have often been very cavalier about whether these algorithms give a speedup.”

    Quantum Neurons

    The main job of a neural network, be it classical or quantum, is to recognize patterns. Inspired by the human brain, it is a grid of basic computing units — the “neurons.” Each can be as simple as an on-off device. A neuron monitors the output of multiple other neurons, as if taking a vote, and switches on if enough of them are on. Typically, the neurons are arranged in layers. An initial layer accepts input (such as image pixels), intermediate layers create various combinations of the input (representing structures such as edges and geometric shapes) and a final layer produces output (a high-level description of the image content).

    Lucy Reading-Ikkanda/Quanta Magazine

    Crucially, the wiring is not fixed in advance, but adapts in a process of trial and error. The network might be fed images labeled “kitten” or “puppy.” For each image, it assigns a label, checks whether it was right, and tweaks the neuronal connections if not. Its guesses are random at first, but get better; after perhaps 10,000 examples, it knows its pets. A serious neural network can have a billion interconnections, all of which need to be tuned.
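
    [As a concrete illustration of the trial-and-error tuning described above, here is a minimal Python sketch of a single threshold neuron trained with the classic perceptron rule. The two-number “features” and labels are invented, and real networks have many layers and vastly more connections.]

        def neuron(weights, bias, x):
            # The neuron "votes": it switches on (+1) if the weighted inputs cross its threshold.
            return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else -1

        def train(samples, epochs=20, lr=0.1):
            w, b = [0.0] * len(samples[0][0]), 0.0
            for _ in range(epochs):
                for x, label in samples:
                    if neuron(w, b, x) != label:          # wrong guess: tweak the connections
                        w = [wi + lr * label * xi for wi, xi in zip(w, x)]
                        b += lr * label
            return w, b

        # Invented two-feature examples standing in for "kitten" (+1) vs. "puppy" (-1).
        data = [([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.1, 0.8], -1), ([0.2, 1.0], -1)]
        w, b = train(data)
        print([neuron(w, b, x) for x, _ in data])         # the tuned network reproduces the labels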

    On a classical computer, all these interconnections are represented by a ginormous matrix of numbers, and running the network means doing matrix algebra. Conventionally, these matrix operations are outsourced to a specialized chip such as a graphics processing unit. But nothing does matrices like a quantum computer. “Manipulation of large matrices and large vectors are exponentially faster on a quantum computer,” said Seth Lloyd, a physicist at the Massachusetts Institute of Technology and a quantum-computing pioneer.

    For this task, quantum computers are able to take advantage of the exponential nature of a quantum system. The vast bulk of a quantum system’s information storage capacity resides not in its individual data units — its qubits, the quantum counterpart of classical computer bits — but in the collective properties of those qubits. Two qubits have four joint states: both on, both off, on/off, and off/on. Each has a certain weighting, or “amplitude,” that can represent a neuron. If you add a third qubit, you can represent eight neurons; a fourth, 16. The capacity of the machine grows exponentially. In effect, the neurons are smeared out over the entire system. When you act on a state of four qubits, you are processing 16 numbers at a stroke, whereas a classical computer would have to go through those numbers one by one.
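
    [Here is a minimal sketch of the bookkeeping described above, written as a tiny classical simulator in Python with NumPy assumed available, not as real quantum hardware: four qubits are described by 16 amplitudes, and a single one-qubit gate touches every one of them.]

        import numpy as np

        n = 4
        state = np.zeros(2 ** n, dtype=complex)        # 16 amplitudes for 4 qubits
        state[0] = 1.0                                 # all qubits "off"

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate acting on one qubit

        def apply_single_qubit_gate(state, gate, target, n):
            # Reshape the 2**n vector so the target qubit is its own axis, then mix that axis.
            psi = state.reshape([2] * n)
            psi = np.moveaxis(psi, target, 0)
            psi = np.tensordot(gate, psi, axes=1)
            psi = np.moveaxis(psi, 0, target)
            return psi.reshape(-1)

        for q in range(n):                             # put every qubit into superposition
            state = apply_single_qubit_gate(state, H, q, n)

        print(len(state))                                      # 16 amplitudes updated by 4 gates
        print(np.allclose(np.abs(state) ** 2, 1 / 16))         # a uniform superposition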

    Lloyd estimates that 60 qubits would be enough to encode an amount of data equivalent to that produced by humanity in a year, and 300 could carry the classical information content of the observable universe. (The biggest quantum computers at the moment, built by IBM, Intel and Google, have 50-ish qubits.) And that’s assuming each amplitude is just a single classical bit. In fact, amplitudes are continuous quantities (and, indeed, complex numbers) and, for a plausible experimental precision, one might store as many as 15 bits, Aaronson said.

    But a quantum computer’s ability to store information compactly doesn’t make it faster. You need to be able to use those qubits. In 2008, Lloyd, the physicist Aram Harrow of MIT and Avinatan Hassidim, a computer scientist at Bar-Ilan University in Israel, showed how to do the crucial algebraic operation of inverting a matrix. They broke it down into a sequence of logic operations that can be executed on a quantum computer. Their algorithm works for a huge variety of machine-learning techniques. And it doesn’t require nearly as many algorithmic steps as, say, factoring a large number does. A computer could zip through a classification task before noise — the big limiting factor with today’s technology — has a chance to foul it up. “You might have a quantum advantage before you have a fully universal, fault-tolerant quantum computer,” said Kristan Temme of IBM’s Thomas J. Watson Research Center.

    Let Nature Solve the Problem

    So far, though, machine learning based on quantum matrix algebra has been demonstrated only on machines with just four qubits. Most of the experimental successes of quantum machine learning to date have taken a different approach, in which the quantum system does not merely simulate the network; it is the network. Each qubit stands for one neuron. Though lacking the power of exponentiation, a device like this can avail itself of other features of quantum physics.

    The largest such device, with some 2,000 qubits, is the quantum processor manufactured by D-Wave Systems, based near Vancouver, British Columbia. It is not what most people think of as a computer. Instead of starting with some input data, executing a series of operations and displaying the output, it works by finding internal consistency. Each of its qubits is a superconducting electric loop that acts as a tiny electromagnet oriented up, down, or up and down — a superposition. Qubits are “wired” together by allowing them to interact magnetically.

    Processors made by D-Wave Systems are being used for machine learning applications. Mwjohnson0.

    To run the system, you first impose a horizontal magnetic field, which initializes the qubits to an equal superposition of up and down — the equivalent of a blank slate. There are a couple of ways to enter data. In some cases, you fix a layer of qubits to the desired input values; more often, you incorporate the input into the strength of the interactions. Then you let the qubits interact. Some seek to align in the same direction, some in the opposite direction, and under the influence of the horizontal field, they flip to their preferred orientation. In so doing, they might trigger other qubits to flip. Initially that happens a lot, since so many of them are misaligned. Over time, though, they settle down, and you can turn off the horizontal field to lock them in place. At that point, the qubits are in a pattern of up and down that ensures the output follows from the input.

    It’s not at all obvious what the final arrangement of qubits will be, and that’s the point. The system, just by doing what comes naturally, is solving a problem that an ordinary computer would struggle with. “We don’t need an algorithm,” explained Hidetoshi Nishimori, a physicist at the Tokyo Institute of Technology who developed the principles on which D-Wave machines operate. “It’s completely different from conventional programming. Nature solves the problem.”

    The qubit-flipping is driven by quantum tunneling, a natural tendency that quantum systems have to seek out their optimal configuration, rather than settle for second best. You could build a classical network that worked on analogous principles, using random jiggling rather than tunneling to get bits to flip, and in some cases it would actually work better. But, interestingly, for the types of problems that arise in machine learning, the quantum network seems to reach the optimum faster.
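
    [Here is a minimal Python sketch of the classical cousin mentioned above: an Ising-style network whose bits are “jiggled” by simulated annealing rather than by quantum tunneling. The couplings, temperature and cooling schedule are arbitrary illustration values, not D-Wave’s.]

        import math, random

        couplings = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -0.5}   # invented pairwise interactions
        n = 3

        def energy(spins):
            # Lower energy when each coupled pair takes the relative orientation its coupling prefers.
            return -sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

        spins = [random.choice([-1, 1]) for _ in range(n)]
        T = 2.0
        while T > 0.01:                                        # cool slowly so the system can settle
            i = random.randrange(n)
            flipped = spins[:i] + [-spins[i]] + spins[i + 1:]
            dE = energy(flipped) - energy(spins)
            if dE < 0 or random.random() < math.exp(-dE / T):  # accept good flips, and sometimes bad ones
                spins = flipped
            T *= 0.999
        print(spins, energy(spins))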

    The D-Wave machine has had its detractors. It is extremely noisy and, in its current incarnation, can perform only a limited menu of operations. Machine-learning algorithms, though, are noise-tolerant by their very nature. They’re useful precisely because they can make sense of a messy reality, sorting kittens from puppies against a backdrop of red herrings. “Neural networks are famously robust to noise,” Behrman said.

    In 2009 a team led by Hartmut Neven, a computer scientist at Google who pioneered augmented reality — he co-founded the Google Glass project — and then took up quantum information processing, showed how an early D-Wave machine could do a respectable machine-learning task. They used it as, essentially, a single-layer neural network that sorted images into two classes: “car” or “no car” in a library of 20,000 street scenes. The machine had only 52 working qubits, far too few to take in a whole image. (Remember: the D-Wave machine is of a very different type than the state-of-the-art 50-qubit systems coming online in 2018.) So Neven’s team combined the machine with a classical computer, which analyzed various statistical quantities of the images and calculated how sensitive these quantities were to the presence of a car — usually not very, but at least better than a coin flip. Some combination of these quantities could, together, spot a car reliably, but it wasn’t obvious which. It was the network’s job to find out.

    The team assigned a qubit to each quantity. If that qubit settled into a value of 1, it flagged the corresponding quantity as useful; 0 meant don’t bother. The qubits’ magnetic interactions encoded the demands of the problem, such as including only the most discriminating quantities, so as to keep the final selection as compact as possible. The result was able to spot a car.
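
    [A minimal Python sketch of that selection problem as described, solved here by brute force rather than on a D-Wave machine; the usefulness scores, redundancy penalties and weights are invented for illustration.]

        from itertools import product

        usefulness = [0.8, 0.6, 0.3, 0.7]                  # invented: how well each quantity flags a car
        redundancy = {(0, 3): 0.5, (1, 2): 0.4}            # invented: pairs carrying overlapping information

        def cost(bits):
            picked = [i for i, b in enumerate(bits) if b]  # bit = 1 means "keep this quantity"
            score = sum(usefulness[i] for i in picked)
            penalty = sum(r for (i, j), r in redundancy.items() if i in picked and j in picked)
            penalty += 0.2 * len(picked)                   # keep the final selection compact
            return -(score - penalty)                      # lower cost = better selection

        best = min(product([0, 1], repeat=len(usefulness)), key=cost)
        print(best, -cost(best))                           # which candidate quantities to keep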

    Last year a group led by Maria Spiropulu, a particle physicist at the California Institute of Technology, and Daniel Lidar, a physicist at USC, applied the algorithm to a practical physics problem: classifying proton collisions as “Higgs boson” or “no Higgs boson.” Limiting their attention to collisions that spat out photons, they used basic particle theory to predict which photon properties might betray the fleeting existence of the Higgs, such as momentum in excess of some threshold. They considered eight such properties and 28 combinations thereof, for a total of 36 candidate signals, and let a late-model D-Wave at the University of Southern California find the optimal selection. It identified [Nature] 16 of the variables as useful and three as the absolute best. The quantum machine needed less data than standard procedures to perform an accurate identification. “Provided that the training set was small, then the quantum approach did provide an accuracy advantage over traditional methods used in the high-energy physics community,” Lidar said.

    Maria Spiropulu, a physicist at the California Institute of Technology, used quantum machine learning to find Higgs bosons. Courtesy of Maria Spiropulu

    In December, Rigetti demonstrated a way to automatically group objects using a general-purpose quantum computer with 19 qubits. The researchers did the equivalent of feeding the machine a list of cities and the distances between them, and asked it to sort the cities into two geographic regions. What makes this problem hard is that the designation of one city depends on the designation of all the others, so you have to solve the whole system at once.

    The Rigetti team effectively assigned each city a qubit, indicating which group it was assigned to. Through the interactions of the qubits (which, in Rigetti’s system, are electrical rather than magnetic), each pair of qubits sought to take on opposite values — their energy was minimized when they did so. Clearly, for any system with more than two qubits, some pairs of qubits had to consent to be assigned to the same group. Nearby cities assented more readily since the energetic cost for them to be in the same group was lower than for more-distant cities.

    To drive the system to its lowest energy, the Rigetti team took an approach similar in some ways to the D-Wave annealer. They initialized the qubits to a superposition of all possible cluster assignments. They allowed qubits to interact briefly, which biased them toward assuming the same or opposite values. Then they applied the analogue of a horizontal magnetic field, allowing the qubits to flip if they were so inclined, pushing the system a little way toward its lowest-energy state. They repeated this two-step process — interact then flip — until the system minimized its energy, thus sorting the cities into two distinct regions.
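
    That two-step loop can be written down explicitly. The sketch below simulates it with ordinary linear algebra for five invented “cities”: each possible assignment is scored by how much it keeps distant cities in the same group, the “interact” step multiplies each assignment’s amplitude by a phase set by that score, and the “flip” step rotates every qubit a little. The angles in the schedule are made up and a real run would tune them; this is an illustration of the procedure, not Rigetti’s code.

        import numpy as np
        from itertools import combinations
        from functools import reduce

        n = 5
        coords = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5]], float)  # two obvious clusters
        dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

        # Energy of each of the 2^n assignments: pairs of cities placed in the same
        # group cost an amount that grows with the distance between them.
        energies = np.array([
            sum(dist[i, j] for i, j in combinations(range(n), 2)
                if ((z >> i) & 1) == ((z >> j) & 1))
            for z in range(2 ** n)
        ])

        def mixer(beta):
            # the analogue of the horizontal field: an X rotation applied to every qubit
            rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                           [-1j * np.sin(beta), np.cos(beta)]])
            return reduce(np.kron, [rx] * n)

        state = np.full(2 ** n, 1 / np.sqrt(2 ** n), complex)      # superposition of all assignments
        for gamma, beta in [(0.3, 0.7), (0.5, 0.5), (0.7, 0.3)]:   # invented "interact, then flip" schedule
            state = np.exp(-1j * gamma * energies) * state         # interact briefly: phase by the energy
            state = mixer(beta) @ state                            # let the qubits flip

        best = np.argmax(np.abs(state) ** 2)                       # most probable assignment afterward
        print("most likely grouping:", [(best >> i) & 1 for i in range(n)])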

    These classification tasks are useful but straightforward. The real frontier of machine learning is in generative models, which do not simply recognize puppies and kittens, but can generate novel archetypes — animals that never existed, but are every bit as cute as those that did. They might even figure out the categories of “kitten” and “puppy” on their own, or reconstruct images missing a tail or paw. “These techniques are very powerful and very useful in machine learning, but they are very hard,” said Mohammad Amin, the chief scientist at D-Wave. A quantum assist would be most welcome.

    D-Wave and other research teams have taken on this challenge. Training such a model means tuning the magnetic or electrical interactions among qubits so the network can reproduce some sample data. To do this, you combine the network with an ordinary computer. The network does the heavy lifting — figuring out what a given choice of interactions means for the final network configuration — and its partner computer uses this information to adjust the interactions. In one demonstration last year, Alejandro Perdomo-Ortiz, a researcher at NASA’s Quantum Artificial Intelligence Lab, and his team exposed a D-Wave system to images of handwritten digits. It discerned that there were 10 categories, matching the digits 0 through 9, and generated its own scrawled numbers.
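
    In outline, that division of labor looks like the loop below. A tiny, fully visible Boltzmann machine stands in for the qubit network, and exact enumeration over its configurations stands in for the hardware sampler; the data, couplings and learning rate are all invented. The point is only the shape of the loop: the sampler reports how the network currently behaves, and the classical partner nudges the couplings so that behavior looks more like the data.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(1)
        n = 4
        data = np.array([[1, 1, -1, -1], [1, 1, -1, 1], [-1, -1, 1, 1], [-1, 1, 1, 1]], float)

        configs = np.array(list(product([-1, 1], repeat=n)), float)   # all 2^n spin patterns

        def model_correlations(W):
            # What the sampler (the annealer, in the real experiment) would estimate:
            # thermal averages <s_i s_j> of the network under the current couplings W.
            energies = -0.5 * np.einsum('ci,ij,cj->c', configs, W, configs)
            p = np.exp(-energies)
            p /= p.sum()
            return np.einsum('c,ci,cj->ij', p, configs, configs)

        W = 0.01 * rng.standard_normal((n, n))
        W = (W + W.T) / 2
        np.fill_diagonal(W, 0.0)

        data_corr = data.T @ data / len(data)       # <s_i s_j> in the training data
        for step in range(200):
            # heavy lifting by the sampler; the classical computer adjusts the interactions
            W += 0.05 * (data_corr - model_correlations(W))
            np.fill_diagonal(W, 0.0)

        print("learned couplings:\n", np.round(W, 2))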

    Bottlenecks Into the Tunnels

    Well, that’s the good news. The bad is that it doesn’t much matter how awesome your processor is if you can’t get your data into it. In matrix-algebra algorithms, a single operation may manipulate a matrix of 16 numbers, but it still takes 16 operations to load the matrix. “State preparation — putting classical data into a quantum state — is completely shunned, and I think this is one of the most important parts,” said Maria Schuld, a researcher at the quantum-computing startup Xanadu and one of the first people to receive a doctorate in quantum machine learning. Machine-learning systems that are laid out in physical form face a parallel difficulty: how to embed a problem in a network of qubits and get the qubits to interact as they should.
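
    The arithmetic is easy to make explicit. In the sketch below, 16 numbers are amplitude-encoded into 4 qubits: just normalizing them already requires reading all 16, and the generic tree-structured state-preparation circuit needs roughly one controlled rotation per number. The counts assume real amplitudes and no special structure in the data; structured data can sometimes be loaded more cheaply.

        import numpy as np

        matrix = np.arange(1, 17, dtype=float).reshape(4, 4)   # 16 numbers to load
        amplitudes = matrix.flatten()
        amplitudes /= np.linalg.norm(amplitudes)               # reading every entry once already costs 16 steps

        n_qubits = int(np.log2(amplitudes.size))               # 4 qubits can hold 16 amplitudes
        rotations = sum(2 ** k for k in range(n_qubits))       # 1 + 2 + 4 + 8 = 15 angles in the generic circuit
        print(f"{amplitudes.size} numbers -> {n_qubits} qubits, "
              f"~{rotations} controlled rotations to prepare the state")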

    Once you do manage to enter your data, you need to store it in such a way that a quantum system can interact with it without collapsing the ongoing calculation. Lloyd and his colleagues have proposed a quantum RAM that uses photons, but no one has an analogous contraption for superconducting qubits or trapped ions, the technologies found in the leading quantum computers. “That’s an additional huge technological problem beyond the problem of building a quantum computer itself,” Aaronson said. “The impression I get from the experimentalists I talk to is that they are frightened. They have no idea how to begin to build this.”

    And finally, how do you get your data out? That means measuring the quantum state of the machine, and not only does a measurement return only a single number at a time, drawn at random, it collapses the whole state, wiping out the rest of the data before you even have a chance to retrieve it. You’d have to run the algorithm over and over again to extract all the information.
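
    A quick numerical illustration of that readout cost: each run yields a single basis state, drawn at random according to the squared amplitudes, and the rest of the state is gone. Pinning down even a small state’s probabilities (to say nothing of its phases, which measurement statistics alone never reveal) takes many repetitions. The state below is random and the shot counts are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)
        amps = rng.standard_normal(8) + 1j * rng.standard_normal(8)
        amps /= np.linalg.norm(amps)
        probs = np.abs(amps) ** 2          # all that repeated measurement can reveal; the phases stay hidden

        for shots in (10, 100, 10_000):
            samples = rng.choice(len(probs), size=shots, p=probs)   # one random outcome per run
            estimate = np.bincount(samples, minlength=len(probs)) / shots
            print(f"{shots:>6} runs, worst error in any estimated probability: "
                  f"{np.max(np.abs(estimate - probs)):.3f}")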

    Yet all is not lost. For some types of problems, you can exploit quantum interference. That is, you can choreograph the operations so that wrong answers cancel themselves out and right ones reinforce themselves; that way, when you go to measure the quantum state, it won’t give you just any random value, but the desired answer. But only a few algorithms, such as brute-force search, can make good use of interference, and the speedup is usually modest.

    In some cases, researchers have found shortcuts to getting data in and out. In 2015 Lloyd, Silvano Garnerone of the University of Waterloo in Canada, and Paolo Zanardi at USC showed that, for some kinds of statistical analysis, you don’t need to enter or store the entire data set. Likewise, you don’t need to read out all the data when a few key values would suffice. For instance, tech companies use machine learning to suggest shows to watch or things to buy based on a humongous matrix of consumer habits. “If you’re Netflix or Amazon or whatever, you don’t actually need the matrix written down anywhere,” Aaronson said. “What you really need is just to generate recommendations for a user.”

    All this invites the question: If a quantum machine is powerful only in special cases, might a classical machine also be powerful in those cases? This is the major unresolved question of the field. Ordinary computers are, after all, extremely capable. The usual method of choice for handling large data sets — random sampling — is actually very similar in spirit to a quantum computer, which, whatever may go on inside it, ends up returning a random result. Schuld remarked: “I’ve done a lot of algorithms where I felt, ‘This is amazing. We’ve got this speedup,’ and then I actually, just for fun, write a sampling technique for a classical computer, and I realize you can do the same thing with sampling.”

    If you look back at the successes that quantum machine learning has had so far, they all come with asterisks. Take the D-Wave machine. When classifying car images and Higgs bosons, it was no faster than a classical machine. “One of the things we do not talk about in this paper is quantum speedup,” said Alex Mott, a computer scientist at Google DeepMind who was a member of the Higgs research team. Matrix-algebra approaches such as the Harrow-Hassidim-Lloyd algorithm show a speedup only if the matrices are sparse — mostly filled with zeroes. “No one ever asks, are sparse data sets actually interesting in machine learning?” Schuld noted.

    Quantum Intelligence

    On the other hand, even the occasional incremental improvement over existing techniques would make tech companies happy. “These advantages that you end up seeing, they’re modest; they’re not exponential, but they are quadratic,” said Nathan Wiebe, a quantum-computing researcher at Microsoft Research. “Given a big enough and fast enough quantum computer, we could revolutionize many areas of machine learning.” And in the course of using the systems, computer scientists might solve the theoretical puzzle of whether they are inherently faster, and for what.

    Schuld also sees scope for innovation on the software side. Machine learning is more than a bunch of calculations. It is a complex of problems that have their own particular structure. “The algorithms that people construct are removed from the things that make machine learning interesting and beautiful,” she said. “This is why I started to work the other way around and think: If I have this quantum computer already — these small-scale ones — what machine-learning model actually can it generally implement? Maybe it is a model that has not been invented yet.” If physicists want to impress machine-learning experts, they’ll need to do more than just make quantum versions of existing models.

    Just as many neuroscientists now think that the structure of human thought reflects the requirements of having a body, so, too, are machine-learning systems embodied. The images, language and most other data that flow through them come from the physical world and reflect its qualities. Quantum machine learning is similarly embodied — but in a richer world than ours. The one area where it will undoubtedly shine is in processing data that is already quantum. When the data is not an image, but the product of a physics or chemistry experiment, the quantum machine will be in its element. The input problem goes away, and classical computers are left in the dust.

    In a neatly self-referential loop, the first quantum machine-learning systems may help to design their successors. “One way we might actually want to use these systems is to build quantum computers themselves,” Wiebe said. “For some debugging tasks, it’s the only approach that we have.” Maybe they could even debug us. Leaving aside whether the human brain is a quantum computer — a highly contentious question — it sometimes acts as if it were one. Human behavior is notoriously contextual; our preferences are formed by the choices we are given, in ways that defy logic. In this, we are like quantum particles. “The way you ask questions and the ordering matters, and that is something that is very typical in quantum data sets,” Perdomo-Ortiz said. So a quantum machine-learning system might be a natural way to study human cognitive biases.

    Neural networks and quantum processors have one thing in common: It is amazing they work at all. It was never obvious that you could train a network, and for decades most people doubted it would ever be possible. Likewise, it is not obvious that quantum physics could ever be harnessed for computation, since the distinctive effects of quantum physics are so well hidden from us. And yet both work — not always, but more often than we had any right to expect. On this precedent, it seems likely that their union will also find its place.


  • richardmitnick 1:00 pm on January 14, 2018 Permalink | Reply
    Tags: , , Physicists Aim to Classify All Possible Phases of Matter, , Quanta Magazine, The Haah Code   

    From Quanta: “Physicists Aim to Classify All Possible Phases of Matter” 

    Quanta Magazine

    January 3, 2018
    Natalie Wolchover

    Olena Shmahalo/Quanta Magazine

    In the last three decades, condensed matter physicists have discovered a wonderland of exotic new phases of matter: emergent, collective states of interacting particles that are nothing like the solids, liquids and gases of common experience.

    The phases, some realized in the lab and others identified as theoretical possibilities, arise when matter is chilled almost to absolute-zero temperature, hundreds of degrees below the point at which water freezes into ice. In these frigid conditions, particles can interact in ways that cause them to shed all traces of their original identities. Experiments in the 1980s revealed that in some situations electrons split en masse into fractions of particles that make braidable trails through space-time; in other cases, they collectively whip up massless versions of themselves. A lattice of spinning atoms becomes a fluid of swirling loops or branching strings; crystals that began as insulators start conducting electricity over their surfaces. One phase that shocked experts when recognized as a mathematical possibility [Phys. Rev. A] in 2011 features strange, particle-like “fractons” that lock together in fractal patterns.

    Now, research groups at Microsoft and elsewhere are racing to encode quantum information in the braids and loops of some of these phases for the purpose of developing a quantum computer. Meanwhile, condensed matter theorists have recently made major strides in understanding the pattern behind the different collective behaviors that can arise, with the goal of enumerating and classifying all possible phases of matter. If a complete classification is achieved, it would not only account for all phases seen in nature so far, but also potentially point the way toward new materials and technologies.

    Led by dozens of top theorists, with input from mathematicians, researchers have already classified a huge swath of phases that can arise in one or two spatial dimensions by relating them to topology: the math that describes invariant properties of shapes like the sphere and the torus. They’ve also begun to explore the wilderness of phases that can arise near absolute zero in 3-D matter.

    Xie Chen, a condensed matter theorist at the California Institute of Technology, says the “grand goal” of the classification program is to enumerate all phases that can possibly arise from particles of any given type. Max Gerber, courtesy of Caltech Development and Institute Relations.

    “It’s not a particular law of physics” that these scientists seek, said Michael Zaletel, a condensed matter theorist at Princeton University. “It’s the space of all possibilities, which is a more beautiful or deeper idea in some ways.” Perhaps surprisingly, Zaletel said, the space of all consistent phases is itself a mathematical object that “has this incredibly rich structure that we think ends up, in 1-D and 2-D, in one-to-one correspondence with these beautiful topological structures.”

    In the landscape of phases, there is “an economy of options,” said Ashvin Vishwanath of Harvard University. “It all seems comprehensible” — a stroke of luck that mystifies him. Enumerating phases of matter could have been “like stamp collecting,” Vishwanath said, “each a little different, and with no connection between the different stamps.” Instead, the classification of phases is “more like a periodic table. There are many elements, but they fall into categories and we can understand the categories.”

    While classifying emergent particle behaviors might not seem fundamental, some experts, including Xiao-Gang Wen of the Massachusetts Institute of Technology, say the new rules of emergent phases show how the elementary particles themselves might arise from an underlying network of entangled bits of quantum information, which Wen calls the “qubit ocean.” For example, a phase called a “string-net liquid” that can emerge in a three-dimensional system of qubits has excitations that look like all the known elementary particles. “A real electron and a real photon are maybe just fluctuations of the string-net,” Wen said.

    A New Topological Order

    Before these zero-temperature phases cropped up, physicists thought they had phases all figured out. By the 1950s, they could explain what happens when, for example, water freezes into ice, by describing it as the breaking of a symmetry: Whereas liquid water has rotational symmetry at the atomic scale (it looks the same in every direction), the H2O molecules in ice are locked in crystalline rows and columns.

    Things changed in 1982 with the discovery of phases called fractional quantum Hall states in an ultracold, two-dimensional gas of electrons. These strange states of matter feature emergent particles with fractions of an electron’s charge that take fractions of steps in a one-way march around the perimeter of the system. “There was no way to use different symmetry to distinguish those phases,” Wen said.

    A new paradigm was needed. In 1989, Wen imagined phases like the fractional quantum Hall states arising not on a plane, but on different topological manifolds — connected spaces such as the surface of a sphere or a torus. Topology concerns global, invariant properties of such spaces that can’t be changed by local deformations. Famously, to a topologist, you can turn a doughnut into a coffee cup by simply deforming its surface, since both surfaces have one hole and are therefore equivalent topologically. You can stretch and squeeze all you like, but even the most malleable doughnut will refuse to become a pretzel.

    Wen found that new properties of the zero-temperature phases were revealed in the different topological settings, and he coined the term “topological order” to describe the essence of these phases. Other theorists were also uncovering links to topology. With the discovery of many more exotic phases — so many that researchers say they can barely keep up — it became clear that topology, together with symmetry, offers a good organizing schema.

    The topological phases only show up near absolute zero, because only at such low temperatures can systems of particles settle into their lowest-energy quantum “ground state.” In the ground state, the delicate interactions that correlate particles’ identities — effects that are destroyed at higher temperatures — link up particles in global patterns of quantum entanglement. Instead of having individual mathematical descriptions, particles become components of a more complicated function that describes all of them at once, often with entirely new particles emerging as the excitations of the global phase. The long-range entanglement patterns that arise are topological, or impervious to local changes, like the number of holes in a manifold.

    Lucy Reading-Ikkanda/Quanta Magazine

    Consider the simplest topological phase in a system — called a “quantum spin liquid” — that consists of a 2-D lattice of “spins,” or particles that can point up, down, or some probability of each simultaneously. At zero temperature, the spin liquid develops strings of spins that all point down, and these strings form closed loops. As the directions of spins fluctuate quantum-mechanically, the pattern of loops throughout the material also fluctuates: Loops of down spins merge into bigger loops and divide into smaller loops. In this quantum-spin-liquid phase, the system’s ground state is the quantum superposition of all possible loop patterns.

    To understand this entanglement pattern as a type of topological order, imagine, as Wen did, that the quantum spin liquid is spilling around the surface of a torus, with some loops winding around the torus’s hole. Because of these hole windings, instead of having a single ground state associated with the superposition of all loop patterns, the spin liquid will now exist in one of four distinct ground states, tied to four different superpositions of loop patterns. One state consists of all possible loop patterns with an even number of loops winding around the torus’s hole and an even number winding through the hole. Another state has an even number of loops around the hole and an odd number through the hole; the third and fourth ground states correspond to odd and even, and odd and odd, numbers of hole windings, respectively.

    Which of these ground states the system is in stays fixed, even as the loop pattern fluctuates locally. If, for instance, the spin liquid has an even number of loops winding around the torus’s hole, two of these loops might touch and combine, suddenly becoming a loop that doesn’t wrap around the hole at all. The number of loops winding the long way around drops by two, but it stays even. The system’s ground state is a topologically invariant property that withstands local changes.
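
    The bookkeeping behind that invariance fits in a few lines. The toy sketch below is not a simulation of a spin liquid; it only tracks the two winding counts of a loop pattern on a torus and applies random “local moves” that, like merging or splitting loops, change each count by an even amount. The pair of parities, which labels the four ground states, never changes.

        import random

        random.seed(0)
        around, through = 4, 1                  # an example loop pattern in the (even, odd) sector
        sector = (around % 2, through % 2)

        def local_move(count):
            # loops merge or split: a winding count changes by +2, -2 or 0, never by 1
            step = random.choice([-2, 0, 2])
            return count + step if count + step >= 0 else count

        for _ in range(10_000):
            around, through = local_move(around), local_move(through)

        print("sector before and after local fluctuations:",
              sector, (around % 2, through % 2))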

    Future quantum computers could take advantage of this invariant quality. Having four topological ground states that aren’t affected by local deformations or environmental error “gives you a way to store quantum information, because your bit could be what ground state it’s in,” explained Zaletel, who has studied the topological properties of spin liquids and other quantum phases. Systems like spin liquids don’t really need to wrap around a torus to have topologically protected ground states. A favorite playground of researchers is the toric code, a phase theoretically constructed by the condensed matter theorist Alexei Kitaev of the California Institute of Technology in 1997 and demonstrated in experiments over the past decade. The toric code can live on a plane and still maintain the multiple ground states of a torus. (Loops of spins are essentially able to move off the edge of the system and re-enter on the opposite side, allowing them to wind around the system like loops around a torus’s hole.) “We know how to translate between the ground-state properties on a torus and what the behavior of the particles would be,” Zaletel said.

    Spin liquids can also enter other phases, in which spins, instead of forming closed loops, sprout branching networks of strings. This is the string-net liquid phase [Phys. Rev. B] that, according to Wen, “can produce the Standard Model” of particle physics starting from a 3-D qubit ocean.

    The Universe of Phases

    Research by several groups in 2009 and 2010 completed the classification of “gapped” phases of matter in one dimension, such as in chains of particles. A gapped phase is one with a ground state: a lowest-energy configuration sufficiently removed or “gapped” from higher-energy states that the system stably settles into it. Only gapped quantum phases have well-defined excitations in the form of particles. Gapless phases are like swirling matter miasmas or quantum soups and remain largely unknown territory in the landscape of phases.

    For a 1-D chain of bosons — particles like photons that have integer values of quantum spin, which means they return to their initial quantum states after swapping positions — there is only one gapped topological phase. In this phase, first studied by the Princeton theorist Duncan Haldane, who, along with David Thouless and J. Michael Kosterlitz, won the 2016 Nobel Prize for decades of work on topological phases, the spin chain gives rise to half-spin particles on both ends. Two gapped topological phases exist for chains of fermions — particles like electrons and quarks that have half-integer values of spin, meaning their states become negative when they switch positions. The topological order in all these 1-D chains stems not from long-range quantum entanglement, but from local symmetries acting between neighboring particles. Called “symmetry-protected topological phases,” they correspond to “cocycles of the cohomology group,” mathematical objects related to invariants like the number of holes in a manifold.

    Lucy Reading-Ikkanda/Quanta Magazine, adapted from figure by Xiao-Gang Wen

    Two-dimensional phases are more plentiful and more interesting. They can have what some experts consider “true” topological order: the kind associated with long-range patterns of quantum entanglement, like the fluctuating loop patterns in a spin liquid. In the last few years, researchers have shown that these entanglement patterns correspond to topological structures called tensor categories, which enumerate the different ways that objects can possibly fuse and braid around one another. “The tensor categories give you a way [to describe] particles that fuse and braid in a consistent way,” said David Pérez-García of Complutense University of Madrid.

    Researchers like Pérez-García are working to mathematically prove that the known classes of 2-D gapped topological phases are complete. He helped close the 1-D case in 2010 [Phys. Rev. B], at least under the widely held assumption that these phases are always well-approximated by quantum field theories — mathematical descriptions that treat the particles’ environments as smooth. “These tensor categories are conjectured to cover all 2-D phases, but there is no mathematical proof yet,” Pérez-García said. “Of course, it would be much more interesting if one can prove that this is not all. Exotic things are always interesting because they have new physics, and they’re maybe useful.”

    Gapless quantum phases represent another kingdom of possibilities to explore, but these impenetrable fogs of matter resist most theoretical methods. “The language of particles is not useful, and there are supreme challenges that we are starting to confront,” said Senthil Todadri, a condensed matter theorist at MIT. Gapless phases present the main barrier in the quest to understand high-temperature superconductivity, for instance. And they hinder quantum gravity researchers in the “it from qubit” movement, who believe that not only elementary particles, but also space-time and gravity, arise from patterns of entanglement in some kind of underlying qubit ocean. “In it from qubit, we spend much of our time on gapless states because this is where one gets gravity, at least in our current understanding,” said Brian Swingle, a theoretical physicist at the University of Maryland. Some researchers try to use mathematical dualities to convert the quantum-soup picture into an equivalent particle description in one higher dimension. “It should be viewed in the spirit of exploring,” Todadri said.

    Even more enthusiastic exploration is happening in 3-D. What’s already clear is that, when spins and other particles spill from their chains and flatlands and fill the full three spatial dimensions of reality, unimaginably strange patterns of quantum entanglement can emerge. “In 3-D, there are things that escape, so far, this tensor-category picture,” said Pérez-García. “The excitations are very wild.”

    The Haah Code

    The very wildest of the 3-D phases appeared seven years ago. A talented Caltech graduate student named Jeongwan Haah discovered the phase in a computer search while looking for what’s known as the “dream code”: a quantum ground state so robust that it can be used to securely store quantum memory, even at room temperature.

    For this, Haah had to turn to 3-D matter. In 2-D topological phases like the toric code, a significant source of error is “stringlike operators”: perturbations to the system that cause new strings of spins to accidentally form. These strings will sometimes wind new loops around the torus’s hole, bumping the number of windings from even to odd or vice versa and converting the toric code to one of its three other quantum ground states. Because strings grow uncontrollably and wrap around things, experts say there cannot be good quantum memories in 2-D.

    Jeongwan Haah, a condensed matter theorist now working at Microsoft Research in Redmond, Washington, discovered a bizarre 3-D phase of matter with fractal properties. Jeremy Mashburn.

    Haah wrote an algorithm to search for 3-D phases that avoid the usual kinds of stringlike operators. The computer coughed up 17 exact solutions that he then studied by hand. Four of the phases were confirmed to be free of stringlike operators; the one with the highest symmetry was what’s now known as the Haah code.

    As well as being potentially useful for storing quantum memory, the Haah code was also profoundly weird. Xie Chen, a condensed matter theorist at Caltech, recalled hearing the news as a graduate student in 2011, within a month or two of Haah’s disorienting discovery. “Everyone was totally shocked,” she said. “We didn’t know anything we could do about it. And now, that’s been the situation for many years.”

    The Haah code is relatively simple on paper: It’s the solution of a two-term energy formula, describing spins that interact with their eight nearest neighbors in a cubic lattice. But the resulting phase “strains our imaginations,” Todadri said.

    The code features particle-like entities called fractons that, unlike the loopy patterns in, say, a quantum spin liquid, are nonliquid and locked in place; the fractons can only hop between positions in the lattice if those positions are operated upon in a fractal pattern. That is, you have to inject energy into the system at each corner of, say, a tetrahedron connecting four fractons in order to make them switch positions, but when you zoom in, you see that what you treated as a point-like corner was actually the four corners of a smaller tetrahedron, and you have to inject energy into the corners of that one as well. At a finer scale, you see an even smaller tetrahedron, and so on, all the way down to the finest scale of the lattice. This fractal behavior means that the Haah code never forgets the underlying lattice it comes from, and it can never be approximated by a smoothed-out description of the lattice, as in a quantum field theory. What’s more, the number of ground states in the Haah code grows with the size of the underlying lattice — a decidedly non-topological property. (Stretch a torus, and it’s still a torus.)
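
    The fractal bookkeeping in that description can be made concrete with a toy recursion — this is not the Haah code’s actual two-term Hamiltonian, only an illustration of the scaling: resolving a “corner” always reveals four smaller corners, so the number of lattice sites a fractal operator must hit grows as 4 to the power of the recursion depth.

        def corner_sites(depth):
            """Lattice sites touched when one corner is resolved `depth` times."""
            if depth == 0:
                return 1                          # finest scale: a single lattice site
            return 4 * corner_sites(depth - 1)    # a corner is really four smaller corners

        for depth in range(6):
            print(f"recursion depth {depth}: {corner_sites(depth):>5} sites per corner, "
                  f"{4 * corner_sites(depth):>5} sites for the whole tetrahedron")

    The steep growth is the intuition for why a random perturbation is so unlikely to hit every required site and move a fracton.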

    The quantum state of the Haah code is extraordinarily secure, since a “fractal operator” that perfectly hits all the marks is unlikely to come along at random. Experts say a realizable version of the code would be of great technological interest.

    Haah’s phase has also generated a surge of theoretical speculation. Haah helped matters along in 2015 when he and two collaborators at MIT discovered [Phys. Rev. B] many examples of a class of phases now known as “fracton models” that are simpler cousins of the Haah code. (The first model in this family was introduced [Physical Review Letters] by Claudio Chamon of Boston University in 2005.) Chen and others have since been studying the topology of these fracton systems, some of which permit particles to move along lines or sheets within a 3-D volume and might aid conceptual understanding or be easier to realize experimentally [Physical Review Letters]. “It’s opening the door to many more exotic things,” Chen said of the Haah code. “It’s an indication about how little we know about 3-D and higher dimensions. And because we don’t yet have a systematic picture of what is going on, there might be a lot of things lying out there waiting to be explored.”

    No one knows yet where the Haah code and its cousins belong in the landscape of possible phases, or how much bigger this space of possibilities might be. According to Todadri, the community has made progress in classifying the simplest gapped 3-D phases, but more exploration is needed in 3-D before a program of complete classification can begin there. What’s clear, he said, is that “when the classification of gapped phases of matter is taken up in 3-D, it will have to confront these weird possibilities that Haah first discovered.”

    Many researchers think new classifying concepts, and even whole new frameworks, might be necessary to capture the Haah code’s fractal nature and reveal the full scope of possibilities for 3-D quantum matter. Wen said, “You need a new type of theory, new thinking.” Perhaps, he said, we need a new picture of nonliquid patterns of long-range entanglement. “We have some vague ideas but don’t have a very systematic mathematics to do them,” he said. “We have some feeling what it looks like. The detailed systematics are still lacking. But that’s exciting.”

