Tagged: Quanta Magazine

  • richardmitnick 10:59 am on December 14, 2019 Permalink | Reply
    Tags: Quanta Magazine

    From Quanta Magazine: “Why the Laws of Physics Are Inevitable” 

    Quanta Magazine
    From Quanta Magazine

    December 9, 2019
    Natalie Wolchover

    By considering simple symmetries, physicists working on the “bootstrap” have rederived the four known forces. “There’s just no freedom in the laws of physics,” said one.

    These three objects illustrate the principles behind “spin,” a property of fundamental particles. A domino needs a full turn to get back to the same place. A two of clubs needs only a half turn. And the hour hand on a clock must spin around twice before it tells the same time again. Lucy Reading-Ikkanda/Quanta Magazine

    Compared to the unsolved mysteries of the universe, far less gets said about one of the most profound facts to have crystallized in physics over the past half-century: To an astonishing degree, nature is the way it is because it couldn’t be any different. “There’s just no freedom in the laws of physics that we have,” said Daniel Baumann, a theoretical physicist at the University of Amsterdam.

    Since the 1960s, and increasingly in the past decade, physicists like Baumann have used a technique known as the “bootstrap” to infer what the laws of nature must be. This approach assumes that the laws essentially dictate one another through their mutual consistency — that nature “pulls itself up by its own bootstraps.” The idea turns out to explain a huge amount about the universe.

    When bootstrapping, physicists determine how elementary particles with different amounts of “spin,” or intrinsic angular momentum, can consistently behave. In doing this, they rediscover the four fundamental forces that shape the universe. Most striking is the case of a particle with two units of spin: As the Nobel Prize winner Steven Weinberg showed in 1964 [Physical Review Journals Archive], the existence of a spin-2 particle leads inevitably to general relativity — Albert Einstein’s theory of gravity. Einstein arrived at general relativity through abstract thoughts about falling elevators and warped space and time, but the theory also follows directly from the mathematically consistent behavior of a fundamental particle.

    “I find this inevitability of gravity [and other forces] to be one of the deepest and most inspiring facts about nature,” said Laurentiu Rodina, a theoretical physicist at the Institute of Theoretical Physics at CEA Saclay who helped to modernize and generalize Weinberg’s proof in 2014 [Physical Review D]. “Namely, that nature is above all self-consistent.”

    How Bootstrapping Works

    A particle’s spin reflects its underlying symmetries, or the ways it can be transformed that leave it unchanged. A spin-1 particle, for instance, returns to the same state after being rotated by one full turn. A spin-1/2 particle must complete two full rotations to come back to the same state, while a spin-2 particle looks identical after just half a turn. Elementary particles can only carry 0, 1/2, 1, 3/2 or 2 units of spin.
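
    In symbols (a standard fact about rotations, stated here schematically to supplement the text): rotating a particle's state by an angle θ about the spin axis multiplies it by a phase, and the state returns to itself exactly when that phase equals 1.

    ```latex
    % A state with spin projection m along the rotation axis picks up the
    % phase e^{i m \theta}; for the maximal projection m = s:
    |\psi_s\rangle \;\longrightarrow\; e^{\,i s \theta}\,|\psi_s\rangle
    % The state is unchanged when s\theta is a multiple of 2\pi:
    %   spin 1:   \theta = 2\pi  (one full turn)
    %   spin 2:   \theta = \pi   (half a turn)
    %   spin 1/2: \theta = 4\pi  (two full turns)
    ```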

    To figure out what behavior is possible for particles of a given spin, bootstrappers consider simple particle interactions, such as two particles annihilating and yielding a third. The particles’ spins place constraints on these interactions. An interaction of spin-2 particles, for instance, must stay the same when all participating particles are rotated by 180 degrees, since they’re symmetric under such a half-turn.

    Interactions must obey a few other basic rules: Momentum must be conserved; the interactions must respect locality, which dictates that particles scatter by meeting in space and time; and the probabilities of all possible outcomes must add up to 1, a principle known as unitarity. These consistency conditions translate into algebraic equations that the particle interactions must satisfy. If the equation corresponding to a particular interaction has solutions, then these solutions tend to be realized in nature.
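
    Unitarity, the last of these conditions, has a compact statement: the scattering operator S that maps incoming states to outgoing states must preserve total probability. Schematically:

    ```latex
    % Unitarity of the scattering matrix:
    S^{\dagger} S = \mathbb{1}
    \qquad\Longrightarrow\qquad
    \sum_{f} \big|\langle f\,|\,S\,|\,i\rangle\big|^{2} = 1
    % for every initial state |i>, summing over all possible final states |f>.
    ```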

    For example, consider the case of the photon, the massless spin-1 particle of light and electromagnetism. For such a particle, the equation describing four-particle interactions — where two particles go in and two come out, perhaps after colliding and scattering — has no viable solutions. Thus, photons don’t interact in this way. “This is why light waves don’t scatter off each other and we can see over macroscopic distances,” Baumann explained. The photon can participate in interactions involving other types of particles, however, such as spin-1/2 electrons. These constraints on the photon’s interactions lead to Maxwell’s equations, the 154-year-old theory of electromagnetism.

    Or take gluons, particles that convey the strong force that binds atomic nuclei together. Gluons are also massless spin-1 particles, but they represent the case where there are multiple types of the same massless spin-1 particle. Unlike the photon, gluons can satisfy the four-particle interaction equation, meaning that they self-interact. Constraints on these gluon self-interactions match the description given by quantum chromodynamics, the theory of the strong force.

    A third scenario involves spin-1 particles that have mass. Mass came about when a symmetry broke during the universe’s birth: A constant — the value of the omnipresent Higgs field — spontaneously shifted from zero to a positive number, imbuing many particles with mass. The breaking of the Higgs symmetry created massive spin-1 particles called W and Z bosons, the carriers of the weak force that’s responsible for radioactive decay.

    Then “for spin-2, a miracle happens,” said Adam Falkowski, a theoretical physicist at the Laboratory of Theoretical Physics in Orsay, France. In this case, the solution to the four-particle interaction equation at first appears to be beset with infinities. But physicists find that this interaction can proceed in three different ways, and that mathematical terms related to the three different options perfectly conspire to cancel out the infinities, which permits a solution.

    That solution is the graviton: a spin-2 particle that couples to itself and all other particles with equal strength. This evenhandedness leads straight to the central tenet of general relativity: the equivalence principle, Einstein’s postulate that gravity is indistinguishable from acceleration through curved space-time, and that gravitational mass and inertial mass are one and the same. Falkowski said of the bootstrap approach, “I find this reasoning much more compelling than the abstract one of Einstein.”

    Thus, by thinking through the constraints placed on fundamental particle interactions by basic symmetries, physicists can understand the existence of the strong and weak forces that shape atoms, and the forces of electromagnetism and gravity that sculpt the universe at large.

    In addition, bootstrappers find that many different spin-0 particles are possible. The only known example is the Higgs boson, the particle associated with the symmetry-breaking Higgs field that imbues other particles with mass. A hypothetical spin-0 particle called the inflaton may have driven the initial expansion of the universe. These particles’ lack of angular momentum means that fewer symmetries restrict their interactions. Because of this, bootstrappers can infer less about nature’s governing laws, and nature itself has more creative license.

    Spin-1/2 matter particles also have more freedom. These make up the family of massive particles we call matter, and they are individually differentiated by their masses and couplings to the various forces. Our universe contains, for example, spin-1/2 quarks that interact with both gluons and photons, and spin-1/2 neutrinos that interact with neither.

    The spin spectrum stops at 2 because the infinities in the four-particle interaction equation kill off all massless particles that have higher spin values. Higher-spin states can exist if they’re extremely massive, and such particles do play a role in quantum theories of gravity such as string theory. But higher-spin particles can’t be detected, and they can’t affect the macroscopic world.

    Undiscovered Country

    Spin-3/2 particles could complete the 0, 1/2, 1, 3/2, 2 pattern, but only if “supersymmetry” is true in the universe — that is, if every force particle with integer spin has a corresponding matter particle with half-integer spin. In recent years, experiments have ruled out many of the simplest versions of supersymmetry. But the gap in the spin spectrum strikes some physicists as a reason to hold out hope that supersymmetry is true and spin-3/2 particles exist.

    In his work, Baumann applies the bootstrap to the beginning of the universe. A recent Quanta article described how he and other physicists used symmetries and other principles to constrain the possibilities for those first moments.

    It’s “just aesthetically pleasing,” Baumann said, “that the laws are inevitable — that there is some inevitability of the laws of physics that can be summarized by a short handful of principles that then lead to building blocks that then build up the macroscopic world.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 10:30 am on December 14, 2019 Permalink | Reply
    Tags: Quanta Magazine, Thermalization, Time’s arrow is irreversible

    From Quanta Magazine: “The Universal Law That Aims Time’s Arrow” 

    Quanta Magazine
    From Quanta Magazine

    August 1, 2019 [Just now in social media]
    Natalie Wolchover

    Coffee and the cosmos at large both approach thermal equilibrium. Rolando Barry for Quanta Magazine

    Pour milk in coffee, and the eddies and tendrils of white soon fade to brown. In half an hour, the drink cools to room temperature. Left for days, the liquid evaporates. After centuries, the cup will disintegrate, and billions of years later, the entire planet, sun and solar system will disperse. Throughout the universe, all matter and energy is diffusing out of hot spots like coffee and stars, ultimately destined (after trillions of years) to spread uniformly through space. In other words, the same future awaits coffee and the cosmos.

    This gradual spreading of matter and energy, called “thermalization,” aims the arrow of time. But the fact that time’s arrow is irreversible, so that hot coffee cools down but never spontaneously heats up, isn’t written into the underlying laws that govern the motion of the molecules in the coffee. Rather, thermalization is a statistical outcome: The coffee’s heat is far more likely to spread into the air than the cold air molecules are to concentrate energy into the coffee, just as shuffling a new deck of cards randomizes the cards’ order, and repeat shuffles will practically never re-sort them by suit and rank. Once coffee, cup and air reach thermal equilibrium, no more energy flows between them, and no further change occurs. Thus thermal equilibrium on a cosmic scale is dubbed the “heat death of the universe.”
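
    The card analogy can be made quantitative (a back-of-the-envelope aside, not a figure from the article): a 52-card deck has 52! possible orderings, so the chance that a truly random shuffle lands on the one order sorted by suit and rank is

    ```latex
    P(\text{sorted}) \;=\; \frac{1}{52!} \;\approx\; \frac{1}{8.07\times 10^{67}} \;\approx\; 1.2\times 10^{-68},
    ```

    which is what “practically never” means here.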

    But while it’s easy to see where thermalization leads (to tepid coffee and eventual heat death), it’s less obvious how the process begins. “If you start far from equilibrium, like in the early universe, how does the arrow of time emerge, starting from first principles?” said Jürgen Berges, a theoretical physicist at Heidelberg University in Germany who has studied this problem for more than a decade.

    Over the last few years, Berges and a network of colleagues have uncovered a surprising answer. The researchers have discovered simple, so-called “universal” laws [World Scientific] governing the initial stages of change in a variety of systems consisting of many particles that are far from thermal equilibrium. Their calculations indicate that these systems — examples include the hottest plasma ever produced on Earth and the coldest gas, and perhaps also the field of energy that theoretically filled the universe in its first split second — begin to evolve in time in a way described by the same handful of universal numbers, no matter what the systems consist of.

    The findings suggest that the initial stages of thermalization play out in a way that’s very different from what comes later. In particular, far-from-equilibrium systems exhibit fractal-like behavior, which means they look very much the same at different spatial and temporal scales. Their properties are shifted only by a so-called “scaling exponent” — and scientists are discovering that these exponents are often simple numbers like 1/2 and −1/3. For example, particles’ speeds at one instant can be rescaled, according to the scaling exponent, to give the distribution of speeds at any time later or earlier. All kinds of quantum systems in various extreme starting conditions seem to fall into this fractal-like pattern, exhibiting universal scaling for a period of time before transitioning to standard thermalization.
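
    Written out (schematically, in notation common in this literature), the scaling statement is that the distribution n(p, t) of particles over momenta p at different times is a single fixed curve viewed at different magnifications:

    ```latex
    % Universal self-similar scaling of the momentum distribution:
    n(p, t) \;=\; \left(\frac{t}{t_0}\right)^{\!\alpha} f_S\!\left(\left(\frac{t}{t_0}\right)^{\!\beta} p\right)
    % \alpha and \beta are the universal scaling exponents and f_S is a fixed
    % scaling function: knowing the distribution at one time t_0 determines it
    % at any earlier or later time within the scaling regime.
    ```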

    “I find this work exciting because it pulls out a unifying principle that we can use to understand large classes of far-from-equilibrium systems,” said Nicole Yunger Halpern, a quantum physicist at Harvard University who is not involved in the work. “These studies offer hope that we can describe even these very messy, complicated systems with simple patterns.”

    Berges is widely seen as leading the theoretical effort, with a series of seminal papers since 2008 elucidating the physics of universal scaling. He and a co-author took another step this spring in a paper in Physical Review Letters that explored “prescaling,” the ramp-up to universal scaling. A group led by Thomas Gasenzer of Heidelberg also investigated prescaling in a [Physical Review Letters] paper in May, offering a deeper look at the onset of the fractal-like behavior.

    Some researchers are now exploring far-from-equilibrium dynamics in the lab, as others dig into the origins of the universal numbers. Experts say universal scaling is also helping to address deep conceptual questions about how quantum systems are able to thermalize at all.

    There’s “chaotic progress on various fronts,” said Zoran Hadzibabic of the University of Cambridge. He and his team are studying universal scaling in an ultracold gas of potassium-39 atoms by suddenly dialing up the atoms’ interaction strength, then letting them evolve.

    Energy Cascades

    When Berges began studying far-from-equilibrium dynamics, he wanted to understand the extreme conditions at the beginning of the universe when the particles that now populate the cosmos originated.

    These conditions would have occurred right after “cosmic inflation” — the explosive expansion of space thought by many cosmologists to have jump-started the Big Bang. Inflation would have blasted away any existing particles, leaving only the uniform energy of space itself: a perfectly smooth, dense, oscillating field of energy known as a “condensate.” Berges modeled this condensate in 2008 [Physical Review Letters] with collaborators Alexander Rothkopf and Jonas Schmidt, and they discovered that the first stages of its evolution should have exhibited fractal-like universal scaling. “You find that when this big condensate decayed into the particles that we observe today, that this process can be very elegantly described by a few numbers,” he said.

    To understand what this universal scaling phenomenon looks like, consider a vivid historical precursor of the recent discoveries. In 1941, the Russian mathematician Andrey Kolmogorov described the way energy “cascades” through turbulent fluids. When you’re stirring coffee, for instance, you create a vortex on a large spatial scale. Kolmogorov realized that this vortex will spontaneously generate smaller eddies, which spawn still smaller eddies. As you stir the coffee, the energy you inject into the system cascades down the spatial scales into smaller and smaller eddies, with the rate of the transfer of energy described by a universal power-law exponent of −5/3, which Kolmogorov deduced from the fluid’s dimensions.
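
    The −5/3 is fixed by dimensional analysis, which is what “deduced from the fluid’s dimensions” means. If the energy spectrum E(k) at wavenumber k can depend only on k and the rate ε at which energy is dissipated per unit mass, the units leave no freedom:

    ```latex
    % Units: [E(k)] = m^3 s^{-2}, [\varepsilon] = m^2 s^{-3}, [k] = m^{-1}.
    % Writing E(k) = C \varepsilon^{a} k^{b} and matching dimensions:
    %   seconds:  -2 = -3a      =>  a = 2/3
    %   meters:    3 = 2a - b   =>  b = 2a - 3 = -5/3
    E(k) \;=\; C\,\varepsilon^{2/3}\,k^{-5/3}
    ```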

    Kolmogorov’s “−5/3 law” always seemed mysterious, even as it served as a cornerstone of turbulence research. But now physicists have been finding essentially the same cascading, fractal-like universal scaling phenomenon in far-from-equilibrium dynamics. According to Berges, energy cascades probably arise in both contexts because they are the most efficient way to distribute energy across scales. We instinctively know this. “If you want to distribute your sugar in your coffee, you stir it,” Berges said — as opposed to shaking it. “You know that’s the most efficient way to redistribute energy.”

    There’s one key difference between the universal scaling phenomenon in far-from-equilibrium systems and the fractal eddies in a turbulent fluid: In the fluid case, Kolmogorov’s law describes energy cascading across spatial dimensions. In the new work, researchers see far-from-equilibrium systems undergoing fractal-like universal scaling across both time and space.

    Take the birth of the universe. After cosmic inflation, the hypothetical oscillating, space-filling condensate would have quickly transformed into a dense field of quantum particles all moving with the same characteristic speed. Berges and his colleagues conjecture that these far-from-equilibrium particles then exhibited fractal scaling governed by universal scaling exponents as they began the thermal evolution of the universe.

    3
    Lucy Reading-Ikkanda/Quanta Magazine

    According to the team’s calculations and computer simulations, instead of a single cascade like the one you’d find in a turbulent fluid, there would have been two cascades, going in opposite directions. Most of the particles in the system would have slowed from one moment to the next, cascading to slower and slower speeds at a characteristic rate — in this case, with a scaling exponent of approximately −3/2. Eventually they would have reached a standstill, forming another condensate [Physical Review Letters]. (This one wouldn’t oscillate or transform into particles; instead it would gradually decay.) Meanwhile, the majority of the energy leaving the slowing particles would have cascaded to a few particles that gained speed at a rate governed by the exponent 1/2. Essentially, these particles started to move extremely fast.

    The fast particles would have subsequently decayed into the quarks, electrons and other elementary particles that exist today. These particles would then have undergone standard thermalization, scattering off each other and distributing their energy. That process is still ongoing in the present-day universe and will continue for trillions of years.

    Simplicity Occurs

    The ideas about the early universe aren’t easily testable. But around 2012, the researchers realized that a far-from-equilibrium scenario also arises in experiments — namely, when heavy atomic nuclei are smashed together at nearly the speed of light at the Relativistic Heavy Ion Collider in New York and at Europe’s Large Hadron Collider.


    BNL RHIC

    CERN LHC

    These nuclear collisions create extreme configurations of matter and energy, which then start to relax toward equilibrium. You might think the collisions would produce a complicated mess. But when Berges and his colleagues analyzed the collisions theoretically, they found structure and simplicity. The dynamics, Berges said, “can be encoded in a few numbers.”

    The pattern continued. Around 2015, after talking to experimentalists who were probing ultracold atomic gases in the lab, Berges, Gasenzer and other theorists calculated that these systems should also exhibit universal scaling after being rapidly cooled to conditions extremely far from equilibrium.

    Last fall, two groups — one led by Markus Oberthaler of Heidelberg and the other by Jörg Schmiedmayer of the Vienna Center for Quantum Science and Technology — reported simultaneously in Nature that they had observed fractal-like universal scaling in the way various properties of the 100,000-or-so atoms in their gases changed over space and time. “Again, simplicity occurs,” said Berges, who was one of the first to predict the phenomenon in such systems. “You can see that the dynamics can be described by a few scaling exponents and universal scaling functions. And some of them turned out to be the same as what was predicted for particles in the early universe. That’s the universality.”

    The researchers now believe that the universal scaling phenomenon occurs at the nanokelvin scale of ultracold atoms, the 10-trillion-kelvin scale of nuclear collisions, and the 10,000-trillion-trillion-kelvin scale of the early universe. “That’s the point of universality — that you can expect to see these phenomena on different energy and length scales,” Berges said.

    The case of the early universe may hold the most intrinsic interest, but it’s the highly controlled, isolated laboratory systems that are enabling scientists to tease out the universal rules governing the beginning stages of change. “We know everything that’s in the box,” as Hadzibabic put it. “It’s this isolation from the environment that allows you to study the phenomenon in its pure form.”

    One major thrust has been to figure out where systems’ scaling exponents come from. In some cases, experts have traced the exponents [Physical Review D] to the number of spatial dimensions a system occupies, as well as its symmetries — that is, all the ways it can be transformed without changing (just as a square stays the same when rotated by 90 degrees).

    Those insights are helping to address a paradox about what happens to information about the past as systems thermalize. Quantum mechanics requires that as particles evolve, information about their past is never lost. And yet, thermalization seems to contradict this: When two neglected cups of coffee are both at room temperature, how can you tell which one started out hotter?

    It seems that as a system begins to evolve, key details, like its symmetries, are retained and become encoded in the scaling exponents dictating its fractal evolution, while other details, like the initial configuration of its particles or the interactions between them, become irrelevant to its behavior, scrambled among its particles.

    And this scrambling process happens very early indeed. In their papers this spring, Berges, Gasenzer and their collaborators independently described prescaling for the first time, a period before universal scaling that their papers predicted for nuclear collisions and ultracold atoms, respectively. Prescaling suggests that when a system first evolves from its initial, far-from-equilibrium condition, scaling exponents don’t yet perfectly describe it. The system retains some of its previous structure — remnants of its initial configuration. But as prescaling progresses, the system assumes a more universal form in space and time, essentially obscuring irrelevant information about its own past. If this idea is borne out by future experiments, prescaling may be the nocking of time’s arrow onto the bowstring.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 10:23 am on December 2, 2019 Permalink | Reply
    Tags: Cosmic inflation yields pristine flatness, ESA/Planck CMB, ΛCDM does not predict any curvature; it says the universe is flat., perhaps the universe is really closed., Quanta Magazine, What Shape Is the Universe?

    From Quanta Magazine: “What Shape Is the Universe? A New Study Suggests We’ve Got It All Wrong” 

    Quanta Magazine
    From Quanta Magazine

    November 4, 2019
    Natalie Wolchover

    When researchers reanalyzed the gold-standard data set of the early universe, they concluded that the cosmos must be “closed,” or curled up like a ball. Most others remain unconvinced.

    Lucy Reading-Ikkanda/Quanta Magazine
    In a flat universe, as seen on the left, a straight line will extend out to infinity. A closed universe, right, is curled up like the surface of a sphere. In it, a straight line will eventually return to its starting point.

    A provocative paper published today in the journal Nature Astronomy argues that the universe may curve around and close in on itself like a sphere, rather than lying flat like a sheet of paper as the standard theory of cosmology predicts. The authors reanalyzed a major cosmological data set and concluded that the data favors a closed universe with 99% certainty — even as other evidence suggests the universe is flat.

    The data in question — the Planck space telescope’s observations of ancient light called the cosmic microwave background (CMB) — “clearly points towards a closed model,” said Alessandro Melchiorri of Sapienza University of Rome.

    CMB per ESA/Planck

    ESA/Planck 2009 to 2013

    He co-authored the new paper with Eleonora di Valentino of the University of Manchester and Joseph Silk, principally of the University of Oxford. In their view, the discordance between the CMB data, which suggests the universe is closed, and other data pointing to flatness represents a “cosmological crisis” that calls for “drastic rethinking.”

    However, the team of scientists behind the Planck telescope reached different conclusions in their 2018 analysis. Antony Lewis, a cosmologist at the University of Sussex and a member of the Planck team who worked on that analysis, said the simplest explanation for the specific feature in the CMB data that di Valentino, Melchiorri and Silk interpreted as evidence for a closed universe “is that it is just a statistical fluke.” Lewis and other experts say they’ve already closely scrutinized the issue, along with related puzzles in the data.

    “There is no dispute that these symptoms exist at some level,” said Graeme Addison, a cosmologist at Johns Hopkins University who was not involved in the Planck analysis or the new research. “There is only disagreement as to the interpretation.”

    Whether the universe is flat — that is, whether two light beams shooting side by side through space will stay parallel forever, rather than eventually crossing and swinging back around to where they started, as in a closed universe — critically depends on the universe’s density. If all the matter and energy in the universe, including dark matter and dark energy, adds up to exactly the concentration at which the energy of the outward expansion balances the energy of the inward gravitational pull, space will extend flatly in all directions.

    The leading theory of the universe’s birth, known as cosmic inflation, yields pristine flatness.

    Inflation

    Alan Guth, from Highland Park High School and M.I.T., who first proposed cosmic inflation

    HPHS Owls

    Alan Guth’s notes:

    Alan Guth’s original notes on inflation

    And various observations since the early 2000s have shown that our universe is very nearly flat and must therefore come within a hair of this critical density — which is calculated to be about 5.7 hydrogen atoms’ worth of stuff per cubic meter of space, much of it invisible.
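
    That figure can be checked on the back of an envelope. In general relativity the critical density is ρ_c = 3H₀²/8πG; taking a Hubble constant near 70 km/s/Mpc (about 2.3 × 10⁻¹⁸ per second):

    ```latex
    \rho_c \;=\; \frac{3 H_0^{2}}{8\pi G}
    \;\approx\; \frac{3\,(2.3\times 10^{-18}\,\mathrm{s^{-1}})^{2}}
                     {8\pi \times 6.67\times 10^{-11}\,\mathrm{m^{3}\,kg^{-1}\,s^{-2}}}
    \;\approx\; 9\times 10^{-27}\,\mathrm{kg/m^{3}}
    ```

    Dividing by the mass of a hydrogen atom (about 1.67 × 10⁻²⁷ kg) gives five to six atoms per cubic meter, consistent with the 5.7 quoted above; the exact number depends on the measured value of H₀.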

    The Planck telescope measures the density of the universe by gauging how much the CMB light has been deflected or “gravitationally lensed” while passing through the universe over the past 13.8 billion years. The more matter these CMB photons encounter on their journey to Earth, the more lensed they get, so that their direction no longer crisply reflects their starting point in the early universe. This shows up in the data as a blurring effect, which smooths out certain peaks and dips in the spatial pattern of the light. According to the new analysis, the large amount of lensing of the CMB suggests that the universe may be about 5% denser than the critical density, averaging something like six hydrogen atoms per cubic meter instead of 5.7, so that gravity wins and the cosmos closes in on itself.

    The Planck scientists noticed the larger-than-expected lensing effect years ago; the anomaly showed up most prominently in their final analysis of the full data set, released last year. If the universe is flat, cosmologists expect a curvature measurement to fall within about one “standard deviation” of zero, due to random statistical fluctuations in the data. But both the Planck team and the authors of the new paper found that the CMB data deviates by 3.4 standard deviations. Assuming that the universe is flat, this is a major fluke — about equivalent to getting heads in a coin toss 11 times in a row, which happens less than 1% of the time. The Planck team attributes the measurement to just such a fluke, or to some unaccounted-for effect that blurs the CMB light, mimicking the effect of extra matter.
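
    The two ways of stating the fluke are consistent, as a quick calculation shows:

    ```latex
    % Eleven heads in a row:
    \left(\tfrac{1}{2}\right)^{11} \;=\; \tfrac{1}{2048} \;\approx\; 0.05\%
    % A two-sided 3.4-sigma Gaussian fluctuation has probability about 0.07%;
    % the two are the same order of magnitude, and both are well under 1%.
    ```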

    Or perhaps the universe is really closed. Di Valentino and co-authors point out that a closed model resolves other anomalous findings in the CMB. For instance, researchers deduce the values of key ingredients of our universe, such as the amount of dark matter and dark energy, by measuring variations in the color of the CMB light coming from different regions of the sky. But curiously, they get different answers when they compare small regions of the sky and when they compare large regions. The authors point out that when you recalculate these values assuming a closed universe, they don’t differ.

    Will Kinney, a cosmologist at the University at Buffalo in New York, called this bonus benefit of the closed universe model “really interesting.” But he noted that the discrepancies between small- and large-scale variations seen in the CMB light could easily be statistical fluctuations themselves, or they might stem from the same unidentified error that may affect the lensing measurement.

    There are only six of these key properties that shape the universe, according to the standard theory of cosmology, which is known as ΛCDM (named for dark energy, represented by the Greek letter Λ, or lambda, and cold dark matter).

    Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe). Date: 2010. Credit: Alex Mittelmann, Coldcreation

    With only six numbers, ΛCDM accurately describes almost all features of the cosmos. And ΛCDM does not predict any curvature; it says the universe is flat.

    The new paper effectively argues that we may need to add a seventh parameter to ΛCDM: a number that describes the curvature of the universe. For the lensing measurement, adding a seventh number improves the fit with the data.

    But other cosmologists argue that before taking an anomaly seriously enough to add a seventh parameter to the theory, we need to take into account all the other things that ΛCDM gets right. Sure, we can focus on this one anomaly — a coin coming up heads 11 times in a row — and say that something’s off. But the CMB is such a huge data set that it’s like flipping a coin hundreds or thousands of times. It’s not too hard to imagine that in doing so, we’ll encounter one random run of 11 heads. Physicists call this the “look elsewhere” effect.

    Furthermore, researchers note that the seventh parameter isn’t needed for most other measurements. There’s a second way of gleaning the spatial curvature from the CMB, by measuring correlations between light from sets of four points in the sky; this “lensing reconstruction” measurement indicates that the universe is flat, with no seventh parameter needed. In addition, the BOSS survey’s independent observations of cosmological signals called baryon acoustic oscillations also point to flatness. Planck, in their 2018 analysis, combined their lensing measurement with these two other measurements and arrived at an overall value for the spatial curvature within one standard deviation of zero.

    Di Valentino, Melchiorri and Silk think that pulling these three different data sets together masks the fact that the different data sets don’t actually agree. “The point here is not that the universe is closed,” Melchiorri said by email. “The problem is the inconsistency between the data. This indicates that there is currently no concordance model and that we are missing something.” In other words, ΛCDM is wrong or incomplete.

    All other researchers consulted for this article think the weight of the evidence points to the universe being flat. “Given the other measurements,” Addison said, “the clearest interpretation of this behavior of the Planck data is that it’s a statistical fluctuation. Maybe it’s caused by some slight inaccuracy in the Planck analysis, or maybe it’s completely just noise fluctuations or random chance. But either way, there’s not really a reason to take this closed model seriously.”

    That’s not to say pieces aren’t missing from the cosmological picture. ΛCDM seemingly predicts the wrong value for the current expansion rate of the universe, causing a controversy known as the Hubble constant problem. But assuming the universe is closed doesn’t fix this problem — in fact, adding curvature worsens the prediction of the expansion rate. Other than Planck’s anomalous lensing measurement, there’s no reason to think the universe is closed.

    “Time will tell, but I am not, personally, terribly worried about this one,” Kinney said, referring to the suggestion of curvature in the CMB data. “It’s of a kind with similar anomalies that have proven to be vapor.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:50 pm on March 18, 2019 Permalink | Reply
    Tags: "AI Algorithms Are Now Shockingly Good at Doing Science", Quanta Magazine,   

    From Quanta via WIRED: “AI Algorithms Are Now Shockingly Good at Doing Science” 

    Quanta Magazine
    Quanta Magazine

    via

    Wired logo

    From WIRED

    3.17.19
    Dan Falk

    Whether probing the evolution of galaxies or discovering new chemical compounds, algorithms are detecting patterns no humans could have spotted. Rachel Suggs/Quanta Magazine

    No human, or team of humans, could possibly keep up with the avalanche of information produced by many of today’s physics and astronomy experiments. Some of them record terabytes of data every day—and the torrent is only increasing. The Square Kilometer Array, a radio telescope slated to switch on in the mid-2020s, will generate about as much data traffic each year as the entire internet.

    SKA Square Kilometer Array

    The deluge has many scientists turning to artificial intelligence for help. With minimal human input, AI systems such as artificial neural networks—computer-simulated networks of neurons that mimic the function of brains—can plow through mountains of data, highlighting anomalies and detecting patterns that humans could never have spotted.

    Of course, the use of computers to aid in scientific research goes back about 75 years, and the method of manually poring over data in search of meaningful patterns originated millennia earlier. But some scientists are arguing that the latest techniques in machine learning and AI represent a fundamentally new way of doing science. One such approach, known as generative modeling, can help identify the most plausible theory among competing explanations for observational data, based solely on the data, and, importantly, without any preprogrammed knowledge of what physical processes might be at work in the system under study. Proponents of generative modeling see it as novel enough to be considered a potential “third way” of learning about the universe.

    Traditionally, we’ve learned about nature through observation. Think of Johannes Kepler poring over Tycho Brahe’s tables of planetary positions and trying to discern the underlying pattern. (He eventually deduced that planets move in elliptical orbits.) Science has also advanced through simulation. An astronomer might model the movement of the Milky Way and its neighboring galaxy, Andromeda, and predict that they’ll collide in a few billion years. Both observation and simulation help scientists generate hypotheses that can then be tested with further observations. Generative modeling differs from both of these approaches.

    Milkdromeda: Andromeda (left) in Earth’s night sky in 3.75 billion years. NASA

    “It’s basically a third approach, between observation and simulation,” says Kevin Schawinski, an astrophysicist and one of generative modeling’s most enthusiastic proponents, who worked until recently at the Swiss Federal Institute of Technology in Zurich (ETH Zurich). “It’s a different way to attack a problem.”

    Some scientists see generative modeling and other new techniques simply as power tools for doing traditional science. But most agree that AI is having an enormous impact, and that its role in science will only grow. Brian Nord, an astrophysicist at Fermi National Accelerator Laboratory who uses artificial neural networks to study the cosmos, is among those who fear there’s nothing a human scientist does that will be impossible to automate. “It’s a bit of a chilling thought,” he said.


    Discovery by Generation

    Ever since graduate school, Schawinski has been making a name for himself in data-driven science. While working on his doctorate, he faced the task of classifying thousands of galaxies based on their appearance. Because no readily available software existed for the job, he decided to crowdsource it—and so the Galaxy Zoo citizen science project was born.

    Galaxy Zoo via Astrobites

    Beginning in 2007, ordinary computer users helped astronomers by logging their best guesses as to which galaxy belonged in which category, with majority rule typically leading to correct classifications. The project was a success, but, as Schawinski notes, AI has made it obsolete: “Today, a talented scientist with a background in machine learning and access to cloud computing could do the whole thing in an afternoon.”

    Schawinski turned to the powerful new tool of generative modeling in 2016. Essentially, generative modeling asks how likely it is, given condition X, that you’ll observe outcome Y. The approach has proved incredibly potent and versatile. As an example, suppose you feed a generative model a set of images of human faces, with each face labeled with the person’s age. As the computer program combs through these “training data,” it begins to draw a connection between older faces and an increased likelihood of wrinkles. Eventually it can “age” any face that it’s given—that is, it can predict what physical changes a given face of any age is likely to undergo.

    None of these faces is real. The faces in the top row (A) and left-hand column (B) were constructed by a generative adversarial network (GAN) using building-block elements of real faces. The GAN then combined basic features of the faces in A, including their gender, age and face shape, with finer features of faces in B, such as hair color and eye color, to create all the faces in the rest of the grid. NVIDIA

    The best-known generative modeling systems are “generative adversarial networks” (GANs). After adequate exposure to training data, a GAN can repair images that have damaged or missing pixels, or they can make blurry photographs sharp. They learn to infer the missing information by means of a competition (hence the term “adversarial”): One part of the network, known as the generator, generates fake data, while a second part, the discriminator, tries to distinguish fake data from real data. As the program runs, both halves get progressively better. You may have seen some of the hyper-realistic, GAN-produced “faces” that have circulated recently — images of “freakishly realistic people who don’t actually exist,” as one headline put it.
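
    For readers who want to see the adversarial mechanics concretely, here is a minimal sketch of a GAN training loop, assuming PyTorch and a toy one-dimensional “data set” (samples from a Gaussian) in place of images; the face-generating networks described above work the same way, just at enormously larger scale:

    ```python
    # Minimal GAN sketch: the generator learns to mimic samples from N(3, 0.5);
    # the discriminator learns to tell real samples from the generator's fakes.
    import torch
    import torch.nn as nn

    latent_dim = 8
    generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                                  nn.Linear(32, 1), nn.Sigmoid())
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0           # "real" data
        fake = generator(torch.randn(64, latent_dim))   # generator's forgeries

        # Discriminator step: push outputs toward 1 on real data, 0 on fakes.
        d_loss = (bce(discriminator(real), torch.ones(64, 1)) +
                  bce(discriminator(fake.detach()), torch.zeros(64, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator step: adjust weights so the discriminator says 1 on fakes.
        g_loss = bce(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # After training, generated samples should cluster near the real mean of 3.0.
    print(generator(torch.randn(1000, latent_dim)).mean().item())
    ```

    As the loop runs, both halves improve together, which is the “competition” described above.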

    More broadly, generative modeling takes sets of data (typically images, but not always) and breaks each of them down into a set of basic, abstract building blocks — scientists refer to this as the data’s “latent space.” The algorithm manipulates elements of the latent space to see how this affects the original data, and this helps uncover physical processes that are at work in the system.

    The idea of a latent space is abstract and hard to visualize, but as a rough analogy, think of what your brain might be doing when you try to determine the gender of a human face. Perhaps you notice hairstyle, nose shape, and so on, as well as patterns you can’t easily put into words. The computer program is similarly looking for salient features among data: Though it has no idea what a mustache is or what gender is, if it’s been trained on data sets in which some images are tagged “man” or “woman,” and in which some have a “mustache” tag, it will quickly deduce a connection.

    In a paper published in December in Astronomy & Astrophysics, Schawinski and his ETH Zurich colleagues Dennis Turp and Ce Zhang used generative modeling to investigate the physical changes that galaxies undergo as they evolve. (The software they used treats the latent space somewhat differently from the way a generative adversarial network treats it, so it is not technically a GAN, though similar.) Their model created artificial data sets as a way of testing hypotheses about physical processes. They asked, for instance, how the “quenching” of star formation—a sharp reduction in formation rates—is related to the increasing density of a galaxy’s environment.

    For Schawinski, the key question is how much information about stellar and galactic processes could be teased out of the data alone. “Let’s erase everything we know about astrophysics,” he said. “To what degree could we rediscover that knowledge, just using the data itself?”

    First, the galaxy images were reduced to their latent space; then, Schawinski could tweak one element of that space in a way that corresponded to a particular change in the galaxy’s environment—the density of its surroundings, for example. Then he could re-generate the galaxy and see what differences turned up. “So now I have a hypothesis-generation machine,” he explained. “I can take a whole bunch of galaxies that are originally in a low-density environment and make them look like they’re in a high-density environment, by this process.” Schawinski, Turp and Zhang saw that, as galaxies go from low- to high-density environments, they become redder in color, and their stars become more centrally concentrated. This matches existing observations about galaxies, Schawinski said. The question is why this is so.
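
    In code terms, that procedure looks something like the sketch below. Everything here — `encoder`, `decoder`, `density_dim` — is a hypothetical stand-in; the actual model in the Astronomy & Astrophysics paper is more involved than a plain encoder/decoder pair.

    ```python
    # Hypothetical sketch of latent-space hypothesis generation: encode each
    # galaxy, nudge the latent coordinate tied to environmental density,
    # decode, and compare the original and modified galaxies.
    import numpy as np

    def environment_experiment(encoder, decoder, galaxies, density_dim, delta):
        pairs = []
        for galaxy in galaxies:
            z = encoder(galaxy)               # compress image into latent vector
            z_shifted = z.copy()
            z_shifted[density_dim] += delta   # "move" galaxy to denser surroundings
            pairs.append((decoder(z), decoder(z_shifted)))
        return pairs

    # Toy demo with identity maps standing in for trained networks:
    demo = environment_experiment(
        encoder=lambda g: np.asarray(g, dtype=float),
        decoder=lambda z: z,
        galaxies=[[0.2, 0.5, 0.1]],
        density_dim=2,
        delta=1.0,
    )
    print(demo)  # [(array([0.2, 0.5, 0.1]), array([0.2, 0.5, 1.1]))]
    ```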

    The next step, Schawinski says, has not yet been automated: “I have to come in as a human, and say, ‘OK, what kind of physics could explain this effect?’” For the process in question, there are two plausible explanations: Perhaps galaxies become redder in high-density environments because they contain more dust, or perhaps they become redder because of a decline in star formation (in other words, their stars tend to be older). With a generative model, both ideas can be put to the test: Elements in the latent space related to dustiness and star formation rates are changed to see how this affects galaxies’ color. “And the answer is clear,” Schawinski said. Redder galaxies are “where the star formation had dropped, not the ones where the dust changed. So we should favor that explanation.”

    Using generative modeling, astrophysicists could investigate how galaxies change when they go from low-density regions of the cosmos to high-density regions, and what physical processes are responsible for these changes. K. Schawinski et al.; doi: 10.1051/0004-6361/201833800

    The approach is related to traditional simulation, but with critical differences. A simulation is “essentially assumption-driven,” Schawinski said. “The approach is to say, ‘I think I know what the underlying physical laws are that give rise to everything that I see in the system.’ So I have a recipe for star formation, I have a recipe for how dark matter behaves, and so on. I put all of my hypotheses in there, and I let the simulation run. And then I ask: Does that look like reality?” What he’s done with generative modeling, he said, is “in some sense, exactly the opposite of a simulation. We don’t know anything; we don’t want to assume anything. We want the data itself to tell us what might be going on.”

    The apparent success of generative modeling in a study like this obviously doesn’t mean that astronomers and graduate students have been made redundant—but it appears to represent a shift in the degree to which learning about astrophysical objects and processes can be achieved by an artificial system that has little more at its electronic fingertips than a vast pool of data. “It’s not fully automated science—but it demonstrates that we’re capable of at least in part building the tools that make the process of science automatic,” Schawinski said.

    Generative modeling is clearly powerful, but whether it truly represents a new approach to science is open to debate. For David Hogg, a cosmologist at New York University and the Flatiron Institute (which, like Quanta, is funded by the Simons Foundation), the technique is impressive but ultimately just a very sophisticated way of extracting patterns from data—which is what astronomers have been doing for centuries.


    In other words, it’s an advanced form of observation plus analysis. Hogg’s own work, like Schawinski’s, leans heavily on AI; he’s been using neural networks to classify stars according to their spectra and to infer other physical attributes of stars using data-driven models. But he sees his work, as well as Schawinski’s, as tried-and-true science. “I don’t think it’s a third way,” he said recently. “I just think we as a community are becoming far more sophisticated about how we use the data. In particular, we are getting much better at comparing data to data. But in my view, my work is still squarely in the observational mode.”

    Hardworking Assistants

    Whether they’re conceptually novel or not, it’s clear that AI and neural networks have come to play a critical role in contemporary astronomy and physics research. At the Heidelberg Institute for Theoretical Studies, the physicist Kai Polsterer heads the astroinformatics group — a team of researchers focused on new, data-centered methods of doing astrophysics. Recently, they’ve been using a machine-learning algorithm to extract redshift information from galaxy data sets, a previously arduous task.

    Polsterer sees these new AI-based systems as “hardworking assistants” that can comb through data for hours on end without getting bored or complaining about the working conditions. These systems can do all the tedious grunt work, he said, leaving you “to do the cool, interesting science on your own.”

    But they’re not perfect. In particular, Polsterer cautions, the algorithms can only do what they’ve been trained to do. The system is “agnostic” regarding the input. Give it a galaxy, and the software can estimate its redshift and its age — but feed that same system a selfie, or a picture of a rotting fish, and it will output a (very wrong) age for that, too. In the end, oversight by a human scientist remains essential, he said. “It comes back to you, the researcher. You’re the one in charge of doing the interpretation.”

    For his part, Nord, at Fermilab, cautions that it’s crucial that neural networks deliver not only results, but also error bars to go along with them, as every undergraduate is trained to do. In science, if you make a measurement and don’t report an estimate of the associated error, no one will take the results seriously, he said.

    Like many AI researchers, Nord is also concerned about the impenetrability of results produced by neural networks; often, a system delivers an answer without offering a clear picture of how that result was obtained.

    Yet not everyone feels that a lack of transparency is necessarily a problem. Lenka Zdeborová, a researcher at the Institute of Theoretical Physics at CEA Saclay in France, points out that human intuitions are often equally impenetrable. You look at a photograph and instantly recognize a cat—“but you don’t know how you know,” she said. “Your own brain is in some sense a black box.”

    It’s not only astrophysicists and cosmologists who are migrating toward AI-fueled, data-driven science. Quantum physicists like Roger Melko of the Perimeter Institute for Theoretical Physics and the University of Waterloo in Ontario have used neural networks to solve some of the toughest and most important problems in that field, such as how to represent the mathematical “wave function” describing a many-particle system.

    Perimeter Institute in Waterloo, Canada


    AI is essential because of what Melko calls “the exponential curse of dimensionality.” That is, the possibilities for the form of a wave function grow exponentially with the number of particles in the system it describes. The difficulty is similar to trying to work out the best move in a game like chess or Go: You try to peer ahead to the next move, imagining what your opponent will play, and then choose the best response, but with each move, the number of possibilities proliferates.
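
    A few lines of arithmetic show how fast the curse bites (a rough illustration, not tied to Melko’s actual methods): a system of N spin-1/2 particles has 2^N basis states, so naively storing its wave function takes 2^N complex amplitudes.

    ```python
    # Memory needed to store 2**N double-precision complex amplitudes
    # (16 bytes each) for the wave function of N spin-1/2 particles.
    for n in (10, 30, 50, 300):
        amplitudes = 2 ** n
        gigabytes = amplitudes * 16 / 1e9
        print(f"N = {n:3d}: {amplitudes:.2e} amplitudes, {gigabytes:.2e} GB")
    # N = 50 already demands ~18 million GB; by N = 300 the amplitude count
    # exceeds the number of atoms in the observable universe.
    ```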

    Of course, AI systems have mastered both of these games—chess, decades ago, and Go in 2016, when an AI system called AlphaGo defeated a top human player. They are similarly suited to problems in quantum physics, Melko says.

    The Mind of the Machine

    Whether Schawinski is right in claiming that he’s found a “third way” of doing science, or whether, as Hogg says, it’s merely traditional observation and data analysis “on steroids,” it’s clear AI is changing the flavor of scientific discovery, and it’s certainly accelerating it. How far will the AI revolution go in science?

    Occasionally, grand claims are made regarding the achievements of a “robo-scientist.” A decade ago, an AI robot chemist named Adam investigated the genome of baker’s yeast and worked out which genes are responsible for making certain amino acids. (Adam did this by observing strains of yeast that had certain genes missing, and comparing the results to the behavior of strains that had the genes.) Wired’s headline read, “Robot Makes Scientific Discovery All by Itself.”

    More recently, Lee Cronin, a chemist at the University of Glasgow, has been using a robot to randomly mix chemicals, to see what sorts of new compounds are formed.

    Monitoring the reactions in real time with a mass spectrometer, a nuclear magnetic resonance machine, and an infrared spectrometer, the system eventually learned to predict which combinations would be the most reactive. Even if it doesn’t lead to further discoveries, Cronin has said, the robotic system could allow chemists to speed up their research by about 90 percent.

    Last year, another team of scientists at ETH Zurich used neural networks to deduce physical laws from sets of data. Their system, a sort of robo-Kepler, rediscovered the heliocentric model of the solar system from records of the position of the sun and Mars in the sky, as seen from Earth, and figured out the law of conservation of momentum by observing colliding balls. Since physical laws can often be expressed in more than one way, the researchers wonder if the system might offer new ways—perhaps simpler ways—of thinking about known laws.

    These are all examples of AI kick-starting the process of scientific discovery, though in every case, we can debate just how revolutionary the new approach is. Perhaps most controversial is the question of how much information can be gleaned from data alone—a pressing question in the age of stupendously large (and growing) piles of it. In The Book of Why (2018), the computer scientist Judea Pearl and the science writer Dana Mackenzie assert that data are “profoundly dumb.” Questions about causality “can never be answered from data alone,” they write. “Anytime you see a paper or a study that analyzes the data in a model-free way, you can be certain that the output of the study will merely summarize, and perhaps transform, but not interpret the data.” Schawinski sympathizes with Pearl’s position, but he described the idea of working with “data alone” as “a bit of a straw man.” He’s never claimed to deduce cause and effect that way, he said. “I’m merely saying we can do more with data than we often conventionally do.”

    Another oft-heard argument is that science requires creativity, and that—at least so far—we have no idea how to program that into a machine. (Simply trying everything, like Cronin’s robo-chemist, doesn’t seem especially creative.) “Coming up with a theory, with reasoning, I think demands creativity,” Polsterer said. “Every time you need creativity, you will need a human.” And where does creativity come from? Polsterer suspects it is related to boredom—something that, he says, a machine cannot experience. “To be creative, you have to dislike being bored. And I don’t think a computer will ever feel bored.” On the other hand, words like “creative” and “inspired” have often been used to describe programs like Deep Blue and AlphaGo. And the struggle to describe what goes on inside the “mind” of a machine is mirrored by the difficulty we have in probing our own thought processes.

    Schawinski recently left academia for the private sector; he now runs a startup called Modulos, which employs a number of ETH scientists and, according to its website, works “in the eye of the storm of developments in AI and machine learning.” Whatever obstacles may lie between current AI technology and full-fledged artificial minds, he and other experts feel that machines are poised to do more and more of the work of human scientists. Whether there is a limit remains to be seen.

    “Will it be possible, in the foreseeable future, to build a machine that can discover physics or mathematics that the brightest humans alive are not able to do on their own, using biological hardware?” Schawinski wonders. “Will the future of science eventually necessarily be driven by machines that operate on a level that we can never reach? I don’t know. It’s a good question.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:21 am on January 28, 2019 Permalink | Reply
    Tags: Black Hole Engines and Superbubble Shockwaves, BlueTides simulation on Blue Waters supercomputer, Cold dark matter halos, MOND - Modified Newtonian Dynamics and Mordehai Milgrom, Quanta Magazine, Simulation of the 14-billion-year history of the universe on a supercomputer, The Universe Is Not a Simulation but We Can Now Simulate It

    From Quanta Magazine: “The Universe Is Not a Simulation, but We Can Now Simulate It” 

    Quanta Magazine
    From Quanta Magazine

    June 12, 2018 [Just found this.]
    Natalie Wolchover

    1
    From video by Mark Vogelsberger/IllustrisTNG for Quanta
    The evolution of magnetic fields in a 10-Megaparsec section of the IllustrisTNG universe simulation. Regions of low magnetic energy appear in blue and purple, while orange and white correspond to more magnetically energetic regions inside dark matter halos and galaxies.

    In the early 2000s, a small community of coder-cosmologists set out to simulate the 14-billion-year history of the universe on a supercomputer. They aimed to create a proxy of the cosmos, a Cliffs Notes version in computer code that could run in months instead of giga-years, to serve as a laboratory for studying the real universe.

    The simulations failed spectacularly. Like mutant cells in a petri dish, mock galaxies grew all wrong, becoming excessively starry blobs instead of gently rotating spirals. When the researchers programmed in supermassive black holes at the centers of galaxies, the black holes either turned those galaxies into donuts or drifted out from galactic centers like monsters on the prowl.

    But recently, the scientists seem to have begun to master the science and art of cosmos creation. They are applying the laws of physics to a smooth, hot fluid of (simulated) matter, as existed in the infant universe, and seeing the fluid evolve into spiral galaxies and galaxy clusters like those in the cosmos today.

    “I was like, wow, I can’t believe it!” said Tiziana Di Matteo, a numerical cosmologist at Carnegie Mellon University, about seeing realistic spiral galaxies form for the first time in 2015 in the initial run of BlueTides, one of several major ongoing simulation series. “You kind of surprise yourself, because it’s just a bunch of lines of code, right?”

    2
    Tiziana Di Matteo, a professor of physics at Carnegie Mellon University, co-developed the MassiveBlack-II and BlueTides cosmological simulations.

    With the leap in mock-universe verisimilitude, researchers are now using their simulations as laboratories. After each run, they can peer into their codes and figure out how and why certain features of their simulated cosmos arise, potentially also explaining what’s going on in reality. The newly functional proxies have inspired explanations and hypotheses about the 84 percent of matter that’s invisible — the long-sought “dark matter” that seemingly engulfs galaxies. Formerly puzzling telescope observations about real galaxies that raised questions about the standard dark matter hypothesis are being explained in the state-of-the-art facsimiles.

    The simulations have also granted researchers such as Di Matteo virtual access to the supermassive black holes that anchor the centers of galaxies, whose formation in the early universe remains mysterious. “Now we are in an exciting place where we can actually use these models to make completely new predictions,” she said.

    Black Hole Engines and Superbubble Shockwaves

    Until about 15 years ago, most cosmological simulations didn’t even attempt to form realistic galaxies. They modeled only dark matter, which in the standard hypothesis interacts only gravitationally, making it much easier to code than the complicated atomic stuff we see.

    The dark-matter-only simulations found that roundish “halos” of invisible matter spontaneously formed with the right sizes and shapes to potentially cradle visible galaxies within them.

    Caterpillar Project: a Milky-Way-size dark-matter halo and its subhalos (circled), from an enormous suite of simulations. Griffen et al. 2016

    Volker Springel, a leading coder-cosmologist at Heidelberg University in Germany, said, “These calculations were really instrumental to establish that the now-standard cosmological model, despite its two strange components — the dark matter and the dark energy — is actually a pretty promising prediction of what’s going on.”

    5
    Volker Springel, a professor at Heidelberg University, developed the simulation codes GADGET and AREPO, the latter of which is used in the state-of-the-art IllustrisTNG simulation [below]. HITS

    Researchers then started adding visible matter into their codes, stepping up the difficulty astronomically. Unlike dark matter halos, interacting atoms evolve complexly as the universe unfolds, giving rise to fantastic objects like stars and supernovas. Unable to code the physics in full, coders had to simplify and omit. Every team took a different approach to this abridgement, picking and programming what they saw as the key astrophysics.

    Then, in 2012, a study [AIP] by Cecilia Scannapieco of the Leibniz Institute for Astrophysics in Potsdam gave the field a wake-up call. “She convinced a bunch of people to run the same galaxy with all their codes,” said James Wadsley of McMaster University in Canada, who participated. “And everyone got it wrong.” All their galaxies looked different, and “everyone made too many stars.”

    3
    Henize 70 is a superbubble of hot expanding gas about 300 light-years across that is located within the Large Magellanic Cloud, a satellite of the Milky Way galaxy.
    Credit: FORS Team, 8.2-meter VLT, ESO

    ESO/FORS1 on the VLT


    ESO VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft). Its four Unit Telescopes are ANTU (UT1; The Sun), KUEYEN (UT2; The Moon), MELIPAL (UT3; The Southern Cross) and YEPUN (UT4; Venus, as evening star). Credit: J.L. Dauvergne & G. Hüdepohl, atacamaphoto.com

    Scannapieco’s study was both “embarrassing,” Wadsley said, and hugely motivational: “That’s when people doubled down and realized they needed black holes, and they needed the supernovae to work better” in order to create credible galaxies. In real galaxies, he and others explained, star production is diminishing. As the galaxies run low on fuel, their lights are burning out and not being replaced. But in the simulations, Wadsley said, late-stage galaxies were “still making stars like crazy,” because gas wasn’t getting kicked out.

    The first of the two critical updates that have fixed the problem in the latest generation of simulations is the addition of supermassive black holes at spiral galaxies’ centers.

    Sgr A*: NASA/Chandra image of the supermassive black hole at the center of the Milky Way

    These immeasurably dense, bottomless pits in the space-time fabric, some weighing more than a billion suns, act as fuel-burning engines, messily eating surrounding stars, gas and dust and spewing the debris outward in lightsaber-like beams called jets. They’re the main reason present-day spiral galaxies form fewer stars than they used to.

    The other new key ingredient is supernovas — and the “superbubbles” formed from the combined shockwaves of hundreds of supernovas exploding in quick succession.

    This is an artist’s impression of the SN 1987A remnant. The image is based on real data and reveals the cold, inner regions of the remnant, in red, where tremendous amounts of dust were detected and imaged by ALMA. This inner region is contrasted with the outer shell, lacy white and blue circles, where the blast wave from the supernova is colliding with the envelope of gas ejected from the star prior to its powerful detonation. Image credit: ALMA / ESO / NAOJ / NRAO / Alexandra Angelich, NRAO / AUI / NSF.

    In a superbubble [see Henize 70 above], “a small galaxy over a few million years could blow itself apart,” said Wadsley, who integrated superbubbles into a code called GASOLINE2 in 2015. “They’re very kind of crazy extreme objects.” They occur because stars tend to live and die in clusters, forming by the hundreds of thousands as giant gas clouds collapse and later going supernova within about a million years of one another. Superbubbles sweep whole areas or even entire small galaxies clean of gas and dust, curbing star formation and helping to stir the pushed-out matter before it later recollapses. Their inclusion made small simulated galaxies much more realistic.

    4
    Jillian Bellovary, a numerical cosmologist at Queensborough Community College and the American Museum of Natural History in New York, put black holes into the GASOLINE simulation code. H.N. James.

    Jillian Bellovary, a wry young numerical cosmologist at Queensborough Community College and the American Museum of Natural History in New York, coded some of the first black holes, putting them into GASOLINE in 2008. Skipping or simplifying tons of physics, she programmed an equation dictating how much gas the black hole should consume as a function of the gas’s density and temperature, and a second equation telling the black hole how much energy to release. Others later built on Bellovary’s work, most importantly by figuring out how to keep black holes anchored at the centers of mock galaxies, while stopping them from blowing out so much gas that they’d form galactic donuts.
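    For a sense of what such a prescription looks like in code, here is a minimal sketch. It assumes a Bondi-Hoyle-style consumption rate and a fixed-efficiency feedback term, which is how many simulation codes handle this step; the constants and efficiencies below are illustrative, not the values used in GASOLINE.

```python
# Schematic sub-grid model for a simulated supermassive black hole:
# one equation for how much gas it eats, one for how much energy it returns.
# Illustrative parameter values only, not GASOLINE's.
from math import pi

G = 6.674e-11   # gravitational constant (SI units)
C = 2.998e8     # speed of light, m/s

def accretion_rate(m_bh, rho_gas, sound_speed, v_rel=0.0):
    """Bondi-Hoyle-style rate: gas consumed per second, given the local
    gas density and temperature (entering through the sound speed).
    Real codes typically cap this at the Eddington rate."""
    return 4 * pi * G**2 * m_bh**2 * rho_gas / (sound_speed**2 + v_rel**2) ** 1.5

def feedback_power(mdot, eps_r=0.1, eps_f=0.05):
    """Energy per second deposited back into the surrounding gas: a
    fraction eps_f of the radiated luminosity eps_r * mdot * c^2."""
    return eps_f * eps_r * mdot * C**2

# One toy evaluation: a million-solar-mass hole sitting in dense, cold gas.
M_SUN = 1.989e30
mdot = accretion_rate(1e6 * M_SUN, rho_gas=1e-22, sound_speed=1e4)
print(f"eats {mdot:.2e} kg/s, returns {feedback_power(mdot):.2e} W")
```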

    Simulating all this physics for hundreds of thousands of galaxies at once takes immense computing power and cleverness. Modern supercomputers, having essentially maxed out the number of transistors they can pack upon a single chip, have expanded outward across as many as 100,000 parallel cores that crunch numbers in concert. Coders have had to figure out how to divvy up the cores — not an easy task when some parts of a simulated universe evolve quickly and complexly, while little happens elsewhere, and then conditions can switch on a dime. Researchers have found ways of dealing with this huge dynamic range with algorithms that adaptively allocate computer resources according to need.
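    The allocation problem they describe is, at its core, weighted load balancing: regions where a lot is happening cost far more compute than quiet voids, and the split across cores has to track those costs. A greedy toy version, with made-up region costs, might look like this:

```python
# Toy load balancer: hand out simulation regions so that estimated work,
# not region count, is even across cores. Real codes use space-filling
# curves and rebalance as the universe evolves; this is just the core idea.
import heapq

def balance(region_costs, n_cores):
    """Greedy scheduling: each region goes to the least-loaded core so far."""
    heap = [(0.0, core, []) for core in range(n_cores)]
    heapq.heapify(heap)
    # Placing expensive regions first gives the greedy pass better results.
    for region, cost in sorted(region_costs.items(), key=lambda kv: -kv[1]):
        load, core, regions = heapq.heappop(heap)
        regions.append(region)
        heapq.heappush(heap, (load + cost, core, regions))
    return sorted(heap, key=lambda t: t[1])

# Hypothetical costs: a quiet void is cheap, a merging cluster is not.
costs = {"void_a": 1.0, "void_b": 1.2, "filament": 15.0,
         "galaxy_group": 40.0, "merging_cluster": 120.0}
for load, core, regions in balance(costs, n_cores=3):
    print(f"core {core}: load {load:6.1f}  {regions}")
```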

    They’ve also fought and won a variety of logistical battles. For instance, “If you have two black holes eating the same gas,” Bellovary said, and they’re “on two different processors of the supercomputer, how do you have the black holes not eat the same particle?” Parallel processors “have to talk to each other,” she said.

    Saving Dark Matter

    The simulations finally work well enough to be used for science. With BlueTides, Di Matteo and collaborators are focusing on galaxy formation during the universe’s first 600 million years. Somehow, supermassive black holes wound up at the centers of dark matter halos during that period and helped pull rotating skirts of visible gas and dust around themselves. What isn’t known is how they got so big so fast. One possibility, as witnessed in BlueTides, is that supermassive black holes spontaneously formed from the gravitational collapse of gargantuan gas clouds in over-dense patches of the infant universe.

    BlueTides simulation on Blue Waters supercomputer

    U Illinois Urbana-Champaign Blue Waters Cray Linux XE/XK hybrid machine supercomputer

    “We’ve used the BlueTides simulations to actually predict what this first population of galaxies and black holes is like,” Di Matteo said. In the simulations, they see pickle-shaped proto-galaxies and miniature spirals taking shape around the newborn supermassive black holes. What future telescopes (including the James Webb Space Telescope, set to launch in 2020) observe as they peer deep into space and back in time to the birth of galaxies will in turn test the equations that went into the code.

    Another leader in this back-and-forth game is Phil Hopkins, a professor at the California Institute of Technology. His code, FIRE, simulates relatively small volumes of the cosmos at high resolution. Hopkins “has pushed the resolution in a way that not many other people have,” Wadsley said. “His galaxies look very good.” Hopkins and his team have created some of the most realistic small galaxies, like the “dwarf galaxy” satellites that orbit the Milky Way.


    Video: The formation of a Milky Way-size disk galaxy and its merger with another galaxy in the IllustrisTNG simulation. Credit: Shy Genel/IllustrisTNG

    These small, faint galaxies have always presented problems. The “missing satellite problem,” for instance, is the expectation, based on standard cold dark matter models, that hundreds of satellite galaxies should orbit every spiral galaxy. But the Milky Way has just dozens. This has caused some physicists to contemplate more complicated models of dark matter. However, when Hopkins and colleagues incorporated realistic superbubbles into their simulations, they saw many of those excess satellite galaxies go away. Hopkins has also found potential resolutions to two other problems, called “cusp-core” and “too-big-to-fail,” that have troubled the cold dark matter paradigm.

    With their upgraded simulations, Wadsley, Di Matteo and others are also strengthening the case that dark matter exists at all. Arguably the greatest source of lingering doubt about dark matter is a curious relationship between the visible matter in galaxies and their rotation speeds.

    Dark Matter Research

    Universe map: Sloan Digital Sky Survey (SDSS) and 2dF Galaxy Redshift Survey

    Scientists studying the cosmic microwave background hope to learn about more than just how the universe grew—it could also offer insight into dark matter, dark energy and the mass of the neutrino.

    Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al.

    Dark Matter Particle Explorer, China

    DEAP-3600 dark matter detector, suspended in SNOLAB deep in Sudbury’s Creighton Mine

    LUX dark matter experiment at SURF, Lead, SD, USA

    ADMX Axion Dark Matter Experiment, U Washington

    Namely, the speeds at which stars circumnavigate the galaxy closely track with the amount of visible matter enclosed by their orbits — even though the stars are also driven by the gravity of dark matter halos. There’s so much dark matter supposedly accelerating the stars that you wouldn’t expect the stars’ motions to have much to do with the amount of visible matter. For this relationship to exist within the dark matter framework, the amounts of dark matter and visible matter in galaxies must be fine-tuned: tightly correlated with each other, so that galactic rotation speeds track with either one.

    An alternative theory called modified Newtonian dynamics, or MOND, argues that there is no dark matter; rather, visible matter exerts a stronger gravitational force than expected at galactic outskirts.

    MOND UMd

    MOND: Modified Newtonian Dynamics, a Humble Introduction. Marcus Nielbock

    Rotation curves with MOND: the Tully-Fisher relation

    Mordehai Milgrom, MOND theorist, is an Israeli physicist and professor in the department of Condensed Matter Physics at the Weizmann Institute in Rehovot, Israel. http://cosmos.nautil.us

    By slightly tweaking the famous inverse-square law of gravity, MOND broadly matches observed galaxy rotation speeds (though it struggles to account for other phenomena attributed to dark matter).
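    Milgrom’s tweak can be written in one line. Below a characteristic acceleration scale \(a_0 \approx 1.2 \times 10^{-10}\ \mathrm{m/s^2}\), MOND replaces the Newtonian prediction \(a_N\) with

    \[ a \simeq \sqrt{a_N\, a_0} \qquad (a_N \ll a_0). \]

    For a star orbiting far outside a galaxy’s visible mass \(M_b\), where \(a_N = G M_b / r^2\), setting \(v^2/r = \sqrt{G M_b a_0}/r\) gives

    \[ v^4 = G\, M_b\, a_0, \]

    a flat rotation curve whose speed is fixed by the visible (baryonic) mass alone, which is the Tully-Fisher relation named in the caption above.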

    The fine-tuning problem appeared to sharpen in 2016, when the cosmologist Stacy McGaugh of Case Western Reserve University and collaborators showed [The Astronomical Journal] how tightly the relationship between stars’ rotation speeds and visible matter holds across a range of real galaxies. But McGaugh’s paper met with three quick rejoinders from the numerical cosmology community. Three teams (one including Wadsley; another [MNRAS], Di Matteo; and the third led by Julio Navarro of the University of Victoria) published the results of simulations indicating that the relation arises naturally in dark-matter-filled galaxies.
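    What McGaugh and collaborators quantified is often called the radial acceleration relation, and it is commonly summarized by a single fitting function relating the observed centripetal acceleration \(g_{\rm obs}\) to the acceleration \(g_{\rm bar}\) expected from the visible (baryonic) matter alone:

    \[ g_{\rm obs} = \frac{g_{\rm bar}}{1 - e^{-\sqrt{g_{\rm bar}/g_\dagger}}}, \qquad g_\dagger \approx 1.2 \times 10^{-10}\ \mathrm{m/s^2}, \]

    with strikingly little scatter across their galaxy sample. It is this tightness, rather than the relation’s existence, that the simulation teams set out to reproduce.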

    Making the standard assumptions about cold dark matter halos, the researchers simulated galaxies like those in McGaugh’s sample. Their galaxies ended up exhibiting linear relationships very similar to the observed one, suggesting dark matter really does closely track visible matter. “We essentially fit their relation — pretty much on top,” said Wadsley. He and his then-student Ben Keller ran their simulation prior to seeing McGaugh’s paper, “so we felt that the fact that we could reproduce the relation without needing any tweaks to our model was fairly telling,” he said.

    In a simulation that’s running now, Wadsley is generating a bigger volume of mock universe to test whether the relation holds for the full range of galaxy types in McGaugh’s sample. If it does, the cold dark matter hypothesis is seemingly safe from this quandary. As for why dark matter and visible matter end up so tightly correlated in galaxies, based on the simulations, Navarro and colleagues attribute [MNRAS] it to angular momentum acting together with gravity during galaxy formation.

    Beyond questions of dark matter, galactic simulation codes continue to improve, shedding light on other unknowns. The much-lauded, ongoing IllustrisTNG simulation series by Springel and collaborators now includes magnetic fields on a large scale for the first time.

    IllustrisTNG simulation

    “Magnetic fields are like this ghost in astronomy,” Bellovary explained, playing a little-understood role in galactic dynamics. Springel thinks they might influence galactic winds — another enigma — and the simulations will help test this.

    A big goal, Hopkins said, is to combine many simulations that each specialize in different time periods or spatial scales. “What you want to do is just tile all the scales,” he said, “where you can use, at each stage, the smaller-scale theory and observations to give you the theory and inputs you need on all scales.”

    With the recent improvements, researchers say a philosophical debate has ensued about when to say “good enough.” Adding too many astrophysical bells and whistles into the simulations will eventually limit their usefulness by making it increasingly difficult to tell what’s causing what. As Wadsley put it, “We would just be observing a fake universe instead of a real one, but not understanding it.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:37 pm on September 15, 2018 Permalink | Reply
    Tags: , , Quanta Magazine, The End of Theoretical Physics As We Know It   

    From Quanta Magazine: “The End of Theoretical Physics As We Know It” 

    Quanta Magazine
    From Quanta Magazine

    August 27, 2018
    Sabine Hossenfelder

    1
    James O’Brien for Quanta Magazine

    Computer simulations and custom-built quantum analogues are changing what it means to search for the laws of nature.

    Theoretical physics has a reputation for being complicated. I beg to differ. That we are able to write down natural laws in mathematical form at all means that the laws we deal with are simple — much simpler than those of other scientific disciplines.

    Unfortunately, actually solving those equations is often not so simple. For example, we have a perfectly fine theory that describes the elementary particles called quarks and gluons, but no one can calculate how they come together to make a proton. The equations just can’t be solved by any known methods. Similarly, a merger of black holes or even the flow of a mountain stream can be described in deceptively simple terms, but it’s hideously difficult to say what’s going to happen in any particular case.

    Of course, we are relentlessly pushing the limits, searching for new mathematical strategies. But in recent years much of the pushing has come not from more sophisticated math but from more computing power.

    When the first math software became available in the 1980s, it didn’t do much more than save someone a search through enormous printed lists of solved integrals. But once physicists had computers at their fingertips, they realized they no longer had to solve the integrals in the first place; they could just plot the solution.
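    A minimal sketch of that workflow, using today’s standard scientific-Python tools (any math package of the era would do): no closed form is needed, just numerical evaluation and a plot.

```python
# No need for a closed form: numerically evaluate the integral of
# exp(-x**2) * cos(x) from 0 to b, then plot it as a function of b.
import numpy as np
from scipy.integrate import quad
import matplotlib.pyplot as plt

bs = np.linspace(0.1, 5.0, 100)
vals = [quad(lambda x: np.exp(-x**2) * np.cos(x), 0.0, b)[0] for b in bs]

plt.plot(bs, vals)
plt.xlabel("upper limit b")
plt.ylabel("integral of exp(-x^2) cos(x) from 0 to b")
plt.show()
```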

    In the 1990s, many physicists opposed this “just plot it” approach. Many were not trained in computer analysis, and sometimes they couldn’t tell physical effects from coding artifacts. Maybe this is why I recall many seminars in which a result was disparaged as “merely numerical.” But over the past two decades, this attitude has markedly shifted, not least thanks to a new generation of physicists for whom coding is a natural extension of their mathematical skill.

    Accordingly, theoretical physics now has many subdisciplines dedicated to computer simulations of real-world systems, studies that would just not be possible any other way. Computer simulations are what we now use to study the formation of galaxies and supergalactic structures, to calculate the masses of particles that are composed of several quarks, to find out what goes on in the collision of large atomic nuclei, and to understand solar cycles, to name but a few areas of research that are mainly computer based.

    The next step of this shift away from purely mathematical modeling is already on the way: Physicists now custom design laboratory systems that stand in for other systems which they want to better understand. They observe the simulated system in the lab to draw conclusions about, and make predictions for, the system it represents.

    The best example may be the research area that goes by the name “quantum simulations.” These are systems composed of interacting, composite objects, like clouds of atoms. Physicists manipulate the interactions among these objects so the system resembles an interaction among more fundamental particles. For example, in circuit quantum electrodynamics, researchers use tiny superconducting circuits to simulate atoms, and then study how these artificial atoms interact with photons. Or in a lab in Munich, physicists use a superfluid of ultra-cold atoms to settle the debate over whether Higgs-like particles can exist in two dimensions of space (the answer is yes [Nature]).

    These simulations are not only useful for overcoming mathematical hurdles in theories we already know. We can also use them to explore consequences of new theories that haven’t been studied before and whose relevance we don’t yet know.

    This is particularly interesting when it comes to the quantum behavior of space and time itself — an area where we still don’t have a good theory. In a recent experiment, for example, Raymond Laflamme, a physicist at the Institute for Quantum Computing at the University of Waterloo in Ontario, Canada, and his group used a quantum simulation to study so-called spin networks, structures that, in some theories, constitute the fundamental fabric of space-time. And Gia Dvali, a physicist at the University of Munich, has proposed a way to simulate the information processing of black holes with ultracold atom gases.

    A similar idea is being pursued in the field of analogue gravity, where physicists use fluids to mimic the behavior of particles in gravitational fields. Black hole space-times have attracted the bulk of attention, as with Jeff Steinhauer’s (still somewhat controversial) claim of having measured Hawking radiation in a black-hole analogue. But researchers have also studied the rapid expansion of the early universe, called “inflation,” with fluid analogues for gravity.

    In addition, physicists have studied hypothetical fundamental particles by observing stand-ins called quasiparticles. These quasiparticles behave like fundamental particles, but they emerge from the collective movement of many other particles. Understanding their properties allows us to learn more about their behavior, and thereby might also help us find ways of observing the real thing.

    This line of research raises some big questions. First of all, if we can simulate what we now believe to be fundamental by using composite quasiparticles, then maybe what we currently think of as fundamental — space and time and the 25 particles that make up the Standard Model of particle physics — is made up of an underlying structure, too. Quantum simulations also make us wonder what it means to explain the behavior of a system to begin with. Does observing, measuring, and making a prediction by use of a simplified version of a system amount to an explanation?

    But for me, the most interesting aspect of this development is that it ultimately changes how we do physics. With quantum simulations, the mathematical model is of secondary relevance. We currently use the math to identify a suitable system because the math tells us what properties we should look for. But that’s not, strictly speaking, necessary. Maybe, over the course of time, experimentalists will just learn which system maps to which other system, as they have learned which system maps to which math. Perhaps one day, rather than doing calculations, we will just use observations of simplified systems to make predictions.

    At present, I am sure, most of my colleagues would be appalled by this future vision. But in my mind, building a simplified model of a system in the laboratory is conceptually not so different from what physicists have been doing for centuries: writing down simplified models of physical systems in the language of mathematics.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 3:39 am on August 15, 2018 Permalink | Reply
    Tags: , , , , Dark Energy May Be Incompatible With String Theory, , , Quanta Magazine,   

    From Quanta Magazine: “Dark Energy May Be Incompatible With String Theory” 

    Quanta Magazine
    From Quanta Magazine

    August 9, 2018
    Natalie Wolchover

    1
    String theory permits a “landscape” of possible universes, surrounded by a “swampland” of logically inconsistent universes. In all of the simple, viable stringy universes physicists have studied, the density of dark energy is either diminishing or has a stable negative value, unlike our universe, which appears to have a stable positive value. Maciej Rebisz for Quanta Magazine

    On June 25, Timm Wrase awoke in Vienna and groggily scrolled through an online repository of newly posted physics papers. One title startled him into full consciousness.

    The paper, by the prominent string theorist Cumrun Vafa of Harvard University and collaborators, conjectured a simple formula dictating which kinds of universes are allowed to exist and which are forbidden, according to string theory. The leading candidate for a “theory of everything” weaving the force of gravity together with quantum physics, string theory defines all matter and forces as vibrations of tiny strands of energy. The theory permits some 10^500 different solutions: a vast, varied “landscape” of possible universes. String theorists like Wrase and Vafa have strived for years to place our particular universe somewhere in this landscape of possibilities.

    But now, Vafa and his colleagues were conjecturing that in the string landscape, universes like ours — or what ours is thought to be like — don’t exist. If the conjecture is correct, Wrase and other string theorists immediately realized, the cosmos must either be profoundly different than previously supposed or string theory must be wrong.

    After dropping his kindergartner off that morning, Wrase went to work at the Vienna University of Technology, where his colleagues were also buzzing about the paper. That same day, in Okinawa, Japan, Vafa presented the conjecture at the Strings 2018 conference, which was streamed by physicists worldwide. Debate broke out on- and off-site. “There were people who immediately said, ‘This has to be wrong,’ other people who said, ‘Oh, I’ve been saying this for years,’ and everything in the middle,” Wrase said. There was confusion, he added, but “also, of course, huge excitement. Because if this conjecture was right, then it has a lot of tremendous implications for cosmology.”

    Researchers have set to work trying to test the conjecture and explore its implications. Wrase has already written two papers, including one that may lead to a refinement of the conjecture, both composed mostly while on vacation with his family. He recalled thinking, “This is so exciting. I have to work and study that further.”

    The conjectured formula — posed in the June 25 paper by Vafa, Georges Obied, Hirosi Ooguri and Lev Spodyneiko and further explored in a second paper released two days later by Vafa, Obied, Prateek Agrawal and Paul Steinhardt — says, simply, that as the universe expands, the density of energy in the vacuum of empty space must decrease faster than a certain rate. The rule appears to be true in all simple string theory-based models of universes. But it violates two widespread beliefs about the actual universe: It deems impossible both the accepted picture of the universe’s present-day expansion and the leading model of its explosive birth.
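    The conjectured bound itself is compact. Writing \(V\) for the vacuum energy (the potential of the universe’s scalar fields), the June 25 paper requires, in reduced Planck units, that the potential’s slope never be too small relative to its height:

    \[ |\nabla V| \,\ge\, \frac{c}{M_{\rm Pl}}\, V, \]

    for some positive constant \(c\) of order one. A positive \(V\) with vanishing slope, which is exactly what a stable de Sitter universe requires, violates the bound; that is why the conjecture forbids a constant dark energy density.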

    Dark Energy in Question

    Since 1998, telescope observations have indicated that the cosmos is expanding ever-so-slightly faster all the time, implying that the vacuum of empty space must be infused with a dose of gravitationally repulsive “dark energy.”

    In addition, it looks like the amount of dark energy infused in empty space stays constant over time (as best anyone can tell).

    But the new conjecture asserts that the vacuum energy of the universe must be decreasing.

    Vafa and colleagues contend that universes with stable, constant, positive amounts of vacuum energy, known as “de Sitter universes,” aren’t possible. String theorists have struggled mightily since dark energy’s 1998 discovery to construct convincing stringy models of stable de Sitter universes. But if Vafa is right, such efforts are bound to sink in logical inconsistency; de Sitter universes lie not in the landscape, but in the “swampland.” “The things that look consistent but ultimately are not consistent, I call them swampland,” he explained recently. “They almost look like landscape; you can be fooled by them. You think you should be able to construct them, but you cannot.”

    According to this “de Sitter swampland conjecture,” in all possible, logical universes, the vacuum energy must either be dropping, its value like a ball rolling down a hill, or it must have obtained a stable negative value. (So-called “anti-de Sitter” universes, with stable, negative doses of vacuum energy, are easily constructed in string theory.)

    The conjecture, if true, would mean the density of dark energy in our universe cannot be constant, but must instead take a form called “quintessence” — an energy source that will gradually diminish over tens of billions of years. Several telescope experiments are underway now to more precisely probe whether the universe is expanding with a constant rate of acceleration, which would mean that as new space is created, a proportionate amount of new dark energy arises with it, or whether the cosmic acceleration is gradually changing, as in quintessence models. A discovery of quintessence would revolutionize fundamental physics and cosmology, including rewriting the cosmos’s history and future. Instead of tearing apart in a Big Rip, a quintessent universe would gradually decelerate, and in most models, would eventually stop expanding and contract in either a Big Crunch or Big Bounce.
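    In the language of those telescope experiments, the question is the value of the dark energy equation-of-state parameter \(w\), the ratio of its pressure to its energy density, since the dark energy density dilutes with the cosmic scale factor \(a\) as

    \[ \rho_{\rm DE}(a) \propto a^{-3(1+w)}. \]

    A cosmological constant, with \(w = -1\) exactly, keeps a fixed density forever; quintessence has \(w\) slightly above \(-1\), and possibly drifting, so its density slowly thins as space expands.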

    Paul Steinhardt, a cosmologist at Princeton University and one of Vafa’s co-authors, said that over the next few years, “all eyes should be on” measurements by the Dark Energy Survey, WFIRST and Euclid telescopes of whether the density of dark energy is changing.

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M. Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

    NASA/WFIRST

    ESA/Euclid spacecraft

    “If you find it’s not consistent with quintessence,” Steinhardt said, “it means either the swampland idea is wrong, or string theory is wrong, or both are wrong or — something’s wrong.”

    Inflation Under Siege

    No less dramatically, the new swampland conjecture also casts doubt on the widely believed story of the universe’s birth: the Big Bang theory known as cosmic inflation.

    Inflation

    4
    Alan Guth, from Highland Park High School and M.I.T., who first proposed cosmic inflation


    Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe). Date: 2010. Credit: Alex Mittelmann, Coldcreation

    Alan Guth’s notes:
    5

    According to this theory, a minuscule, energy-infused speck of space-time rapidly inflated to form the macroscopic universe we inhabit. The theory was devised to explain, in part, how the universe got so huge, smooth and flat.

    But the hypothetical “inflaton field” of energy that supposedly drove cosmic inflation doesn’t sit well with Vafa’s formula. To abide by the formula, the inflaton field’s energy would probably have needed to diminish too quickly to form a smooth- and flat-enough universe, he and other researchers explained. Thus, the conjecture disfavors many popular models of cosmic inflation. In the coming years, telescopes such as the Simons Observatory will look for definitive signatures of cosmic inflation, testing it against rival ideas.

    In the meantime, string theorists, who normally form a united front, will disagree about the conjecture. Eva Silverstein, a physics professor at Stanford University and a leader in the effort to construct string-theoretic models of inflation, thinks it is very likely to be false. So does her husband, the Stanford professor Shamit Kachru; he is the first “K” in KKLT, a famous 2003 paper (known by its authors’ initials) that suggested a set of stringy ingredients that might be used to construct de Sitter universes. Vafa’s formula says both Silverstein’s and Kachru’s constructions won’t work. “We’re besieged by these conjectures in our family,” Silverstein joked. But in her view, accelerating-expansion models are no more disfavored now, in light of the new papers, than before. “They essentially just speculate that those things don’t exist, citing very limited and in some cases highly dubious analyses,” she said.

    Matthew Kleban, a string theorist and cosmologist at New York University, also works on stringy models of inflation. He stresses that the new swampland conjecture is highly speculative and an example of “lamppost reasoning,” since much of the string landscape has yet to be explored. And yet he acknowledges that, based on existing evidence, the conjecture could well be true. “It could be true about string theory, and then maybe string theory doesn’t describe the world,” Kleban said. “[Maybe] dark energy has falsified it. That obviously would be very interesting.”

    Mapping the Swampland

    Whether the de Sitter swampland conjecture and future experiments really have the power to falsify string theory remains to be seen. The discovery in the early 2000s that string theory has something like 10^500 solutions killed the dream that it might uniquely and inevitably predict the properties of our one universe. The theory seemed like it could support almost any observations and became very difficult to experimentally test or disprove.

    In 2005, Vafa and a network of collaborators began to think about how to pare the possibilities down by mapping out fundamental features of nature that absolutely have to be true. For example, their “weak gravity conjecture” asserts that gravity must always be the weakest force in any logical universe. Imagined universes that don’t satisfy such requirements get tossed from the landscape into the swampland. Many of these swampland conjectures have held up famously against attack, and some are now “on a very solid theoretical footing,” said Hirosi Ooguri, a theoretical physicist at the California Institute of Technology and one of Vafa’s first swampland collaborators. The weak gravity conjecture, for instance, has accumulated so much evidence that it’s now suspected to hold generally, independent of whether string theory is the correct theory of quantum gravity.
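    In its simplest form, for a universe containing an electromagnetism-like force of strength \(g\), the weak gravity conjecture demands at least one particle, of charge \(q\) and mass \(m\), for which the gauge force beats gravity:

    \[ m \,\lesssim\, g\, q\, M_{\rm Pl}. \]

    Our universe passes comfortably: for a pair of electrons, electric repulsion exceeds gravitational attraction by roughly 42 orders of magnitude.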

    The intuition about where landscape ends and swampland begins derives from decades of effort to construct stringy models of universes. The chief challenge of that project has been that string theory predicts the existence of 10 space-time dimensions — far more than are apparent in our 4-D universe. String theorists posit that the six extra spatial dimensions must be small — curled up tightly at every point. The landscape springs from all the different ways of configuring these extra dimensions. But although the possibilities are enormous, researchers like Vafa have found that general principles emerge. For instance, the curled-up dimensions typically want to gravitationally contract inward, whereas fields like electromagnetic fields tend to push everything apart. And in simple, stable configurations, these effects balance out by having negative vacuum energy, producing anti-de Sitter universes. Turning the vacuum energy positive is hard. “Usually in physics, we have simple examples of general phenomena,” Vafa said. “De Sitter is not such a thing.”

    The KKLT paper, by Kachru, Renata Kallosh, Andrei Linde and Sandip Trivedi, suggested stringy trappings like “fluxes,” “instantons” and “anti-D-branes” that could potentially serve as tools for configuring a positive, constant vacuum energy. However, these constructions are complicated, and over the years possible instabilities have been identified. Though Kachru said he does not have “any serious doubts,” many researchers have come to suspect the KKLT scenario does not produce stable de Sitter universes after all.

    Vafa thinks a concerted search for definitely stable de Sitter universe models is long overdue. His conjecture is, above all, intended to press the issue. In his view, string theorists have not felt sufficiently motivated to figure out whether string theory really is capable of describing our world, instead taking the attitude that because the string landscape is huge, there must be a place in it for us, even if no one knows where. “The bulk of the community in string theory still sides on the side of de Sitter constructions [existing],” he said, “because the belief is, ‘Look, we live in a de Sitter universe with positive energy; therefore we better have examples of that type.’”

    His conjecture has roused the community to action, with researchers like Wrase looking for stable de Sitter counterexamples, while others toy with little-explored stringy models of quintessent universes. “I would be equally interested to know if the conjecture is true or false,” Vafa said. “Raising the question is what we should be doing. And finding evidence for or against it — that’s how we make progress.”

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 11:47 am on August 2, 2018 Permalink | Reply
    Tags: , , , , Quanta Magazine   

    From Quanta Magazine via Nautilus: “How Artificial Intelligence Can Supercharge the Search for New Particles” 

    Nautilus

    Nautilus

    Quanta Magazine
    From Quanta Magazine

    Jul 25, 2018
    Charlie Wood

    1
    In the hunt for new fundamental particles, physicists have always had to make assumptions about how the particles will behave. New machine learning algorithms don’t.
    Image by ATLAS Experiment © 2018 CERN

    The Large Hadron Collider (LHC) smashes a billion pairs of protons together each second.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    Occasionally the machine may rattle reality enough to have a few of those collisions generate something that’s never been seen before. But because these events are by their nature a surprise, physicists don’t know exactly what to look for. They worry that in the process of winnowing their data from those billions of collisions to a more manageable number, they may be inadvertently deleting evidence for new physics. “We’re always afraid we’re throwing the baby away with the bathwater,” said Kyle Cranmer, a particle physicist at New York University who works with the ATLAS experiment at CERN.

    CERN ATLAS

    Faced with the challenge of intelligent data reduction, some physicists are trying to use a machine learning technique called a “deep neural network” to dredge the sea of familiar events for new physics phenomena.

    In the prototypical use case, a deep neural network learns to tell cats from dogs by studying a stack of photos labeled “cat” and a stack labeled “dog.” But that approach won’t work when hunting for new particles, since physicists can’t feed the machine pictures of something they’ve never seen. So they turn to “weakly supervised learning,” where machines start with known particles and then look for rare events using less granular information, such as how often they might take place overall.

    In a paper posted on the scientific preprint site arxiv.org in May, three researchers proposed applying a related strategy to extend “bump hunting,” the classic particle-hunting technique that found the Higgs boson. The general idea, according to one of the authors, Ben Nachman, a researcher at the Lawrence Berkeley National Laboratory, is to train the machine to seek out rare variations in a data set.

    Consider, as a toy example in the spirit of cats and dogs, a problem of trying to discover a new species of animal in a data set filled with observations of forests across North America. Assuming that any new animals might tend to cluster in certain geographical areas (a notion that corresponds with a new particle that clusters around a certain mass), the algorithm should be able to pick them out by systematically comparing neighboring regions. If British Columbia happens to contain 113 caribou to Washington state’s 19 (even against a background of millions of squirrels), the program will learn to sort caribou from squirrels, all without ever studying caribou directly. “It’s not magic but it feels like magic,” said Tim Cohen, a theoretical particle physicist at the University of Oregon who also studies weak supervision.
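    A toy version of that caribou-and-squirrel logic fits in a few lines: train an ordinary classifier to tell “British Columbia” events from “Washington” events, and, because the two regions differ only in their caribou fraction, the classifier is forced to learn caribou versus squirrel without ever seeing a labeled caribou. The sketch below uses synthetic stand-in data and a stock classifier; the real CWoLa paper works with jet observables, not this cartoon.

```python
# Weak supervision in miniature: two mixed samples with different (unknown)
# signal fractions; a classifier trained only on sample labels ends up
# separating signal from background. Synthetic data, not the CWoLa setup.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

def animals(n, caribou_frac):
    """Each animal is two measured features; caribou cluster near (2, 2)."""
    n_car = rng.binomial(n, caribou_frac)
    caribou = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(n_car, 2))
    squirrel = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n - n_car, 2))
    return np.vstack([caribou, squirrel]), np.r_[np.ones(n_car), np.zeros(n - n_car)]

# Region A (more caribou) vs. region B (fewer): labels are regions, not species.
xa, truth_a = animals(5000, 0.15)
xb, truth_b = animals(5000, 0.02)
X = np.vstack([xa, xb])
region = np.r_[np.ones(len(xa)), np.zeros(len(xb))]

clf = GradientBoostingClassifier().fit(X, region)

# Though trained only on region labels, the classifier separates species:
scores = clf.predict_proba(X)[:, 1]
truth = np.r_[truth_a, truth_b]
print("mean score, caribou: %.2f   squirrels: %.2f"
      % (scores[truth == 1].mean(), scores[truth == 0].mean()))
```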

    By contrast, traditional searches in particle physics usually require researchers to make an assumption about what the new phenomena will look like. They create a model of how the new particles will behave—for example, a new particle might tend to decay into particular constellations of known particles. Only after they define what they’re looking for can they engineer a custom search strategy. It’s a task that generally takes a Ph.D. student at least a year, and one that Nachman thinks could be done much faster, and more thoroughly.

    The proposed CWoLa algorithm, which stands for Classification Without Labels, can search existing data for any unknown particle that decays into either two lighter unknown particles of the same type, or two known particles of the same or different type. Using ordinary search methods, it would take the LHC collaborations at least 20 years to scour the possibilities for the latter, and no searches currently exist for the former. Nachman, who works on the ATLAS project, says CWoLa could do them all in one go.

    Other experimental particle physicists agree it could be a worthwhile project. “We’ve looked in a lot of the predictable pockets, so starting to fill in the corners we haven’t looked in is an important direction for us to go in next,” said Kate Pachal, a physicist who searches for new particle bumps with the ATLAS project. She batted around the idea of trying to design flexible software that could deal with a range of particle masses last year with some colleagues, but no one knew enough about machine learning. “Now I think it might be the time to try this,” she said.

    The hope is that neural networks could pick up on subtle correlations in the data that resist current modeling efforts. Other machine learning techniques have successfully boosted the efficiency of certain tasks at the LHC, such as identifying “jets” made by bottom-quark particles. The work has left no doubt that some signals are escaping physicists’ notice. “They’re leaving information on the table, and when you spend $10 billion on a machine, you don’t want to leave information on the table,” said Daniel Whiteson, a particle physicist at the University of California, Irvine.

    Yet machine learning is rife with cautionary tales of programs that confused arms with dumbbells (or worse). At the LHC, some worry that the shortcuts will end up reflecting gremlins in the machine itself, which experimental physicists take great pains to intentionally overlook. “Once you find an anomaly, is it new physics or is it something funny that went on with the detector?” asked Till Eifert, a physicist on ATLAS.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:41 pm on July 22, 2018 Permalink | Reply
    Tags: , , , , , , , Quanta Magazine, Sau Lan Wu, ,   

    From LHC at CERN and University of Wisconsin Madison via WIRED and Quanta: Women in STEM “Meet the Woman Who Rocked Particle Physics—Three Times” Sau Lan Wu 

    Cern New Bloc

    Cern New Particle Event

    CERN New Masthead

    CERN

    U Wisconsin

    via

    Wired logo

    WIRED

    originated at

    Quanta Magazine
    Quanta Magazine

    7.22.18
    Joshua Roebke

    1
    Sau Lan Wu at CERN, the laboratory near Geneva that houses the Large Hadron Collider. The mural depicts the detector she and her collaborators used to discover the Higgs boson. Thi My Lien Nguyen/Quanta Magazine

    In 1963, Maria Goeppert Mayer won the Nobel Prize in physics for describing the layered, shell-like structures of atomic nuclei. No woman has won since.

    One of the many women who, in a different world, might have won the physics prize in the intervening 55 years is Sau Lan Wu. Wu is the Enrico Fermi Distinguished Professor of Physics at the University of Wisconsin, Madison, and an experimentalist at CERN, the laboratory near Geneva that houses the Large Hadron Collider.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    Wu’s name appears on more than 1,000 papers in high-energy physics, and she has contributed to a half-dozen of the most important experiments in her field over the past 50 years. She has even realized the improbable goal she set for herself as a young researcher: to make at least three major discoveries.

    Wu was an integral member of one of the two groups that observed the J/psi particle, which heralded the existence of a fourth kind of quark, now called the charm. The discovery, in 1974, was known as the November Revolution, a coup that led to the establishment of the Standard Model of particle physics.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.


    Standard Model of Particle Physics from Symmetry Magazine

    Later in the 1970s, Wu did much of the math and analysis to discern the three “jets” of energy flying away from particle collisions that signaled the existence of gluons—particles that mediate the strong force holding protons and neutrons together. This was the first observation of particles that communicate a force since scientists recognized photons of light as the carriers of electromagnetism. Wu later became one of the group leaders for the ATLAS experiment, one of the two collaborations at the Large Hadron Collider that discovered the Higgs boson in 2012, filling in the final piece of the Standard Model.

    CERN ATLAS Higgs Event


    CERN/ATLAS detector

    She continues to search for new particles that would transcend the Standard Model and push physics forward.

    Sau Lan Wu was born in occupied Hong Kong during World War II. Her mother was the sixth concubine to a wealthy businessman who abandoned them and her younger brother when Wu was a child. She grew up in abject poverty, sleeping alone in a space behind a rice shop. Her mother was illiterate, but she urged her daughter to pursue an education and become independent of volatile men.

    Wu graduated from a government school in Hong Kong and applied to 50 universities in the United States. She received a scholarship to attend Vassar College and arrived with $40 to her name.

    Although she originally intended to become an artist, she was inspired to study physics after reading a biography of Marie Curie. She worked on experiments during consecutive summers at Brookhaven National Laboratory on Long Island, and she attended graduate school at Harvard University. She was the only woman in her cohort and was barred from entering the male dormitories to join the study groups that met there. She has labored since then to make a space for everyone in physics, mentoring more than 60 men and women through their doctorates.

    Quanta Magazine joined Sau Lan Wu on a gray couch in sunny Cleveland in early June. She had just delivered an invited lecture about the discovery of gluons at a symposium to honor the 50th birthday of the Standard Model. The interview has been condensed and edited for clarity.

    2
    3
    Wu’s office at CERN is decorated with mementos and photos, including one of her and her husband, Tai Tsun Wu, a professor of theoretical physics at Harvard.
    Thi My Lien Nguyen/Quanta Magazine

    You work on the largest experiments in the world, mentor dozens of students, and travel back and forth between Madison and Geneva. What is a normal day like for you?

    Very tiring! In principle, I am full-time at CERN, but I do go to Madison fairly often. So I do travel a lot.

    How do you manage it all?

    Well, I think the key is that I am totally devoted. My husband, Tai Tsun Wu, is also a professor, in theoretical physics at Harvard. Right now, he’s working even harder than me, which is hard to imagine. He’s doing a calculation about the Higgs boson decay that is very difficult. But I encourage him to work hard, because it’s good for your mental state when you are older. That’s why I work so hard, too.

    Of all the discoveries you were involved in, do you have a favorite?

    Discovering the gluon was a fantastic time. I was just a second- or third-year assistant professor. And I was so happy. That’s because I was the baby, the youngest of all the key members of the collaboration.

    The gluon was the first force-carrying particle discovered since the photon. The W and Z bosons, which carry the weak force, were discovered a few years later, and the researchers who found them won a Nobel Prize. Why was no prize awarded for the discovery of the gluon?

    Well, you are going to have to ask the Nobel committee that. [Laughs.] I can tell you what I think, though. Only three people can win a Nobel Prize. And there were three other physicists on the experiment with me who were more senior than I was. They treated me very well. But I pushed the idea of searching for the gluon right away, and I did the calculations. I didn’t even talk to theorists. Although I married a theorist, I never really paid attention to what the theorists told me to do.

    How did you wind up being the one to do those calculations?

    If you want to be successful, you have to be fast. But you also have to be first. So I did the calculations to make sure that as soon as a new collider at DESY [the German Electron Synchrotron] turned on in Hamburg we could see the gluon and recognize its signal of three jets of particles.

    DESY Helmholtz Centres & Networks: DESY’s synchrotron radiation source: the PETRA III storage ring (in orange) with the three experimental halls (in blue) in 2015.

    We were not so sure in those days that the signal for the gluon would be clear-cut, because the concept of jets had only been introduced a couple of years earlier, but this seemed to be the only way to discover gluons.

    You were also involved in discovering the Higgs boson, the particle in the Standard Model that gives many other particles their masses. How was that experiment different from the others that you were part of?

    I worked a lot more and a lot longer to discover the Higgs than I have on anything else. I worked for over 30 years, doing one experiment after another. I think I contributed a lot to that discovery. But the ATLAS collaboration at CERN is so large that you can’t even talk about your individual contribution. There are 3,000 people who built and worked on our experiment [including 600 scientists at Brookhaven National Lab, NY, USA]. How can anyone claim anything? In the old days, life was easier.

    Has it gotten any easier to be a woman in physics than when you started?

    Not for me. But for younger women, yes. There is a trend among funding agencies and institutions to encourage younger women, which I think is great. But for someone like me it is harder. I went through a very difficult time. And now that I am established others say: Why should we treat you any differently?

    Who were some of your mentors when you were a young researcher?

    Bjørn Wiik really helped me when I was looking for the gluon at DESY.

    How so?

    Well, when I started at the University of Wisconsin, I was looking for a new project. I was interested in doing electron-positron collisions, which could give the clearest indication of a gluon. So I went to talk to another professor at Wisconsin who did these kinds of experiments at SLAC, the lab at Stanford. But he was not interested in working with me.

    So I tried to join a project at the new electron-positron collider at DESY. I wanted to join the JADE experiment [abbreviated from the nations that developed the detector: Japan, Germany (Deutschland) and England]. I had some friends working there, so I went to Germany and I was all set to join them. But then I heard that no one had told a big professor in the group about me, so I called him up. He said, “I am not sure if I can take you, and I am going on vacation for a month. I’ll phone you when I get back.” I was really sad because I was already in Germany at DESY.

    But then I ran into Bjørn Wiik, who led a different experiment called TASSO, and he said, “What are you doing here?” I said, “I tried to join JADE, but they turned me down.” He said, “Come and talk to me.” He accepted me the very next day.

    4
    TASSO detector at PETRA at DESY

    And the thing is, JADE later broke their chamber, and they could not have observed the three-jet signal for gluons when we observed it first at TASSO. So I have learned that if something does not work out for you in life, something else will.

    5
    Wu and Bjørn Wiik in 1978, in the electronic control room of the TASSO experiment at the German Electron Synchrotron in Hamburg, Germany. Dr. Ulrich Kötz

    You certainly turned that negative into a positive.

    Yes. The same thing happened when I left Hong Kong to attend college in the US. I applied to 50 universities after I went through a catalog at the American consulate. I wrote in every application, “I need a full scholarship and room and board,” because I had no money. Four universities replied. Three of them turned me down. Vassar was the only American college that accepted me. And it turns out, it was the best college of all the ones I applied to.

    If you persist, something good is bound to happen. My philosophy is that you have to work hard and have good judgment. But you also have to have luck.

    I know this is an unfair question, because no one ever asks men, even though we should, but how can society inspire more women to study physics or consider it as a career?

    Well, I can only say something about my field, experimental high-energy physics. I think my field is very hard for women. I think partially it’s the problem of family.

    My husband and I did not live together for 10 years, except during the summers. And I gave up having children. When I was considering having children, it was around the time when I was up for tenure and a grant. I feared I would lose both if I got pregnant. I was less worried about actually having children than I was about walking into my department or a meeting while pregnant. So it’s very, very hard for families.

    I think it still can be.

    Yeah, but for the younger generation it’s different. Nowadays, a department looks good if it supports women. I don’t mean that departments are deliberately doing that only to look better, but they no longer actively fight against women. It’s still hard, though. Especially in experimental high-energy physics. I think there is so much traveling that it makes having a family or a life difficult. Theory is much easier.

    You have done so much to help establish the Standard Model of particle physics. What do you like about it? What do you not like?

    It’s just amazing that the Standard Model works as well as it does. I like that every time we try to search for something that is not accounted for in the Standard Model, we do not find it, because the Standard Model says we shouldn’t.

    But back in my day, there was so much that we had yet to discover and establish. The problem now is that everything fits together so beautifully and the Model is so well confirmed. That’s why I miss the time of the J/psi discovery. Nobody expected that, and nobody really had a clue what it was.

    But maybe those days of surprise aren’t over.

    We know that the Standard Model is an incomplete description of nature. It doesn’t account for gravity, the masses of neutrinos, or dark matter—the invisible substance that seems to make up six-sevenths of the universe’s mass. Do you have a favorite idea for what lies beyond the Standard Model?

    Well, right now I am searching for the particles that make up dark matter. The only thing is, I am committed to working at the Large Hadron Collider at CERN. But a collider may or may not be the best place to look for dark matter. It’s out there in the galaxies, but we don’t see it here on Earth.

    Still, I am going to try. If dark matter has any interactions with the known particles, it can be produced via collisions at the LHC. But weakly interacting dark matter would not leave a visible signature in our detector at ATLAS, so we have to intuit its existence from what we actually see. Right now, I am concentrating on finding hints of dark matter in the form of missing energy and momentum in a collision that produces a single Higgs boson.
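    To make "missing energy and momentum" concrete: in a collider event, the particles' momenta transverse to the beam must balance, so an invisible particle betrays itself as an imbalance among the visible ones. Below is a minimal sketch of that bookkeeping in Python; the object names, momenta and selection threshold are invented for illustration and are not ATLAS code.

    ```python
    import math

    # Visible objects reconstructed in one collision event, each with a
    # transverse momentum pt (GeV) and azimuthal angle phi (radians).
    # The names and numbers here are purely illustrative.
    visible = [
        {"name": "higgs_candidate", "pt": 125.0, "phi": 0.3},
        {"name": "jet_1",           "pt": 45.0,  "phi": 2.1},
        {"name": "jet_2",           "pt": 30.0,  "phi": -1.7},
    ]

    # Momentum transverse to the beam must balance, so whatever escaped
    # unseen carries minus the vector sum of everything reconstructed.
    px = -sum(o["pt"] * math.cos(o["phi"]) for o in visible)
    py = -sum(o["pt"] * math.sin(o["phi"]) for o in visible)
    met = math.hypot(px, py)

    print(f"missing transverse momentum: {met:.1f} GeV")

    # A mono-Higgs dark matter search keeps events in which a Higgs
    # candidate recoils against a large imbalance; the threshold below
    # is hypothetical.
    if met > 150.0:
        print("event passes the (hypothetical) mono-Higgs selection")
    ```

    In a real analysis the same imbalance is computed from all calorimeter and tracking information, but the principle is exactly this: dark matter would show up as transverse momentum that the visible particles cannot account for.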

    What else have you been working on?

    Our most important task is to understand the properties of the Higgs boson, which is a completely new kind of particle. The Higgs is more symmetric than any other particle we know about; it’s the first particle that we have discovered without any spin. My group and I were major contributors to the very recent measurement of Higgs bosons interacting with top quarks. That observation was extremely challenging. We examined five years of collision data, and my team worked intensively on advanced machine-learning techniques and statistics.

    In addition to studying the Higgs and searching for dark matter, my group and I also contributed to the silicon pixel detector, to the trigger system [that identifies potentially interesting collisions], and to the computing system in the ATLAS detector. We are now improving these during the shutdown and upgrade of the LHC. We are also very excited about the near future, because we plan to start using quantum computing to do our data analysis.

    6
    Wu at CERN. Thi My Lien Nguyen/Quanta Magazine

    Do you have any advice for young physicists just starting their careers?

    Some of the young experimentalists today are a bit too conservative. In other words, they are afraid to do something that is not in the mainstream. They fear doing something risky and not getting a result. I don’t blame them. It’s the way the culture is. My advice to them is to figure out what the most important experiments are and then be persistent. Good experiments always take time.

    But not everyone gets to take that time.

    Right. Young students don’t always have the freedom to be very innovative, unless they can do it in a very short amount of time and be successful. They don’t always get to be patient and just explore. They need to be recognized by their collaborators. They need people to write them letters of recommendation.

    The only thing that you can do is work hard. But I also tell my students, “Communicate. Don’t close yourselves off. Try to come up with good ideas on your own but also in groups. Try to innovate. Nothing will be easy. But it is all worth it to discover something new.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    In achievement and prestige, the University of Wisconsin–Madison has long been recognized as one of America’s great universities. A public, land-grant institution, UW–Madison offers a complete spectrum of liberal arts studies, professional programs and student activities. Spanning 936 acres along the southern shore of Lake Mendota, the campus is located in the city of Madison.

     
  • richardmitnick 8:19 am on July 9, 2018 Permalink | Reply
    Tags: , , , Quanta Magazine,   

    From Quanta Magazine: “Physicists Find a Way to See the ‘Grin’ of Quantum Gravity” 

    Quanta Magazine
    From Quanta Magazine

    March 6, 2018
    Natalie Wolchover

    Re-released 7.8.18

    A recently proposed experiment would confirm that gravity is a quantum force.

    1
    Two microdiamonds would be used to test the quantum nature of gravity. Olena Shmahalo/Quanta Magazine

    In 1935, when both quantum mechanics and Albert Einstein’s general theory of relativity were young, a little-known Soviet physicist named Matvei Bronstein, just 28 himself, made the first detailed study of the problem of reconciling the two in a quantum theory of gravity. This “possible theory of the world as a whole,” as Bronstein called it, would supplant Einstein’s classical description of gravity, which casts it as curves in the space-time continuum, and rewrite it in the same quantum language as the rest of physics.

    Bronstein figured out how to describe gravity in terms of quantized particles, now called gravitons, but only when the force of gravity is weak — that is (in general relativity), when the space-time fabric is so weakly curved that it can be approximated as flat. When gravity is strong, “the situation is quite different,” he wrote. “Without a deep revision of classical notions, it seems hardly possible to extend the quantum theory of gravity also to this domain.”

    His words were prophetic. Eighty-three years later, physicists are still trying to understand how space-time curvature emerges on macroscopic scales from a more fundamental, presumably quantum picture of gravity; it’s arguably the deepest question in physics.

    2
    To Solve the Biggest Mystery in Physics, Join Two Kinds of Law. Robbert Dijkgraaf. James O'Brien for Quanta Magazine. Reductionism breaks the world into elementary building blocks. Emergence finds the simple laws that arise out of complexity. These two complementary ways of viewing the universe come together in modern theories of quantum gravity. September 7, 2017

    Perhaps, given the chance, the whip-smart Bronstein might have helped to speed things along. Aside from quantum gravity, he contributed to astrophysics and cosmology, semiconductor theory, and quantum electrodynamics, and he also wrote several science books for children, before being caught up in Stalin’s Great Purge and executed in 1938, at the age of 31.

    The search for the full theory of quantum gravity has been stymied by the fact that gravity’s quantum properties never seem to manifest in actual experience. Physicists never get to see how Einstein’s description of the smooth space-time continuum, or Bronstein’s quantum approximation of it when it’s weakly curved, goes wrong.

    The problem is gravity’s extreme weakness. Whereas the quantized particles that convey the strong, weak and electromagnetic forces are so powerful that they tightly bind matter into atoms, and can be studied in tabletop experiments, gravitons are individually so weak that laboratories have no hope of detecting them. To detect a graviton with high probability, a particle detector would have to be so huge and massive that it would collapse into a black hole. This weakness is why it takes an astronomical accumulation of mass to gravitationally influence other massive bodies, and why we only see gravity writ large.
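    The scale of that weakness can be put into numbers with a textbook comparison: the gravitational and electrostatic forces between two protons both fall off as the inverse square of the distance, so their ratio is distance-independent:

    \[
    \frac{F_{\text{grav}}}{F_{\text{elec}}}
    = \frac{G m_p^2}{k_e e^2}
    = \frac{(6.67\times 10^{-11})\,(1.67\times 10^{-27}\,\mathrm{kg})^2}{(8.99\times 10^{9})\,(1.60\times 10^{-19}\,\mathrm{C})^2}
    \approx 10^{-36}.
    \]

    Thirty-six orders of magnitude separate the two forces; that is the gap any would-be graviton detector has to bridge.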

    Not only that, but the universe appears to be governed by a kind of cosmic censorship: Regions of extreme gravity — where space-time curves so sharply that Einstein’s equations malfunction and the true, quantum nature of gravity and space-time must be revealed — always hide behind the horizons of black holes.

    3
    Where Gravity Is Weak and Naked Singularities Are Verboten. Natalie Wolchover. Mike Zeng for Quanta Magazine. Recent calculations tie together two conjectures about gravity, potentially revealing new truths about its elusive quantum nature.

    “Even a few years ago it was a generic consensus that, most likely, it’s not even conceivably possible to measure quantization of the gravitational field in any way,” said Igor Pikovski, a theoretical physicist at Harvard University.

    Now, a pair of papers recently published in Physical Review Letters has changed the calculus.

    Spin Entanglement Witness for Quantum Gravity https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.240401
    Gravitationally Induced Entanglement between Two Massive Particles is Sufficient Evidence of Quantum Effects in Gravity https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.240402

    The papers contend that it’s possible to access quantum gravity after all — while learning nothing about it. The papers, written by Sougato Bose at University College London and nine collaborators and by Chiara Marletto and Vlatko Vedral at the University of Oxford, propose a technically challenging, but feasible, tabletop experiment that could confirm that gravity is a quantum force like all the rest, without ever detecting a graviton. Miles Blencowe, a quantum physicist at Dartmouth College who was not involved in the work, said the experiment would detect a sure sign of otherwise invisible quantum gravity — the “grin of the Cheshire cat.”

    2
    A levitating microdiamond (green dot) in Gavin Morley’s lab at the University of Warwick, in front of the lens used to trap the diamond with light. Gavin W Morley

    The proposed experiment will determine whether two objects — Bose’s group plans to use a pair of microdiamonds — can become quantum-mechanically entangled with each other through their mutual gravitational attraction. Entanglement is a quantum phenomenon in which particles become inseparably entwined, sharing a single physical description that specifies their possible combined states. (The coexistence of different possible states, called a “superposition,” is the hallmark of quantum systems.) For example, an entangled pair of particles might exist in a superposition in which there’s a 50 percent chance that the “spin” of particle A points upward and B’s points downward, and a 50 percent chance of the reverse. There’s no telling in advance which outcome you’ll get when you measure the particles’ spin directions, but you can be sure they’ll point opposite ways.
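    The anti-correlated pair described here is the textbook spin singlet, written in standard notation as

    \[
    |\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|{\uparrow}\rangle_A|{\downarrow}\rangle_B - |{\downarrow}\rangle_A|{\uparrow}\rangle_B\big).
    \]

    Each individual measurement returns up or down with equal probability, yet the joint outcomes always disagree, and no assignment of a separate state to A and to B alone can reproduce that. The state does not factor into a product of two single-particle states, which is precisely what "entangled" means.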

    The authors argue that the two objects in their proposed experiment can become entangled with each other in this way only if the force that acts between them — in this case, gravity — is a quantum interaction, mediated by gravitons that can maintain quantum superpositions. “If you can do the experiment and you get entanglement, then according to those papers, you have to conclude that gravity is quantized,” Blencowe explained.

    To Entangle a Diamond

    Quantum gravity is so imperceptible that some researchers have questioned whether it even exists. The venerable mathematical physicist Freeman Dyson, 94, has argued since 2001 that the universe might sustain a kind of “dualistic” description, where “the gravitational field described by Einstein’s theory of general relativity is a purely classical field without any quantum behavior,” as he wrote that year in The New York Review of Books, even though all the matter within this smooth space-time continuum is quantized into particles that obey probabilistic rules.

    Dyson, who helped develop quantum electrodynamics (the theory of interactions between matter and light) and is professor emeritus at the Institute for Advanced Study in Princeton, New Jersey, where he overlapped with Einstein, disagrees with the argument that quantum gravity is needed to describe the unreachable interiors of black holes. And he wonders whether detecting the hypothetical graviton might be impossible, even in principle. In that case, he argues, quantum gravity is metaphysical, rather than physics.

    He is not the only skeptic. The renowned British physicist Sir Roger Penrose and, independently, the Hungarian researcher Lajos Diósi have hypothesized that space-time cannot maintain superpositions. They argue that its smooth, solid, fundamentally classical nature prevents it from curving in two different possible ways at once — and that its rigidity is exactly what causes superpositions of quantum systems like electrons and photons to collapse. This “gravitational decoherence,” in their view, gives rise to the single, rock-solid, classical reality experienced at macroscopic scales.

    The ability to detect the “grin” of quantum gravity would seem to refute Dyson’s argument. It would also kill the gravitational decoherence theory, by showing that gravity and space-time do maintain quantum superpositions.

    Bose’s and Marletto’s proposals appeared simultaneously mostly by chance, though experts said they reflect the zeitgeist. Experimental quantum physics labs around the world are putting ever-larger microscopic objects into quantum superpositions and streamlining protocols for testing whether two quantum systems are entangled. The proposed experiment will have to combine these procedures while requiring further improvements in scale and sensitivity; it could take a decade or more to pull it off. “But there are no physical roadblocks,” said Pikovski, who also studies how laboratory experiments might probe gravitational phenomena. “I think it’s challenging, but I don’t think it’s impossible.”

    The plan is laid out in greater detail in the paper by Bose and co-authors — an Ocean’s Eleven cast of experts for different steps of the proposal. In his lab at the University of Warwick, for instance, co-author Gavin Morley is working on step one, attempting to put a microdiamond in a quantum superposition of two locations. To do this, he’ll embed a nitrogen atom in the microdiamond, next to a vacancy in the diamond’s structure, and zap it with a microwave pulse. An electron orbiting the nitrogen-vacancy system both absorbs the light and doesn’t, and the system enters a quantum superposition of two spin directions — up and down — like a spinning top that has some probability of spinning clockwise and some chance of spinning counterclockwise. The microdiamond, laden with this superposed spin, is subjected to a magnetic field, which makes up-spins move left while down-spins go right. The diamond itself therefore splits into a superposition of two trajectories.
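    Schematically, suppressing normalization and phases, the sequence takes the nitrogen-vacancy spin and the diamond's position from a product state to a correlated one:

    \[
    \big(|{\uparrow}\rangle + |{\downarrow}\rangle\big)\,|x_0\rangle
    \;\longrightarrow\;
    |{\uparrow}\rangle\,|x_{\mathrm{left}}\rangle + |{\downarrow}\rangle\,|x_{\mathrm{right}}\rangle .
    \]

    The microwave pulse prepares the spin superposition, and the magnetic-field gradient then steers each spin branch along its own trajectory, leaving the diamond's center of mass in two places at once.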

    In the full experiment, the researchers must do all this to two diamonds — a blue one and a red one, say — suspended next to each other inside an ultracold vacuum. When the trap holding them is switched off, the two microdiamonds, each in a superposition of two locations, fall vertically through the vacuum. As they fall, the diamonds feel each other’s gravity. But how strong is their gravitational attraction?

    If gravity is a quantum interaction, then the answer is: It depends. Each component of the blue diamond’s superposition will experience a stronger or weaker gravitational attraction to the red diamond, depending on whether the latter is in the branch of its superposition that’s closer or farther away. And the gravity felt by each component of the red diamond’s superposition similarly depends on where the blue diamond is.

    In each case, the different degrees of gravitational attraction affect the evolving components of the diamonds’ superpositions. The two diamonds become interdependent, meaning that their states can only be specified in combination — if this, then that — so that, in the end, the spin directions of their two nitrogen-vacancy systems will be correlated.
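    In the weak-field Newtonian regime that applies here, this interdependence can be made quantitative: each pairing of branches accumulates a quantum phase set by its gravitational interaction energy, roughly

    \[
    \phi_{ij} \sim \frac{G m^2\, t}{\hbar\, d_{ij}},
    \]

    where m is each diamond's mass, t the fall time, and d_{ij} the separation between branch i of one diamond and branch j of the other. Because the four branch pairings have different separations, they acquire different phases, and the joint state no longer factors into one state per diamond. With illustrative numbers of the scale discussed in the proposal (masses near 10^{-14} kilograms, separations of a couple hundred micrometers, fall times of a few seconds), the phase differences come out of order one radian, which is why diamonds this massive and fall times this long are required.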

    3
    Lucy Reading-Ikkanda/Quanta Magazine

    After the microdiamonds have fallen side by side for about three seconds — enough time to become entangled by each other’s gravity — they then pass through another magnetic field that brings the branches of each superposition back together. The last step of the experiment is an “entanglement witness” protocol developed by the Dutch physicist Barbara Terhal and others: The blue and red diamonds enter separate devices that measure the spin directions of their nitrogen-vacancy systems. (Measurement causes superpositions to collapse into definite states.) The two outcomes are then compared. By running the whole experiment over and over and comparing many pairs of spin measurements, the researchers can determine whether the spins of the two quantum systems are correlated with each other more often than a known upper bound for objects that aren’t quantum-mechanically entangled. In that case, it would follow that gravity does entangle the diamonds and can sustain superpositions.
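    The flavor of an entanglement-witness test can be seen in the oldest such bound, the CHSH inequality, which protocols like Terhal's generalize. The sketch below is illustrative Python, not the specific spin witness of the proposal; in a real run the correlators E(a, b) would be estimated from many repeated paired spin measurements rather than computed from theory.

    ```python
    import math

    # For a maximally entangled spin pair, quantum mechanics predicts
    # the correlation E(a, b) = -cos(a - b) between measurements along
    # angles a and b on the two sides. An experiment would estimate
    # this number from many repeated paired measurements.
    def correlator(a: float, b: float) -> float:
        return -math.cos(a - b)

    # Standard CHSH measurement angles (radians).
    a0, a1 = 0.0, math.pi / 2
    b0, b1 = math.pi / 4, 3 * math.pi / 4

    # Any pair of systems that is NOT entangled obeys |S| <= 2;
    # entangled states can push |S| up to 2*sqrt(2), about 2.83.
    S = (correlator(a0, b0) - correlator(a0, b1)
         + correlator(a1, b0) + correlator(a1, b1))

    print(f"S = {S:.3f}")  # |S| = 2.828 here, beyond the classical bound
    print("entanglement witnessed" if abs(S) > 2.0 else "not certified")
    ```

    Crossing the bound certifies entanglement without requiring any knowledge of the mechanism that produced it, which in this experiment would be gravity itself.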

    “What’s beautiful about the arguments is that you don’t really need to know what the quantum theory is, specifically,” Blencowe said. “All you have to say is there has to be some quantum aspect to this field that mediates the force between the two particles.”

    Technical challenges abound. The largest object that’s been put in a superposition of two locations before is an 800-atom molecule. Each microdiamond contains more than 100 billion carbon atoms — enough to muster a sufficient gravitational force. Unearthing its quantum-mechanical character will require colder temperatures, a higher vacuum and finer control. “So much of the work is getting this initial superposition up and running,” said Peter Barker, a member of the experimental team based at UCL who is improving methods for laser-cooling and trapping the microdiamonds. If it can be done with one diamond, Bose added, “then two doesn’t make much of a difference.”

    Why Gravity Is Unique

    Quantum gravity researchers do not doubt that gravity is a quantum interaction, capable of inducing entanglement. Certainly, gravity is special in some ways, and there’s much to figure out about the origin of space and time, but quantum mechanics must be involved, they say. “It doesn’t really make much sense to try to have a theory in which the rest of physics is quantum and gravity is classical,” said Daniel Harlow, a quantum gravity researcher at the Massachusetts Institute of Technology. The theoretical arguments against mixed quantum-classical models are strong (though not conclusive).

    On the other hand, theorists have been wrong before, Harlow noted: “So if you can check, why not? If that will shut up these people” — meaning people who question gravity’s quantumness — “that’s great.”

    Dyson wrote in an email, after reading the PRL papers, “The proposed experiment is certainly of great interest and worth performing with real quantum systems.” However, he said the authors’ way of thinking about quantum fields differs from his. “It is not clear to me whether [the experiment] would settle the question whether quantum gravity exists,” he wrote. “The question that I have been asking, whether a single graviton is observable, is a different question and may turn out to have a different answer.”

    In fact, the way Bose, Marletto and their co-authors think about quantized gravity derives from how Bronstein first conceived of it in 1935. (Dyson called Bronstein’s paper “a beautiful piece of work” that he had not seen before.) In particular, Bronstein showed that the weak gravity produced by a small mass can be approximated by Newton’s law of gravity. (This is the force that acts between the microdiamond superpositions.) According to Blencowe, weak quantized-gravity calculations haven’t been developed much, despite being arguably more physically relevant than the physics of black holes or the Big Bang. He hopes the new experimental proposal will spur theorists to find out whether there are any subtle corrections to the Newtonian approximation that future tabletop experiments might be able to probe.

    Leonard Susskind, a prominent quantum gravity and string theorist at Stanford University, saw value in carrying out the proposed experiment because “it provides an observation of gravity in a new range of masses and distances.” But he and other researchers emphasized that microdiamonds cannot reveal anything about the full theory of quantum gravity or space-time. He and his colleagues want to understand what happens at the center of a black hole, and at the moment of the Big Bang.

    Perhaps one clue as to why it is so much harder to quantize gravity than everything else is that other force fields in nature exhibit a feature called “locality”: The quantum particles in one region of the field (photons in the electromagnetic field, for instance) are “independent of the physical entities in some other region of space,” said Mark Van Raamsdonk, a quantum gravity theorist at the University of British Columbia. But “there’s at least a bunch of theoretical evidence that that’s not how gravity works.”

    In the best toy models of quantum gravity (which have space-time geometries that are simpler than those of the real universe), it isn’t possible to assume that the bendy space-time fabric subdivides into independent 3-D pieces, Van Raamsdonk said. Instead, modern theory suggests that the underlying, fundamental constituents of space “are organized more in a 2-D way.” The space-time fabric might be like a hologram, or a video game: “Even though the picture is three-dimensional, the information is stored in some two-dimensional computer chip,” he said. In that case, the 3-D world is illusory in the sense that different parts of it aren’t all that independent. In the video-game analogy, a handful of bits stored in the 2-D chip might encode global features of the game’s universe.

    The distinction matters when you try to construct a quantum theory of gravity. The usual approach to quantizing something is to identify its independent parts — particles, say — and then apply quantum mechanics to them. But if you don’t identify the correct constituents, you get the wrong equations. Directly quantizing 3-D space, as Bronstein did, works to some extent for weak gravity, but the method fails when space-time is highly curved.

    Witnessing the “grin” of quantum gravity would help motivate these abstract lines of reasoning, some experts said. After all, even the most sensible theoretical arguments for the existence of quantum gravity lack the gravitas of experimental facts. When Van Raamsdonk explains his research in a colloquium or conversation, he said, he usually has to start by saying that gravity needs to be reconciled with quantum mechanics because the classical space-time description fails for black holes and the Big Bang, and in thought experiments about particles colliding at unreachably high energies. “But if you could just do this simple experiment and get the result that shows you that the gravitational field was actually in a superposition,” he said, then the reason the classical description falls short would be self-evident: “Because there’s this experiment that suggests gravity is quantum.”

    Correction March 6, 2018: An earlier version of this article referred to Dartmouth University. Despite the fact that Dartmouth has multiple individual schools, including an undergraduate college as well as academic and professional graduate schools, the institution refers to itself as Dartmouth College for historical reasons.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     