Tagged: Quanta Magazine

  • richardmitnick 9:47 am on July 5, 2020 Permalink | Reply
    Tags: "Astronomers Are Uncovering the Magnetic Soul of the Universe", , , , , Quanta Magazine, ,   

    From Quanta Magazine via WIRED: “Astronomers Are Uncovering the Magnetic Soul of the Universe” 

    From Quanta Magazine

    via


    WIRED

    07.05.2020
    Natalie Wolchover

    Researchers are discovering that magnetic fields permeate much of the cosmos. If these fields date back to the Big Bang, they could solve a cosmological mystery.

    Hidden magnetic field lines stretch millions of light years across the universe. Illustration: Pauline Voß/Quanta Magazine.

    Anytime astronomers figure out a new way of looking for magnetic fields in ever more remote regions of the cosmos, inexplicably, they find them.

    These force fields—the same entities that emanate from fridge magnets—surround Earth, the sun, and all galaxies. Twenty years ago, astronomers started to detect magnetism permeating entire galaxy clusters, including the space between one galaxy and the next. Invisible field lines swoop through intergalactic space like the grooves of a fingerprint.

    Last year, astronomers finally managed to examine a far sparser region of space—the expanse between galaxy clusters. There, they discovered the largest magnetic field yet: 10 million light-years of magnetized space spanning the entire length of this “filament” of the cosmic web [Science]. A second magnetized filament has already been spotted elsewhere in the cosmos by means of the same techniques. “We are just looking at the tip of the iceberg, probably,” said Federica Govoni of the National Institute for Astrophysics in Cagliari, Italy, who led the first detection.

    The question is: Where did these enormous magnetic fields come from?

    “It clearly cannot be related to the activity of single galaxies or single explosions or, I don’t know, winds from supernovae,” said Franco Vazza, an astrophysicist at the University of Bologna who makes state-of-the-art computer simulations of cosmic magnetic fields. “This goes much beyond that.”

    One possibility is that cosmic magnetism is primordial, tracing all the way back to the birth of the universe. In that case, weak magnetism should exist everywhere, even in the “voids” of the cosmic web—the very darkest, emptiest regions of the universe. The omnipresent magnetism would have seeded the stronger fields that blossomed in galaxies and clusters.

    The cosmic web, shown here in a computer simulation, is the large-scale structure of the universe. Dense regions are filled with galaxies and galaxy clusters. Thin filaments connect these clumps. Voids are nearly empty regions of space. Illustration: Springel & others/Virgo Consortium.

    Primordial magnetism might also help resolve another cosmological conundrum known as the Hubble tension—probably the hottest topic in cosmology.

    The problem at the heart of the Hubble tension is that the universe seems to be expanding significantly faster than expected based on its known ingredients. In a paper posted online in April and under review with Physical Review Letters, the cosmologists Karsten Jedamzik and Levon Pogosian argue that weak magnetic fields in the early universe would lead to the faster cosmic expansion rate seen today.

    Primordial magnetism relieves the Hubble tension so simply that Jedamzik and Pogosian’s paper has drawn swift attention. “This is an excellent paper and idea,” said Marc Kamionkowski, a theoretical cosmologist at Johns Hopkins University who has proposed other solutions to the Hubble tension.

    Kamionkowski and others say more checks are needed to ensure that the early magnetism doesn’t throw off other cosmological calculations. And even if the idea works on paper, researchers will need to find conclusive evidence of primordial magnetism to be sure it’s the missing agent that shaped the universe.

    Still, in all the years of talk about the Hubble tension, it’s perhaps strange that no one considered magnetism before. According to Pogosian, who is a professor at Simon Fraser University in Canada, most cosmologists hardly think about magnetism. “Everyone knows it’s one of those big puzzles,” he said. But for decades, there was no way to tell whether magnetism is truly ubiquitous and thus a primordial component of the cosmos, so cosmologists largely stopped paying attention.

    Meanwhile, astrophysicists kept collecting data. The weight of evidence has led most of them to suspect that magnetism is indeed everywhere.

    The Magnetic Soul of the Universe

    In the year 1600, the English scientist William Gilbert’s studies of lodestones—naturally magnetized rocks that people had been fashioning into compasses for thousands of years—led him to opine that their magnetic force “imitates a soul.” He correctly surmised that Earth itself is a “great magnet,” and that lodestones “look toward the poles of the Earth.”

    Magnetic fields arise anytime electric charge flows. Earth’s field, for instance, emanates from its inner “dynamo,” the current of liquid iron churning in its core. The fields of fridge magnets and lodestones come from electrons spinning around their constituent atoms.
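
    As a concrete instance of "magnetic fields arise anytime electric charge flows," the textbook Biot-Savart result gives the field along the axis of a circular current loop. The sketch below is purely illustrative; the current and loop radius are made-up numbers, not anything from the article.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def loop_field_on_axis(current_a, radius_m, z_m):
    """On-axis field of a circular current loop (Biot-Savart law):
    B(z) = mu0 * I * R^2 / (2 * (R^2 + z^2)^(3/2))."""
    return MU0 * current_a * radius_m**2 / (2.0 * (radius_m**2 + z_m**2) ** 1.5)

# Illustrative numbers: 1 ampere flowing in a 10-cm-radius loop.
for z_m in (0.0, 0.05, 0.1, 0.5):
    print(f"z = {z_m:4.2f} m  ->  B = {loop_field_on_axis(1.0, 0.1, z_m):.3e} T")
```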


    Cosmological simulations illustrate two possible explanations for how magnetic fields came to permeate galaxy clusters. At left, the fields grow from uniform “seed” fields that filled the cosmos in the moments after the Big Bang. At right, astrophysical processes such as star formation and the flow of matter into supermassive black holes create magnetized winds that spill out from galaxies. Video: F. Vazza.

    However, once a “seed” magnetic field arises from charged particles in motion, it can become bigger and stronger by aligning weaker fields with it. Magnetism “is a little bit like a living organism,” said Torsten Enßlin, a theoretical astrophysicist at the Max Planck Institute for Astrophysics in Garching, Germany, “because magnetic fields tap into every free energy source they can hold onto and grow. They can spread and affect other areas with their presence, where they grow as well.”

    Ruth Durrer, a theoretical cosmologist at the University of Geneva, explained that magnetism is the only force apart from gravity that can shape the large-scale structure of the cosmos, because only magnetism and gravity can “reach out to you” across vast distances. Electricity, by contrast, is local and short-lived, since the positive and negative charge in any region will neutralize overall. But you can’t cancel out magnetic fields; they tend to add up and survive.

    Yet for all their power, these force fields keep low profiles. They are immaterial, perceptible only when acting upon other things. “You can’t just take a picture of a magnetic field; it doesn’t work like that,” said Reinout van Weeren, an astronomer at Leiden University who was involved in the recent detections of magnetized filaments.

    In their paper last year, van Weeren and 28 coauthors inferred the presence of a magnetic field in the filament between galaxy clusters Abell 399 and Abell 401 from the way the field redirects high-speed electrons and other charged particles passing through it. As their paths twist in the field, these charged particles release faint “synchrotron radiation.”

    The synchrotron signal is strongest at low radio frequencies, making it ripe for detection by LOFAR, an array of 20,000 low-frequency radio antennas spread across Europe.

    ASTRON LOFAR European Map

    The team actually gathered data from the filament back in 2014 during a single eight-hour stretch, but the data sat waiting as the radio astronomy community spent years figuring out how to improve the calibration of LOFAR’s measurements. Earth’s atmosphere refracts radio waves that pass through it, so LOFAR views the cosmos as if from the bottom of a swimming pool. The researchers solved the problem by tracking the wobble of “beacons” in the sky—radio emitters with precisely known locations—and correcting for this wobble to deblur all the data. When they applied the deblurring algorithm to data from the filament, they saw the glow of synchrotron emissions right away.

    LOFAR consists of 20,000 individual radio antennas spread across Europe. Photograph: ASTRON.

    The filament looks magnetized throughout, not just near the galaxy clusters that are moving toward each other from either end. The researchers hope that a 50-hour data set they’re analyzing now will reveal more detail. Additional observations have recently uncovered magnetic fields extending throughout a second filament. Researchers plan to publish this work soon.

    The presence of enormous magnetic fields in at least these two filaments provides important new information. “It has spurred quite some activity,” van Weeren said, “because now we know that magnetic fields are relatively strong.”

    A Light Through the Voids

    If these magnetic fields arose in the infant universe, the question becomes: how? “People have been thinking about this problem for a long time,” said Tanmay Vachaspati of Arizona State University.

    In 1991, Vachaspati proposed that magnetic fields might have arisen during the electroweak phase transition—the moment, a split second after the Big Bang, when the electromagnetic and weak nuclear forces became distinct. Others have suggested that magnetism materialized microseconds later, when protons formed. Or soon after that: In the earliest theory of primordial magnetogenesis, from 1973, the late astrophysicist Ted Harrison argued that the turbulent plasma of protons and electrons might have spun up the first magnetic fields. Still others have proposed that space became magnetized before all this, during cosmic inflation—the explosive expansion of space that purportedly jump-started the Big Bang itself. It’s also possible that it didn’t happen until the growth of structures a billion years later.

    The way to test theories of magnetogenesis is to study the pattern of magnetic fields in the most pristine patches of intergalactic space, such as the quiet parts of filaments and the even emptier voids. Certain details—such as whether the field lines are smooth, helical, or “curved every which way, like a ball of yarn or something” (per Vachaspati), and how the pattern changes in different places and on different scales—carry rich information that can be compared to theory and simulations. For example, if the magnetic fields arose during the electroweak phase transition, as Vachaspati proposed, then the resulting field lines should be helical, “like a corkscrew,” he said.

    The hitch is that it’s difficult to detect force fields that have nothing to push on.

    One method, pioneered by the English scientist Michael Faraday back in 1845, detects a magnetic field from the way it rotates the polarization direction of light passing through it. The amount of “Faraday rotation” depends on the strength of the magnetic field and the frequency of the light. So by measuring the polarization at different frequencies, you can infer the strength of magnetism along the line of sight. “If you do it from different places, you can make a 3D map,” said Enßlin.
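
    A minimal numerical sketch of the technique, using synthetic data: the polarization angle chi observed at wavelength lambda obeys chi(lambda) = chi_0 + RM * lambda^2, where the "rotation measure" RM encodes the line-of-sight field strength (weighted by electron density). Fitting the angles against lambda^2 recovers RM. Real analyses must also untangle the n*pi ambiguity in measured angles, which this toy ignores, and all the numbers below are invented.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

rng = np.random.default_rng(0)
true_rm, true_chi0 = 12.0, 0.3            # illustrative rad/m^2 and rad
freqs_hz = np.linspace(120e6, 180e6, 8)   # LOFAR-like low radio frequencies
lam2 = (C / freqs_hz) ** 2                # wavelength squared, m^2

# Synthetic polarization angles with a little measurement noise.
chi_obs = true_chi0 + true_rm * lam2 + rng.normal(0.0, 0.02, lam2.size)

# A linear least-squares fit of chi against lambda^2 recovers RM as the slope.
rm_fit, chi0_fit = np.polyfit(lam2, chi_obs, 1)
print(f"fitted RM = {rm_fit:.2f} rad/m^2 (true value {true_rm})")
```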

    Illustration: Samuel Velasco/Quanta Magazine.

    Researchers have started to make rough Faraday rotation measurements [MNRAS] using LOFAR, but the telescope has trouble picking out the extremely faint signal. Valentina Vacca, an astronomer and a colleague of Govoni’s at the National Institute for Astrophysics, devised an algorithm a few years ago for teasing out subtle Faraday rotation signals statistically, by stacking together many measurements of empty places. “In principle, this can be used for voids,” Vacca said.

    But the Faraday technique will really take off when the next-generation radio telescope, a gargantuan international project called the Square Kilometre Array, starts up in 2027. “SKA should produce a fantastic Faraday grid,” Enßlin said.

    For now, the only evidence of magnetism in the voids is what observers don’t see when they look at objects called blazars located behind voids.

    Blazars are bright beams of gamma rays and other energetic light and matter powered by supermassive black holes. As the gamma rays travel through space, they sometimes collide with ancient microwaves, morphing into an electron and a positron as a result. These particles then fizzle and turn into lower-energy gamma rays.

    But if the blazar’s light passes through a magnetized void, the lower-energy gamma rays will appear to be missing, reasoned Andrii Neronov and Ievgen Vovk of the Geneva Observatory in 2010. The magnetic field will deflect the electrons and positrons out of the line of sight. When they decay into lower-energy gamma rays, those gamma rays won’t be pointed at us.

    Illustration: Samuel Velasco/Quanta Magazine.

    Indeed, when Neronov and Vovk analyzed data from a suitably located blazar, they saw its high-energy gamma rays, but not the low-energy gamma-ray signal. “It’s the absence of a signal that is a signal,” Vachaspati said.

    A nonsignal is hardly a smoking gun, and alternative explanations for the missing gamma rays have been suggested. However, follow-up observations have increasingly pointed to Neronov and Vovk’s hypothesis that voids are magnetized. “It’s the majority view,” Durrer said. Most convincingly, in 2015, one team overlaid many measurements of blazars behind voids and managed to tease out a faint halo of low-energy gamma rays around the blazars [Physical Review Letters]. The effect is exactly what would be expected if the particles were being scattered by faint magnetic fields—measuring only about a millionth of a trillionth as strong as a fridge magnet’s.

    Cosmology’s Biggest Mystery

    Strikingly, this exact amount of primordial magnetism may be just what’s needed to resolve the Hubble tension—the problem of the universe’s curiously fast expansion.

    That’s what Pogosian realized when he saw recent computer simulations [Physical Review Letters] by Karsten Jedamzik of the University of Montpellier in France and a collaborator. The researchers added weak magnetic fields to a simulated, plasma-filled young universe and found that protons and electrons in the plasma flew along the magnetic field lines and accumulated in the regions of weakest field strength. This clumping effect made the protons and electrons combine into hydrogen—an early phase change known as recombination—earlier than they would have otherwise.

    Pogosian, reading Jedamzik’s paper, saw that this could address the Hubble tension. Cosmologists calculate how fast space should be expanding today by observing ancient light emitted during recombination. The light shows a young universe studded with blobs that formed from sound waves sloshing around in the primordial plasma. If recombination happened earlier than supposed due to the clumping effect of magnetic fields, then sound waves couldn’t have propagated as far beforehand, and the resulting blobs would be smaller. That means the blobs we see in the sky from the time of recombination must be closer to us than researchers supposed. The light coming from the blobs must have traveled a shorter distance to reach us, meaning the light must have been traversing faster-expanding space. “It’s like trying to run on an expanding surface; you cover less distance,” Pogosian said.

    The upshot is that smaller blobs mean a higher inferred cosmic expansion rate—bringing the inferred rate much closer to measurements of how fast supernovas and other astronomical objects actually seem to be flying apart.
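
    A back-of-the-envelope version of that chain of reasoning, with approximate numbers: the CMB fixes the acoustic angular scale theta* = r_s / D_A, so a smaller sound horizon r_s forces a smaller distance D_A to the CMB, and hence a larger inferred H0. The crude H0 ~ 1/D_A proportionality below stands in for a full cosmological fit, and the 4% shift is an arbitrary example, not a value from the paper.

```python
# Approximate standard-fit values (illustrative, not a real inference).
theta_star = 0.0104      # observed CMB acoustic angular scale, radians
r_s_standard = 144.4     # sound horizon at recombination, Mpc
h0_standard = 67.4       # km/s/Mpc inferred in the standard fit

# Suppose earlier recombination shrinks the sound horizon by 4%.
r_s_early = 0.96 * r_s_standard

# Same observed angle -> proportionally smaller distance to the CMB.
d_a_standard = r_s_standard / theta_star
d_a_early = r_s_early / theta_star

# Crude proportionality: H0 scales like 1 / D_A at fixed theta*.
h0_early = h0_standard * (d_a_standard / d_a_early)
print(f"inferred H0 rises from {h0_standard} to ~{h0_early:.1f} km/s/Mpc")
```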

    “I thought, wow,” Pogosian said, “this could be pointing us to [magnetic fields’] actual presence. So I wrote Karsten immediately.” The two got together in Montpellier in February, just before the lockdown. Their calculations indicated that, indeed, the amount of primordial magnetism needed to address the Hubble tension also agrees with the blazar observations and the estimated size of initial fields needed to grow the enormous magnetic fields spanning galaxy clusters and filaments. “So it all sort of comes together,” Pogosian said, “if this turns out to be right.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 11:00 am on May 8, 2020 Permalink | Reply
    Tags: "What Goes On in a Proton? Quark Math Still Conflicts With Experiments", A million-dollar math prize awaits anyone who can solve the type of equation used in QCD to show how massive entities like protons form., “We know absolutely that quarks and gluons interact with each other but we can’t calculate” the result., , Lattice QCD, , Quanta Magazine, , The discovery of quarks in the 1960s broke everything., The holographic principle   

    From Quanta Magazine: “What Goes On in a Proton? Quark Math Still Conflicts With Experiments” 

    From Quanta Magazine

    May 6, 2020
    Charlie Wood

    The quark structure of the proton. Arpad Horvath, 16 March 2006.

    Objects are made of atoms, and atoms are likewise the sum of their parts — electrons, protons and neutrons. Dive into one of those protons or neutrons, however, and things get weird. Three particles called quarks ricochet back and forth at nearly the speed of light, snapped back by interconnected strings of particles called gluons. Bizarrely, the proton’s mass must somehow arise from the energy of the stretchy gluon strings, since quarks weigh very little and gluons nothing at all.

    Physicists uncovered this odd quark-gluon picture in the 1960s and matched it to an equation in the ’70s, creating the theory of quantum chromodynamics (QCD). The problem is that while the theory seems accurate, it is extraordinarily complicated mathematically. Faced with a task like calculating how three wispy quarks produce the hulking proton, QCD simply fails to produce a meaningful answer.

    “It’s tantalizing and frustrating,” said Mark Lancaster, a particle physicist based at the University of Manchester in the United Kingdom. “We know absolutely that quarks and gluons interact with each other, but we can’t calculate” the result.

    A million-dollar math prize awaits anyone who can solve the type of equation used in QCD to show how massive entities like protons form. Lacking such a solution, particle physicists have developed arduous workarounds that deliver approximate answers. Some infer quark activity experimentally at particle colliders, while others harness the world’s most powerful supercomputers. But these approximation techniques have recently come into conflict, leaving physicists unsure exactly what their theory predicts and thus less able to interpret signs of new, unpredicted particles or effects.

    To understand what makes quarks and gluons such mathematical scofflaws, consider how much mathematical machinery goes into describing even well-behaved particles.

    A humble electron, for instance, can briefly emit and then absorb a photon. During that photon’s short life, it can split into a pair of matter-antimatter particles, each of which can engage in further acrobatics, ad infinitum. As long as each individual event ends quickly, quantum mechanics allows the combined flurry of “virtual” activity to continue indefinitely.

    In the 1940s, after considerable struggle, physicists developed mathematical rules that could accommodate this bizarre feature of nature. Studying an electron involved breaking down its virtual entourage into a series of possible events, each corresponding to a squiggly drawing known as a Feynman diagram and a matching equation. A perfect analysis of the electron would require an infinite string of diagrams — and a calculation with infinitely many steps — but fortunately for the physicists, the more byzantine sketches of rarer events ended up being relatively inconsequential. Truncating the series gives good-enough answers.

    The discovery of quarks in the 1960s broke everything. By pelting protons with electrons, researchers uncovered the proton’s internal parts, bound by a novel force. Physicists raced to find a description that could handle these new building blocks, and they managed to wrap all the details of quarks and the “strong interaction” that binds them into a compact equation in 1973. But their theory of the strong interaction, quantum chromodynamics, didn’t behave in the usual way, and neither did the particles.

    Feynman diagrams treat particles as if they interact by approaching each other from a distance, like billiard balls. But quarks don’t act like this. The Feynman diagram representing three quarks coming together from a distance and binding to one another to form a proton is a mere “cartoon,” according to Flip Tanedo, a particle physicist at the University of California, Riverside, because quarks are bound so strongly that they have no separate existence. The strength of their connection also means that the infinite series of terms corresponding to the Feynman diagrams grows in an unruly fashion, rather than fading away quickly enough to permit an easy approximation. Feynman diagrams are simply the wrong tool.
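
    The contrast can be caricatured with a toy power series, not an actual diagrammatic sum: truncating a series like 1 + g + g^2 + ... works when the expansion parameter g is small, as with electromagnetism's coupling of roughly 1/137, but when g is of order one every new term matters as much as the last.

```python
# Toy caricature of truncating a perturbative expansion (illustrative only).

def partial_sums(g, n_terms=12):
    """Running partial sums of 1 + g + g^2 + ... + g^(n_terms-1)."""
    sums, total = [], 0.0
    for n in range(n_terms):
        total += g ** n
        sums.append(total)
    return sums

for g, label in [(0.0073, "electromagnetism-like coupling"),
                 (1.0, "strong-interaction-like coupling")]:
    s = partial_sums(g)
    print(f"{label} (g = {g}): 3 terms -> {s[2]:.4f}, 12 terms -> {s[11]:.4f}")
```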

    The strong interaction is weird for two main reasons. First, whereas the electromagnetic interaction involves just one variety of charge (electric charge), the strong interaction involves three: “color” charges nicknamed red, green and blue. Weirder still, the carrier of the strong interaction, dubbed the gluon, itself bears color charge. So while the (electrically neutral) photons that comprise electromagnetic fields don’t interact with each other, collections of colorful gluons draw together into strings. “That really drives the differences we see,” Lancaster said. The ability of gluons to trip over themselves, together with the three charges, makes the strong interaction strong — so strong that quarks can’t escape each other’s company.

    Evidence piled up over the decades that gluons exist and act as predicted in certain circumstances. But for most calculations, the QCD equation has proved intractable. Physicists need to know what QCD predicts, however — not just to understand quarks and gluons, but to pin down properties of other particles as well, since they’re all affected by the dance of quantum activity that includes virtual quarks.

    One approach has been to infer incalculable values by watching how quarks behave in experiments. “You take electrons and positrons and slam them together,” said Chris Polly, a particle physicist at the Fermi National Accelerator Laboratory, “and ask how often you make quark [products] in the final state.” From those measurements, he said, you can extrapolate how often quark bundles should pop up in the hubbub of virtual activity that surrounds all particles.

    Other researchers have continued to try to wring information from the canonical QCD equation by calculating approximate solutions using supercomputers. “You just keep throwing more computing cycles at it and your answer will keep getting better,” said Aaron Meyer, a particle physicist at Brookhaven National Laboratory.

    This computational approach, known as lattice QCD, turns computers into laboratories that model the behavior of digital quarks and gluons. The technique gets its name from the way it slices space-time into a grid of points. Quarks sit on the lattice points, and the QCD equation lets them interact. The denser the grid, the more accurate the simulation. The Fermilab physicist Andreas Kronfeld remembers how, three decades ago, these simulations had just a handful of lattice points on a side. But computing power has increased, and lattice QCD can now successfully predict the proton’s mass to within a few percent of the experimentally determined value.
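
    Actual lattice QCD is far beyond a snippet, but the core numerical idea, that a denser grid gives a more faithful answer, already shows up when discretizing a derivative on a one-dimensional lattice. The toy below approximates the second derivative of sin(x) and watches the error fall as the lattice spacing shrinks; it illustrates only the discretization principle, nothing QCD-specific.

```python
import numpy as np

# Approximate d^2/dx^2 of sin(x) at x = 1 (exact answer: -sin(1)) with the
# central-difference formula on ever-finer lattices.
exact = -np.sin(1.0)
for n_points in (8, 16, 32, 64, 128):
    a = 2.0 * np.pi / n_points  # lattice spacing
    approx = (np.sin(1.0 + a) - 2.0 * np.sin(1.0) + np.sin(1.0 - a)) / a**2
    print(f"spacing {a:.4f}: error = {abs(approx - exact):.2e}")
```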

    Kronfeld is a spokesperson for USQCD, a federation of lattice QCD groups in the United States that have banded together to negotiate for bulk supercomputer time. He serves as the principal investigator for the federation’s efforts on the Summit supercomputer, currently the world’s fastest, located at Oak Ridge National Laboratory. USQCD runs one of Summit’s largest programs, occupying nearly 4% of the machine’s annual computing capacity.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Theorists thought these digital laboratories were still a year or two away from becoming competitive with the collider experiments in approximating the effects quarks have on other particles. But in February a European collaboration shocked the community with a preprint claiming to nail a magnetic property of a particle called the muon to within 1% of its true value, using novel noise reduction techniques. “You might think of it as throwing down the gauntlet,” said Aida El-Khadra, a high-energy theorist at the University of Illinois, Urbana-Champaign.

    The team’s prediction for virtual quark activity around the muon clashed with the inferences from electron-positron collisions, however. Meyer, who recently co-authored a survey of the conflicting results, says that many technical details in lattice QCD remain poorly understood, such as how to hop from the gritty lattice back to smooth space. Efforts to determine what QCD predicts for the muon, which many researchers consider a bellwether for undiscovered particles, are ongoing.

    Meanwhile, mathematically minded researchers haven’t entirely despaired of finding a pen-and-paper strategy for tackling the strong interaction — and reaping the million-dollar reward offered by the Clay Mathematics Institute for a rigorous prediction of the mass of the lightest possible collection of quarks or gluons.

    One such Hail Mary pass in the theoretical world is a tool called the holographic principle. The general strategy is to translate the problem into an abstract mathematical space where some hologram of quarks can be separated from each other, allowing an analysis in terms of Feynman diagrams.

    Simple attempts look promising, according to Tanedo, but none come close to the hard-won accuracy of lattice QCD. For now, theorists will continue to refine their imperfect tools and dream of new mathematical machinery capable of taming the fundamental but inseparable quarks.

    “That would be the holy grail,” Tanedo says. QCD is “just begging for us to figure out how that actually works.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:21 pm on March 20, 2020 Permalink | Reply
    Tags: Quanta Magazine, The shape of the universe   

    From Quanta Magazine: “What Is the Geometry of the Universe?” 


    From Quanta Magazine

    In our mind’s eye, the universe seems to go on forever. But using geometry we can explore a variety of three-dimensional shapes that offer alternatives to “ordinary” infinite space.

    Lukas Schlagenhauf

    Erica Klarreich

    Graphics: Lucy Reading-Ikkanda

    When you gaze out at the night sky, space seems to extend forever in all directions. That’s our mental model for the universe, but it’s not necessarily correct. There was a time, after all, when everyone thought the Earth was flat, because our planet’s curvature was too subtle to detect and a spherical Earth was unfathomable.

    Today, we know the Earth is shaped like a sphere. But most of us give little thought to the shape of the universe. Just as the sphere offered an alternative to a flat Earth, other three-dimensional shapes offer alternatives to “ordinary” infinite space.

    We can ask two separate but interrelated questions about the shape of the universe. One is about its geometry: the fine-grained local measurements of things like angles and areas. The other is about its topology: how these local pieces are stitched together into an overarching shape.

    Cosmological evidence suggests that the part of the universe we can see is smooth and homogeneous, at least approximately. The local fabric of space looks much the same at every point and in every direction. Only three geometries fit this description: flat, spherical and hyperbolic. Let’s explore these geometries, some topological considerations, and what the cosmological evidence says about which shapes best describe our universe.

    Flat Geometry

    This is the geometry we learned in school. The angles of a triangle add up to 180 degrees, and the area of a circle is πr². The simplest example of a flat three-dimensional shape is ordinary infinite space — what mathematicians call Euclidean space — but there are other flat shapes to consider too.


    These shapes are harder to visualize, but we can build some intuition by thinking in two dimensions instead of three. In addition to the ordinary Euclidean plane, we can create other flat shapes by cutting out some piece of the plane and taping its edges together. For instance, suppose we cut out a rectangular piece of paper and tape its opposite edges. Taping the top and bottom edges gives us a cylinder:


    Next, we can tape the right and left edges to get a doughnut (what mathematicians call a torus):


    Now, you might be thinking, “This doesn’t look flat to me.” And you’d be right. We cheated a bit in describing how the flat torus works. If you actually tried to make a torus out of a sheet of paper in this way, you’d run into difficulties. Making the cylinder would be easy, but taping the ends of the cylinder wouldn’t work: The paper would crumple along the inner circle of the torus, and it wouldn’t stretch far enough along the outer circle. You’d have to use some stretchy material instead of paper. But this stretching distorts lengths and angles, changing the geometry.

    Inside ordinary three-dimensional space, there’s no way to build an actual, smooth physical torus from flat material without distorting the flat geometry. But we can reason abstractly about what it would feel like to live inside a flat torus.

    Imagine you’re a two-dimensional creature whose universe is a flat torus. Since the geometry of this universe comes from a flat piece of paper, all the geometric facts we’re used to are the same as usual, at least on a small scale: Angles in a triangle sum to 180 degrees, and so on. But the changes we’ve made to the global topology by cutting and taping mean that the experience of living in the torus will feel very different from what we’re used to.

    For starters, there are straight paths on the torus that loop around and return to where they started:


    These paths look curved on a distorted torus, but to the inhabitants of the flat torus they feel straight. And since light travels along straight paths, if you look straight ahead in one of these directions, you’ll see yourself from the rear:


    On the original piece of paper, it’s as if the light you see traveled from behind you until it hit the left-hand edge, then reappeared on the right, as though you were in a wraparound video game:


    An equivalent way to think about this is that if you (or a beam of light) travel across one of the four edges, you emerge in what appears to be a new “room” but is actually the same room, just seen from a new vantage point. As you wander around in this universe, you can cross into an infinite array of copies of your original room.
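
    In coordinates, that wraparound is just modular arithmetic: each "copy" of an object sits at the true position shifted by whole numbers of box lengths. The sketch below, with arbitrary box size and positions, lists the apparent distances to the nearest images of a friend in a flat two-dimensional torus.

```python
import numpy as np

# Illustrative flat torus: a 10 x 10 box with opposite edges identified.
box = np.array([10.0, 10.0])
observer = np.array([2.0, 3.0])
friend = np.array([7.0, 8.0])

# Each image of the friend is displaced by integer multiples of the box size.
images = []
for i in range(-1, 2):
    for j in range(-1, 2):
        image = friend + np.array([i, j]) * box
        images.append((np.linalg.norm(image - observer), i, j))

for dist, i, j in sorted(images):
    print(f"copy shifted by ({i:+d},{j:+d}) boxes appears {dist:5.2f} units away")
```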


    That means you can also see infinitely many different copies of yourself by looking in different directions. It’s a sort of hall-of-mirrors effect, except that the copies of you are not reflections:


    On the doughnut, these correspond to the many different loops by which light can travel from you back to you:


    Similarly, we can build a flat three-dimensional torus by gluing the opposite faces of a cube or other box. We can’t visualize this space as an object inside ordinary infinite space — it simply doesn’t fit — but we can reason abstractly about life inside it.

    Just as life in the two-dimensional torus was like living in an infinite two-dimensional array of identical rectangular rooms, life in the three-dimensional torus is like living in an infinite three-dimensional array of identical cubic rooms. You’ll see infinitely many copies of yourself:

    Adapted from TechR

    The three-dimensional torus is just one of 10 different flat finite worlds. There are also flat infinite worlds such as the three-dimensional analogue of an infinite cylinder. In each of these worlds there’s a different hall-of-mirrors array to experience.

    Is Our Universe One of These Other Flat Shapes?

    When we look out into space, we don’t see infinitely many copies of ourselves. Even so, it’s surprisingly hard to rule out these flat shapes. For one thing, they all have the same local geometry as Euclidean space, so no local measurement can distinguish among them.

    And if you did see a copy of yourself, that faraway image would show how you (or your galaxy, for example) looked in the distant past, since the light had to travel a long time to reach you. Maybe we’re seeing unrecognizable copies of ourselves out there. Making matters worse, different copies of yourself will usually be different distances away from you, so most of them won’t look the same as each other. And maybe they’re all too far away for us to see anyway.

    To get around these difficulties, astronomers generally look not for copies of ourselves but for repeating features in the farthest thing we can see: the cosmic microwave background (CMB) radiation left over from shortly after the Big Bang. In practice, this means searching for pairs of circles in the CMB that have matching patterns of hot and cold spots, suggesting that they are really the same circle seen from two different directions.

    CMB per Planck. ESA, Planck Collaboration.

    In 2015, astronomers performed just such a search using data from the Planck space telescope. They combed the data for the kinds of matching circles we would expect to see inside a flat three-dimensional torus or one other flat three-dimensional shape called a slab, but they failed to find them. That means that if we do live in a torus, it’s probably such a large one that any repeating patterns lie beyond the observable universe.

    Spherical Geometry

    We’re all familiar with two-dimensional spheres — the surface of a ball, or an orange, or the Earth. But what would it mean for our universe to be a three-dimensional sphere?

    It’s hard to visualize a three-dimensional sphere, but it’s easy to define one through a simple analogy. Just as a two-dimensional sphere is the set of all points a fixed distance from some center point in ordinary three-dimensional space, a three-dimensional sphere (or “three-sphere”) is the set of all points a fixed distance from some center point in four-dimensional space.

    Life in a three-sphere feels very different from life in a flat space. To get a feel for it, imagine you’re a two-dimensional being living in a two-dimensional sphere. The two-dimensional sphere is the entire universe — you can’t see or access any of the surrounding three-dimensional space. Within this spherical universe, light travels along the shortest possible paths: the great circles. To you, these great circles feel like straight lines.


    Now imagine that you and your two-dimensional friend are hanging out at the North Pole, and your friend goes for a walk. As your friend strolls away, at first they’ll appear smaller and smaller in your visual circle, just as in our ordinary world (although they won’t shrink as quickly as we’re used to). That’s because as your visual circle grows, your friend is taking up a smaller percentage of it:


    But once your friend passes the equator, something strange happens: They start looking bigger and bigger the farther they walk away from you. That’s because the percentage they’re occupying in your visual circle is growing:


    When your friend is 10 feet away from the South Pole, they’ll look just as big as when they were 10 feet away from you:


    And when they reach the South Pole itself, you can see them in every direction, so they fill your entire visual horizon:


    If there’s no one at the South Pole, your visual horizon is something even stranger: yourself. That’s because light coming off of you will go all the way around the sphere until it returns to you.

    This carries over directly to life in the three-dimensional sphere. Every point on the three-sphere has an opposite point, and if there’s an object there, we’ll see it as the entire backdrop, as if it’s the sky. If there’s nothing there, we’ll see ourselves as the backdrop instead, as if our exterior has been superimposed on a balloon, then turned inside out and inflated to be the entire horizon.


    While the three-sphere is the fundamental model for spherical geometry, it’s not the only such space. Just as we built different flat spaces by cutting a chunk out of Euclidean space and gluing it together, we can build spherical spaces by gluing up a suitable chunk of a three-sphere. Each of these glued shapes will have a hall-of-mirrors effect, as with the torus, but in these spherical shapes, there are only finitely many rooms to travel through.

    Is Our Universe Spherical?

    Even the most narcissistic among us don’t typically see ourselves as the backdrop to the entire night sky. But as with the flat torus, just because we don’t see a phenomenon, that doesn’t mean it can’t exist. The circumference of the spherical universe could be bigger than the size of the observable universe, making the backdrop too far away to see.

    But unlike the torus, a spherical universe can be detected through purely local measurements. Spherical shapes differ from infinite Euclidean space not just in their global topology but also in their fine-grained geometry. For example, because straight lines in spherical geometry are great circles, triangles are puffier than their Euclidean counterparts, and their angles add up to more than 180 degrees:


    In fact, measuring cosmic triangles is a primary way cosmologists test whether the universe is curved. For each hot or cold spot in the cosmic microwave background, its diameter across and its distance from the Earth are known, forming the three sides of a triangle. We can measure the angle the spot subtends in the night sky — one of the three angles of the triangle. Then we can check whether the combination of side lengths and angle measure is a good fit for flat, spherical or hyperbolic geometry (in which the angles of a triangle add up to less than 180 degrees).
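
    Girard's theorem makes the triangle test quantitative: on a sphere of radius R, a triangle of area A has angles summing to pi + A/R^2 radians. The quick sketch below, in arbitrary units, shows why the excess over 180 degrees becomes undetectable as the sphere grows, which is the flat-Earth problem in miniature.

```python
import numpy as np

# Fix the triangle's area and grow the sphere: the angle-sum excess shrinks.
area = 1.0  # triangle area, arbitrary units
for radius in (1.0, 3.0, 10.0, 100.0):
    angle_sum_deg = np.degrees(np.pi + area / radius**2)
    print(f"R = {radius:6.1f}: angle sum = {angle_sum_deg:.4f} degrees")
```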

    Most such tests, along with other curvature measurements, suggest that the universe is either flat or very close to flat. However, one research team recently argued that certain data from the Planck space telescope’s 2018 release point instead to a spherical universe, although other researchers have countered that this evidence is most likely a statistical fluke.

    Hyperbolic Geometry

    Unlike the sphere, which curves in on itself, hyperbolic geometry opens outward. It’s the geometry of floppy hats, coral reefs and saddles. The basic model of hyperbolic geometry is an infinite expanse, just like flat Euclidean space. But because hyperbolic geometry expands outward much more quickly than flat geometry does, there’s no way to fit even a two-dimensional hyperbolic plane inside ordinary Euclidean space unless we’re willing to distort its geometry. Here, for example, is a distorted view of the hyperbolic plane known as the Poincaré disk:

    Roice Nelson

    From our perspective, the triangles near the boundary circle look much smaller than the ones near the center, but from the perspective of hyperbolic geometry all the triangles are the same size. If we tried to actually make the triangles the same size — maybe by using stretchy material for our disk and inflating each triangle in turn, working outward from the center — our disk would start to resemble a floppy hat and would buckle more and more as we worked our way outward. As we approached the boundary, this buckling would grow out of control.

    From the point of view of hyperbolic geometry, the boundary circle is infinitely far from any interior point, since you have to cross infinitely many triangles to get there. So the hyperbolic plane stretches out to infinity in all directions, just like the Euclidean plane. But in terms of the local geometry, life in the hyperbolic plane is very different from what we’re used to.

    In ordinary Euclidean geometry, the circumference of a circle is directly proportional to its radius, but in hyperbolic geometry, the circumference grows exponentially compared to the radius. We can see that exponential pileup in the masses of triangles near the boundary of the hyperbolic disk.
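
    Concretely, in a hyperbolic plane of curvature -1 a circle of radius r has circumference 2*pi*sinh(r), versus the Euclidean 2*pi*r. A short table shows how fast the ratio blows up:

```python
import numpy as np

for r in (0.5, 1.0, 2.0, 4.0, 8.0):
    euclid = 2.0 * np.pi * r            # flat-plane circumference
    hyper = 2.0 * np.pi * np.sinh(r)    # hyperbolic-plane circumference
    print(f"r = {r:3}: Euclidean {euclid:8.1f}   hyperbolic {hyper:10.1f}   "
          f"ratio {hyper / euclid:7.1f}")
```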


    Because of this feature, mathematicians like to say that it’s easy to get lost in hyperbolic space. If your friend walks away from you in ordinary Euclidean space, they’ll start looking smaller, but slowly, because your visual circle isn’t growing so fast. But in hyperbolic space, your visual circle is growing exponentially, so your friend will soon appear to shrink to an exponentially small speck. If you haven’t tracked your friend’s route carefully, it will be nearly impossible to find your way to them later.


    And in hyperbolic geometry, the angles of a triangle sum to less than 180 degrees — for example, the triangles in our tiling of the Poincaré disk have angles that sum to 165 degrees:


    The sides of these triangles don’t look straight, but that’s because we’re looking at hyperbolic geometry through a distorted lens. To an inhabitant of the Poincaré disk these curves are the straight lines, because the quickest way to get from point A to point B is to take a shortcut toward the center:


    There’s a natural way to make a three-dimensional analogue to the Poincaré disk — simply make a three-dimensional ball and fill it with three-dimensional shapes that grow smaller as they approach the boundary sphere, like the triangles in the Poincaré disk. And just as with flat and spherical geometries, we can make an assortment of other three-dimensional hyperbolic spaces by cutting out a suitable chunk of the three-dimensional hyperbolic ball and gluing together its faces.

    Is Our Universe Hyperbolic?

    Hyperbolic geometry, with its narrow triangles and exponentially growing circles, doesn’t feel as if it fits the geometry of the space around us. And indeed, as we’ve already seen, so far most cosmological measurements seem to favor a flat universe.

    But we can’t rule out the possibility that we live in either a spherical or a hyperbolic world, because small pieces of both of these worlds look nearly flat. For example, small triangles in spherical geometry have angles that sum to only slightly more than 180 degrees, and small triangles in hyperbolic geometry have angles that sum to only slightly less than 180 degrees.

    That’s why early people thought the Earth was flat — on the scales they were able to observe, the curvature of the Earth was too minuscule to detect. The larger the spherical or hyperbolic shape, the flatter each small piece of it is, so if our universe is an extremely large spherical or hyperbolic shape, the part we can observe may be so close to being flat that its curvature can only be detected by uber-precise instruments we have yet to invent.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 10:59 am on December 14, 2019 Permalink | Reply
    Tags: Quanta Magazine

    From Quanta Magazine: “Why the Laws of Physics Are Inevitable” 

    From Quanta Magazine

    December 9, 2019
    Natalie Wolchover

    By considering simple symmetries, physicists working on the “bootstrap” have rederived the four known forces. “There’s just no freedom in the laws of physics,” said one.

    These three objects illustrate the principles behind “spin,” a property of fundamental particles. A domino needs a full turn to get back to the same place. A two of clubs needs only a half turn. And the hour hand on a clock must spin around twice before it tells the same time again. Lucy Reading-Ikkanda/Quanta Magazine

    Compared to the unsolved mysteries of the universe, far less gets said about one of the most profound facts to have crystallized in physics over the past half-century: To an astonishing degree, nature is the way it is because it couldn’t be any different. “There’s just no freedom in the laws of physics that we have,” said Daniel Baumann, a theoretical physicist at the University of Amsterdam.

    Since the 1960s, and increasingly in the past decade, physicists like Baumann have used a technique known as the “bootstrap” to infer what the laws of nature must be. This approach assumes that the laws essentially dictate one another through their mutual consistency — that nature “pulls itself up by its own bootstraps.” The idea turns out to explain a huge amount about the universe.

    When bootstrapping, physicists determine how elementary particles with different amounts of “spin,” or intrinsic angular momentum, can consistently behave. In doing this, they rediscover the four fundamental forces that shape the universe. Most striking is the case of a particle with two units of spin: As the Nobel Prize winner Steven Weinberg showed in 1964 [Physical Review Journals Archive], the existence of a spin-2 particle leads inevitably to general relativity — Albert Einstein’s theory of gravity. Einstein arrived at general relativity through abstract thoughts about falling elevators and warped space and time, but the theory also follows directly from the mathematically consistent behavior of a fundamental particle.

    “I find this inevitability of gravity [and other forces] to be one of the deepest and most inspiring facts about nature,” said Laurentiu Rodina, a theoretical physicist at the Institute of Theoretical Physics at CEA Saclay who helped to modernize and generalize Weinberg’s proof in 2014 [Physical Review D]. “Namely, that nature is above all self-consistent.”

    How Bootstrapping Works

    A particle’s spin reflects its underlying symmetries, or the ways it can be transformed that leave it unchanged. A spin-1 particle, for instance, returns to the same state after being rotated by one full turn. A spin-1/2 particle must complete two full rotations to come back to the same state, while a spin-2 particle looks identical after just half a turn. Elementary particles can only carry 0, 1/2, 1, 3/2 or 2 units of spin.
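
    A stripped-down way to see those return angles, considering only the overall phase picked up by a maximal-weight spin-s state (full states transform under Wigner rotation matrices, which this sketch ignores): rotation by an angle theta multiplies the state by e^(i*s*theta), so the state first returns to itself at theta = 2*pi/s.

```python
import numpy as np

for spin in (0.5, 1.0, 1.5, 2.0):
    turns_to_return = 1.0 / spin  # theta = 2*pi/spin, expressed in full turns
    phase_after_one_turn = np.exp(1j * spin * 2.0 * np.pi).real  # +1 or -1
    print(f"spin {spin}: unchanged after {turns_to_return:.2f} turn(s); "
          f"one full turn multiplies the state by {phase_after_one_turn:+.0f}")
```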

    To figure out what behavior is possible for particles of a given spin, bootstrappers consider simple particle interactions, such as two particles annihilating and yielding a third. The particles’ spins place constraints on these interactions. An interaction of spin-2 particles, for instance, must stay the same when all participating particles are rotated by 180 degrees, since they’re symmetric under such a half-turn.

    Interactions must obey a few other basic rules: Momentum must be conserved; the interactions must respect locality, which dictates that particles scatter by meeting in space and time; and the probabilities of all possible outcomes must add up to 1, a principle known as unitarity. These consistency conditions translate into algebraic equations that the particle interactions must satisfy. If the equation corresponding to a particular interaction has solutions, then these solutions tend to be realized in nature.

    For example, consider the case of the photon, the massless spin-1 particle of light and electromagnetism. For such a particle, the equation describing four-particle interactions — where two particles go in and two come out, perhaps after colliding and scattering — has no viable solutions. Thus, photons don’t interact in this way. “This is why light waves don’t scatter off each other and we can see over macroscopic distances,” Baumann explained. The photon can participate in interactions involving other types of particles, however, such as spin-1/2 electrons. These constraints on the photon’s interactions lead to Maxwell’s equations, the 154-year-old theory of electromagnetism.


    Or take gluons, particles that convey the strong force that binds atomic nuclei together. Gluons are also massless spin-1 particles, but they represent the case where there are multiple types of the same massless spin-1 particle. Unlike the photon, gluons can satisfy the four-particle interaction equation, meaning that they self-interact. Constraints on these gluon self-interactions match the description given by quantum chromodynamics, the theory of the strong force.

    A third scenario involves spin-1 particles that have mass. Mass came about when a symmetry broke during the universe’s birth: A constant — the value of the omnipresent Higgs field — spontaneously shifted from zero to a positive number, imbuing many particles with mass. The breaking of the Higgs symmetry created massive spin-1 particles called W and Z bosons, the carriers of the weak force that’s responsible for radioactive decay.

    Then “for spin-2, a miracle happens,” said Adam Falkowski, a theoretical physicist at the Laboratory of Theoretical Physics in Orsay, France. In this case, the solution to the four-particle interaction equation at first appears to be beset with infinities. But physicists find that this interaction can proceed in three different ways, and that mathematical terms related to the three different options perfectly conspire to cancel out the infinities, which permits a solution.

    That solution is the graviton: a spin-2 particle that couples to itself and all other particles with equal strength. This evenhandedness leads straight to the central tenet of general relativity: the equivalence principle, Einstein’s postulate that gravity is indistinguishable from acceleration through curved space-time, and that gravitational mass and intrinsic mass are one and the same. Falkowski said of the bootstrap approach, “I find this reasoning much more compelling than the abstract one of Einstein.”

    Thus, by thinking through the constraints placed on fundamental particle interactions by basic symmetries, physicists can understand the existence of the strong and weak forces that shape atoms, and the forces of electromagnetism and gravity that sculpt the universe at large.

    In addition, bootstrappers find that many different spin-0 particles are possible. The only known example is the Higgs boson, the particle associated with the symmetry-breaking Higgs field that imbues other particles with mass. A hypothetical spin-0 particle called the inflaton may have driven the initial expansion of the universe. These particles’ lack of angular momentum means that fewer symmetries restrict their interactions. Because of this, bootstrappers can infer less about nature’s governing laws, and nature itself has more creative license.

    Spin-1/2 matter particles also have more freedom. These make up the family of massive particles we call matter, and they are individually differentiated by their masses and couplings to the various forces. Our universe contains, for example, spin-1/2 quarks that interact with both gluons and photons, and spin-1/2 neutrinos that interact with neither.

    The spin spectrum stops at 2 because the infinities in the four-particle interaction equation kill off all massless particles that have higher spin values. Higher-spin states can exist if they’re extremely massive, and such particles do play a role in quantum theories of gravity such as string theory. But higher-spin particles can’t be detected, and they can’t affect the macroscopic world.

    Undiscovered Country

    Spin-3/2 particles could complete the 0, 1/2, 1, 3/2, 2 pattern, but only if “supersymmetry” is true in the universe — that is, if every force particle with integer spin has a corresponding matter particle with half-integer spin. In recent years, experiments have ruled out many of the simplest versions of supersymmetry. But the gap in the spin spectrum strikes some physicists as a reason to hold out hope that supersymmetry is true and spin-3/2 particles exist.

    In his work, Baumann applies the bootstrap to the beginning of the universe. A recent Quanta article described how he and other physicists used symmetries and other principles to constrain the possibilities for those first moments.

    It’s “just aesthetically pleasing,” Baumann said, “that the laws are inevitable — that there is some inevitability of the laws of physics that can be summarized by a short handful of principles that then lead to building blocks that then build up the macroscopic world.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 10:30 am on December 14, 2019 Permalink | Reply
    Tags: Quanta Magazine, Thermalization, Time’s arrow is irreversible

    From Quanta Magazine: “The Universal Law That Aims Time’s Arrow” 

    From Quanta Magazine

    August 1, 2019 [Just now in social media]
    Natalie Wolchover

    Coffee and the cosmos at large both approach thermal equilibrium. Rolando Barry for Quanta Magazine.

    Pour milk in coffee, and the eddies and tendrils of white soon fade to brown. In half an hour, the drink cools to room temperature. Left for days, the liquid evaporates. After centuries, the cup will disintegrate, and billions of years later, the entire planet, sun and solar system will disperse. Throughout the universe, all matter and energy is diffusing out of hot spots like coffee and stars, ultimately destined (after trillions of years) to spread uniformly through space. In other words, the same future awaits coffee and the cosmos.

    This gradual spreading of matter and energy, called “thermalization,” aims the arrow of time. But the fact that time’s arrow is irreversible, so that hot coffee cools down but never spontaneously heats up, isn’t written into the underlying laws that govern the motion of the molecules in the coffee. Rather, thermalization is a statistical outcome: The coffee’s heat is far more likely to spread into the air than the cold air molecules are to concentrate energy into the coffee, just as shuffling a new deck of cards randomizes the cards’ order, and repeat shuffles will practically never re-sort them by suit and rank. Once coffee, cup and air reach thermal equilibrium, no more energy flows between them, and no further change occurs. Thus thermal equilibrium on a cosmic scale is dubbed the “heat death of the universe.”
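
    The statistical nature of that one-way flow can be seen in a toy model, the classic Ehrenfest urn (nothing specific to coffee): with many "energy quanta" hopping at random between a hot cup and the cooler air, drift toward the even split is overwhelmingly more likely than a spontaneous return to the ordered start.

```python
import random

random.seed(1)
n_quanta, in_cup = 100, 100  # all quanta start in the cup
history = []

# Each step, pick one quantum at random and move it to the other side.
for step in range(2001):
    if random.randrange(n_quanta) < in_cup:
        in_cup -= 1  # the chosen quantum was in the cup; it leaves
    else:
        in_cup += 1  # the chosen quantum was in the air; it returns
    if step % 400 == 0:
        history.append(in_cup)

print("quanta remaining in the cup:", history)  # relaxes toward ~50
```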

    But while it’s easy to see where thermalization leads (to tepid coffee and eventual heat death), it’s less obvious how the process begins. “If you start far from equilibrium, like in the early universe, how does the arrow of time emerge, starting from first principles?” said Jürgen Berges, a theoretical physicist at Heidelberg University in Germany who has studied this problem for more than a decade.

Over the last few years, Berges and a network of colleagues have uncovered a surprising answer. The researchers have discovered simple, so-called “universal” laws [World Scientific] governing the initial stages of change in a variety of systems consisting of many particles that are far from thermal equilibrium. Their calculations indicate that these systems — examples include the hottest plasma ever produced on Earth and the coldest gas, and perhaps also the field of energy that theoretically filled the universe in its first split second — begin to evolve in time in a way described by the same handful of universal numbers, no matter what the systems consist of.

The findings suggest that the initial stages of thermalization play out in a way that’s very different from what comes later. In particular, far-from-equilibrium systems exhibit fractal-like behavior, which means they look very much the same at different spatial and temporal scales. Their properties are shifted only by a so-called “scaling exponent” — and scientists are discovering that these exponents are often simple numbers like 1/2 and −1/3. For example, particles’ speeds at one instant can be rescaled, according to the scaling exponent, to give the distribution of speeds at any time later or earlier. All kinds of quantum systems in various extreme starting conditions seem to fall into this fractal-like pattern, exhibiting universal scaling for a period of time before transitioning to standard thermalization.
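To make the rescaling idea concrete, here is a minimal numerical sketch (ours, with a made-up shape function and illustrative exponents rather than values fitted from the papers): snapshots of a self-similar distribution f(p, t) = t^α f_s(t^β p), taken at different times, collapse onto a single universal curve once rescaled.

```python
# Self-similar "universal scaling": snapshots at different times collapse onto
# one curve after rescaling by powers of t. The shape function and exponents
# are illustrative placeholders, not fitted physical values.
import numpy as np

alpha, beta = -1.5, -0.5        # illustrative scaling exponents

def f_universal(x):
    """Hypothetical universal shape function."""
    return np.exp(-x**2)

def f(p, t):
    """A distribution evolving self-similarly: f(p, t) = t^alpha * f_s(t^beta * p)."""
    return t**alpha * f_universal(t**beta * p)

p = np.linspace(0.0, 5.0, 200)
for t in (1.0, 2.0, 4.0, 8.0):
    # undo the rescaling: every snapshot reduces to the same universal curve
    collapsed = t**(-alpha) * f(p / t**beta, t)
    assert np.allclose(collapsed, f_universal(p))
print("snapshots at t = 1, 2, 4, 8 all collapse onto one universal curve")
```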

    “I find this work exciting because it pulls out a unifying principle that we can use to understand large classes of far-from-equilibrium systems,” said Nicole Yunger Halpern, a quantum physicist at Harvard University who is not involved in the work. “These studies offer hope that we can describe even these very messy, complicated systems with simple patterns.”

    Berges is widely seen as leading the theoretical effort, with a series of seminal papers since 2008 elucidating the physics of universal scaling. He and a co-author took another step this spring in a paper in Physical Review Letters that explored “prescaling,” the ramp-up to universal scaling. A group led by Thomas Gasenzer of Heidelberg also investigated prescaling in a [Physical Review Letters] paper in May, offering a deeper look at the onset of the fractal-like behavior.

    Some researchers are now exploring far-from-equilibrium dynamics in the lab, as others dig into the origins of the universal numbers. Experts say universal scaling is also helping to address deep conceptual questions about how quantum systems are able to thermalize at all.

    There’s “chaotic progress on various fronts,” said Zoran Hadzibabic of the University of Cambridge. He and his team are studying universal scaling in a hot gas of potassium-39 atoms by suddenly dialing up the atoms’ interaction strength, then letting them evolve.

    Energy Cascades

    When Berges began studying far-from-equilibrium dynamics, he wanted to understand the extreme conditions at the beginning of the universe when the particles that now populate the cosmos originated.

    These conditions would have occurred right after “cosmic inflation” — the explosive expansion of space thought by many cosmologists to have jump-started the Big Bang. Inflation would have blasted away any existing particles, leaving only the uniform energy of space itself: a perfectly smooth, dense, oscillating field of energy known as a “condensate.” Berges modeled this condensate in 2008 [Physical Review Letters] with collaborators Alexander Rothkopf and Jonas Schmidt, and they discovered that the first stages of its evolution should have exhibited fractal-like universal scaling. “You find that when this big condensate decayed into the particles that we observe today, that this process can be very elegantly described by a few numbers,” he said.

To understand what this universal scaling phenomenon looks like, consider a vivid historical precursor of the recent discoveries. In 1941, the Russian mathematician Andrey Kolmogorov described the way energy “cascades” through turbulent fluids. When you’re stirring coffee, for instance, you create a vortex on a large spatial scale. Kolmogorov realized that this vortex will spontaneously generate smaller eddies, which spawn still smaller eddies. As you stir the coffee, the energy you inject into the system cascades down the spatial scales into smaller and smaller eddies, with the rate of the transfer of energy across scales described by a universal power law with exponent −5/3, which Kolmogorov deduced from the fluid’s dimensions.
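In modern notation, Kolmogorov’s result is usually written as a power law for the turbulent kinetic-energy spectrum in the “inertial range”: E(k) ≈ C ε^(2/3) k^(−5/3), where k is the wavenumber (inverse eddy size) and ε the rate of energy dissipation. A short sketch (ours, with placeholder values for C and ε) recovers the exponent from the spectrum’s log-log slope:

```python
# Kolmogorov's -5/3 law: E(k) = C * eps**(2/3) * k**(-5/3) in the inertial range.
# C (the Kolmogorov constant) and eps are illustrative placeholder values.
import numpy as np

C, eps = 1.5, 0.1                 # ~1.5 is the commonly quoted constant
k = np.logspace(0, 3, 100)        # wavenumbers spanning the inertial range
E = C * eps**(2/3) * k**(-5/3)    # energy spectrum

# a power law is a straight line on log-log axes; fit its slope
slope = np.polyfit(np.log(k), np.log(E), 1)[0]
print(f"fitted spectral slope: {slope:.4f}")   # -> -1.6667, i.e. -5/3
```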

    Kolmogorov’s “−5/3 law” always seemed mysterious, even as it served as a cornerstone of turbulence research. But now physicists have been finding essentially the same cascading, fractal-like universal scaling phenomenon in far-from-equilibrium dynamics. According to Berges, energy cascades probably arise in both contexts because they are the most efficient way to distribute energy across scales. We instinctively know this. “If you want to distribute your sugar in your coffee, you stir it,” Berges said — as opposed to shaking it. “You know that’s the most efficient way to redistribute energy.”

    There’s one key difference between the universal scaling phenomenon in far-from-equilibrium systems and the fractal eddies in a turbulent fluid: In the fluid case, Kolmogorov’s law describes energy cascading across spatial dimensions. In the new work, researchers see far-from-equilibrium systems undergoing fractal-like universal scaling across both time and space.

    Take the birth of the universe. After cosmic inflation, the hypothetical oscillating, space-filling condensate would have quickly transformed into a dense field of quantum particles all moving with the same characteristic speed. Berges and his colleagues conjecture that these far-from-equilibrium particles then exhibited fractal scaling governed by universal scaling exponents as they began the thermal evolution of the universe.

    3
    Lucy Reading-Ikkanda/Quanta Magazine

    According to the team’s calculations and computer simulations, instead of a single cascade like the one you’d find in a turbulent fluid, there would have been two cascades, going in opposite directions. Most of the particles in the system would have slowed from one moment to the next, cascading to slower and slower speeds at a characteristic rate — in this case, with a scaling exponent of approximately −3/2. Eventually they would have reached a standstill, forming another condensate [Physical Review Letters]. (This one wouldn’t oscillate or transform into particles; instead it would gradually decay.) Meanwhile, the majority of the energy leaving the slowing particles would have cascaded to a few particles that gained speed at a rate governed by the exponent 1/2. Essentially, these particles started to move extremely fast.

    The fast particles would have subsequently decayed into the quarks, electrons and other elementary particles that exist today. These particles would then have undergone standard thermalization, scattering off each other and distributing their energy. That process is still ongoing in the present-day universe and will continue for trillions of years.

    Simplicity Occurs

    The ideas about the early universe aren’t easily testable. But around 2012, the researchers realized that a far-from-equilibrium scenario also arises in experiments — namely, when heavy atomic nuclei are smashed together at nearly the speed of light in the Relativistic Heavy Ion Collider in New York and in Europe’s Large Hadron Collider.


    BNL RHIC

    CERN LHC

    These nuclear collisions create extreme configurations of matter and energy, which then start to relax toward equilibrium. You might think the collisions would produce a complicated mess. But when Berges and his colleagues analyzed the collisions theoretically, they found structure and simplicity. The dynamics, Berges said, “can be encoded in a few numbers.”

    The pattern continued. Around 2015, after talking to experimentalists who were probing ultracold atomic gases in the lab, Berges, Gasenzer and other theorists calculated that these systems should also exhibit universal scaling after being rapidly cooled to conditions extremely far from equilibrium.

Last fall, two groups — one led by Markus Oberthaler of Heidelberg and the other by Jörg Schmiedmayer of the Vienna Center for Quantum Science and Technology — reported simultaneously in Nature that they had observed fractal-like universal scaling in the way various properties of the 100,000-or-so atoms in their gases changed over space and time. “Again, simplicity occurs,” said Berges, who was one of the first to predict the phenomenon in such systems. “You can see that the dynamics can be described by a few scaling exponents and universal scaling functions. And some of them turned out to be the same as what was predicted for particles in the early universe. That’s the universality.”

    The researchers now believe that the universal scaling phenomenon occurs at the nanokelvin scale of ultracold atoms, the 10-trillion-kelvin scale of nuclear collisions, and the 10,000-trillion-trillion-kelvin scale of the early universe. “That’s the point of universality — that you can expect to see these phenomena on different energy and length scales,” Berges said.

    The case of the early universe may hold the most intrinsic interest, but it’s the highly controlled, isolated laboratory systems that are enabling scientists to tease out the universal rules governing the beginning stages of change. “We know everything that’s in the box,” as Hadzibabic put it. “It’s this isolation from the environment that allows you to study the phenomenon in its pure form.”

    One major thrust has been to figure out where systems’ scaling exponents come from. In some cases, experts have traced the exponents [Physical Review D] to the number of spatial dimensions a system occupies, as well as its symmetries — that is, all the ways it can be transformed without changing (just as a square stays the same when rotated by 90 degrees).

    Those insights are helping to address a paradox about what happens to information about the past as systems thermalize. Quantum mechanics requires that as particles evolve, information about their past is never lost. And yet, thermalization seems to contradict this: When two neglected cups of coffee are both at room temperature, how can you tell which one started out hotter?

    It seems that as a system begins to evolve, key details, like its symmetries, are retained and become encoded in the scaling exponents dictating its fractal evolution, while other details, like the initial configuration of its particles or the interactions between them, become irrelevant to its behavior, scrambled among its particles.

    And this scrambling process happens very early indeed. In their papers this spring, Berges, Gasenzer and their collaborators independently described prescaling for the first time, a period before universal scaling that their papers predicted for nuclear collisions and ultracold atoms, respectively. Prescaling suggests that when a system first evolves from its initial, far-from-equilibrium condition, scaling exponents don’t yet perfectly describe it. The system retains some of its previous structure — remnants of its initial configuration. But as prescaling progresses, the system assumes a more universal form in space and time, essentially obscuring irrelevant information about its own past. If this idea is borne out by future experiments, prescaling may be the nocking of time’s arrow onto the bowstring.

See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 10:23 am on December 2, 2019 Permalink | Reply
    Tags: , , , Cosmic inflation yields pristine flatness, , ESA/Planck CMB, ΛCDM does not predict any curvature; it says the universe is flat., , perhaps the universe is really closed., Quanta Magazine, What Shape Is the Universe?   

    From Quanta Magazine: “What Shape Is the Universe? A New Study Suggests We’ve Got It All Wrong” 

    Quanta Magazine
    From Quanta Magazine

    November 4, 2019
    Natalie Wolchover

    When researchers reanalyzed the gold-standard data set of the early universe, they concluded that the cosmos must be “closed,” or curled up like a ball. Most others remain unconvinced.

    1
    Lucy Reading-Ikkanda/Quanta Magazine
    In a flat universe, as seen on the left, a straight line will extend out to infinity. A closed universe, right, is curled up like the surface of a sphere. In it, a straight line will eventually return to its starting point.

    A provocative paper published today in the journal Nature Astronomy argues that the universe may curve around and close in on itself like a sphere, rather than lying flat like a sheet of paper as the standard theory of cosmology predicts. The authors reanalyzed a major cosmological data set and concluded that the data favors a closed universe with 99% certainty — even as other evidence suggests the universe is flat.

    The data in question — the Planck space telescope’s observations of ancient light called the cosmic microwave background (CMB) — “clearly points towards a closed model,” said Alessandro Melchiorri of Sapienza University of Rome.

    CMB per ESA/Planck

    ESA/Planck 2009 to 2013

    He co-authored the new paper with Eleonora di Valentino of the University of Manchester and Joseph Silk, principally of the University of Oxford. In their view, the discordance between the CMB data, which suggests the universe is closed, and other data pointing to flatness represents a “cosmological crisis” that calls for “drastic rethinking.”

    However, the team of scientists behind the Planck telescope reached different conclusions in their 2018 analysis. Antony Lewis, a cosmologist at the University of Sussex and a member of the Planck team who worked on that analysis, said the simplest explanation for the specific feature in the CMB data that di Valentino, Melchiorri and Silk interpreted as evidence for a closed universe “is that it is just a statistical fluke.” Lewis and other experts say they’ve already closely scrutinized the issue, along with related puzzles in the data.

    “There is no dispute that these symptoms exist at some level,” said Graeme Addison, a cosmologist at Johns Hopkins University who was not involved in the Planck analysis or the new research. “There is only disagreement as to the interpretation.”

    Whether the universe is flat — that is, whether two light beams shooting side by side through space will stay parallel forever, rather than eventually crossing and swinging back around to where they started, as in a closed universe — critically depends on the universe’s density. If all the matter and energy in the universe, including dark matter and dark energy, adds up to exactly the concentration at which the energy of the outward expansion balances the energy of the inward gravitational pull, space will extend flatly in all directions.

    The leading theory of the universe’s birth, known as cosmic inflation, yields pristine flatness.

    Inflation

    4
    Alan Guth, from Highland Park High School and M.I.T., who first proposed cosmic inflation

    HPHS Owls

    Alan Guth’s notes:

    Alan Guth’s original notes on inflation

    And various observations since the early 2000s have shown that our universe is very nearly flat and must therefore come within a hair of this critical density — which is calculated to be about 5.7 hydrogen atoms’ worth of stuff per cubic meter of space, much of it invisible.
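That figure follows from the critical-density formula ρ_c = 3H₀²/8πG. Here is a back-of-the-envelope check (ours), assuming a round Hubble constant of 70 km/s/Mpc; slightly larger values of H₀ give the article’s 5.7:

```python
# Critical density of the universe, rho_c = 3 H0^2 / (8 pi G), expressed in
# hydrogen atoms per cubic meter. H0 = 70 km/s/Mpc is an assumed round value.
import math

G   = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22      # meters per megaparsec
m_H = 1.67e-27      # mass of a hydrogen atom, kg

H0 = 70e3 / Mpc     # Hubble constant converted to 1/s

rho_c = 3 * H0**2 / (8 * math.pi * G)          # ~9.2e-27 kg/m^3
print(f"{rho_c / m_H:.1f} hydrogen atoms per cubic meter")   # ~5.5
```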

    The Planck telescope measures the density of the universe by gauging how much the CMB light has been deflected or “gravitationally lensed” while passing through the universe over the past 13.8 billion years. The more matter these CMB photons encounter on their journey to Earth, the more lensed they get, so that their direction no longer crisply reflects their starting point in the early universe. This shows up in the data as a blurring effect, which smooths out certain peaks and dips in the spatial pattern of the light. According to the new analysis, the large amount of lensing of the CMB suggests that the universe may be about 5% denser than the critical density, averaging something like six hydrogen atoms per cubic meter instead of 5.7, so that gravity wins and the cosmos closes in on itself.

    The Planck scientists noticed the larger-than-expected lensing effect years ago; the anomaly showed up most prominently in their final analysis of the full data set, released last year. If the universe is flat, cosmologists expect a curvature measurement to fall within about one “standard deviation” of zero, due to random statistical fluctuations in the data. But both the Planck team and the authors of the new paper found that the CMB data deviates by 3.4 standard deviations. Assuming that the universe is flat, this is a major fluke — about equivalent to getting heads in a coin toss 11 times in a row, which happens less than 1% of the time. The Planck team attributes the measurement to just such a fluke, or to some unaccounted-for effect that blurs the CMB light, mimicking the effect of extra matter.
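A quick numerical check of that comparison (ours; the SciPy call is just one convenient way to get the Gaussian tail probability):

```python
# How rare is a 3.4-sigma deviation, and how does it compare to 11 heads in a row?
from scipy.stats import norm

p_sigma = 2 * norm.sf(3.4)    # two-tailed probability of a >= 3.4-sigma fluctuation
p_coins = 0.5 ** 11           # probability of 11 heads in 11 fair flips

print(f"3.4 sigma : {p_sigma:.5f}")   # ~0.00067
print(f"11 heads  : {p_coins:.5f}")   # ~0.00049 -- both far below 1%
```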

    Or perhaps the universe is really closed. Di Valentino and co-authors point out that a closed model resolves other anomalous findings in the CMB. For instance, researchers deduce the values of key ingredients of our universe, such as the amount of dark matter and dark energy, by measuring variations in the color of the CMB light coming from different regions of the sky. But curiously, they get different answers when they compare small regions of the sky and when they compare large regions. The authors point out that when you recalculate these values assuming a closed universe, they don’t differ.

    Will Kinney, a cosmologist at the University at Buffalo in New York, called this bonus benefit of the closed universe model “really interesting.” But he noted that the discrepancies between small and large-scale variations seen in the CMB light could easily be statistical fluctuations themselves, or they might stem from the same unidentified error that may affect the lensing measurement.

    There are only six of these key properties that shape the universe, according to the standard theory of cosmology, which is known as ΛCDM (named for dark energy, represented by the Greek letter Λ, or lambda, and cold dark matter).

Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe). Date: 2010. Credit: Alex Mittelmann, Coldcreation

    With only six numbers, ΛCDM accurately describes almost all features of the cosmos. And ΛCDM does not predict any curvature; it says the universe is flat.

    The new paper effectively argues that we may need to add a seventh parameter to ΛCDM: a number that describes the curvature of the universe. For the lensing measurement, adding a seventh number improves the fit with the data.

    But other cosmologists argue that before taking an anomaly seriously enough to add a seventh parameter to the theory, we need to take into account all the other things that ΛCDM gets right. Sure, we can focus on this one anomaly — a coin coming up heads 11 times in a row — and say that something’s off. But the CMB is such a huge data set that it’s like flipping a coin hundreds or thousands of times. It’s not too hard to imagine that in doing so, we’ll encounter one random run of 11 heads. Physicists call this the “look elsewhere” effect.
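A toy simulation (ours, for illustration only) makes the look-elsewhere point vivid: an 11-head run almost never shows up in 11 flips, but it becomes nearly certain somewhere in a long enough sequence.

```python
# "Look elsewhere" effect: the chance of an 11-head run somewhere in a sequence
# grows with its length, from ~0.05% at 11 flips to near-certainty at 20,000.
import random

def run_probability(n_flips, run_len=11, trials=400, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        streak = best = 0
        for _ in range(n_flips):
            streak = streak + 1 if rng.random() < 0.5 else 0
            best = max(best, streak)
        if best >= run_len:
            hits += 1
    return hits / trials

for n in (11, 2000, 20000):
    print(f"{n:6d} flips -> P(run of 11 heads) ~ {run_probability(n):.2f}")
```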

    Furthermore, researchers note that the seventh parameter isn’t needed for most other measurements. There’s a second way of gleaning the spatial curvature from the CMB, by measuring correlations between light from sets of four points in the sky; this “lensing reconstruction” measurement indicates that the universe is flat, with no seventh parameter needed. In addition, the BOSS survey’s independent observations of cosmological signals called baryon acoustic oscillations also point to flatness. Planck, in their 2018 analysis, combined their lensing measurement with these two other measurements and arrived at an overall value for the spatial curvature within one standard deviation of zero.

    Di Valentino, Melchiorri and Silk think that pulling these three different data sets together masks the fact that the different data sets don’t actually agree. “The point here is not that the universe is closed,” Melchiorri said by email. “The problem is the inconsistency between the data. This indicates that there is currently no concordance model and that we are missing something.” In other words, ΛCDM is wrong or incomplete.

    All other researchers consulted for this article think the weight of the evidence points to the universe being flat. “Given the other measurements,” Addison said, “the clearest interpretation of this behavior of the Planck data is that it’s a statistical fluctuation. Maybe it’s caused by some slight inaccuracy in the Planck analysis, or maybe it’s completely just noise fluctuations or random chance. But either way, there’s not really a reason to take this closed model seriously.”

    That’s not to say pieces aren’t missing from the cosmological picture. ΛCDM seemingly predicts the wrong value for the current expansion rate of the universe, causing a controversy known as the Hubble constant problem. But assuming the universe is closed doesn’t fix this problem — in fact, adding curvature worsens the prediction of the expansion rate. Other than Planck’s anomalous lensing measurement, there’s no reason to think the universe is closed.

    “Time will tell, but I am not, personally, terribly worried about this one,” Kinney said, referring to the suggestion of curvature in the CMB data. “It’s of a kind with similar anomalies that have proven to be vapor.”

See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:50 pm on March 18, 2019 Permalink | Reply
    Tags: "AI Algorithms Are Now Shockingly Good at Doing Science", Quanta Magazine,   

    From Quanta via WIRED: “AI Algorithms Are Now Shockingly Good at Doing Science” 

    Quanta Magazine
    Quanta Magazine

    via

    Wired logo

    From WIRED

    3.17.19
    Dan Falk

    1
    Whether probing the evolution of galaxies or discovering new chemical compounds, algorithms are detecting patterns no humans could have spotted. Rachel Suggs/Quanta Magazine

    No human, or team of humans, could possibly keep up with the avalanche of information produced by many of today’s physics and astronomy experiments. Some of them record terabytes of data every day—and the torrent is only increasing. The Square Kilometer Array, a radio telescope slated to switch on in the mid-2020s, will generate about as much data traffic each year as the entire internet.

    SKA Square Kilometer Array

    The deluge has many scientists turning to artificial intelligence for help. With minimal human input, AI systems such as artificial neural networks—computer-simulated networks of neurons that mimic the function of brains—can plow through mountains of data, highlighting anomalies and detecting patterns that humans could never have spotted.

    Of course, the use of computers to aid in scientific research goes back about 75 years, and the method of manually poring over data in search of meaningful patterns originated millennia earlier. But some scientists are arguing that the latest techniques in machine learning and AI represent a fundamentally new way of doing science. One such approach, known as generative modeling, can help identify the most plausible theory among competing explanations for observational data, based solely on the data, and, importantly, without any preprogrammed knowledge of what physical processes might be at work in the system under study. Proponents of generative modeling see it as novel enough to be considered a potential “third way” of learning about the universe.

    Traditionally, we’ve learned about nature through observation. Think of Johannes Kepler poring over Tycho Brahe’s tables of planetary positions and trying to discern the underlying pattern. (He eventually deduced that planets move in elliptical orbits.) Science has also advanced through simulation. An astronomer might model the movement of the Milky Way and its neighboring galaxy, Andromeda, and predict that they’ll collide in a few billion years. Both observation and simulation help scientists generate hypotheses that can then be tested with further observations. Generative modeling differs from both of these approaches.

Milkdromeda: Andromeda (on the left) in Earth’s night sky in 3.75 billion years. NASA

    “It’s basically a third approach, between observation and simulation,” says Kevin Schawinski, an astrophysicist and one of generative modeling’s most enthusiastic proponents, who worked until recently at the Swiss Federal Institute of Technology in Zurich (ETH Zurich). “It’s a different way to attack a problem.”

    Some scientists see generative modeling and other new techniques simply as power tools for doing traditional science. But most agree that AI is having an enormous impact, and that its role in science will only grow. Brian Nord, an astrophysicist at Fermi National Accelerator Laboratory who uses artificial neural networks to study the cosmos, is among those who fear there’s nothing a human scientist does that will be impossible to automate. “It’s a bit of a chilling thought,” he said.


    Discovery by Generation

    Ever since graduate school, Schawinski has been making a name for himself in data-driven science. While working on his doctorate, he faced the task of classifying thousands of galaxies based on their appearance. Because no readily available software existed for the job, he decided to crowdsource it—and so the Galaxy Zoo citizen science project was born.

    Galaxy Zoo via Astrobites

    Beginning in 2007, ordinary computer users helped astronomers by logging their best guesses as to which galaxy belonged in which category, with majority rule typically leading to correct classifications. The project was a success, but, as Schawinski notes, AI has made it obsolete: “Today, a talented scientist with a background in machine learning and access to cloud computing could do the whole thing in an afternoon.”

    Schawinski turned to the powerful new tool of generative modeling in 2016. Essentially, generative modeling asks how likely it is, given condition X, that you’ll observe outcome Y. The approach has proved incredibly potent and versatile. As an example, suppose you feed a generative model a set of images of human faces, with each face labeled with the person’s age. As the computer program combs through these “training data,” it begins to draw a connection between older faces and an increased likelihood of wrinkles. Eventually it can “age” any face that it’s given—that is, it can predict what physical changes a given face of any age is likely to undergo.

    3
    None of these faces is real. The faces in the top row (A) and left-hand column (B) were constructed by a generative adversarial network (GAN) using building-block elements of real faces. The GAN then combined basic features of the faces in A, including their gender, age and face shape, with finer features of faces in B, such as hair color and eye color, to create all the faces in the rest of the grid. NVIDIA

The best-known generative modeling systems are “generative adversarial networks” (GANs). After adequate exposure to training data, GANs can repair images that have damaged or missing pixels or make blurry photographs sharp. They learn to infer the missing information by means of a competition (hence the term “adversarial”): One part of the network, known as the generator, generates fake data, while a second part, the discriminator, tries to distinguish fake data from real data. As the program runs, both halves get progressively better. You may have seen some of the hyper-realistic, GAN-produced “faces” that have circulated recently — images of “freakishly realistic people who don’t actually exist,” as one headline put it.
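For readers curious about the mechanics, here is a deliberately tiny adversarial training loop in PyTorch (ours): toy one-dimensional data rather than images, and nothing like the networks behind the face images above, but it shows the generator and discriminator sharpening each other.

```python
# A minimal GAN on toy 1-D data: the generator learns to mimic samples drawn
# from N(2.0, 0.5) by trying to fool the discriminator. Purely illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0     # "real" data
    fake = G(torch.randn(64, 8))              # generator maps noise to samples

    # discriminator update: label real samples 1, fakes 0
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # generator update: make the discriminator call fakes real
    loss_G = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts toward the real mean, ~2.0
```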

    More broadly, generative modeling takes sets of data (typically images, but not always) and breaks each of them down into a set of basic, abstract building blocks — scientists refer to this as the data’s “latent space.” The algorithm manipulates elements of the latent space to see how this affects the original data, and this helps uncover physical processes that are at work in the system.

    The idea of a latent space is abstract and hard to visualize, but as a rough analogy, think of what your brain might be doing when you try to determine the gender of a human face. Perhaps you notice hairstyle, nose shape, and so on, as well as patterns you can’t easily put into words. The computer program is similarly looking for salient features among data: Though it has no idea what a mustache is or what gender is, if it’s been trained on data sets in which some images are tagged “man” or “woman,” and in which some have a “mustache” tag, it will quickly deduce a connection.

    In a paper published in December in Astronomy & Astrophysics, Schawinski and his ETH Zurich colleagues Dennis Turp and Ce Zhang used generative modeling to investigate the physical changes that galaxies undergo as they evolve. (The software they used treats the latent space somewhat differently from the way a generative adversarial network treats it, so it is not technically a GAN, though similar.) Their model created artificial data sets as a way of testing hypotheses about physical processes. They asked, for instance, how the “quenching” of star formation—a sharp reduction in formation rates—is related to the increasing density of a galaxy’s environment.

    For Schawinski, the key question is how much information about stellar and galactic processes could be teased out of the data alone. “Let’s erase everything we know about astrophysics,” he said. “To what degree could we rediscover that knowledge, just using the data itself?”

    First, the galaxy images were reduced to their latent space; then, Schawinski could tweak one element of that space in a way that corresponded to a particular change in the galaxy’s environment—the density of its surroundings, for example. Then he could re-generate the galaxy and see what differences turned up. “So now I have a hypothesis-generation machine,” he explained. “I can take a whole bunch of galaxies that are originally in a low-density environment and make them look like they’re in a high-density environment, by this process.” Schawinski, Turp and Zhang saw that, as galaxies go from low- to high-density environments, they become redder in color, and their stars become more centrally concentrated. This matches existing observations about galaxies, Schawinski said. The question is why this is so.
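Schematically, that experiment looks like the sketch below, where a tiny autoencoder stands in for the published model and the choice of which latent coordinate encodes “environment density” is a hypothetical stand-in of ours:

```python
# Encode a galaxy into latent space, nudge one latent coordinate, and decode.
# The tiny autoencoder and the "density axis" are illustrative stand-ins.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 4))
decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 64))

galaxy = torch.randn(1, 64)          # placeholder for a flattened galaxy image
z = encoder(galaxy)                  # the galaxy's latent-space representation

z_dense = z.clone()
z_dense[0, 2] += 1.0                 # nudge the hypothetical "environment density" axis

original  = decoder(z)               # regenerated galaxy, untouched latent code
perturbed = decoder(z_dense)         # the same galaxy "moved" to a denser environment
print((perturbed - original).abs().mean().item())  # how much the galaxy changed
```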

    The next step, Schawinski says, has not yet been automated: “I have to come in as a human, and say, ‘OK, what kind of physics could explain this effect?’” For the process in question, there are two plausible explanations: Perhaps galaxies become redder in high-density environments because they contain more dust, or perhaps they become redder because of a decline in star formation (in other words, their stars tend to be older). With a generative model, both ideas can be put to the test: Elements in the latent space related to dustiness and star formation rates are changed to see how this affects galaxies’ color. “And the answer is clear,” Schawinski said. Redder galaxies are “where the star formation had dropped, not the ones where the dust changed. So we should favor that explanation.”

    4
    Using generative modeling, astrophysicists could investigate how galaxies change when they go from low-density regions of the cosmos to high-density regions, and what physical processes are responsible for these changes. K. Schawinski et al.; doi: 10.1051/0004-6361/201833800

    The approach is related to traditional simulation, but with critical differences. A simulation is “essentially assumption-driven,” Schawinski said. “The approach is to say, ‘I think I know what the underlying physical laws are that give rise to everything that I see in the system.’ So I have a recipe for star formation, I have a recipe for how dark matter behaves, and so on. I put all of my hypotheses in there, and I let the simulation run. And then I ask: Does that look like reality?” What he’s done with generative modeling, he said, is “in some sense, exactly the opposite of a simulation. We don’t know anything; we don’t want to assume anything. We want the data itself to tell us what might be going on.”

    The apparent success of generative modeling in a study like this obviously doesn’t mean that astronomers and graduate students have been made redundant—but it appears to represent a shift in the degree to which learning about astrophysical objects and processes can be achieved by an artificial system that has little more at its electronic fingertips than a vast pool of data. “It’s not fully automated science—but it demonstrates that we’re capable of at least in part building the tools that make the process of science automatic,” Schawinski said.

    Generative modeling is clearly powerful, but whether it truly represents a new approach to science is open to debate. For David Hogg, a cosmologist at New York University and the Flatiron Institute (which, like Quanta, is funded by the Simons Foundation), the technique is impressive but ultimately just a very sophisticated way of extracting patterns from data—which is what astronomers have been doing for centuries.


    In other words, it’s an advanced form of observation plus analysis. Hogg’s own work, like Schawinski’s, leans heavily on AI; he’s been using neural networks to classify stars according to their spectra and to infer other physical attributes of stars using data-driven models. But he sees his work, as well as Schawinski’s, as tried-and-true science. “I don’t think it’s a third way,” he said recently. “I just think we as a community are becoming far more sophisticated about how we use the data. In particular, we are getting much better at comparing data to data. But in my view, my work is still squarely in the observational mode.”

    Hardworking Assistants

    Whether they’re conceptually novel or not, it’s clear that AI and neural networks have come to play a critical role in contemporary astronomy and physics research. At the Heidelberg Institute for Theoretical Studies, the physicist Kai Polsterer heads the astroinformatics group — a team of researchers focused on new, data-centered methods of doing astrophysics. Recently, they’ve been using a machine-learning algorithm to extract redshift information from galaxy data sets, a previously arduous task.

    Polsterer sees these new AI-based systems as “hardworking assistants” that can comb through data for hours on end without getting bored or complaining about the working conditions. These systems can do all the tedious grunt work, he said, leaving you “to do the cool, interesting science on your own.”

    But they’re not perfect. In particular, Polsterer cautions, the algorithms can only do what they’ve been trained to do. The system is “agnostic” regarding the input. Give it a galaxy, and the software can estimate its redshift and its age — but feed that same system a selfie, or a picture of a rotting fish, and it will output a (very wrong) age for that, too. In the end, oversight by a human scientist remains essential, he said. “It comes back to you, the researcher. You’re the one in charge of doing the interpretation.”

    For his part, Nord, at Fermilab, cautions that it’s crucial that neural networks deliver not only results, but also error bars to go along with them, as every undergraduate is trained to do. In science, if you make a measurement and don’t report an estimate of the associated error, no one will take the results seriously, he said.

    Like many AI researchers, Nord is also concerned about the impenetrability of results produced by neural networks; often, a system delivers an answer without offering a clear picture of how that result was obtained.

    Yet not everyone feels that a lack of transparency is necessarily a problem. Lenka Zdeborová, a researcher at the Institute of Theoretical Physics at CEA Saclay in France, points out that human intuitions are often equally impenetrable. You look at a photograph and instantly recognize a cat—“but you don’t know how you know,” she said. “Your own brain is in some sense a black box.”

    It’s not only astrophysicists and cosmologists who are migrating toward AI-fueled, data-driven science. Quantum physicists like Roger Melko of the Perimeter Institute for Theoretical Physics and the University of Waterloo in Ontario have used neural networks to solve some of the toughest and most important problems in that field, such as how to represent the mathematical “wave function” describing a many-particle system.

    Perimeter Institute in Waterloo, Canada


    AI is essential because of what Melko calls “the exponential curse of dimensionality.” That is, the possibilities for the form of a wave function grow exponentially with the number of particles in the system it describes. The difficulty is similar to trying to work out the best move in a game like chess or Go: You try to peer ahead to the next move, imagining what your opponent will play, and then choose the best response, but with each move, the number of possibilities proliferates.
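The curse is easy to quantify: a general wave function of N spin-1/2 particles takes 2^N complex amplitudes to store exactly, which is what makes compact neural-network representations attractive. A quick sketch (ours):

```python
# Exponential cost of storing a many-particle wave function exactly:
# N spin-1/2 particles require 2**N complex amplitudes.
for n in (10, 30, 50, 100):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30          # 16 bytes per complex128 amplitude
    print(f"{n:4d} particles -> 2^{n} amplitudes (~{gib:.3g} GiB)")
# 50 particles already demand ~1.7e7 GiB; 100 are hopeless to store directly,
# which is why compressed representations such as neural networks are used.
```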

    Of course, AI systems have mastered both of these games—chess, decades ago, and Go in 2016, when an AI system called AlphaGo defeated a top human player. They are similarly suited to problems in quantum physics, Melko says.

    The Mind of the Machine

    Whether Schawinski is right in claiming that he’s found a “third way” of doing science, or whether, as Hogg says, it’s merely traditional observation and data analysis “on steroids,” it’s clear AI is changing the flavor of scientific discovery, and it’s certainly accelerating it. How far will the AI revolution go in science?

    Occasionally, grand claims are made regarding the achievements of a “robo-scientist.” A decade ago, an AI robot chemist named Adam investigated the genome of baker’s yeast and worked out which genes are responsible for making certain amino acids. (Adam did this by observing strains of yeast that had certain genes missing, and comparing the results to the behavior of strains that had the genes.) Wired’s headline read, “Robot Makes Scientific Discovery All by Itself.”

    More recently, Lee Cronin, a chemist at the University of Glasgow, has been using a robot to randomly mix chemicals, to see what sorts of new compounds are formed.

Monitoring the reactions in real time with a mass spectrometer, a nuclear magnetic resonance machine, and an infrared spectrometer, the system eventually learned to predict which combinations would be the most reactive. Even if it doesn’t lead to further discoveries, Cronin has said, the robotic system could allow chemists to speed up their research by about 90 percent.

    Last year, another team of scientists at ETH Zurich used neural networks to deduce physical laws from sets of data. Their system, a sort of robo-Kepler, rediscovered the heliocentric model of the solar system from records of the position of the sun and Mars in the sky, as seen from Earth, and figured out the law of conservation of momentum by observing colliding balls. Since physical laws can often be expressed in more than one way, the researchers wonder if the system might offer new ways—perhaps simpler ways—of thinking about known laws.

    These are all examples of AI kick-starting the process of scientific discovery, though in every case, we can debate just how revolutionary the new approach is. Perhaps most controversial is the question of how much information can be gleaned from data alone—a pressing question in the age of stupendously large (and growing) piles of it. In The Book of Why (2018), the computer scientist Judea Pearl and the science writer Dana Mackenzie assert that data are “profoundly dumb.” Questions about causality “can never be answered from data alone,” they write. “Anytime you see a paper or a study that analyzes the data in a model-free way, you can be certain that the output of the study will merely summarize, and perhaps transform, but not interpret the data.” Schawinski sympathizes with Pearl’s position, but he described the idea of working with “data alone” as “a bit of a straw man.” He’s never claimed to deduce cause and effect that way, he said. “I’m merely saying we can do more with data than we often conventionally do.”

    Another oft-heard argument is that science requires creativity, and that—at least so far—we have no idea how to program that into a machine. (Simply trying everything, like Cronin’s robo-chemist, doesn’t seem especially creative.) “Coming up with a theory, with reasoning, I think demands creativity,” Polsterer said. “Every time you need creativity, you will need a human.” And where does creativity come from? Polsterer suspects it is related to boredom—something that, he says, a machine cannot experience. “To be creative, you have to dislike being bored. And I don’t think a computer will ever feel bored.” On the other hand, words like “creative” and “inspired” have often been used to describe programs like Deep Blue and AlphaGo. And the struggle to describe what goes on inside the “mind” of a machine is mirrored by the difficulty we have in probing our own thought processes.

Schawinski recently left academia for the private sector; he now runs a startup called Modulos, which employs a number of ETH scientists and, according to its website, works “in the eye of the storm of developments in AI and machine learning.” Whatever obstacles may lie between current AI technology and full-fledged artificial minds, he and other experts feel that machines are poised to do more and more of the work of human scientists. Whether there is a limit remains to be seen.

    “Will it be possible, in the foreseeable future, to build a machine that can discover physics or mathematics that the brightest humans alive are not able to do on their own, using biological hardware?” Schawinski wonders. “Will the future of science eventually necessarily be driven by machines that operate on a level that we can never reach? I don’t know. It’s a good question.”

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 10:21 am on January 28, 2019 Permalink | Reply
    Tags: , , , Black Hole Engines and Superbubble Shockwaves, BlueTides simulation on Blue Waters supercomputer, Cold dark matter halos, , , , Quanta Magazine, Simulation of the 14-billion-year history of the universe on a supercomputer, The Universe Is Not a Simulation but We Can Now Simulate It   

    From Quanta Magazine: “The Universe Is Not a Simulation, but We Can Now Simulate It” 

    Quanta Magazine
    From Quanta Magazine

    June 12, 2018 [Just found this.]
    Natalie Wolchover

    1
    From video by Mark Volgersberger/IllustrisTNG for Quanta
    The evolution of magnetic fields in a 10-Megaparsec section of the IllustrisTNG universe simulation. Regions of low magnetic energy appear in blue and purple, while orange and white correspond to more magnetically energetic regions inside dark matter halos and galaxies.

    In the early 2000s, a small community of coder-cosmologists set out to simulate the 14-billion-year history of the universe on a supercomputer. They aimed to create a proxy of the cosmos, a Cliffs Notes version in computer code that could run in months instead of giga-years, to serve as a laboratory for studying the real universe.

    The simulations failed spectacularly. Like mutant cells in a petri dish, mock galaxies grew all wrong, becoming excessively starry blobs instead of gently rotating spirals. When the researchers programmed in supermassive black holes at the centers of galaxies, the black holes either turned those galaxies into donuts or drifted out from galactic centers like monsters on the prowl.

    But recently, the scientists seem to have begun to master the science and art of cosmos creation. They are applying the laws of physics to a smooth, hot fluid of (simulated) matter, as existed in the infant universe, and seeing the fluid evolve into spiral galaxies and galaxy clusters like those in the cosmos today.

    “I was like, wow, I can’t believe it!” said Tiziana Di Matteo, a numerical cosmologist at Carnegie Mellon University, about seeing realistic spiral galaxies form for the first time in 2015 in the initial run of BlueTides, one of several major ongoing simulation series. “You kind of surprise yourself, because it’s just a bunch of lines of code, right?”

    2
    Tiziana Di Matteo, a professor of physics at Carnegie Mellon University, co-developed the MassiveBlack-II and BlueTides cosmological simulations.

    With the leap in mock-universe verisimilitude, researchers are now using their simulations as laboratories. After each run, they can peer into their codes and figure out how and why certain features of their simulated cosmos arise, potentially also explaining what’s going on in reality. The newly functional proxies have inspired explanations and hypotheses about the 84 percent of matter that’s invisible — the long-sought “dark matter” that seemingly engulfs galaxies. Formerly puzzling telescope observations about real galaxies that raised questions about the standard dark matter hypothesis are being explained in the state-of-the-art facsimiles.

    The simulations have also granted researchers such as Di Matteo virtual access to the supermassive black holes that anchor the centers of galaxies, whose formation in the early universe remains mysterious. “Now we are in an exciting place where we can actually use these models to make completely new predictions,” she said.

    Black Hole Engines and Superbubble Shockwaves

    Until about 15 years ago, most cosmological simulations didn’t even attempt to form realistic galaxies. They modeled only dark matter, which in the standard hypothesis interacts only gravitationally, making it much easier to code than the complicated atomic stuff we see.

    The dark-matter-only simulations found that roundish “halos” of invisible matter spontaneously formed with the right sizes and shapes to potentially cradle visible galaxies within them.

Caterpillar Project: a Milky-Way-size dark-matter halo and its subhalos (circled), from an enormous suite of simulations. Griffen et al. 2016

    Volker Springel, a leading coder-cosmologist at Heidelberg University in Germany, said, “These calculations were really instrumental to establish that the now-standard cosmological model, despite its two strange components — the dark matter and the dark energy — is actually a pretty promising prediction of what’s going on.”

    5
    Volker Springel, a professor at Heidelberg University, developed the simulation codes GADGET and AREPO, which is used in the state-of-the-art IllustrisTNG simulation [below]. HITS

    Researchers then started adding visible matter into their codes, stepping up the difficulty astronomically. Unlike dark matter halos, interacting atoms evolve complexly as the universe unfolds, giving rise to fantastic objects like stars and supernovas. Unable to code the physics in full, coders had to simplify and omit. Every team took a different approach to this abridgement, picking and programming what they saw as the key astrophysics.

    Then, in 2012, a study [AIP] by Cecilia Scannapieco of the Leibniz Institute for Astrophysics in Potsdam gave the field a wake-up call. “She convinced a bunch of people to run the same galaxy with all their codes,” said James Wadsley of McMaster University in Canada, who participated. “And everyone got it wrong.” All their galaxies looked different, and “everyone made too many stars.”

    3
    Henize 70 is a superbubble of hot expanding gas about 300 light-years across that is located within the Large Magellanic Cloud, a satellite of the Milky Way galaxy.
    Credit: FORS Team, 8.2-meter VLT, ESO

    ESO/FORS1 on the VLT


ESO VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft), seen from above. The four Unit Telescopes: ANTU (UT1; “The Sun”), KUEYEN (UT2; “The Moon”), MELIPAL (UT3; “The Southern Cross”), and YEPUN (UT4; “Venus,” as evening star). Credit: J.L. Dauvergne & G. Hüdepohl, atacama photo

    Scannapieco’s study was both “embarrassing,” Wadsley said, and hugely motivational: “That’s when people doubled down and realized they needed black holes, and they needed the supernovae to work better” in order to create credible galaxies. In real galaxies, he and others explained, star production is diminishing. As the galaxies run low on fuel, their lights are burning out and not being replaced. But in the simulations, Wadsley said, late-stage galaxies were “still making stars like crazy,” because gas wasn’t getting kicked out.

    The first of the two critical updates that have fixed the problem in the latest generation of simulations is the addition of supermassive black holes at spiral galaxies’ centers.

Sgr A*: the supermassive black hole at the center of the Milky Way. NASA/Chandra

    These immeasurably dense, bottomless pits in the space-time fabric, some weighing more than a billion suns, act as fuel-burning engines, messily eating surrounding stars, gas and dust and spewing the debris outward in lightsaber-like beams called jets. They’re the main reason present-day spiral galaxies form fewer stars than they used to.

    The other new key ingredient is supernovas — and the “superbubbles” formed from the combined shockwaves of hundreds of supernovas exploding in quick succession.

    This is an artist’s impression of the SN 1987A remnant. The image is based on real data and reveals the cold, inner regions of the remnant, in red, where tremendous amounts of dust were detected and imaged by ALMA. This inner region is contrasted with the outer shell, lacy white and blue circles, where the blast wave from the supernova is colliding with the envelope of gas ejected from the star prior to its powerful detonation. Image credit: ALMA / ESO / NAOJ / NRAO / Alexandra Angelich, NRAO / AUI / NSF.

    In a superbubble [see Henize 70 above], “a small galaxy over a few million years could blow itself apart,” said Wadsley, who integrated superbubbles into a code called GASOLINE2 in 2015. “They’re very kind of crazy extreme objects.” They occur because stars tend to live and die in clusters, forming by the hundreds of thousands as giant gas clouds collapse and later going supernova within about a million years of one another. Superbubbles sweep whole areas or even entire small galaxies clean of gas and dust, curbing star formation and helping to stir the pushed-out matter before it later recollapses. Their inclusion made small simulated galaxies much more realistic.

    4
    Jillian Bellovary, a numerical cosmologist at Queensborough Community College and the American Museum of Natural History in New York, put black holes into the GASOLINE simulation code. H.N. James.

    Jillian Bellovary, a wry young numerical cosmologist at Queensborough Community College and the American Museum of Natural History in New York, coded some of the first black holes, putting them into GASOLINE in 2008. Skipping or simplifying tons of physics, she programmed an equation dictating how much gas the black hole should consume as a function of the gas’s density and temperature, and a second equation telling the black hole how much energy to release. Others later built on Bellovary’s work, most importantly by figuring out how to keep black holes anchored at the centers of mock galaxies, while stopping them from blowing out so much gas that they’d form galactic donuts.
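Prescriptions of this general kind are often modeled on the Bondi-Hoyle accretion rate, Mdot ≈ 4πG²M²ρ/c_s³, where ρ is the local gas density and the sound speed c_s encodes its temperature. The sketch below (ours) is an illustrative version of such a recipe, not Bellovary’s actual implementation:

```python
# An illustrative Bondi-Hoyle-style accretion recipe: how much gas a black hole
# consumes, given local gas density and temperature. Not the actual GASOLINE code.
import math

G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23   # Boltzmann constant, J/K
m_p = 1.673e-27   # proton mass, kg

def accretion_rate(M_bh, rho_gas, T_gas, gamma=5/3, mu=0.6):
    """Bondi-style rate Mdot = 4 pi G^2 M^2 rho / c_s^3, in kg/s."""
    c_s = math.sqrt(gamma * k_B * T_gas / (mu * m_p))   # gas sound speed
    return 4 * math.pi * G**2 * M_bh**2 * rho_gas / c_s**3

M_sun = 1.989e30
# a million-solar-mass black hole sitting in warm (1e4 K), thin (1e-22 kg/m^3) gas:
rate = accretion_rate(1e6 * M_sun, 1e-22, 1e4)
print(rate / M_sun * 3.15e7, "solar masses per year")   # ~1e-4
```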

    Simulating all this physics for hundreds of thousands of galaxies at once takes immense computing power and cleverness. Modern supercomputers, having essentially maxed out the number of transistors they can pack upon a single chip, have expanded outward across as many as 100,000 parallel cores that crunch numbers in concert. Coders have had to figure out how to divvy up the cores — not an easy task when some parts of a simulated universe evolve quickly and complexly, while little happens elsewhere, and then conditions can switch on a dime. Researchers have found ways of dealing with this huge dynamic range with algorithms that adaptively allocate computer resources according to need.

    They’ve also fought and won a variety of logistical battles. For instance, “If you have two black holes eating the same gas,” Bellovary said, and they’re “on two different processors of the supercomputer, how do you have the black holes not eat the same particle?” Parallel processors “have to talk to each other,” she said.

    Saving Dark Matter

    The simulations finally work well enough to be used for science. With BlueTides, Di Matteo and collaborators are focusing on galaxy formation during the universe’s first 600 million years. Somehow, supermassive black holes wound up at the centers of dark matter halos during that period and helped pull rotating skirts of visible gas and dust around themselves. What isn’t known is how they got so big so fast. One possibility, as witnessed in BlueTides, is that supermassive black holes spontaneously formed from the gravitational collapse of gargantuan gas clouds in over-dense patches of the infant universe.

    BlueTides simulation on Blue Waters supercomputer

Blue Waters, the Cray Linux XE/XK hybrid supercomputer at the University of Illinois at Urbana-Champaign

    “We’ve used the BlueTides simulations to actually predict what this first population of galaxies and black holes is like,” Di Matteo said. In the simulations, they see pickle-shaped proto-galaxies and miniature spirals taking shape around the newborn supermassive black holes. What future telescopes (including the James Webb Space Telescope, set to launch in 2020) observe as they peer deep into space and back in time to the birth of galaxies will in turn test the equations that went into the code.

    Another leader in this back-and-forth game is Phil Hopkins, a professor at the California Institute of Technology. His code, FIRE, simulates relatively small volumes of the cosmos at high resolution. Hopkins “has pushed the resolution in a way that not many other people have,” Wadsley said. “His galaxies look very good.” Hopkins and his team have created some of the most realistic small galaxies, like the “dwarf galaxy” satellites that orbit the Milky Way.


    Video: The formation of a Milky Way-size disk galaxy and its merger with another galaxy in the IllustrisTNG simulation. Credit: Shy Genel/IllustrisTNG

    These small, faint galaxies have always presented problems. The “missing satellite problem,” for instance, is the expectation, based on standard cold dark matter models, that hundreds of satellite galaxies should orbit every spiral galaxy. But the Milky Way has just dozens. This has caused some physicists to contemplate more complicated models of dark matter. However, when Hopkins and colleagues incorporated realistic superbubbles into their simulations, they saw many of those excess satellite galaxies go away. Hopkins has also found potential resolutions to two other problems, called “cusp-core” and “too-big-to-fail,” that have troubled the cold dark matter paradigm.

    With their upgraded simulations, Wadsley, Di Matteo and others are also strengthening the case that dark matter exists at all. Arguably the greatest source of lingering doubt about dark matter is a curious relationship between galaxies' visible matter and their rotation speeds.

    Namely, the speeds at which stars circumnavigate the galaxy closely track the amount of visible matter enclosed by their orbits — even though the stars are also driven by the gravity of dark matter halos. There's so much dark matter supposedly accelerating the stars that you wouldn't expect their motions to have much to do with the amount of visible matter. For this relationship to exist within the dark matter framework, the amounts of dark matter and visible matter in galaxies must be so tightly correlated with each other that galactic rotation speeds track with either one.

    An alternative theory called modified Newtonian dynamics, or MOND, argues that there is no dark matter; rather, visible matter exerts a stronger gravitational force than expected at galactic outskirts.

    By slightly tweaking the famous inverse-square law of gravity, MOND broadly matches observed galaxy rotation speeds (though it struggles to account for other phenomena attributed to dark matter).
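    In symbols, MOND's tweak is usually written as follows (a standard textbook form, not quoted from this article): below a critical acceleration a_0, the Newtonian prediction a_N gives way to the geometric mean of a_N and a_0,

        a \approx \begin{cases} a_N, & a_N \gg a_0, \\ \sqrt{a_N\, a_0}, & a_N \ll a_0, \end{cases} \qquad a_N = \frac{GM}{r^2}, \qquad a_0 \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}}.

    Setting the deep-MOND acceleration equal to the centripetal acceleration v^2/r gives v^4 = G M a_0: flat rotation curves whose speed depends only on the visible mass, which is the behavior MOND was built to reproduce.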

    The fine-tuning problem appeared to sharpen in 2016, when the cosmologist Stacy McGaugh of Case Western Reserve University and collaborators showed [The Astronomical Journal] how tightly the relationship between stars' rotation speeds and visible matter holds across a range of real galaxies. But McGaugh's paper met with three quick rejoinders from the numerical cosmology community. Three teams (one including Wadsley; another [MNRAS], Di Matteo; and the third led by Julio Navarro of the University of Victoria) published the results of simulations indicating that the relation arises naturally in dark-matter-filled galaxies.
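    The relation McGaugh's team quantified is often quoted in a fitted form like this one (a common parametrization from the literature, reproduced here for concreteness; g_† is a fitted acceleration scale):

        g_{\mathrm{obs}} = \frac{g_{\mathrm{bar}}}{1 - e^{-\sqrt{g_{\mathrm{bar}}/g_\dagger}}}, \qquad g_\dagger \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}},

    where g_obs is the acceleration inferred from stars' rotation speeds and g_bar is the acceleration expected from the visible (baryonic) matter alone. The surprise was how little scatter real galaxies show around this one curve.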

    Making the standard assumptions about cold dark matter halos, the researchers simulated galaxies like those in McGaugh’s sample. Their galaxies ended up exhibiting linear relationships very similar to the observed one, suggesting dark matter really does closely track visible matter. “We essentially fit their relation — pretty much on top,” said Wadsley. He and his then-student Ben Keller ran their simulation prior to seeing McGaugh’s paper, “so we felt that the fact that we could reproduce the relation without needing any tweaks to our model was fairly telling,” he said.

    In a simulation that’s running now, Wadsley is generating a bigger volume of mock universe to test whether the relation holds for the full range of galaxy types in McGaugh’s sample. If it does, the cold dark matter hypothesis is seemingly safe from this quandary. As for why dark matter and visible matter end up so tightly correlated in galaxies, based on the simulations, Navarro and colleagues attribute [MNRAS] it to angular momentum acting together with gravity during galaxy formation.

    Beyond questions of dark matter, galactic simulation codes continue to improve, shedding light on other unknowns. The much-lauded, ongoing IllustrisTNG simulation series by Springel and collaborators now includes magnetic fields on a large scale for the first time.

    IllustrisTNG simulation

    “Magnetic fields are like this ghost in astronomy,” Bellovary explained, playing a little-understood role in galactic dynamics. Springel thinks they might influence galactic winds — another enigma — and the simulations will help test this.

    A big goal, Hopkins said, is to combine many simulations that each specialize in different time periods or spatial scales. “What you want to do is just tile all the scales,” he said, “where you can use, at each stage, the smaller-scale theory and observations to give you the theory and inputs you need on all scales.”

    With the recent improvements, researchers say a philosophical debate has ensued about when to say “good enough.” Adding too many astrophysical bells and whistles into the simulations will eventually limit their usefulness by making it increasingly difficult to tell what’s causing what. As Wadsley put it, “We would just be observing a fake universe instead of a real one, but not understanding it.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:37 pm on September 15, 2018 Permalink | Reply
    Tags: , , Quanta Magazine, The End of Theoretical Physics As We Know It   

    From Quanta Magazine: “The End of Theoretical Physics As We Know It” 

    Quanta Magazine
    From Quanta Magazine

    August 27, 2018
    Sabine Hossenfelder

    1
    James O’Brien for Quanta Magazine

    Computer simulations and custom-built quantum analogues are changing what it means to search for the laws of nature.

    Theoretical physics has a reputation for being complicated. I beg to differ. That we are able to write down natural laws in mathematical form at all means that the laws we deal with are simple — much simpler than those of other scientific disciplines.

    Unfortunately, actually solving those equations is often not so simple. For example, we have a perfectly fine theory that describes the elementary particles called quarks and gluons, but no one can calculate how they come together to make a proton. The equations just can’t be solved by any known methods. Similarly, a merger of black holes or even the flow of a mountain stream can be described in deceptively simple terms, but it’s hideously difficult to say what’s going to happen in any particular case.

    Of course, we are relentlessly pushing the limits, searching for new mathematical strategies. But in recent years much of the pushing has come not from more sophisticated math but from more computing power.

    When the first math software became available in the 1980s, it didn't do much more than save someone a search through enormous printed lists of solved integrals. But once physicists had computers at their fingertips, they realized they no longer had to solve the integrals in the first place; they could just plot the solution.
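    A toy version of that shift: the integral below has no simple closed form, but a few lines of code evaluate and plot it directly (illustrative modern Python, not any particular 1980s package).

    import numpy as np
    from scipy.integrate import quad
    import matplotlib.pyplot as plt

    def F(t):
        """F(t) = integral from 0 to t of exp(-x^2) * ln(1 + x) dx,
        an integrand with no elementary antiderivative."""
        val, _ = quad(lambda x: np.exp(-x**2) * np.log1p(x), 0, t)
        return val

    ts = np.linspace(0, 4, 200)
    plt.plot(ts, [F(t) for t in ts])
    plt.xlabel("t"); plt.ylabel("F(t)")
    plt.title("No closed form needed: just plot it")
    plt.show()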

    In the 1990s, many physicists opposed this "just plot it" approach. Many were not trained in computer analysis, and sometimes they couldn't tell physical effects from coding artifacts. Maybe this is why I recall many seminars in which a result was dismissed as "merely numerical." But over the past two decades, this attitude has markedly shifted, not least thanks to a new generation of physicists for whom coding is a natural extension of their mathematical skill.

    Accordingly, theoretical physics now has many subdisciplines dedicated to computer simulations of real-world systems, studies that would just not be possible any other way. Computer simulations are what we now use to study the formation of galaxies and supergalactic structures, to calculate the masses of particles that are composed of several quarks, to find out what goes on in the collision of large atomic nuclei, and to understand solar cycles, to name but a few areas of research that are mainly computer based.

    The next step of this shift away from purely mathematical modeling is already on the way: Physicists now custom design laboratory systems that stand in for other systems which they want to better understand. They observe the simulated system in the lab to draw conclusions about, and make predictions for, the system it represents.

    The best example may be the research area that goes by the name “quantum simulations.” These are systems composed of interacting, composite objects, like clouds of atoms. Physicists manipulate the interactions among these objects so the system resembles an interaction among more fundamental particles. For example, in circuit quantum electrodynamics, researchers use tiny superconducting circuits to simulate atoms, and then study how these artificial atoms interact with photons. Or in a lab in Munich, physicists use a superfluid of ultra-cold atoms to settle the debate over whether Higgs-like particles can exist in two dimensions of space (the answer is yes [Nature]).
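    Here is a minimal numerical sketch of the underlying idea, not the Munich experiment itself: a two-level "artificial atom" driven on resonance, evolved on a laptop instead of built from superconducting circuits, with hbar = 1 and everything in arbitrary units.

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-x matrix
    omega = 2 * np.pi * 0.5          # drive strength (arbitrary units)
    H = 0.5 * omega * sx             # two-level Hamiltonian, rotating frame

    psi = np.array([1, 0], dtype=complex)  # start in the ground state
    dt, steps, populations = 0.01, 400, []
    for _ in range(steps):
        psi = psi - 1j * dt * (H @ psi)  # first-order Schrodinger step
        psi /= np.linalg.norm(psi)       # renormalize (crude but fine here)
        populations.append(abs(psi[1])**2)

    # The excited-state population cycles between ~0 and ~1: Rabi
    # oscillations, the basic signature measured in circuit QED.
    print(round(min(populations), 3), round(max(populations), 3))

    The point of a quantum simulation is that the laboratory system performs this evolution natively, even in regimes where no computer could keep up.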

    These simulations are not only useful to overcome mathematical hurdles in theories we already know. We can also use them to explore consequences of new theories that haven’t been studied before and whose relevance we don’t yet know.

    This is particularly interesting when it comes to the quantum behavior of space and time itself — an area where we still don’t have a good theory. In a recent experiment, for example, Raymond Laflamme, a physicist at the Institute for Quantum Computing at the University of Waterloo in Ontario, Canada, and his group used a quantum simulation to study so-called spin networks, structures that, in some theories, constitute the fundamental fabric of space-time. And Gia Dvali, a physicist at the University of Munich, has proposed a way to simulate the information processing of black holes with ultracold atom gases.

    A similar idea is being pursued in the field of analogue gravity, where physicists use fluids to mimic the behavior of particles in gravitational fields. Black hole space-times have attracted the bulk of attention, as with Jeff Steinhauer’s (still somewhat controversial) claim of having measured Hawking radiation in a black-hole analogue. But researchers have also studied the rapid expansion of the early universe, called “inflation,” with fluid analogues for gravity.
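    The analogy has a sharp core, going back to a 1981 observation by William Unruh: where a fluid's flow speed v overtakes its sound speed c, sound waves encounter a "sonic horizon," and in the standard treatment the analogue Hawking temperature is set by the velocity gradient there. Schematically,

        k_B T_H = \frac{\hbar}{2\pi} \left| \frac{d(v - c)}{dx} \right|_{\mathrm{horizon}}.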

    In addition, physicists have studied hypothetical fundamental particles by observing stand-ins called quasiparticles. These quasiparticles behave like fundamental particles, but they emerge from the collective movement of many other particles. Understanding their properties lets us learn more about the fundamental particles they mimic, and might also help us find ways of observing the real thing.

    This line of research raises some big questions. First of all, if we can simulate what we now believe to be fundamental by using composite quasiparticles, then maybe what we currently think of as fundamental — space and time and the 25 particles that make up the Standard Model of particle physics — is built on an underlying structure, too. Quantum simulations also make us wonder what it means to explain the behavior of a system to begin with. Does observing, measuring, and making a prediction by use of a simplified version of a system amount to an explanation?

    But for me, the most interesting aspect of this development is that it ultimately changes how we do physics. With quantum simulations, the mathematical model is of secondary relevance. We currently use the math to identify a suitable system because the math tells us what properties we should look for. But that’s not, strictly speaking, necessary. Maybe, over the course of time, experimentalists will just learn which system maps to which other system, as they have learned which system maps to which math. Perhaps one day, rather than doing calculations, we will just use observations of simplified systems to make predictions.

    At present, I am sure, most of my colleagues would be appalled by this future vision. But in my mind, building a simplified model of a system in the laboratory is conceptually not so different from what physicists have been doing for centuries: writing down simplified models of physical systems in the language of mathematics.

    See the full article here.


     
  • richardmitnick 3:39 am on August 15, 2018 Permalink | Reply
    Tags: , , , , Dark Energy May Be Incompatible With String Theory, , , Quanta Magazine,   

    From Quanta Magazine: “Dark Energy May Be Incompatible With String Theory” 

    Quanta Magazine
    From Quanta Magazine

    August 9, 2018
    Natalie Wolchover

    1
    String theory permits a “landscape” of possible universes, surrounded by a “swampland” of logically inconsistent universes. In all of the simple, viable stringy universes physicists have studied, the density of dark energy is either diminishing or has a stable negative value, unlike our universe, which appears to have a stable positive value. Maciej Rebisz for Quanta Magazine

    On June 25, Timm Wrase awoke in Vienna and groggily scrolled through an online repository of newly posted physics papers. One title startled him into full consciousness.

    The paper, by the prominent string theorist Cumrun Vafa of Harvard University and collaborators, conjectured a simple formula dictating which kinds of universes are allowed to exist and which are forbidden, according to string theory. The leading candidate for a “theory of everything” weaving the force of gravity together with quantum physics, string theory defines all matter and forces as vibrations of tiny strands of energy. The theory permits some 10^500 different solutions: a vast, varied “landscape” of possible universes. String theorists like Wrase and Vafa have strived for years to place our particular universe somewhere in this landscape of possibilities.

    But now, Vafa and his colleagues were conjecturing that in the string landscape, universes like ours — or what ours is thought to be like — don’t exist. If the conjecture is correct, Wrase and other string theorists immediately realized, the cosmos must either be profoundly different than previously supposed or string theory must be wrong.

    After dropping his kindergartner off that morning, Wrase went to work at the Vienna University of Technology, where his colleagues were also buzzing about the paper. That same day, in Okinawa, Japan, Vafa presented the conjecture at the Strings 2018 conference, which was streamed by physicists worldwide. Debate broke out on- and off-site. “There were people who immediately said, ‘This has to be wrong,’ other people who said, ‘Oh, I’ve been saying this for years,’ and everything in the middle,” Wrase said. There was confusion, he added, but “also, of course, huge excitement. Because if this conjecture was right, then it has a lot of tremendous implications for cosmology.”

    Researchers have set to work trying to test the conjecture and explore its implications. Wrase has already written two papers, including one that may lead to a refinement of the conjecture, both written mostly while on vacation with his family. He recalled thinking, “This is so exciting. I have to work and study that further.”

    The conjectured formula — posed in the June 25 paper by Vafa, Georges Obied, Hirosi Ooguri and Lev Spodyneiko and further explored in a second paper released two days later by Vafa, Obied, Prateek Agrawal and Paul Steinhardt — says, simply, that as the universe expands, the density of energy in the vacuum of empty space must decrease faster than a certain rate. The rule appears to be true in all simple string theory-based models of universes. But it violates two widespread beliefs about the actual universe: It deems impossible both the accepted picture of the universe’s present-day expansion and the leading model of its explosive birth.
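    Stated symbolically, the conjecture bounds the slope of the potential energy V of the fields filling the vacuum,

        |\nabla V| \geq \frac{c}{M_{\mathrm{Pl}}}\, V,

    where c is an unspecified positive constant of order one and M_Pl is the Planck mass. A stable de Sitter universe would sit at a minimum with V > 0 and |∇V| = 0, violating the bound, which is exactly why the conjecture forbids such universes.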

    Dark Energy in Question

    Since 1998, telescope observations have indicated that the cosmos is expanding ever-so-slightly faster all the time, implying that the vacuum of empty space must be infused with a dose of gravitationally repulsive “dark energy.”

    In addition, it looks like the amount of dark energy infused in empty space stays constant over time (as best anyone can tell).

    But the new conjecture asserts that the vacuum energy of the universe must be decreasing.

    Vafa and colleagues contend that universes with stable, constant, positive amounts of vacuum energy, known as “de Sitter universes,” aren’t possible. String theorists have struggled mightily since dark energy’s 1998 discovery to construct convincing stringy models of stable de Sitter universes. But if Vafa is right, such efforts are bound to sink in logical inconsistency; de Sitter universes lie not in the landscape, but in the “swampland.” “The things that look consistent but ultimately are not consistent, I call them swampland,” he explained recently. “They almost look like landscape; you can be fooled by them. You think you should be able to construct them, but you cannot.”

    According to this “de Sitter swampland conjecture,” in all possible, logical universes, the vacuum energy must either be dropping, its value like a ball rolling down a hill, or it must have obtained a stable negative value. (So-called “anti-de Sitter” universes, with stable, negative doses of vacuum energy, are easily constructed in string theory.)

    The conjecture, if true, would mean the density of dark energy in our universe cannot be constant, but must instead take a form called “quintessence” — an energy source that will gradually diminish over tens of billions of years. Several telescope experiments are underway now to more precisely probe whether the universe is expanding with a constant rate of acceleration, which would mean that as new space is created, a proportionate amount of new dark energy arises with it, or whether the cosmic acceleration is gradually changing, as in quintessence models. A discovery of quintessence would revolutionize fundamental physics and cosmology, including rewriting the cosmos’s history and future. Instead of tearing apart in a Big Rip, a quintessent universe would gradually decelerate, and in most models, would eventually stop expanding and contract in either a Big Crunch or Big Bounce.
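    The observable separating the two scenarios is the dark-energy equation-of-state parameter w = p/ρ. For a quintessence field φ rolling in a potential V (standard cosmology conventions),

        w = \frac{\tfrac{1}{2}\dot{\phi}^2 - V(\phi)}{\tfrac{1}{2}\dot{\phi}^2 + V(\phi)}.

    A cosmological constant has w = −1 exactly and forever; a rolling field gives w slightly above −1 and drifting with time, which is the signature the surveys below are designed to catch.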

    Paul Steinhardt, a cosmologist at Princeton University and one of Vafa’s co-authors, said that over the next few years, “all eyes should be on” measurements by the Dark Energy Survey, WFIRST and Euclid telescopes of whether the density of dark energy is changing.

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M. Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

    NASA/WFIRST

    ESA/Euclid spacecraft

    “If you find it’s not consistent with quintessence,” Steinhardt said, “it means either the swampland idea is wrong, or string theory is wrong, or both are wrong or — something’s wrong.”

    Inflation Under Siege

    No less dramatically, the new swampland conjecture also casts doubt on the widely believed story of the universe’s birth: the Big Bang theory known as cosmic inflation.

    Inflation

    Alan Guth, from Highland Park High School and M.I.T., who first proposed cosmic inflation

    Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe). Date: 2010. Credit: Alex Mittelmann, Coldcreation

    According to this theory, a minuscule, energy-infused speck of space-time rapidly inflated to form the macroscopic universe we inhabit. The theory was devised to explain, in part, how the universe got so huge, smooth and flat.

    But the hypothetical “inflaton field” of energy that supposedly drove cosmic inflation doesn’t sit well with Vafa’s formula. To abide by the formula, the inflaton field’s energy would probably have needed to diminish too quickly to form a smooth- and flat-enough universe, he and other researchers explained. Thus, the conjecture disfavors many popular models of cosmic inflation. In the coming years, telescopes such as the Simons Observatory will look for definitive signatures of cosmic inflation, testing it against rival ideas.
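    The clash can be put in one line. A smooth, flat universe requires slow-roll inflation, meaning the standard slow-roll parameter ε_V must be tiny, while the conjecture demands a steep potential:

        \epsilon_V = \frac{M_{\mathrm{Pl}}^2}{2} \left( \frac{V'}{V} \right)^2 \ll 1 \qquad \text{versus} \qquad M_{\mathrm{Pl}}\, \frac{|V'|}{V} \geq c \sim \mathcal{O}(1).

    Satisfying the conjecture forces ε_V ≥ c²/2, of order one, cutting short the long, gentle roll that inflation needs.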

    In the meantime, string theorists, who normally form a united front, will disagree about the conjecture. Eva Silverstein, a physics professor at Stanford University and a leader in the effort to construct string-theoretic models of inflation, thinks it is very likely to be false. So does her husband, the Stanford professor Shamit Kachru; he is the first “K” in KKLT, a famous 2003 paper (known by its authors’ initials) that suggested a set of stringy ingredients that might be used to construct de Sitter universes. Vafa’s formula says both Silverstein’s and Kachru’s constructions won’t work. “We’re besieged by these conjectures in our family,” Silverstein joked. But in her view, accelerating-expansion models are no more disfavored now, in light of the new papers, than before. “They essentially just speculate that those things don’t exist, citing very limited and in some cases highly dubious analyses,” she said.

    Matthew Kleban, a string theorist and cosmologist at New York University, also works on stringy models of inflation. He stresses that the new swampland conjecture is highly speculative and an example of “lamppost reasoning,” since much of the string landscape has yet to be explored. And yet he acknowledges that, based on existing evidence, the conjecture could well be true. “It could be true about string theory, and then maybe string theory doesn’t describe the world,” Kleban said. “[Maybe] dark energy has falsified it. That obviously would be very interesting.”

    Mapping the Swampland

    Whether the de Sitter swampland conjecture and future experiments really have the power to falsify string theory remains to be seen. The discovery in the early 2000s that string theory has something like 10^500 solutions killed the dream that it might uniquely and inevitably predict the properties of our one universe. The theory seemed like it could support almost any observations and became very difficult to experimentally test or disprove.

    In 2005, Vafa and a network of collaborators began to think about how to pare the possibilities down by mapping out fundamental features of nature that absolutely have to be true. For example, their “weak gravity conjecture” asserts that gravity must always be the weakest force in any logical universe. Imagined universes that don’t satisfy such requirements get tossed from the landscape into the swampland. Many of these swampland conjectures have held up famously against attack, and some are now “on a very solid theoretical footing,” said Hirosi Ooguri, a theoretical physicist at the California Institute of Technology and one of Vafa’s first swampland collaborators. The weak gravity conjecture, for instance, has accumulated so much evidence that it’s now suspected to hold generally, independent of whether string theory is the correct theory of quantum gravity.
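    Schematically, for a universe with an electromagnetism-like force of coupling strength g, the weak gravity conjecture requires at least one particle whose charge q and mass m satisfy, up to order-one factors and conventions,

        q\, g\, M_{\mathrm{Pl}} \gtrsim m,

    so that for that particle the gauge force wins out over gravitational attraction: the precise sense in which gravity must be “the weakest force.”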

    The intuition about where landscape ends and swampland begins derives from decades of effort to construct stringy models of universes. The chief challenge of that project has been that string theory predicts the existence of 10 space-time dimensions — far more than are apparent in our 4-D universe. String theorists posit that the six extra spatial dimensions must be small — curled up tightly at every point. The landscape springs from all the different ways of configuring these extra dimensions. But although the possibilities are enormous, researchers like Vafa have found that general principles emerge. For instance, the curled-up dimensions typically want to gravitationally contract inward, whereas fields like electromagnetic fields tend to push everything apart. And in simple, stable configurations, these effects balance out by having negative vacuum energy, producing anti-de Sitter universes. Turning the vacuum energy positive is hard. “Usually in physics, we have simple examples of general phenomena,” Vafa said. “De Sitter is not such a thing.”

    The KKLT paper, by Kachru, Renata Kallosh, Andrei Linde and Sandip Trivedi, suggested stringy trappings like “fluxes,” “instantons” and “anti-D-branes” that could potentially serve as tools for configuring a positive, constant vacuum energy. However, these constructions are complicated, and over the years possible instabilities have been identified. Though Kachru said he does not have “any serious doubts,” many researchers have come to suspect the KKLT scenario does not produce stable de Sitter universes after all.

    Vafa thinks a concerted search for definitely stable de Sitter universe models is long overdue. His conjecture is, above all, intended to press the issue. In his view, string theorists have not felt sufficiently motivated to figure out whether string theory really is capable of describing our world, instead taking the attitude that because the string landscape is huge, there must be a place in it for us, even if no one knows where. “The bulk of the community in string theory still sides with de Sitter constructions [existing],” he said, “because the belief is, ‘Look, we live in a de Sitter universe with positive energy; therefore we better have examples of that type.’”

    His conjecture has roused the community to action, with researchers like Wrase looking for stable de Sitter counterexamples, while others toy with little-explored stringy models of quintessent universes. “I would be equally interested to know if the conjecture is true or false,” Vafa said. “Raising the question is what we should be doing. And finding evidence for or against it — that’s how we make progress.”

    See the full article here.



     