Tagged: Quanta Magazine

  • richardmitnick 7:02 am on October 12, 2017
    Tags: Quanta Magazine, The Math That’s Too Difficult for Physics

    From Quanta: “The Math That’s Too Difficult for Physics” 

    Quanta Magazine

    November 18, 2016 [Wow!!]
    Kevin Hartnett

    Image credit: Christian Gwiozda

    CERN CMS Higgs Event


    CERN ATLAS Higgs Event



    Higgs. Always the last place you look.

    How do physicists reconstruct what really happened in a particle collision? Through calculations that are so challenging that, in some cases, they simply can’t be done. Yet.

    It’s one thing to smash protons together. It’s another to make scientific sense of the debris that’s left behind.

    This is the situation at CERN, the laboratory that houses the Large Hadron Collider, the largest and most powerful particle accelerator in the world.

    LHC

    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles

    In order to understand all the data produced by the collisions there, experimental physicists and theoretical physicists engage in a continual back and forth. Experimentalists come up with increasingly intricate experimental goals, such as measuring the precise properties of the Higgs boson. Ambitious goals tend to require elaborate theoretical calculations, which the theorists are responsible for. The experimental physicists’ “wish list is always too full of many complicated processes,” said Pierpaolo Mastrolia, a theoretical physicist at the University of Padua in Italy. “Therefore we identify some processes that can be computed in a reasonable amount of time.”

    By “processes,” Mastrolia is referring to the chain of events that unfolds after particles collide. For example, a pair of gluons might combine through a series of intermediate steps — particles morphing into other particles — to form a Higgs boson, which then decays into still more particles. In general, physicists prefer to study processes involving larger numbers of particles, since the added complexity assists in searches for physical effects that aren’t described by today’s best theories. But each additional particle requires more math.

    To do this math, physicists use a tool called a Feynman diagram, which is essentially an accounting device that has the look of a stick-figure drawing: Particles are represented by lines that collide at vertices to produce new particles.

    Feynman Diagrams Depicting Possible Formations of the Higgs Boson. Image credit: scienceblogs.com / astrobites

    Physicists then take the integral of every possible path an experiment could follow from beginning to end and add those integrals together. As the number of possible paths goes up, the number of integrals that theorists must compute — and the difficulty of calculating each individual integral — rises precipitously.

    When deciding on the kinds of collisions they want to study, physicists have two main choices to make. First, they decide on the number of particles they want to consider in the initial state (coming in) and the final state (going out). In most experiments, it’s two incoming particles and anywhere from one to a dozen outgoing particles (referred to as “legs” of the Feynman diagram). Then they decide on the number of “loops” they’ll take into account. Loops represent all the intermediate collisions that could take place between the initial and final states. Adding more loops increases the precision of the measurement. They also significantly add to the burden of calculating Feynman diagrams. Generally speaking, there’s a trade-off between loops and legs: If you want to take into account more loops, you need to consider fewer legs. If you want to consider more legs, you’re limited to just a few loops.
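
    Each of those loop contributions is, concretely, a multidimensional integral. As a schematic, textbook-style illustration (not an example taken from the article), the simplest one-loop object, a “bubble” with two external legs, an internal particle of mass m and external momentum p, reads in LaTeX notation

        I_2(p^2) = \int \frac{d^4 k}{(2\pi)^4} \, \frac{1}{\big(k^2 - m^2 + i\epsilon\big)\big((k + p)^2 - m^2 + i\epsilon\big)}

    where k is the momentum running around the loop. Every additional loop adds another four-dimensional integration like this one, and every additional leg adds more factors to the denominator, which is why the difficulty climbs so steeply.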

    “If you go to two loops, the largest number [of legs] going out is two. People are pushing toward three particles going out at two loops — that’s the boundary that’s really beyond the state of the art,” said Gavin Salam, a theoretical physicist at CERN.

    Physicists already have the tools to calculate probabilities for tree-level (zero loop) and one-loop diagrams featuring any number of particles going in and out. But accounting for more loops than that is still a major challenge and could ultimately be a limiting factor in the discoveries that can be achieved at the LHC.

    “Once we discover a particle and want to determine its properties, its spin, mass, angular momentum or couplings with other particles, then higher-order calculations” with loops become necessary, said Mastrolia.

    And that’s why many are excited about the emerging connections between Feynman diagrams and number theory that I describe in the recent article “Strange Numbers Found in Particle Collisions.” If mathematicians and physicists can identify patterns in the values generated from diagrams of two or more loops, their calculations would become much simpler — and experimentalists would have the mathematics they need to study the kinds of collisions they’re most interested in.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:20 pm on October 8, 2017
    Tags: Perimeter Institute of Theoretical Physics, Quanta Magazine

    From Quanta: Women in STEM: “Mining Black Hole Collisions for New Physics” Asimina Arvanitaki 

    Quanta Magazine

    July 21, 2016
    Joshua Sokol

    The physicist Asimina Arvanitaki is thinking up ways to search gravitational wave data for evidence of dark matter particles orbiting black holes.

    Asimina Arvanitaki during a July visit to the CERN particle physics laboratory in Geneva, Switzerland.
    Samuel Rubio for Quanta Magazine

    When physicists announced in February that they had detected gravitational waves firsthand, the foundations of physics scarcely rattled.


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    Skymap showing how adding Virgo to LIGO helps in reducing the size of the source-likely region in the sky. Credit: Giuseppe Greco (Virgo Urbino group)

    The signal exactly matched the expectations physicists had arrived at after a century of tinkering with Einstein’s theory of general relativity. “There is a question: Can you do fundamental physics with it? Can you do things beyond the standard model with it?” said Savas Dimopoulos, a theoretical physicist at Stanford University. “And most people think the answer to that is no.”

    Asimina Arvanitaki is not one of those people. A theoretical physicist at Ontario’s Perimeter Institute for Theoretical Physics,


    Perimeter Institute in Waterloo, Canada

    Arvanitaki has been dreaming up ways to use black holes to explore nature’s fundamental particles and forces since 2010, when she published a paper with Dimopoulos, her mentor from graduate school, and others. Together, they sketched out a “string axiverse,” a pantheon of as yet undiscovered, weakly interacting particles. Axions such as these have long been a favored candidate to explain dark matter and other mysteries.

    In the intervening years, Arvanitaki and her colleagues have developed the idea through successive papers. But February’s announcement marked a turning point, where it all started to seem possible to test these ideas. Studying gravitational waves from the newfound population of merging black holes would allow physicists to search for those axions, since the axions would bind to black holes in what Arvanitaki describes as a “black hole atom.”

    “When it came up, we were like, ‘Oh my god, we’re going to do it now, we’re going to look for this,’” she said. “It’s a whole different ball game if you actually have data.”

    That’s Arvanitaki’s knack: matching what she calls “well-motivated,” field-hopping theoretical ideas with the precise experiment that could probe them. “By thinking away from what people are used to thinking about, you see that there is low-hanging fruit that lie in the interfaces,” she said. At the end of April, she was named the Stavros Niarchos Foundation’s Aristarchus Chair at the Perimeter Institute, the first woman to hold a research chair there.

    It’s a long way to come for someone raised in the small Greek village of Koklas, where the graduating class at her high school — at which both of her parents taught — consisted of nine students. Quanta Magazine spoke with Arvanitaki about her plan to use black holes as particle detectors. An edited and condensed version of those discussions follows.

    QUANTA MAGAZINE: When did you start to think that black holes might be good places to look for axions?

    ASIMINA ARVANITAKI: When we were writing the axiverse paper, Nemanja Kaloper, a physicist who is very good in general relativity, came and told us, “Hey, did you know there is this effect in general relativity called superradiance?” And we’re like, “No, this cannot be, I don’t think this happens. This cannot happen for a realistic system. You must be wrong.” And then he eventually convinced us that this could be possible, and then we spent like a year figuring out the dynamics.

    What is superradiance, and how does it work?

    An astrophysical black hole can rotate. There is a region around it called the “ergo region” where even light has to rotate. Imagine I take a piece of matter and throw it in a trajectory that goes through the ergo region. Now imagine you have some explosives in the matter, and it breaks apart into pieces. Part of it falls into the black hole and part escapes into infinity. The piece that is coming out has more total energy than the piece that went in the black hole.

    You can perform the same experiment by scattering radiation from a black hole. Take an electromagnetic wave pulse, scatter it from the black hole, and you see that the pulse you got back has a higher amplitude.

    So you can send a pulse of light near a black hole in such a way that it would take some energy and angular momentum from the black hole’s spin?

    This is old news, by the way, this is very old news. In ’72 Press and Teukolsky wrote a Nature paper that suggested the following cute thing. Let’s imagine you performed the same experiment as the light, but now imagine that you have the black hole surrounded by a giant mirror. What will happen in that case is the light will bounce on the mirror many times, the amplitude [of the light] grows exponentially, and the mirror eventually explodes due to radiation pressure. They called it the black hole bomb.

    The property that allows light to do this is that light is made of photons, and photons are bosons — particles that can sit in the same space at the same time with the same wave function. Now imagine that you have another boson that has a mass. It can [orbit] the black hole. The particle’s mass acts like a mirror, because it confines the particle in the vicinity of the black hole.

    In this way, axions might get stuck around a black hole?

    This process requires that the size of the particle is comparable to the black hole size. Turns out that [axion] mass can be anywhere from Hubble scale — with a quantum wavelength as big as the universe — or you could have a particle that’s tiny in size.

    So if they exist, axions can bind to black holes with a similar size and mass. What’s next?

    What happens is the number of particles in this bound orbit starts growing exponentially. At the same time the black hole spins down. If you solve for the wave functions of the bound orbits, what you find is that they look like hydrogen wave functions. Instead of electromagnetism binding your atom, what’s binding it is gravity. There are three quantum numbers you can describe, just the same. You can use the exact terminology that you can use in the hydrogen atom.
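
    A minimal sketch of that analogy, using standard expressions from the superradiance literature rather than anything quoted in the interview: the role of hydrogen’s fine-structure constant is played by a gravitational coupling between the black hole of mass M and the boson of mass \mu,

        \alpha \equiv \frac{G M \mu}{\hbar c}, \qquad \omega_n \approx \frac{\mu c^2}{\hbar}\left(1 - \frac{\alpha^2}{2 n^2}\right) \quad (\alpha \ll 1),

    where n is the principal quantum number of the bound level. A level grows superradiantly only if its frequency satisfies roughly \omega_n < m\,\Omega_H, with m the azimuthal quantum number and \Omega_H the horizon’s angular frequency. That is why the effect switches on only when the particle’s Compton wavelength is comparable to the black hole’s size, i.e. when \alpha is of order one or smaller.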

    How could we check to see if any of the black holes LIGO finds have axion clouds orbiting around black hole nuclei?

    This is a process that extracts energy and angular momentum from the black hole. If you were to measure spin versus mass of black holes, you should see that in a certain mass range for black holes you see no quickly rotating black holes.

    This is where Advanced LIGO comes in. You saw the event they saw. [Their measurements] allowed them to measure the masses of the merging objects, the mass of the final object, the spin of the final object, and to have some information about the spins of the initial objects.

    If I were to take the spins of the black holes before they merged, they could have been affected by superradiance. Now imagine a graph of black hole spin versus mass. Advanced LIGO could maybe get, if the things that we hear are correct, a thousand events per year. Now you have a thousand data points on this plot. So you may trace out the region that is affected by this particle just by those measurements.

    That would be supercool.

    That’s of course indirect. So the other cool thing is that it turns out there are signatures that have to do with the cloud of particles themselves. And essentially what they do is turn the black hole into a gravitational wave laser.

    Awesome. OK, what does that mean?

    Samuel Rubio for Quanta Magazine

    Yeah, what that means is important. Just like you have transitions of electrons in an excited atom, you can have transitions of particles in the gravitational wave atom. The rate of emission of gravitational waves from these transitions is enhanced by the 10^80 particles that you have. It would look like a very monochromatic line. It wouldn’t look like a transient. Imagine something now that emits a signal at a very fixed frequency.

    Where could LIGO expect to see signals like this?

    In Advanced LIGO, you actually see the birth of a black hole. You know when and where a black hole was born with a certain mass and a certain spin. So if you know the particle masses that you’re looking for, you can predict when the black hole will start growing the [axion] cloud around it. It could be that you see a merger in that day, and one or 10 years down the line, they go back to the same position and they see this laser turning on, they see this monochromatic line coming out from the cloud.

    You can also do a blind search. Because you have black holes that are roaming the universe by themselves, and they could still have some leftover cloud around them, you can do a blind search for monochromatic gravitational waves.

    Were you surprised to find out that axions and black holes could combine to produce such a dramatic effect?

    Oh my god yes. What are you talking about? We had panic attacks. You know how many panic attacks we had saying that this effect, no, this cannot be true, this is too good to be true? So yes, it was a surprise.

    The experiments you suggest draw from a lot of different theoretical ideas — like how we could look for high-frequency gravitational waves with tabletop sensors, or test whether dark matter oscillates using atomic clocks. When you’re thinking about making risky bets on physics beyond the standard model, what sorts of theories seem worth the effort?

    What is well motivated? Things that are not: “What if you had this?” People imagine: “What if dark matter was this thing? What if dark matter was the other thing?” For example, supersymmetry makes predictions about what types of dark matter should be there. String theory makes predictions about what types of particles you should have. There is always an underlying reason why these particles are there; it’s not just the endless theoretical possibilities that we have.

    And axions fit that definition?

    This is a particle that was proposed 30 years ago to explain the smallness of the observed electric dipole moment of the neutron. There are several experiments around the world looking for it already, at different wavelengths. So this particle, we’ve been looking for it for 30 years. This can be the dark matter. That particle solves an outstanding problem of the standard model, so that makes it a good particle to look for.

    Now, whether or not the particle is there I cannot answer for nature. Nature will have to answer.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 8:48 am on October 8, 2017
    Tags: Quanta Magazine, U Toronto Dragonfly Telephoto Array, UDGs (“ultra-diffuse galaxies”)

    From Quanta: “Strange Dark Galaxy Puzzles Astrophysicists” 

    Quanta Magazine

    September 27, 2016
    Joshua Sokol

    The surprising discovery of a massive, Milky Way-size galaxy that is made of 99.99 percent dark matter has astronomers dreaming up new ideas about how galaxies form.

    Astronomers have long known of small dark-matter dominated galaxies. None were supposed to be as big as ordinary spiral galaxies such as NGC 3810, seen here in negative. Photo illustration by Olena Shmahalo/Quanta Magazine. Source: NASA/ESA Hubble.

    NASA/ESA Hubble Telescope

    Among the thousand-plus galaxies in the Coma cluster, a massive clump of matter some 300 million light-years away, is at least one — and maybe a few hundred — that shouldn’t exist.

    Coma cluster via NASA/ESA Hubble

    Dragonfly 44 is a dim galaxy, with one star for every hundred in our Milky Way.

    The ultra-diffuse galaxy Dragonfly 44. Image credit: Pieter van Dokkum / Roberto

    But it spans roughly as much space as the Milky Way.

    Milky Way NASA/JPL-Caltech /ESO R. Hurt

    In addition, it’s heavy enough to rival our own galaxy in mass, according to results published in The Astrophysical Journal Letters at the end of August. That odd combination is crucial: Dragonfly 44 is so dark, so fluffy, and so heavy that some astronomers believe it will either force a revision of our theories of galaxy formation or help us understand the properties of dark matter, the mysterious stuff that interacts with normal matter via gravity and not much else. Or both.

    The discovery came almost by accident. The astronomers Pieter van Dokkum of Yale University and Roberto Abraham of the University of Toronto were interested in testing theories of how galaxies form by searching for objects that have been invisible to even the most advanced telescopes: faint, wispy and extended objects in the sky. So their team built the Dragonfly Telephoto Array, a collection of modified Canon lenses that focus light onto commercial camera sensors.

    U Toronto Dragonfly Telephoto Array

    This setup cut down on any scattered light inside the system that might hide a dim object.

    The plan was to study the faint fringes of nearby galaxies. But the famous Coma cluster — the collection of galaxies that long ago inspired astronomer Fritz Zwicky’s conjecture that such a thing as dark matter might exist — beckoned. “Partway through, we just could not resist looking at Coma,” Abraham said. “You could argue that this discovery emerged from a lack of discipline.” They planned to study the Coma cluster’s intracluster light — the faint glow of loose stars floating between the cluster’s galaxies.

    Instead, they found 47 faint smudges that wouldn’t go away. These smudges seemed to have diameters roughly the same size as the Milky Way. Yet according to the commonly accepted models of galaxy formation, anything that big shouldn’t be so dim.

    In these theories, clumps of dark matter seed the universe with light. First, clouds of dark matter coalesce into relatively dense dark-matter haloes.

    Dark matter halo. Image credit: Virgo consortium / A. Amblard / ESA

    Then gas and fragments of other galaxies, drawn by the halo’s gravity, collect at the center. They spin out into a disk and collapse into luminous stars to form something we can see through telescopes. The whole process seems to be reasonably predictable for big galaxies such as our Milky Way. Having measured either a galaxy’s dark-matter halo or its assortment of stars, you should be able to predict the other to within a factor of two.

    The dark galaxy Dragonfly 44. The scale bar represents a distance of 10 kiloparsecs, or about 33,000 light years. Pieter van Dokkum, Roberto Abraham, Gemini Observatory/AURA.

    “It’s not just dogma. It’s basically that there are no exceptions that we knew of,” said Jeremiah Ostriker, an astrophysicist at Columbia University.

    After Abraham and van Dokkum realized that they appeared to be looking at 47 exceptions, they did a search through the literature. They found that similar fuzzy blobs have been on the edge of discovery since the 1970s. Van Dokkum thinks astronomy’s transition from photographic plates — which were perhaps better suited to picking up extended, diffuse objects — to modern digital sensors may actually have hid them from further attention.

    Abraham and van Dokkum first noticed their smudges in the spring of 2014. Since then, similar “ultra-diffuse galaxies,” or UDGs, have been discovered in other galaxy groupings like the Virgo and Fornax clusters. And in the Coma cluster, one study suggested [The Astrophysical Journal Letters], there may be a thousand more of them, including 332 that are about as large as the Milky Way.

    Meanwhile, the Dragonfly team has been advancing the case that these new dim galaxies really are oddballs that challenge current theory. They’re failed galaxies, this argument holds. Dark matter planted the seeds of a spiral disk and stars, but somehow the luminous structure didn’t sprout.

    That argument has convinced outside experts like Ostriker, who finds van Dokkum’s prior record highly credible. “There are many, many other people who could have ‘discovered’ this where I’d be much more skeptical,” Ostriker said. “The simplest way of putting it is: His papers aren’t wrong.”

    Not everyone is so convinced. While these UDGs may be large, they’re not necessarily massive, argue some astronomers. One idea is that UDGs might be lightweight galaxies that look puffy because they are in the process of being torn apart by gravitational tides from the rest of the Coma cluster.

    Michelle Collins, an astronomer at the University of Surrey, argues that “the only other place we’ve seen things that are that extreme or more extreme are a handful of galaxies around the Local Group,” referring to small, dim “dwarf galaxies” that frequently orbit larger galaxies such as our Milky Way. “They are all things that are currently being ripped apart.” That would make most UDGs just large dwarf galaxies in the process of being ripped to shreds.

    Another possibility hinges on the idea that galaxies can “breathe.” At the end of 2015, Kareem El-Badry, who was at the time an undergraduate student at Yale University, proposed that galaxies can swell out and then collapse in size by over a factor of two. In this process, gas first falls into the galaxy, forming massive stars — the breathing in. The stars quickly end their lives in supernova explosions that blast the gas outside the galaxy — the breathing out. The gas eventually cools, and gravity pulls it back toward the galactic center. In a lone galaxy, this rhythm can continue indefinitely. But in the harsh environment of the Coma cluster, where hot gas fills the space between galaxies, the gas a galaxy exhales could be stripped away, leaving the whole galaxy stuck in a puffy state.

    Lucy Reading-Ikkanda for Quanta Magazine

    Yet another interpretation, suggested in March 2016 by Harvard University astrophysicists Nicola Amorisco and Avi Loeb, is that UDGs are ordinary galaxies that are just spinning fast. “In our scenario, it’s very natural,” Loeb said.

    That idea piggybacks on standard theories of galaxy formation, in which gas pours into a dark-matter halo to build a galaxy. As the material falls, it begins to rotate. The amount of rotation determines the size of the final galaxy. Without much spin, gravity pulls the galaxy into a compact shape. But galaxies that get a big rotational push can spin themselves out into large, lightweight disks.

    It could be, according to this model, that the UDGs are natural examples of the very fastest spinners. If so, their stretched-out disks wouldn’t be dense enough to form as many stars as a slower rotator like the Milky Way, explaining why they look so faint.

    These ideas may well explain some of the UDG population, according to Abraham. “Probably this is going to evolve into a mixed bag of things,” he said. But according to his team’s latest data, obtained from observations that spanned a total of 33.5 hours on the 10-meter Keck II telescope in Hawaii, there is no evidence that the Dragonfly 44 galaxy is rotating.


    Keck Observatory, Maunakea, Hawaii, USA. 4,207 m (13,802 ft) above sea level

    In addition, they argue that the total mass of the galaxy is around a trillion suns — massive enough to prevent it being ripped apart like a dwarf galaxy, and heavier than the galaxies thought to periodically puff up.

    That mass measurement is the real sticking point, said Philip Hopkins, a theoretical astrophysicist at the California Institute of Technology who is preparing several papers on UDGs. It comes from two observations of different parts of Dragonfly 44. First, the motions of stars in the galaxy’s inner regions suggest that the area is massive, filled with dark matter. Second, the outskirts of the galaxy are home to a number of globular clusters — tight, ancient balls of stars. Just as the number of stars in a galaxy is ordinarily linked to the amount of dark matter, observations show that the more globular clusters a galaxy has, the higher the mass of its dark-matter halo. Dragonfly 44 has Milky Way-level clusters. Other UDGs seem to have lots of globular clusters, too.

    Because of this, even if these UDGs don’t have heavy dark-matter haloes, researchers will still be left to explain why they have far more globular clusters than the known relationship suggests they should. “Something is weird about these things,” Hopkins said. “Either way, it’s really cool.”

    The discovery has generated enough interest to earn the team precious time on the Hubble Space Telescope to study Dragonfly 44’s globular clusters. “The thing I find hilarious is we’re using humanity’s most powerful telescope in space to follow up a bunch of telephoto lenses,” Abraham said. To fully understand the relationship between dark matter and the globular clusters, though, they have to measure the motions of the clusters — for which they’ll need to wait until the James Webb Space Telescope launches in 2018 [revised to 2019].

    NASA/ESA/CSA Webb Telescope annotated

    In parallel, they’re looking to find and characterize more Dragonfly 44s, preferably a few located both outside of a cluster — and thus free of the harsh cluster environment — and closer to us. It’s an open question as to whether they exist elsewhere and, if so, what form they take. “The resolution of whether the UDGs are what we argue they are, or something else, would come from finding them outside of clusters of galaxies and seeing how they look there,” Loeb said. A few candidates have emerged, van Dokkum said, and they are now being followed up with Keck and Hubble.

    For theorists like Ostriker, that’s an exciting prospect. If the motion of stars in a galaxy like Dragonfly 44 can be studied up close, it would be a make-or-break test for current dark-matter theories, which make different predictions about how the missing mass should be distributed. The leading theory, called cold dark matter, suggests dark matter should surge at the heart of a galaxy. Right now, though, the dark-matter-dominated galaxies we have to study are nearby dwarf galaxies, and they don’t exhibit that characteristic. “Many of the properties that dark matter is supposed to have … these little galaxies don’t show,” Ostriker said. “But we say, ‘We don’t really know how these things were formed anyway,’ and we just change the subject.”

    By contrast, an otherwise normal-but-dark Milky Way would eliminate that loophole. In the universe’s other Milky Way-size galaxies, stars and gas can outweigh dark matter in the central regions by a factor of five to one. That makes disentangling the gravitational pull of dark matter alone tricky. But the center of Dragonfly 44’s disk is 98 percent dark matter, meaning a map of its central mass would give unprecedented insight into dark matter’s properties, Ostriker said.

    The way forward to understand UDGs isn’t clear yet, Abraham said, but hopefully at least some of the ideas now being proposed will persist through the next few years of observations. “In astronomy, it’s still valid to be just an explorer. In the case of Dragonfly, we’re like Leif Eriksson,” he said. “You’ve been on the ship for months, and suddenly somebody said, ‘Land ho!’ And it’s not on the map.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 4:21 pm on August 4, 2017
    Tags: Quanta Magazine

    From Quanta: “Scientists Unveil a New Inventory of the Universe’s Dark Contents” 

    Quanta Magazine

    August 3, 2017
    Natalie Wolchover

    In a much-anticipated analysis of its first year of data, the Dark Energy Survey (DES) telescope experiment has gauged the amount of dark energy and dark matter in the universe by measuring the clumpiness of galaxies — a rich and, so far, barely tapped source of information that many see as the future of cosmology.

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M. Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

    The analysis, posted on DES’s website today and based on observations of 26 million galaxies in a large swath of the southern sky, tweaks estimates only a little. It draws the pie chart of the universe as 74 percent dark energy and 21 percent dark matter, with galaxies and all other visible matter — everything currently known to physicists — filling the remaining 5 percent sliver.

    The results are based on data from the telescope’s first observing season, which began in August 2013 and lasted six months. Since then, three more rounds of data collection have passed; the experiment begins its fifth and final planned observing season this month. As the 400-person team analyzes more of this data in the coming years, they’ll begin to test theories about the nature of the two invisible substances that dominate the cosmos — particularly dark energy, “which is what we’re ultimately going after,” said Joshua Frieman, co-founder and director of DES and an astrophysicist at Fermi National Accelerator Laboratory (Fermilab) and the University of Chicago. Already, with their first-year data, the experimenters have incrementally improved the measurement of a key quantity that will reveal what dark energy is.

    Both terms — dark energy and dark matter — are mental place holders for unknown physics. “Dark energy” refers to whatever is causing the expansion of the universe to accelerate, as astronomers first discovered it to be doing in 1998. And great clouds of missing “dark matter” have been inferred from 80 years of observations of their apparent gravitational effect on visible matter (though whether dark matter consists of actual particles or something else, nobody knows).

    The balance of the two unknown substances sculpts the distribution of galaxies. “As the universe evolves, the gravity of dark matter is making it more clumpy, but dark energy makes it less clumpy because it’s pushing galaxies away from each other,” Frieman said. “So the present clumpiness of the universe is telling us about that cosmic tug-of-war between dark matter and dark energy.”
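
    The article does not name it, but the single number such surveys usually report for this “present clumpiness” is, in standard notation (given here as background, not as a DES result),

        S_8 \equiv \sigma_8 \sqrt{\Omega_m / 0.3},

    where \sigma_8 is the amplitude of matter-density fluctuations on scales of about 8 megaparsecs/h and \Omega_m is the matter fraction of the universe. More dark matter pushes S_8 up; more dark energy, by suppressing the growth of structure, pulls it down.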

    The Dark Energy Survey uses a 570-megapixel camera mounted on the Victor M. Blanco Telescope in Chile. The camera is made out of 74 individual light-gathering wafers.

    A Dark Map

    Until now, the best way to inventory the cosmos has been to look at the Cosmic Microwave Background [CMB]: pristine light from the infant universe that has long served as a wellspring of information for cosmologists, but which — after the Planck space telescope mapped it in breathtakingly high resolution in 2013 — has less and less to offer.

    CMB per ESA/Planck

    ESA/Planck

    Cosmic microwaves come from the farthest point that can be seen in every direction, providing a 2-D snapshot of the universe at a single moment in time, 380,000 years after the Big Bang (the cosmos was dark before that). Planck’s map of this light shows an extremely homogeneous young universe, with subtle density variations that grew into the galaxies and voids that fill the universe today.

    Galaxies, after undergoing billions of years of evolution, are more complex and harder to glean information from than the cosmic microwave background, but according to experts, they will ultimately offer a richer picture of the universe’s governing laws since they span the full three-dimensional volume of space. “There’s just a lot more information in a 3-D volume than on a 2-D surface,” said Scott Dodelson, co-chair of the DES science committee and an astrophysicist at Fermilab and the University of Chicago.

    To obtain that information, the DES team scrutinized a section of the universe spanning 1,300 square degrees of the sky — the total area of 6,500 full moons — and stretching back 8 billion years (the data were collected by the half-billion-pixel Dark Energy Camera mounted on the Victor M. Blanco Telescope in Chile). They statistically analyzed the separations between galaxies in this cosmic volume. They also examined the distortion in the galaxies’ apparent shapes — an effect known as “weak gravitational lensing” that indicates how much space-warping dark matter lies between the galaxies and Earth. These two probes — galaxy clustering and weak lensing — are two of the four approaches that DES will eventually use to inventory the cosmos. Already, the survey’s measurements are more precise than those of any previous galaxy survey, and for the first time, they rival Planck’s.


    “This is entering a new era of cosmology from galaxy surveys,” Frieman said. With DES’s first-year data, “galaxy surveys have now caught up to the cosmic microwave background in terms of probing cosmology. That’s really exciting because we’ve got four more years where we’re going to go deeper and cover a larger area of the sky, so we know our error bars are going to shrink.”

    For cosmologists, the key question was whether DES’s new cosmic pie chart based on galaxy surveys would differ from estimates of dark energy and dark matter inferred from Planck’s map of the cosmic microwave background. Comparing the two would reveal whether cosmologists correctly understand how the universe evolved from its early state to its present one. “Planck measures how much dark energy there should be” at present by extrapolating from its state at 380,000 years old, Dodelson said. “We measure how much there is.”

    The DES scientists spent six months processing their data without looking at the results along the way — a safeguard against bias — then “unblinded” the results during a July 7 video conference. After team leaders went through a final checklist, a member of the team ran a computer script to generate the long-awaited plot: DES’s measurement of the fraction of the universe that’s matter (dark and visible combined), displayed together with the older estimate from Planck. “We were all watching his computer screen at the same time; we all saw the answer at the same time. That’s about as dramatic as it gets,” said Gary Bernstein, an astrophysicist at the University of Pennsylvania and co-chair of the DES science committee.

    Planck pegged matter at 33 percent of the cosmos today, plus or minus two or three percentage points. When DES’s plots appeared, applause broke out as the bull’s-eye of the new matter measurement centered on 26 percent, with error bars that were similar to, but barely overlapped with, Planck’s range.

    “We saw they didn’t quite overlap,” Bernstein said. “But everybody was just excited to see that we got an answer, first, that wasn’t insane, and which was an accurate answer compared to before.”

    Statistically speaking, there’s only a slight tension between the two results: Considering their uncertainties, the 26 and 33 percent appraisals are between 1 and 1.5 standard deviations or “sigma” apart, whereas in modern physics you need a five-sigma discrepancy to claim a discovery. The mismatch stands out to the eye, but for now, Frieman and his team consider their galaxy results to be consistent with expectations based on the cosmic microwave background. Whether the hint of a discrepancy strengthens or vanishes as more data accumulate will be worth watching as the DES team embarks on its next analysis, expected to cover its first three years of data.

    If the possible discrepancy between the cosmic-microwave and galaxy measurements turns out to be real, it could create enough of a tension to lead to the downfall of the “Lambda-CDM model” of cosmology, the standard theory of the universe’s evolution. Lambda-CDM is in many ways a simple model that starts with Albert Einstein’s general theory of relativity, then bolts on dark energy and dark matter. A replacement for Lambda-CDM might help researchers uncover the quantum theory of gravity that presumably underlies everything else.

    What Is Dark Energy?

    According to Lambda-CDM, dark energy is the “cosmological constant,” represented by the Greek symbol lambda Λ in Einstein’s theory; it’s the energy that infuses space itself, when you get rid of everything else. This energy has negative pressure, which pushes space away and causes it to expand. New dark energy arises in the newly formed spatial fabric, so that the density of dark energy always remains constant, even as the total amount of it relative to dark matter increases over time, causing the expansion of the universe to speed up.

    The universe’s expansion is indeed accelerating, as two teams of astronomers discovered in 1998 by observing light from distant supernovas. The discovery, which earned the leaders of the two teams the 2011 Nobel Prize in physics, suggested that the cosmological constant has a positive but “mystifyingly tiny” value, Bernstein said. “There’s no good theory that explains why it would be so tiny.” (This is the “cosmological constant problem” that has inspired anthropic reasoning and the dreaded multiverse hypothesis.)

    On the other hand, dark energy could be something else entirely. Frieman, whom colleagues jokingly refer to as a “fallen theorist,” studied alternative models of dark energy before co-founding DES in 2003 in hopes of testing his and other researchers’ ideas. The leading alternative theory envisions dark energy as a field that pervades space, similar to the “inflaton field” that most cosmologists think drove the explosive inflation of the universe during the Big Bang. The slowly diluting energy of the inflaton field would have exerted a negative pressure that expanded space, and Frieman and others have argued that dark energy might be a similar field that is dynamically evolving today.

    DES’s new analysis incrementally improves the measurement of a parameter that distinguishes between these two theories — the cosmological constant on the one hand, and a slowly changing energy field on the other. If dark energy is the cosmological constant, then the ratio of its negative pressure and density has to be fixed at −1. Cosmologists call this ratio w. If dark energy is an evolving field, then its density would change over time relative to its pressure, and w would be different from −1.
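
    In equations (standard cosmology rather than anything spelled out in the article), the distinction is compact: w is the ratio of dark energy’s pressure to its energy density, and it controls how that density dilutes as the universe’s scale factor a grows,

        w \equiv \frac{p}{\rho c^2}, \qquad \rho_{\rm DE}(a) \propto a^{-3(1+w)}.

    For w = -1 the density stays constant no matter how much space expands, which is exactly the cosmological-constant behavior described above; any other value makes the dark-energy density evolve, the signature of a dynamical field.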

    Remarkably, DES’s first-year data, when combined with previous measurements, pegs w’s value at −1, plus or minus roughly 0.04. However, the present level of accuracy still isn’t enough to tell if we’re dealing with a cosmological constant rather than a dynamic field, which could have w within a hair of −1. “That means we need to keep going,” Frieman said.

    The DES scientists will tighten the error bars around w in their next analysis, slated for release next year; they’ll also measure the change in w over time, by probing its value at different cosmic distances. (Light takes time to reach us, so distant galaxies reveal the universe’s past). If dark energy is the cosmological constant, the change in w will be zero. A nonzero measurement would suggest otherwise.
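
    One common way to phrase that follow-up measurement (an assumption about methodology on my part, not a detail given in the article) is the two-parameter form

        w(a) = w_0 + w_a (1 - a),

    where a is the cosmic scale factor, equal to 1 today. A cosmological constant corresponds to w_0 = -1 and w_a = 0, so any statistically significant nonzero w_a would be the “nonzero measurement” the passage refers to.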

    Larger galaxy surveys might be needed to definitively measure w and the other cosmological parameters. In the early 2020s, the ambitious Large Synoptic Survey Telescope (LSST) will start collecting light from 20 billion galaxies and other cosmological objects, creating a high-resolution map of the universe’s clumpiness that will yield a big jump in accuracy.

    LSST


    LSST Camera, built at SLAC



    LSST telescope, currently under construction at Cerro Pachón, a 2,682-meter-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    The data might confirm that we occupy a Lambda-CDM universe, infused with an inexplicably tiny cosmological constant and full of dark matter whose nature remains elusive. But Frieman doesn’t discount the possibility of discovering that dark energy is an evolving quantum field, which would invite a deeper understanding by going beyond Einstein’s theory and tying cosmology to quantum physics.

    “With these surveys — DES and LSST that comes after it — the prospects are quite bright,” Dodelson said. “It is more complicated to analyze these things because the cosmic microwave background is simpler, and that is good for young people in the field because there’s a lot of work to do.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 11:07 am on June 11, 2017
    Tags: Bell test, Cosmic Bell test, Experiment Reaffirms Quantum Weirdness, John Bell, Quanta Magazine, Superdeterminism

    From Quanta: “Experiment Reaffirms Quantum Weirdness” 

    Quanta Magazine

    February 7, 2017 [I wonder where this was hiding. It just appeared today in social media.]
    Natalie Wolchover

    Physicists are closing the door on an intriguing loophole around the quantum phenomenon Einstein called “spooky action at a distance.”

    Olena Shmahalo/Quanta Magazine

    There might be no getting around what Albert Einstein called “spooky action at a distance.” With an experiment described today in Physical Review Letters — a feat that involved harnessing starlight to control measurements of particles shot between buildings in Vienna — some of the world’s leading cosmologists and quantum physicists are closing the door on an intriguing alternative to “quantum entanglement.”

    “Technically, this experiment is truly impressive,” said Nicolas Gisin, a quantum physicist at the University of Geneva who has studied this loophole around entanglement.


    According to standard quantum theory, particles have no definite states, only relative probabilities of being one thing or another — at least, until they are measured, when they seem to suddenly roll the dice and jump into formation. Stranger still, when two particles interact, they can become “entangled,” shedding their individual probabilities and becoming components of a more complicated probability function that describes both particles together. This function might specify that two entangled photons are polarized in perpendicular directions, with some probability that photon A is vertically polarized and photon B is horizontally polarized, and some chance of the opposite. The two photons can travel light-years apart, but they remain linked: Measure photon A to be vertically polarized, and photon B instantaneously becomes horizontally polarized, even though B’s state was unspecified a moment earlier and no signal has had time to travel between them. This is the “spooky action” that Einstein was famously skeptical about in his arguments against the completeness of quantum mechanics in the 1930s and ’40s.

    In 1964, the Northern Irish physicist John Bell found a way to put this paradoxical notion to the test. He showed that if particles have definite states even when no one is looking (a concept known as “realism”) and if indeed no signal travels faster than light (“locality”), then there is an upper limit to the amount of correlation that can be observed between the measured states of two particles. But experiments have shown time and again that entangled particles are more correlated than Bell’s upper limit, favoring the radical quantum worldview over local realism.
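
    Bell’s upper limit is most often quoted in its CHSH form, standard textbook material that the article does not spell out: with two possible settings per side, a and a' for one particle and b and b' for the other, local realism bounds the combination of measured correlations

        S = E(a, b) - E(a, b') + E(a', b) + E(a', b'), \qquad |S| \le 2,

    whereas quantum mechanics allows |S| as large as 2\sqrt{2} \approx 2.83. That excess correlation is what experiments keep observing.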

    Only there’s a hitch: In addition to locality and realism, Bell made another, subtle assumption to derive his formula — one that went largely ignored for decades. “The three assumptions that go into Bell’s theorem that are relevant are locality, realism and freedom,” said Andrew Friedman of the Massachusetts Institute of Technology, a co-author of the new paper. “Recently it’s been discovered that you can keep locality and realism by giving up just a little bit of freedom.” This is known as the “freedom-of-choice” loophole.

    In a Bell test, entangled photons A and B are separated and sent to far-apart optical modulators — devices that either block photons or let them through to detectors, depending on whether the modulators are aligned with or against the photons’ polarization directions. Bell’s inequality puts an upper limit on how often, in a local-realistic universe, photons A and B will both pass through their modulators and be detected. (Researchers find that entangled photons are correlated more often than this, violating the limit.) Crucially, Bell’s formula assumes that the two modulators’ settings are independent of the states of the particles being tested. In experiments, researchers typically use random-number generators to set the devices’ angles of orientation. However, if the modulators are not actually independent — if nature somehow restricts the possible settings that can be chosen, correlating these settings with the states of the particles in the moments before an experiment occurs — this reduced freedom could explain the outcomes that are normally attributed to quantum entanglement.
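
    As a rough numerical illustration of that violation (a sketch built on the textbook quantum prediction for a maximally entangled photon pair, not the Vienna group’s actual analysis code), here is the CHSH combination S from above evaluated at the standard analyzer angles:

        import numpy as np

        def correlation(a, b):
            # Textbook quantum prediction E(a, b) for polarization measurements at
            # analyzer angles a, b (radians) on one maximally entangled photon pair state;
            # the factor of 2 reflects the 180-degree periodicity of polarization.
            return -np.cos(2.0 * (a - b))

        # Standard CHSH angle choices: a = 0, a' = 45, b = 22.5, b' = 67.5 degrees.
        a, a_prime = np.radians(0.0), np.radians(45.0)
        b, b_prime = np.radians(22.5), np.radians(67.5)

        S = (correlation(a, b) - correlation(a, b_prime)
             + correlation(a_prime, b) + correlation(a_prime, b_prime))

        print(abs(S))  # about 2.828, i.e. 2*sqrt(2), above the local-realist limit of 2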

    The universe might be like a restaurant with 10 menu items, Friedman said. “You think you can order any of the 10, but then they tell you, ‘We’re out of chicken,’ and it turns out only five of the things are really on the menu. You still have the freedom to choose from the remaining five, but you were overcounting your degrees of freedom.” Similarly, he said, “there might be unknowns, constraints, boundary conditions, conservation laws that could end up limiting your choices in a very subtle way” when setting up an experiment, leading to seeming violations of local realism.

    This possible loophole gained traction in 2010, when Michael Hall, now of Griffith University in Australia, developed a quantitative way of reducing freedom of choice [Phys.Rev.Lett.]. In Bell tests, measuring devices have two possible settings (corresponding to one bit of information: either 1 or 0), and so it takes two bits of information to specify their settings when they are truly independent. But Hall showed that if the settings are not quite independent — if only one bit specifies them once in every 22 runs — this halves the number of possible measurement settings available in those 22 runs. This reduced freedom of choice correlates measurement outcomes enough to exceed Bell’s limit, creating the illusion of quantum entanglement.

    The idea that nature might restrict freedom while maintaining local realism has become more attractive in light of emerging connections between information and the geometry of space-time. Research on black holes, for instance, suggests that the stronger the gravity in a volume of space-time, the fewer bits can be stored in that region. Could gravity be reducing the number of possible measurement settings in Bell tests, secretly striking items from the universe’s menu?

    Members of the cosmic Bell test team calibrating the telescope used to choose the settings of one of their two detectors located in far-apart buildings in Vienna. Jason Gallicchio

    Friedman, Alan Guth and colleagues at MIT were entertaining such speculations a few years ago when Anton Zeilinger, a famous Bell test experimenter at the University of Vienna, came for a visit.

    Alan Guth, Highland Park High School and M.I.T., who first proposed cosmic inflation

    HPHS Owls

    Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe). Date: 2010. Credit: Alex Mittelmann, Coldcreation

    Alan Guth’s notes. http://www.bestchinanews.com/Explore/4730.html

    Zeilinger also had his sights on the freedom-of-choice loophole. Together, they and their collaborators developed an idea for how to distinguish between a universe that lacks local realism and one that curbs freedom.

    In the first of a planned series of “cosmic Bell test” experiments, the team sent pairs of photons from the roof of Zeilinger’s lab in Vienna through the open windows of two other buildings and into optical modulators, tallying coincident detections as usual. But this time, they attempted to lower the chance that the modulator settings might somehow become correlated with the states of the photons in the moments before each measurement. They pointed a telescope out of each window, trained each telescope on a bright and conveniently located (but otherwise random) star, and, before each measurement, used the color of an incoming photon from each star to set the angle of the associated modulator. The colors of these photons were decided hundreds of years ago, when they left their stars, increasing the chance that they (and therefore the measurement settings) were independent of the states of the photons being measured.
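
    In pseudocode form, and purely as a hypothetical sketch (the cutoff value and the function name are invented for illustration, not taken from the paper), the setting choice amounts to something like this:

        def modulator_setting(stellar_photon_wavelength_nm: float,
                              cutoff_nm: float = 700.0) -> int:
            # Pick one of two measurement settings from the color of a photon that left
            # a distant star centuries ago, so the "choice" predates the experiment.
            # Redder than the cutoff selects setting 0; bluer selects setting 1.
            return 0 if stellar_photon_wavelength_nm >= cutoff_nm else 1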

    And yet, the scientists found that the measurement outcomes still violated Bell’s upper limit, boosting their confidence that the polarized photons in the experiment exhibit spooky action at a distance after all.

    Nature could still exploit the freedom-of-choice loophole, but the universe would have had to delete items from the menu of possible measurement settings at least 600 years before the measurements occurred (when the closer of the two stars sent its light toward Earth). “Now one needs the correlations to have been established even before Shakespeare wrote, ‘Until I know this sure uncertainty, I’ll entertain the offered fallacy,’” Hall said.

    Next, the team plans to use light from increasingly distant quasars to control their measurement settings, probing further back in time and giving the universe an even smaller window to cook up correlations between future device settings and restrict freedoms. It’s also possible (though extremely unlikely) that the team will find a transition point where measurement settings become uncorrelated and violations of Bell’s limit disappear — which would prove that Einstein was right to doubt spooky action.

    “For us it seems like kind of a win-win,” Friedman said. “Either we close the loophole more and more, and we’re more confident in quantum theory, or we see something that could point toward new physics.”

    There’s a final possibility that many physicists abhor. It could be that the universe restricted freedom of choice from the very beginning — that every measurement was predetermined by correlations established at the Big Bang. “Superdeterminism,” as this is called, is “unknowable,” said Jan-Åke Larsson, a physicist at Linköping University in Sweden; the cosmic Bell test crew will never be able to rule out correlations that existed before there were stars, quasars or any other light in the sky. That means the freedom-of-choice loophole can never be completely shut.

    But given the choice between quantum entanglement and superdeterminism, most scientists favor entanglement — and with it, freedom. “If the correlations are indeed set [at the Big Bang], everything is preordained,” Larsson said. “I find it a boring worldview. I cannot believe this would be true.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:16 pm on May 16, 2017
    Tags: Quanta Magazine, Tim Maudlin

    From Quanta: “A Defense of the Reality of Time” Tim Maudlin 

    Quanta Magazine

    May 16, 2017
    George Musser

    Tim Maudlin. Edwin Tse for Quanta Magazine

    Time isn’t just another dimension, argues Tim Maudlin. To make his case, he’s had to reinvent geometry.

    Physicists and philosophers seem to like nothing more than telling us that everything we thought about the world is wrong. They take a peculiar pleasure in exposing common sense as nonsense. But Tim Maudlin thinks our direct impressions of the world are a better guide to reality than we have been led to believe.

    Not that he thinks they always are. Maudlin, who is a professor at New York University and one of the world’s leading philosophers of physics, made his name studying the strange behavior of “entangled” quantum particles, which display behavior that is as counterintuitive as can be; if anything, he thinks physicists have downplayed how transformative entanglement is.

    2
    Quantum entanglement. ATCA

    At the same time, though, he thinks physicists can be too hasty to claim that our conventional views are misguided, especially when it comes to the nature of time.

    He defends a homey and unfashionable view of time. It has a built-in arrow. It is fundamental rather than derived from some deeper reality. Change is real, as opposed to an illusion or an artifact of perspective. The laws of physics act within time to generate each moment. Mixing mathematics, physics and philosophy, Maudlin bats away the reasons that scientists and philosophers commonly give for denying this folk wisdom.

    The mathematical arguments are the target of his current project, the second volume of New Foundations for Physical Geometry (the first appeared in 2014). Modern physics, he argues, conceptualizes time in essentially the same way as space. Space, as we commonly understand it, has no innate direction — it is isotropic. When we apply spatial intuitions to time, we unwittingly assume that time has no intrinsic direction, either. New Foundations rethinks topology in a way that allows for a clearer distinction between time and space. Conventionally, topology — the first level of geometrical structure — is defined using open sets, which describe the neighborhood of a point in space or time. “Open” means a region has no sharp edge; every point in the set is surrounded by other points in the same set.

    Maudlin proposes instead to base topology on lines. He sees this as closer to our everyday geometrical intuitions, which are formed by thinking about motion. And he finds that, to match the results of standard topology, the lines need to be directed, just as time is. Maudlin’s approach differs from other approaches that extend standard topology to endow geometry with directionality; it is not an extension, but a rethinking that builds in directionality at the ground level.

    Maudlin discussed his ideas with Quanta Magazine in March. Here is a condensed and edited version of the interview.

    Why might one think that time has a direction to it? That seems to go counter to what physicists often say.

    I think that’s a little bit backwards. Go to the man on the street and ask whether time has a direction, whether the future is different from the past, and whether time doesn’t march on toward the future. That’s the natural view. The more interesting view is how the physicists manage to convince themselves that time doesn’t have a direction.

    They would reply that it’s a consequence of Einstein’s special theory of relativity, which holds that time is a fourth dimension.

    This notion that time is just a fourth dimension is highly misleading. In special relativity, the time directions are structurally different from the space directions. In the timelike directions, you have a further distinction into the future and the past, whereas any spacelike direction I can continuously rotate into any other spacelike direction. The two classes of timelike directions can’t be continuously transformed into one another.

    Standard geometry just wasn’t developed for the purpose of doing space-time. It was developed for the purpose of just doing spaces, and spaces have no directedness in them. And then you took this formal tool that you developed for this one purpose and then pushed it to this other purpose.

    When relativity was developed in the early part of the 20th century, did people begin to see this problem?

    I don’t think they saw it as a problem. The development was highly algebraic, and the more algebraic the technique, the further you get from having a geometrical intuition about what you’re doing. So if you develop the standard account of, say, the metric of space-time, and then you ask, “Well, what happens if I start putting negative numbers in this thing?” That’s a perfectly good algebraic question to ask. It’s not so clear what it means geometrically. And people do the same thing now when they say, “Well, what if time had two dimensions?” As a purely algebraic question, I can say that. But if you ask me what could it mean, physically, for time to have two dimensions, I haven’t the vaguest idea. Is it consistent with the nature of time that it be a two-dimensional thing? Because if you think that what time does is order events, then that order is a linear order, and you’re talking about a fundamentally one-dimensional kind of organization.

    And so you are trying to allow for the directionality of time by rethinking geometry. How does that work?

    I really was not starting from physics. I was starting from just trying to understand topology. When you teach, you’re forced to confront your own ignorance. I was trying to explain standard topology to some students when I was teaching a class on space and time, and I realized that I didn’t understand it. I couldn’t see the connection between the technical machinery and the concepts that I was using.

    Suppose I just hand you a bag of points. It doesn’t have a geometry. So I have to add some structure to give it anything that is recognizably geometrical. In the standard approach, I specify which sets of points are open sets. In my approach, I specify which sets of points are lines.

    How does this differ from ordinary geometry taught in high school?

    In this approach that’s based on lines, a very natural thing to do is to put directionality on the lines. It’s very easy to implement at the level of axioms. If you’re doing Euclidean geometry, this isn’t going to occur to you, because your idea in Euclidean geometry is if I have a continuous line from A to B, it’s just as well a continuous line from B to A — that there’s no directionality in a Euclidean line.

    From the pure mathematical point of view, why might your approach be preferable?

    In my approach, you put down a linear structure on a set of points. If you put down lines according to my axioms, there’s then a natural definition of an open set, and it generates a topology.

    Another important conceptual advantage is that there’s no problem thinking of a line that’s discrete. People form lines where there are only finitely many people, and you can talk about who’s the next person in line, and who’s the person behind them, and so on. The notion of a line is neutral between it being discrete and being continuous. So you have this general approach.
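
    To see how little machinery a discrete directed line needs, here is a toy sketch in Python (my own illustration, not Maudlin’s formal axioms): a line is just an ordered sequence of points, and the order supplies “next” and “previous” for free, while reversing the order gives a genuinely different directed line.

```python
# Toy illustration only (not Maudlin's formal axioms): a discrete directed
# line is a finite sequence of points whose built-in order supplies "next"
# and "previous" -- like people standing in a queue.
from typing import Hashable, Optional, Sequence

Point = Hashable

def next_point(line: Sequence[Point], p: Point) -> Optional[Point]:
    """Point immediately after p on the directed line, or None at the end."""
    i = line.index(p)
    return line[i + 1] if i + 1 < len(line) else None

def previous_point(line: Sequence[Point], p: Point) -> Optional[Point]:
    """Point immediately before p on the directed line, or None at the start."""
    i = line.index(p)
    return line[i - 1] if i > 0 else None

queue = ("alice", "bob", "carol")            # a three-point directed line
assert next_point(queue, "alice") == "bob"
assert previous_point(queue, "carol") == "bob"
# Reversing the order yields a different directed line (C->B->A, not A->B->C):
# the directionality is part of the object itself.
assert tuple(reversed(queue)) != queue
```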

    Why is this kind of modification important for physics?

    As soon as you start talking about space-time, the idea that time has a directionality is obviously something we begin with. There’s a tremendous difference between the past and the future. And so, as soon as you start to think geometrically of space-time, of something that has temporal characteristics, a natural thought is that you are thinking of something that does now have an intrinsic directionality. And if your basic geometrical objects can have directionality, then you can use them to represent this physical directionality.

    Physicists have other arguments for why time doesn’t have a direction.

    Often one will hear that there’s a time-reversal symmetry in the laws. But the normal way you describe a time-reversal symmetry presupposes there’s a direction of time. Someone will say the following: “According to Newtonian physics, if the glass can fall off the table and smash on the floor, then it’s physically possible for the shards on the floor to be pushed by the concerted effort of the floor, recombine into the glass and jump back up on the table.” That’s true. But notice, both of those descriptions are ones that presuppose there’s a direction of time. That is, they presuppose that there’s a difference between the glass falling and the glass jumping, and there’s a difference between the glass shattering and the glass recombining. And the difference between those two is always which direction is the future, and which direction is the past.

    So I’m certainly not denying that there is this time-reversibility. But the time-reversibility doesn’t imply that there isn’t a direction of time. It just says that for every event that the laws of physics allow, there is a corresponding event in which various things have been reversed, velocities have been reversed and so on. But in both of these cases, you think of them as allowing a process that’s running forward in time.
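
    A toy Newtonian simulation makes the point concrete (my own illustration, not from the interview): run the equations forward, flip the velocities, and run the same equations forward again; the system retraces its path, yet both runs are still described as processes unfolding toward the future.

```python
# Toy check of time-reversal symmetry (illustration only, not from the
# interview): a ball thrown under constant gravity, integrated with the
# time-symmetric velocity-Verlet scheme.
import numpy as np

def verlet_trajectory(x0, v0, a, dt, n_steps):
    """Integrate motion under a constant acceleration a; return final state."""
    x, v = np.array(x0, float), np.array(v0, float)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt ** 2
        v = v + a * dt
    return x, v

g = np.array([0.0, -9.81])      # gravity
x0 = np.array([0.0, 0.0])       # launch point
v0 = np.array([3.0, 10.0])      # launch velocity
dt, n = 0.01, 150

# Run the "movie" forward, then flip the velocity and run the SAME laws
# forward again: the ball retraces its path back to the launch point.
x_end, v_end = verlet_trajectory(x0, v0, g, dt, n)
x_back, v_back = verlet_trajectory(x_end, -v_end, g, dt, n)

assert np.allclose(x_back, x0, atol=1e-8)
assert np.allclose(v_back, -v0, atol=1e-8)
# Both runs are processes unfolding toward the future; reversibility of the
# laws does not by itself erase the distinction between past and future.
```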

    Now that raises a puzzle: Why do we often see the one kind of thing and not the other kind of thing? And that’s the puzzle about thermodynamics and entropy and so on.

    If time has a direction, is the thermodynamic arrow of time still a problem?

    The problem there isn’t with the arrow. The problem is with understanding why things started out in a low-entropy state. Once you have that it starts in a low-entropy state, the normal thermodynamic arguments lead you to expect that most of the possible initial states are going to yield an increasing entropy. So the question is, why did things start out so low entropy?

    One choice is that the universe is only finite in time and had an initial state, and then there’s the question: “Can you explain why the initial state was low?” which is a subpart of the question, “Can you explain an initial state at all?” It didn’t come out of anything, so what would it mean to explain it in the first place?

    The other possibility is that there was something before the big bang. If you imagine the big bang is the bubbling-off of this universe from some antecedent proto-universe or from chaotically inflating space-time, then there’s going to be the physics of that bubbling-off, and you would hope the physics of the bubbling-off might imply that the bubbles would be of a certain character.

    Given that we still need to explain the initial low-entropy state, why do we need the internal directedness of time? If time didn’t have a direction, wouldn’t specification of a low-entropy state be enough to give it an effective direction?

    If time didn’t have a direction, it seems to me that would make time into just another spatial dimension, and if all we’ve got are spatial dimensions, then it seems to me nothing’s happening in the universe. I can imagine a four-dimensional spatial object, but nothing occurs in it. This is the way people often talk about the, quote, “block universe” as being fixed or rigid or unchanging or something like that, because they’re thinking of it like a four-dimensional spatial object. If you had that, then I don’t see how any initial condition put on it — or any boundary condition put on it; you can’t say “initial” anymore — could create time. How can a boundary condition change the fundamental character of a dimension from spatial to temporal?

    Suppose on one boundary there’s low entropy; from that I then explain everything. You might wonder: “But why that boundary? Why not go from the other boundary, where presumably things are at equilibrium?” The peculiar characteristic of that other boundary is not low entropy — there’s high entropy there — but the fact that the microstate is one of the very special ones that leads to a long period of decreasing entropy. Now it seems to me that it has the special microstate because it developed from a low-entropy initial state. But now I’m using “initial” and “final,” and I’m appealing to certain causal notions and productive notions to do the explanatory work. If you don’t have a direction of time to distinguish the initial from the final state and to underwrite these causal locutions, I’m not quite sure how the explanations are supposed to go.

    But all of this seems so — what can I say? It seems so remote from the physical world. We’re sitting here and time is going on, and we know what it means to say that time is going on. I don’t know what it means to say that time really doesn’t pass and it’s only in virtue of entropy increasing that it seems to.

    You don’t sound like much of a fan of the block universe.

    There’s a sense in which I believe a certain understanding of the block universe. I believe that the past is equally real as the present, which is equally real as the future. Things that happened in the past were just as real. Pains in the past were pains, and in the future they’ll be real too, and there was one past and there will be one future. So if that’s all it means to believe in a block universe, fine.

    People often say, “I’m forced into believing in a block universe because of relativity.” The block universe, again, is some kind of rigid structure. The totality of concrete physical reality is specifying that four-dimensional structure and what happens everywhere in it. In Newtonian mechanics, this object is foliated by these planes of absolute simultaneity. And in relativity you don’t have that; you have this light-cone structure instead. So it has a different geometrical character. But I don’t see how that different geometrical character gets rid of time or gets rid of temporality.

    The idea that the block universe is static drives me crazy. What is it to say that something is static? It’s to say that as time goes on, it doesn’t change. But it’s not that the block universe is in time; time is in it. When you say it’s static, it somehow suggests that there is no change, nothing really changes, change is an illusion. It blows your mind. Physics has discovered some really strange things about the world, but it has not discovered that change is an illusion.

    What does it mean for time to pass? Is that synonymous with “time has a direction,” or is there something in addition?

    There’s something in addition. For time to pass means for events to be linearly ordered, by earlier and later. The causal structure of the world depends on its temporal structure. The present state of the universe produces the successive states. To understand the later states, you look at the earlier states and not the other way around. Of course, the later states can give you all kinds of information about the earlier states, and, from the later states and the laws of physics, you can infer the earlier states. But you normally wouldn’t say that the later states explain the earlier states. The direction of causation is also the direction of explanation.

    Am I accurate in getting from you that there’s a generation or production going on here — that there’s a machinery that sits grinding away, one moment giving rise to the next, giving rise to the next?

    Well, that’s certainly a deep part of the picture I have. The machinery is exactly the laws of nature. That gives a constraint on the laws of nature — namely, that they should be laws of temporal evolution. They should be laws that tell you, as time goes on, how will new states succeed old ones. The claim would be there are no fundamental laws that are purely spatial and that where you find spatial regularities, they have temporal explanations.

    Does this lead you to a different view of what a law even is?

    It leads me to a different view than the majority view. I think of laws as having a kind of primitive metaphysical status, that laws are not derivative on anything else. It’s, rather, the other way around: Other things are derivative from, produced by, explained by, derived from the laws operating. And there, the word “operating” has this temporal characteristic.

    Why is yours a minority view? Because it seems to me, if you ask most people on the street what the laws of physics do, they would say, “It’s part of a machinery.”

    I often say my philosophical views are just kind of the naïve views you would have if you took a physics class or a cosmology class and you took seriously what you were being told. In a physics class on Newtonian mechanics, they’ll write down some laws and they’ll say, “Here are the laws of Newtonian mechanics.” That’s really the bedrock from which you begin.

    I don’t think I hold really bizarre views. I take “time doesn’t pass” or “the passage of time is an illusion” to be a pretty bizarre view. Not to say it has to be false, but one that should strike you as not what you thought.

    What does this all have to say about whether time is fundamental or emergent?

    I’ve never been able to quite understand what the emergence of time, in its deeper sense, is supposed to be. The laws are usually differential equations in time. They talk about how things evolve. So if there’s no time, then things can’t evolve. How do we understand — and is the emergence a temporal emergence? It’s like, in a certain phase of the universe, there was no time; and then in other phases, there is time, where it seems as though time emerges temporally out of non-time, which then seems incoherent.

    Where do you stop offering analyses? Where do you stop — where is your spade turned, as Wittgenstein would say? And for me, again, the notion of temporality or of time seems like a very good place to think I’ve hit a fundamental feature of the universe that is not explicable in terms of anything else.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 4:29 pm on January 23, 2017 Permalink | Reply
    Tags: Biophysics, Centrosomes, Earth’s primordial soup, Macromolecules, Protocells?, Quanta Magazine, simple “chemically active” droplets grow to the size of cells and spontaneously divide, The first living cells?, Vestiges of evolutionary history   

    From Quanta: “Dividing Droplets Could Explain Life’s Origin” 

    Quanta Magazine
    Quanta Magazine

    January 19, 2017
    Natalie Wolchover

    Researchers have discovered that simple “chemically active” droplets grow to the size of cells and spontaneously divide, suggesting they might have evolved into the first living cells.

    1
    davidope for Quanta Magazine

    A collaboration of physicists and biologists in Germany has found a simple mechanism that might have enabled liquid droplets to evolve into living cells in early Earth’s primordial soup.

    Origin-of-life researchers have praised the minimalism of the idea. Ramin Golestanian, a professor of theoretical physics at the University of Oxford who was not involved in the research, called it a big achievement that suggests that “the general phenomenology of life formation is a lot easier than one might think.”

    The central question about the origin of life has been how the first cells arose from primitive precursors. What were those precursors, dubbed “protocells,” and how did they come alive? Proponents of the “membrane-first” hypothesis have argued that a fatty-acid membrane was needed to corral the chemicals of life and incubate biological complexity. But how could something as complex as a membrane start to self-replicate and proliferate, allowing evolution to act on it?

    In 1924, Alexander Oparin, the Russian biochemist who first envisioned a hot, briny primordial soup as the source of life’s humble beginnings, proposed that the mystery protocells might have been liquid droplets — naturally forming, membrane-free containers that concentrate chemicals and thereby foster reactions. In recent years, droplets have been found to perform a range of essential functions inside modern cells, reviving Oparin’s long-forgotten speculation about their role in evolutionary history. But neither he nor anyone else could explain how droplets might have proliferated, growing and dividing and, in the process, evolving into the first cells.

    Now, the new work by David Zwicker and collaborators at the Max Planck Institute for the Physics of Complex Systems and the Max Planck Institute of Molecular Cell Biology and Genetics, both in Dresden, suggests an answer. The scientists studied the physics of “chemically active” droplets, which cycle chemicals in and out of the surrounding fluid, and discovered that these droplets tend to grow to cell size and divide, just like cells. This “active droplet” behavior differs from the passive and more familiar tendencies of oil droplets in water, which glom together into bigger and bigger droplets without ever dividing.

    If chemically active droplets can grow to a set size and divide of their own accord, then “it makes it more plausible that there could have been spontaneous emergence of life from nonliving soup,” said Frank Jülicher, a biophysicist in Dresden and a co-author of the new paper.

    The findings, reported in Nature Physics last month, paint a possible picture of life’s start by explaining “how cells made daughters,” said Zwicker, who is now a postdoctoral researcher at Harvard University. “This is, of course, key if you want to think about evolution.”

    Luca Giomi, a theoretical biophysicist at Leiden University in the Netherlands who studies the possible physical mechanisms behind the origin of life, said the new proposal is significantly simpler than other mechanisms of protocell division that have been considered, calling it “a very promising direction.”

    However, David Deamer, a biochemist at the University of California, Santa Cruz, and a longtime champion of the membrane-first hypothesis, argues that while the newfound mechanism of droplet division is interesting, its relevance to the origin of life remains to be seen. The mechanism is a far cry, he noted, from the complicated, multistep process by which modern cells divide.

    Could simple dividing droplets have evolved into the teeming menagerie of modern life, from amoebas to zebras? Physicists and biologists familiar with the new work say it’s plausible. As a next step, experiments are under way in Dresden to try to observe the growth and division of active droplets made of synthetic polymers that are modeled after the droplets found in living cells. After that, the scientists hope to observe biological droplets dividing in the same way.

    Clifford Brangwynne, a biophysicist at Princeton University who was part of the Dresden-based team that identified the first subcellular droplets eight years ago — tiny liquid aggregates of protein and RNA in cells of the worm C. elegans — explained that it would not be surprising if these were vestiges of evolutionary history. Just as mitochondria, organelles that have their own DNA, came from ancient bacteria that infected cells and developed a symbiotic relationship with them, “the condensed liquid phases that we see in living cells might reflect, in a similar sense, a sort of fossil record of the physicochemical driving forces that helped set up cells in the first place,” he said.

    2
    When germline cells in the roundworm C. elegans divide, P granules, shown in green, condense in the daughter cell that will become a viable sperm or egg and dissolve in the other daughter cell. Courtesy of Clifford Brangwynne/Science

    “This Nature Physics paper takes that to the next level,” by revealing the features that droplets would have needed “to play a role as protocells,” Brangwynne added.

    Droplets in Dresden

    The Dresden droplet discoveries began in 2009, when Brangwynne and collaborators demystified the nature of little dots known as “P granules” in C. elegans germline cells, which undergo division into sperm and egg cells. During this division process, the researchers observed that P granules grow, shrink and move across the cells via diffusion. The discovery that they are liquid droplets, reported in Science, prompted a wave of activity as other subcellular structures were also identified as droplets. It didn’t take long for Brangwynne and Tony Hyman, head of the Dresden biology lab where the initial experiments took place, to make the connection to Oparin’s 1924 protocell theory. In a 2012 essay about Oparin’s life and seminal book, The Origin of Life, Brangwynne and Hyman wrote that the droplets he theorized about “may still be alive and well, safe within our cells, like flies in life’s evolving amber.”

    Oparin most famously hypothesized that lightning strikes or geothermal activity on early Earth could have triggered the synthesis of organic macromolecules necessary for life — a conjecture later made independently by the British scientist John Haldane and triumphantly confirmed by the Miller-Urey experiment in the 1950s. Another of Oparin’s ideas, that liquid aggregates of these macromolecules might have served as protocells, was less celebrated, in part because he had no clue as to how the droplets might have reproduced, thereby enabling evolution. The Dresden group studying P granules didn’t know either.

    In the wake of their discovery, Jülicher assigned his new student, Zwicker, the task of unraveling the physics of centrosomes, organelles involved in animal cell division that also seemed to behave like droplets. Zwicker modeled the centrosomes as “out-of-equilibrium” systems that are chemically active, continuously cycling constituent proteins into and out of the surrounding liquid cytoplasm. In his model, these proteins have two chemical states. Proteins in state A dissolve in the surrounding liquid, while those in state B are insoluble, aggregating inside a droplet. Sometimes, proteins in state B spontaneously switch to state A and flow out of the droplet. An energy source can trigger the reverse reaction, causing a protein in state A to overcome a chemical barrier and transform into state B; when this insoluble protein bumps into a droplet, it slinks easily inside, like a raindrop in a puddle. Thus, as long as there’s an energy source, molecules flow in and out of an active droplet. “In the context of early Earth, sunlight would be the driving force,” Jülicher said.

    Zwicker discovered that this chemical influx and efflux will exactly counterbalance each other when an active droplet reaches a certain volume, causing the droplet to stop growing. Typical droplets in Zwicker’s simulations grew to tens or hundreds of microns across depending on their properties — the scale of cells.
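
    The balance can be caricatured with a one-line differential equation (a deliberately simplified sketch with made-up rate constants, not Zwicker’s actual reaction-diffusion model): material arrives through the droplet’s surface at a rate proportional to R² and is converted back to the soluble form throughout its volume at a rate proportional to R³, so the radius settles at a fixed value whatever its starting size.

```python
# Deliberately simplified caricature (not Zwicker's actual model; rate
# constants are made up): influx through the surface scales like R^2,
# loss throughout the volume scales like R^3, so
#   dV/dt = k_in * 4*pi*R^2  -  k_out * (4/3)*pi*R^3,
# which reduces to dR/dt = k_in - (k_out/3) * R, with a stable radius
# R* = 3 * k_in / k_out.

def final_radius(r0, k_in, k_out, dt=0.01, n_steps=20000):
    r = r0
    for _ in range(n_steps):
        r += (k_in - (k_out / 3.0) * r) * dt
    return r

k_in, k_out = 1.0, 0.1                    # arbitrary units; R* = 30
print(final_radius(1.0, k_in, k_out))     # small droplet grows toward ~30
print(final_radius(100.0, k_in, k_out))   # big droplet shrinks toward ~30
```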

    4
    Lucy Reading-Ikkanda/Quanta Magazine

    The next discovery was even more unexpected. Although active droplets have a stable size, Zwicker found that they are unstable with respect to shape: When a surplus of B molecules enters a droplet on one part of its surface, causing it to bulge slightly in that direction, the extra surface area from the bulging further accelerates the droplet’s growth as more molecules can diffuse inside. The droplet elongates further and pinches in at the middle, which has low surface area. Eventually, it splits into a pair of droplets, which then grow to the characteristic size. When Jülicher saw simulations of Zwicker’s equations, “he immediately jumped on it and said, ‘That looks very much like division,’” Zwicker said. “And then this whole protocell idea emerged quickly.”

    Zwicker, Jülicher and their collaborators, Rabea Seyboldt, Christoph Weber and Tony Hyman, developed their theory over the next three years, extending Oparin’s vision. “If you just think about droplets like Oparin did, then it’s not clear how evolution could act on these droplets,” Zwicker said. “For evolution, you have to make copies of yourself with slight modifications, and then natural selection decides how things get more complex.”

    Globule Ancestor

    Last spring, Jülicher began meeting with Dora Tang, head of a biology lab at the Max Planck Institute of Molecular Cell Biology and Genetics, to discuss plans to try to observe active-droplet division in action.

    Tang’s lab synthesizes artificial cells made of polymers, lipids and proteins that resemble biochemical molecules. Over the next few months, she and her team will look for division of liquid droplets made of polymers that are physically similar to the proteins in P granules and centrosomes. The next step, which will be made in collaboration with Hyman’s lab, is to try to observe centrosomes or other biological droplets dividing, and to determine if they utilize the mechanism identified in the paper by Zwicker and colleagues. “That would be a big deal,” said Giomi, the Leiden biophysicist.

    When Deamer, the membrane-first proponent, read the new paper, he recalled having once observed something like the predicted behavior in hydrocarbon droplets he had extracted from a meteorite. When he illuminated the droplets in near-ultraviolet light, they began moving and dividing. (He sent footage of the phenomenon to Jülicher.) Nonetheless, Deamer isn’t convinced of the effect’s significance. “There is no obvious way for the mechanism of division they reported to evolve into the complex process by which living cells actually divide,” he said.

    Other researchers disagree, including Tang. She says that once droplets started to divide, they could easily have gained the ability to transfer genetic information, essentially divvying up a batch of protein-coding RNA or DNA into equal parcels for their daughter cells. If this genetic material coded for useful proteins that increased the rate of droplet division, natural selection would favor the behavior. Protocells, fueled by sunlight and the law of increasing entropy, would gradually have grown more complex.

    Jülicher and colleagues argue that somewhere along the way, protocell droplets could have acquired membranes. Droplets naturally collect crusts of lipids that prefer to lie at the interface between the droplets and the surrounding liquid. Somehow, genes might have started coding for these membranes as a kind of protection. When this idea was put to Deamer, he said, “I can go along with that,” noting that he would define protocells as the first droplets that had membranes.

    The primordial plotline hinges, of course, on the outcome of future experiments, which will determine how robust and relevant the predicted droplet division mechanism really is. Can chemicals be found with the right two states, A and B, to bear out the theory? If so, then a viable path from nonlife to life starts to come into focus.

    The luckiest part of the whole process, in Jülicher’s opinion, was not that droplets turned into cells, but that the first droplet — our globule ancestor — formed to begin with. Droplets require a lot of chemical material to spontaneously arise or “nucleate,” and it’s unclear how so many of the right complex macromolecules could have accumulated in the primordial soup to make it happen. But then again, Jülicher said, there was a lot of soup, and it was stewing for eons.

    “It’s a very rare event. You have to wait a long time for it to happen,” he said. “And once it happens, then the next things happen more easily, and more systematically.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 12:07 pm on December 22, 2016 Permalink | Reply
    Tags: , Explorers Find Passage to Earth’s Dark Age, , Quanta Magazine   

    From Quanta: “Explorers Find Passage to Earth’s Dark Age” 

    Quanta Magazine
    Quanta Magazine

    December 22, 2016
    Natalie Wolchover

    1
    Earth scientists hope that their growing knowledge of the planet’s early history will shed light on poorly understood features seen today, from continents to geysers. Eric King

    Geochemical signals from deep inside Earth are beginning to shed light on the planet’s first 50 million years, a formative period long viewed as inaccessible to science.

    In August, the geologist Matt Jackson left California with his wife and 4-year-old daughter for the fjords of northwest Iceland, where they camped as he roamed the outcrops and scree slopes by day in search of little olive-green stones called olivine.

    A sunny young professor at the University of California, Santa Barbara, with a uniform of pearl-snap shirts and well-utilized cargo shorts, Jackson knew all the best hunting grounds, having first explored the Icelandic fjords two years ago. Following sketchy field notes handed down by earlier geologists, he covered 10 or 15 miles a day, past countless sheep and the occasional farmer. “Their whole lives they’ve lived in these beautiful fjords,” he said. “They look up to these black, layered rocks, and I tell them that each one of those is a different volcanic eruption with a lava flow. It blows their minds!” He laughed. “It blows my mind even more that they never realized it!”

    The olivine erupted to Earth’s surface in those very lava flows between 10 and 17 million years ago. Jackson, like many geologists, believes that the source of the eruptions was the Iceland plume, a hypothetical upwelling of solid rock that may rise, like the globules in a lava lamp, from deep inside Earth. The plume, if it exists, would now underlie the active volcanoes of central Iceland. In the past, it would have surfaced here at the fjords, back in the days when here was there — before the puzzle-piece of Earth’s crust upon which Iceland lies scraped to the northwest.

    Other modern findings [Nature] about olivine from the region suggest that it might derive from an ancient reservoir of minerals at the base of the Iceland plume that, over billions of years, never mixed with the rest of Earth’s interior. Jackson hoped the samples he collected would carry a chemical message from the reservoir and prove that it formed during the planet’s infancy — a period that until recently was inaccessible to science.

    After returning to California, he sent his samples to Richard Walker to ferret out that message. Walker, a geochemist at the University of Maryland, is processing the olivine to determine the concentration of the chemical isotope tungsten-182 in the rock relative to the more common isotope, tungsten-184. If Jackson is right, his samples will join a growing collection of rocks from around the world whose abnormal tungsten isotope ratios have completely surprised scientists. These tungsten anomalies reflect processes that could only have occurred within the first 50 million years of the solar system’s history, a formative period long assumed to have been wiped from the geochemical record by cataclysmic collisions that melted Earth and blended its contents.

    The anomalies “are giving us information about some of the earliest Earth processes,” Walker said. “It’s an alternative universe from what geochemists have been working with for the past 50 years.”

    2
    Matt Jackson and his family with a local farmer in northwest Iceland. Courtesy of Matt Jackson.

    The discoveries are sending geologists like Jackson into the field in search of more clues to Earth’s formation — and how the planet works today. Modern Earth, like early Earth, remains poorly understood, with unanswered questions ranging from how volcanoes work and whether plumes really exist to where oceans and continents came from, and what the nature and origin might be of the enormous structures, colloquially known as “blobs,” that seismologists detect deep down near Earth’s core. All aspects of the planet’s form and function are interconnected. They’re also entangled with the rest of the solar system. Any attempt, for instance, to explain why tectonic plates cover Earth’s surface like a jigsaw puzzle must account for the fact that no other planet in the solar system has plates. To understand Earth, scientists must figure out how, in the context of the solar system, it became uniquely earthlike. And that means probing the mystery of the first tens of millions of years.

    “You can think about this as an initial-conditions problem,” said Michael Manga, a geophysicist at the University of California, Berkeley, who studies geysers and volcanoes. “The Earth we see today evolved from something. And there’s lots of uncertainty about what that initial something was.”

    Pieces of the Puzzle

    On one of an unbroken string of 75-degree days in Santa Barbara the week before Jackson left for Iceland, he led a group of earth scientists on a two-mile beach hike to see some tar dikes — places where the sticky black material has oozed out of the cliff face at the back of the beach, forming flabby, voluptuous folds of faux rock that you can dent with a finger. The scientists pressed on the tar’s wrinkles and slammed rocks against it, speculating about its subterranean origin and the ballpark range of its viscosity. When this reporter picked up a small tar boulder to feel how light it was, two or three people nodded approvingly.

    A mix of geophysicists, geologists, mineralogists, geochemists and seismologists, the group was in Santa Barbara for the annual Cooperative Institute for Dynamic Earth Research (CIDER) workshop at the Kavli Institute for Theoretical Physics. Each summer, a rotating cast of representatives from these fields meet for several weeks at CIDER to share their latest results and cross-pollinate ideas — a necessity when the goal is understanding a system as complex as Earth.

    Earth’s complexity, how special it is, and, above all, the black box of its initial conditions have meant that, even as cosmologists map the universe and astronomers scan the galaxy for Earth 2.0, progress in understanding our home planet has been surprisingly slow. As we trudged from one tar dike to another, Jackson pointed out the exposed sedimentary rock layers in the cliff face — some of them horizontal, others buckled and sloped. Amazingly, he said, it took until the 1960s for scientists to even agree that sloped sediment layers are buckled, rather than having piled up on an angle. Only then was consensus reached on a mechanism to explain the buckling and the ruggedness of Earth’s surface in general: the theory of plate tectonics.

    Projecting her voice over the wind and waves, Carolina Lithgow-Bertelloni, a geophysicist from University College London who studies tectonic plates, credited the German meteorologist Alfred Wegener for first floating the notion of continental drift in 1912 to explain why Earth’s landmasses resemble the dispersed pieces of a puzzle. “But he didn’t have a mechanism — well, he did, but it was crazy,” she said.

    3
    Earth scientists on a beach hike in Santa Barbara County, California. Natalie Wolchover/Quanta Magazine

    A few years later, she continued, the British geologist Sir Arthur Holmes convincingly argued that Earth’s solid-rock mantle flows fluidly on geological timescales, driven by heat radiating from Earth’s core; he speculated that this mantle flow in turn drives surface motion. More clues came during World War II. Seafloor magnetism, mapped for the purpose of hiding submarines, suggested that new crust forms at the mid-ocean ridge — the underwater mountain range that lines the world ocean like a seam — and spreads in both directions to the shores of the continents. There, at “subduction zones,” the oceanic plates slide stiffly beneath the continental plates, triggering earthquakes and carrying water downward, where it melts pockets of the mantle. This melting produces magma that rises to the surface in little-understood fits and starts, causing volcanic eruptions. (Volcanoes also exist far from any plate boundaries, such as in Hawaii and Iceland. Scientists currently explain this by invoking the existence of plumes, which researchers like Walker and Jackson are starting to verify and map using isotope studies.)

    The physical description of the plates finally came together in the late 1960s, Lithgow-Bertelloni said, when the British geophysicist Dan McKenzie and the American Jason Morgan separately proposed a quantitative framework for modeling plate tectonics on a sphere.

    The tectonic plates of the world were mapped in 1996, USGS.

    Other than their existence, almost everything about the plates remains in contention. For instance, what drives their lateral motion? Where do subducted plates end up — perhaps these are the blobs? — and how do they affect Earth’s interior dynamics? Why did Earth’s crust shatter into plates in the first place when no other planetary surface in the solar system did? Also completely mysterious is the two-tier architecture of oceanic and continental plates, and how oceans and continents came to ride on them — all possible prerequisites for intelligent life. Knowing more about how Earth became earthlike could help us understand how common earthlike planets are in the universe and thus how likely life is to arise.

    The continents probably formed, Lithgow-Bertelloni said, as part of the early process by which gravity organized Earth’s contents into concentric layers: Iron and other metals sank to the center, forming the core, while rocky silicates stayed in the mantle. Meanwhile, low-density materials buoyed upward, forming a crust on the surface of the mantle like soup scum. Perhaps this scum accumulated in some places to form continents, while elsewhere oceans materialized.

    Figuring out precisely what happened and the sequence of all of these steps is “more difficult,” Lithgow-Bertelloni said, because they predate the rock record and are “part of the melting process that happens early on in Earth’s history — very early on.”

    Until recently, scientists knew of no geochemical traces from so long ago, and they thought they might never crack open the black box from which Earth’s most glorious features emerged. But the subtle anomalies in tungsten and other isotope concentrations are now providing the first glimpses of the planet’s formation and differentiation. These chemical tracers promise to yield a combination timeline-and-map of early Earth, revealing where its features came from, why, and when.

    A Sketchy Timeline

    Humankind’s understanding of early Earth took its first giant leap when Apollo astronauts brought back rocks from the moon: our tectonic-less companion whose origin was, at the time, a complete mystery.

    The rocks “looked gray, very much like terrestrial rocks,” said Fouad Tera, who analyzed lunar samples at the California Institute of Technology between 1969 and 1976. But because they were from the moon, he said, they created “a feeling of euphoria” in their handlers. Some interesting features did eventually show up: “We found glass spherules — colorful, beautiful — under the microscope, green and yellow and orange and everything,” recalled Tera, now 85. The spherules probably came from fountains that gushed from volcanic vents when the moon was young. But for the most part, he said, “the moon is not really made out of a pleasing thing — just regular things.”

    In hindsight, this is not surprising: Chemical analysis at Caltech and other labs indicated that the moon formed from Earth material, which appears to have gotten knocked into orbit when the 60 to 100 million-year-old proto-Earth collided with another protoplanet in the crowded inner solar system. This “giant impact” hypothesis of the moon’s formation [Science Direct], though still hotly debated [Nature] in its particulars, established a key step on the timeline of the Earth, moon and sun that has helped other steps fall into place.

    5
    A panorama of the Taurus-Littrow Valley created from photographs by Apollo 17 astronaut Eugene Cernan. Astronaut Harrison Schmitt is shown using a rake to collect samples. NASA

    Chemical analysis of meteorites is helping scientists outline even earlier stages of our solar system’s timeline, including the moment it all began.

    First, 4.57 billion years ago, a nearby star went supernova, spewing matter and a shock wave into space. The matter included radioactive elements that immediately began decaying, starting the clocks that isotope chemists now measure with great precision. As the shock wave swept through our cosmic neighborhood, it corralled the local cloud of gas and dust like a broom; the increase in density caused the cloud to gravitationally collapse, forming a brand-new star — our sun — surrounded by a placenta of hot debris.

    Over the next tens of millions of years, the rubble field surrounding the sun clumped into bigger and bigger space rocks, then accreted into planet parts called “planetesimals,” which merged into protoplanets, which became Mercury, Venus, Earth and Mars — the four rocky planets of the inner solar system today. Farther out, in colder climes, gas and ice accreted into the giant planets.

    6
    The planets of the solar system as depicted by a NASA computer illustration. Orbits and sizes are not shown to scale.
    Credit: NASA

    7
    Researchers use liquid chromatography to isolate elements for analysis. Rock samples dissolved in acid flow down ion-exchange columns, like the ones in Rick Carlson’s laboratory at the Carnegie Institution in Washington, to separate the elements. Mary Horan.

    As the infant Earth navigated the crowded inner solar system, it would have experienced frequent, white-hot collisions, which were long assumed to have melted the entire planet into a global “magma ocean.” During these melts, gravity differentiated Earth’s liquefied contents into layers — core, mantle and crust. It’s thought that each of the global melts would have destroyed existing rocks, blending their contents and removing any signs of geochemical differences left over from Earth’s initial building blocks.

    The last of the Earth-melting “giant impacts” appears to have been the one that formed the moon; while subtracting the moon’s mass, the impactor was also the last major addition to Earth’s mass. Perhaps, then, this point on the timeline — at least 60 million years after the birth of the solar system and, counting backward from the present, at most 4.51 billion years ago — was when the geochemical record of the planet’s past was allowed to begin. “It’s at least a compelling idea to think that this giant impact that disrupted a lot of the Earth is the starting time for geochronology,” said Rick Carlson, a geochemist at the Carnegie Institution of Washington. In those first 60 million years, “the Earth may have been here, but we don’t have any record of it because it was just erased.”

    Another discovery from the moon rocks came in 1974. Tera, along with his colleague Dimitri Papanastassiou and their boss, Gerry Wasserburg, a towering figure in isotope cosmochemistry who died in June, combined many isotope analyses of rocks from different Apollo missions on a single plot, revealing a straight line called an “isochron” that corresponds to time. “When we plotted our data along with everybody else’s, there was a distinct trend that shows you that around 3.9 billion years ago, something massive imprinted on all the rocks on the moon,” Tera said.

    Wasserburg dubbed the event the “lunar cataclysm.” [Science Direct]. Now more often called the “late heavy bombardment,” it was a torrent of asteroids and comets that seems to have battered the moon 3.9 billion years ago, a full 600 million years after its formation, melting and chemically resetting the rocks on its surface. The late heavy bombardment surely would have rained down even more heavily on Earth, considering the planet’s greater size and gravitational pull. Having discovered such a momentous event in solar system history, Wasserburg left his younger, more reserved colleagues behind and “celebrated in Pasadena in some bar,” Tera said.

    As of 1974, no rocks had been found on Earth from the time of the late heavy bombardment. In fact, Earth’s oldest rocks appeared to top out at 3.8 billion years. “That number jumps out at you,” said Bill Bottke, a planetary scientist at the Southwest Research Institute in Boulder, Colorado. It suggests, Bottke said, that the late heavy bombardment might have melted whatever planetary crust existed 3.9 billion years ago, once again destroying the existing geologic record, after which the new crust took 100 million years to harden.

    In 2005, a group of researchers working in Nice, France, conceived of a mechanism to explain the late heavy bombardment — and several other mysteries about the solar system, including the curious configurations of Jupiter, Saturn, Uranus and Neptune, and the sparseness of the asteroid and Kuiper belts. Their “Nice model” [Nature] posits that the gas and ice giants suddenly destabilized in their orbits sometime after formation, causing them to migrate. Simulations by Bottke and others indicate that the planets’ migrations would have sent asteroids and comets scattering, initiating something very much like the late heavy bombardment. Comets that were slung inward from the Kuiper belt during this shake-up might even have delivered water to Earth’s surface, explaining the presence of its oceans.

    With this convergence of ideas, the late heavy bombardment became widely accepted as a major step on the timeline of the early solar system. But it was bad news for earth scientists, suggesting that Earth’s geochemical record began not at the beginning, 4.57 billion years ago, or even at the moon’s beginning, 4.51 billion years ago, but 3.8 billion years ago, and that most or all clues about earlier times were forever lost.

    Extending the Rock Record

    More recently, the late heavy bombardment theory and many other long-standing assumptions about the early history of Earth and the solar system have come into question, and Earth’s dark age has started to come into the light. According to Carlson, “the evidence for this 3.9 [billion-years-ago] event is getting less clear with time.” For instance, when meteorites are analyzed for signs of shock, “they show a lot of impact events at 4.2, 4.4 billion,” he said. “This 3.9 billion event doesn’t show up really strong in the meteorite record.” He and other skeptics of the late heavy bombardment argue that the Apollo samples might have been biased. All the missions landed on the near side of the moon, many in close proximity to the Imbrium basin (the moon’s biggest shadow, as seen from Earth), which formed from a collision 3.9 billion years ago. Perhaps all the Apollo rocks were affected by that one event, which might have dispersed the melt from the impact over a broad swath of the lunar surface. This would suggest a cataclysm that never occurred.

    8
    Lucy Reading-Ikkanda for Quanta Magazine

    Furthermore, the oldest known crust on Earth is no longer 3.8 billion years old. Rocks have been found in two parts of Canada dating to 4 billion and an alleged 4.28 billion years ago, refuting the idea that the late heavy bombardment fully melted Earth’s mantle and crust 3.9 billion years ago. At least some earlier crust survived.

    In 2008, Carlson and collaborators reported the evidence of 4.28 billion-year-old rocks in the Nuvvuagittuq greenstone belt in Canada. When Tim Elliott, a geochemist at the University of Bristol, read about the Nuvvuagittuq findings, he was intrigued to see that Carlson had used a dating method also used in earlier work by French researchers that relied on a short-lived radioactive isotope system called samarium-neodymium. Elliott decided to look for traces of an even shorter-lived system — hafnium-tungsten — in ancient rocks, which would point back to even earlier times in Earth’s history.

    The dating method works as follows: Hafnium-182, the “parent” isotope, has a 50 percent chance of decaying into tungsten-182, its “daughter,” every 9 million years (this is the parent’s “half-life”). The halving quickly reduces the parent to almost nothing; by 50 million years after the supernova that sparked the sun, virtually all the hafnium-182 would have become tungsten-182.
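
    In other words, the surviving fraction of hafnium-182 after a time t is (1/2)^(t / 9 Myr), which collapses quickly, as this small illustrative calculation shows:

```python
# Surviving fraction of hafnium-182 after t million years, given its
# roughly 9-million-year half-life.
HALF_LIFE_MYR = 9.0

def hf182_remaining(t_myr: float) -> float:
    return 0.5 ** (t_myr / HALF_LIFE_MYR)

for t in (9, 18, 27, 50):
    print(f"after {t:>2} Myr: {hf182_remaining(t):.1%} of the Hf-182 remains")
# after  9 Myr: 50.0% ...  after 50 Myr: ~2.1% -- essentially all of it has
# already become tungsten-182, which is why the tungsten record can only
# "see" the first ~50 million years.
```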

    That’s why the tungsten isotope ratio in rocks like Matt Jackson’s olivine samples can be so revealing: Any variation in the concentration of the daughter isotope, tungsten-182, measured relative to tungsten-184 must reflect processes that affected the parent, hafnium-182, when it was around — processes that occurred during the first 50 million years of solar system history. Elliott knew that this kind of geochemical information was previously believed to have been destroyed by early Earth melts and billions of years of subsequent mantle convection. But what if it wasn’t?

    Elliott contacted Stephen Moorbath, then an emeritus professor of geology at the University of Oxford and “one of the grandfather figures in finding the oldest rocks,” Elliott said. Moorbath “was keen, so I took the train up.” Moorbath led Elliott down to the basement of Oxford’s earth science building, where, as in many such buildings, a large collection of rocks shares the space with the boiler and stacks of chairs. Moorbath dug out specimens from the Isua complex in Greenland, an ancient bit of crust that he had pegged, in the 1970s, at 3.8 billion years old.

    Elliott and his student Matthias Willbold powdered and processed the Isua samples and used painstaking chemical methods to extract the tungsten. They then measured the tungsten isotope ratio using state-of-the-art mass spectrometers. In a 2011 Nature paper, Elliott, Willbold and Moorbath, who died in October, reported that the 3.8 billion-year-old Isua rocks contained 15 parts per million more tungsten-182 than the world average — the first ever detection of a “positive” tungsten anomaly on the face of the Earth.
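
    Tungsten anomalies of this kind are typically quoted as parts-per-million deviations of the ¹⁸²W/¹⁸⁴W ratio from a terrestrial reference (the μ¹⁸²W notation). A sketch of the arithmetic, using an illustrative reference ratio rather than the paper’s actual values, shows why such a signal demands state-of-the-art instruments:

```python
# Illustrative arithmetic only; the reference ratio below is a placeholder,
# not the standard used in the 2011 Nature paper.
def mu_182w(ratio_sample: float, ratio_reference: float) -> float:
    """Deviation of a 182W/184W ratio from a reference, in parts per million."""
    return (ratio_sample / ratio_reference - 1.0) * 1e6

ratio_reference = 0.864900                  # placeholder 182W/184W value
ratio_isua = ratio_reference * (1 + 15e-6)  # a +15 ppm excess

print(round(mu_182w(ratio_isua, ratio_reference), 1))   # -> 15.0
# A 15 ppm excess shifts the ratio only in its fifth decimal place, hence
# the painstaking chemistry and high-precision mass spectrometry.
```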

    The paper scooped Richard Walker of Maryland and his colleagues, who months later reported [Science] a positive tungsten anomaly in 2.8 billion-year-old komatiites from Kostomuksha, Russia.

    Although the Isua and Kostomuksha rocks formed on Earth’s surface long after the extinction of hafnium-182, they apparently derive from materials with much older chemical signatures. Walker and colleagues argue that the Kostomuksha rocks must have drawn from hafnium-rich “primordial reservoirs” in the interior that failed to homogenize during Earth’s early mantle melts. The preservation of these reservoirs, which must trace to the first 50 million years and must somehow have survived even the moon-forming impact, “indicates that the mantle may have never been well mixed,” Walker and his co-authors wrote. That raises the possibility of finding many more remnants of Earth’s early history.

    9
    The 60 million-year-old flood basalts of Baffin Bay, Greenland, sampled by the geochemist Hanika Rizo (center) and colleagues, contain isotope traces that originated more than 4.5 billion years ago. Don Francis (left); courtesy of Hanika Rizo (center and right).

    The researchers say they will be able to use tungsten anomalies and other isotope signatures in surface material as tracers of the ancient interior, extrapolating downward and backward into the past to map proto-Earth and reveal how its features took shape. “You’ve got the precision to look and actually see the sequence of events occurring during planetary formation and differentiation,” Carlson said. “You’ve got the ability to interrogate the first tens of millions of years of Earth’s history, unambiguously.”

    Anomalies have continued to show up in rocks of various ages and provenances. In May, Hanika Rizo of the University of Quebec in Montreal, along with Walker, Jackson and collaborators, reported in Science the first positive tungsten anomaly in modern rocks — 62 million-year-old samples from Baffin Bay, Greenland. Rizo hypothesizes that these rocks were brought up by a plume that draws from one of the “blobs” deep down near Earth’s core. If the blobs are indeed rich in tungsten-182, then they are not tectonic-plate graveyards as many geophysicists suspect, but instead date to the planet’s infancy. Rizo speculates that they are chunks of the planetesimals that collided to form Earth, and that the chunks somehow stayed intact in the process. “If you have many collisions,” she said, “then you have the potential to create this patchy mantle.” Early Earth’s interior, in that case, looked nothing like the primordial magma ocean pictured in textbooks.

    More evidence for the patchiness of the interior has surfaced. At the American Geophysical Union meeting earlier this month, Walker’s group reported [2016 AGU Fall Meeting] a negative tungsten anomaly — that is, a deficit of tungsten-182 relative to tungsten-184 — in basalts from Hawaii and Samoa. This and other isotope concentrations in the rocks suggest the hypothetical plumes that produced them might draw from a primordial pocket of metals, including tungsten-184. Perhaps these metals failed to get sucked into the core during planet differentiation.

    10
    Tim Elliott collecting samples of ancient crust rock in Yilgarn Craton in Western Australia. Tony Kemp

    Meanwhile, Elliott explains the positive tungsten anomalies in ancient crust rocks like his 3.8 billion-year-old Isua samples by hypothesizing that these rocks might have hardened on the surface before the final half-percent of Earth’s mass — delivered to the planet in a long tail of minor impacts — mixed into them. These late impacts, known as the “late veneer,” would have added metals like gold, platinum and tungsten (mostly tungsten-184) to Earth’s mantle, reducing the relative concentration of tungsten-182. Rocks that got to the surface early might therefore have ended up with positive tungsten anomalies.
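    The logic of that dilution can be put into a rough two-component mixing sketch. The numbers below are assumptions chosen to illustrate the passage: a pre-veneer mantle sitting 15 parts per million high, veneer tungsten at a roughly chondritic composition (about 190 parts per million below the modern terrestrial ratio), and a veneer share of the mantle's tungsten budget picked so that the mixture lands near zero.

```python
# Illustrative two-component mixing: adding "late veneer" tungsten with a
# chondritic, 182W-poor composition pulls a pre-veneer mantle anomaly down
# toward today's value. All inputs are assumptions, not measured values.

def mixed_anomaly(mu_pre_veneer, mu_veneer, veneer_w_fraction):
    """Tungsten anomaly (ppm) after mixing, where veneer_w_fraction is the
    share of the mantle's tungsten budget supplied by the late veneer."""
    f = veneer_w_fraction
    return (1 - f) * mu_pre_veneer + f * mu_veneer

mu_isua_like = +15.0    # ppm, a pre-veneer mantle like the Isua source
mu_chondrite = -190.0   # ppm, roughly chondritic relative to the modern Earth
f_veneer     = 0.07     # assumed veneer share of mantle tungsten

print(f"mixed mantle anomaly = {mixed_anomaly(mu_isua_like, mu_chondrite, f_veneer):+.1f} ppm")
# -> roughly +0.7 ppm, essentially the modern value; rocks that hardened
#    before the veneer mixed in keep the +15 ppm signal.
```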

    Other evidence complicates this hypothesis, however — namely, the concentrations of gold and platinum in the Isua rocks match world averages, suggesting at least some late veneer material did mix into them. So far, there’s no coherent framework that accounts for all the data. But this is the “discovery phase,” Carlson said, rather than a time for grand conclusions. As geochemists gradually map the plumes and primordial reservoirs throughout Earth from core to crust, hypotheses will be tested and a narrative about Earth’s formation will gradually crystallize.

    Elliott is working to test his late-veneer hypothesis. Temporarily trading his mass spectrometer for a sledgehammer, he collected a series of crust rocks in Australia that range from 3 billion to 3.75 billion years old. By tracking the tungsten isotope ratio through the ages, he hopes to pinpoint the time when the mantle that produced the crust became fully mixed with late-veneer material.

    “These things never work out that simply,” Elliott said. “But you always start out with the simplest idea and see how it goes.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 7:05 pm on November 30, 2016 Permalink | Reply
    Tags: , , , , , , Quanta Magazine,   

    From Quanta: “The Case Against Dark Matter” 

    Quanta Magazine
    Quanta Magazine

    November 29, 2016
    Natalie Wolchover

    1
    Erik Verlinde
    Ilvy Njiokiktjien for Quanta Magazine

    For 80 years, scientists have puzzled over the way galaxies and other cosmic structures appear to gravitate toward something they cannot see. This hypothetical “dark matter” seems to outweigh all visible matter by a startling ratio of five to one, suggesting that we barely know our own universe. Thousands of physicists are doggedly searching for these invisible particles.

    But the dark matter hypothesis assumes scientists know how matter in the sky ought to move in the first place. This month, a series of developments has revived a long-disfavored argument that dark matter doesn’t exist after all. In this view, no missing matter is needed to explain the errant motions of the heavenly bodies; rather, on cosmic scales, gravity itself works in a different way than either Isaac Newton or Albert Einstein predicted.

    The latest attempt to explain away dark matter is a much-discussed proposal by Erik Verlinde, a theoretical physicist at the University of Amsterdam who is known for bold and prescient, if sometimes imperfect, ideas. In a dense 51-page paper posted online on Nov. 7, Verlinde casts gravity as a byproduct of quantum interactions and suggests that the extra gravity attributed to dark matter is an effect of “dark energy” — the background energy woven into the space-time fabric of the universe.

    Instead of hordes of invisible particles, “dark matter is an interplay between ordinary matter and dark energy,” Verlinde said.

    To make his case, Verlinde has adopted a radical perspective on the origin of gravity that is currently in vogue among leading theoretical physicists. Einstein defined gravity as the effect of curves in space-time created by the presence of matter. According to the new approach, gravity is an emergent phenomenon. Space-time and the matter within it are treated as a hologram that arises from an underlying network of quantum bits (called “qubits”), much as the three-dimensional environment of a computer game is encoded in classical bits on a silicon chip. Working within this framework, Verlinde traces dark energy to a property of these underlying qubits that supposedly encode the universe. On large scales in the hologram, he argues, dark energy interacts with matter in just the right way to create the illusion of dark matter.

    In his calculations, Verlinde rediscovered the equations of “modified Newtonian dynamics,” or MOND. This 30-year-old theory makes an ad hoc tweak to the famous “inverse-square” law of gravity in Newton’s and Einstein’s theories in order to explain some of the phenomena attributed to dark matter. That this ugly fix works at all has long puzzled physicists. “I have a way of understanding the MOND success from a more fundamental perspective,” Verlinde said.

    Many experts have called Verlinde’s paper compelling but hard to follow. While it remains to be seen whether his arguments will hold up to scrutiny, the timing is fortuitous. In a new analysis of galaxies published on Nov. 9 in Physical Review Letters, three astrophysicists led by Stacy McGaugh of Case Western Reserve University in Cleveland, Ohio, have strengthened MOND’s case against dark matter.

    The researchers analyzed a diverse set of 153 galaxies, and for each one they compared the rotation speed of visible matter at any given distance from the galaxy’s center with the amount of visible matter contained within that galactic radius. Remarkably, these two variables were tightly linked in all the galaxies by a universal law, dubbed the “radial acceleration relation.” This makes perfect sense in the MOND paradigm, since visible matter is the exclusive source of the gravity driving the galaxy’s rotation (even if that gravity does not take the form prescribed by Newton or Einstein). With such a tight relationship between gravity felt by visible matter and gravity given by visible matter, there would seem to be no room, or need, for dark matter.
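    The relation the team fit is a one-parameter curve tying the observed centripetal acceleration to the acceleration expected from visible matter alone. The sketch below uses the commonly quoted functional form and acceleration scale from that analysis; treat both as illustrative rather than definitive.

```python
import math

# Sketch of the "radial acceleration relation": observed centripetal
# acceleration g_obs as a function of the acceleration g_bar expected from
# visible (baryonic) matter alone, using the commonly quoted fitting form.

G_DAGGER = 1.2e-10  # m/s^2, the fitted acceleration scale

def g_observed(g_bar, g_dagger=G_DAGGER):
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / g_dagger)))

for g_bar in (1e-8, 1e-10, 1e-12):
    ratio = g_observed(g_bar) / g_bar
    print(f"g_bar = {g_bar:.0e} m/s^2 -> g_obs/g_bar = {ratio:.2f}")
# High accelerations: g_obs is essentially g_bar (ordinary Newtonian behavior).
# Low accelerations: g_obs exceeds g_bar, the extra pull usually credited to dark matter.
```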

    Even as dark matter proponents rise to its defense, a third challenge has materialized. In new research that has been presented at seminars and is under review by the Monthly Notices of the Royal Astronomical Society, a team of Dutch astronomers has conducted what they call the first test of Verlinde’s theory: In comparing his formulas to data from more than 30,000 galaxies, Margot Brouwer of Leiden University in the Netherlands and her colleagues found that Verlinde correctly predicts the gravitational distortion or “lensing” of light from the galaxies — another phenomenon that is normally attributed to dark matter. This is somewhat to be expected, as MOND’s original developer, the Israeli astrophysicist Mordehai Milgrom, showed years ago that MOND accounts for gravitational lensing data. Verlinde’s theory will need to succeed at reproducing dark matter phenomena in cases where the old MOND failed.

    Kathryn Zurek, a dark matter theorist at Lawrence Berkeley National Laboratory, said Verlinde’s proposal at least demonstrates how something like MOND might be right after all. “One of the challenges with modified gravity is that there was no sensible theory that gives rise to this behavior,” she said. “If [Verlinde’s] paper ends up giving that framework, then that by itself could be enough to breathe more life into looking at [MOND] more seriously.”

    The New MOND

    In Newton’s and Einstein’s theories, the gravitational attraction of a massive object drops in proportion to the square of the distance away from it. This means stars orbiting around a galaxy should feel less gravitational pull — and orbit more slowly — the farther they are from the galactic center. Stars’ velocities do drop as predicted by the inverse-square law in the inner galaxy, but instead of continuing to drop as they get farther away, their velocities level off beyond a certain point. The “flattening” of galaxy rotation speeds, discovered by the astronomer Vera Rubin in the 1970s, is widely considered to be Exhibit A in the case for dark matter — explained, in that paradigm, by dark matter clouds or “halos” that surround galaxies and give an extra gravitational acceleration to their outlying stars.
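    A quick back-of-the-envelope calculation shows the size of the discrepancy. For a star orbiting outside most of a galaxy's visible mass, Newtonian gravity predicts a circular speed of v = sqrt(G*M/r), so the speed should fall as one over the square root of the radius. The galaxy mass below is an assumed round number, not a measurement.

```python
import math

# Why flat rotation curves are surprising: for a circular orbit outside most
# of a galaxy's visible mass, Newtonian gravity gives v = sqrt(G*M/r), so
# doubling the radius should cut the speed by about 29 percent.

G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
KPC = 3.086e19           # m
M_visible = 1e11 * M_SUN # assumed enclosed visible mass, Milky-Way-like

def v_circular_kms(r_kpc):
    return math.sqrt(G * M_visible / (r_kpc * KPC)) / 1e3

for r in (10, 20, 40):
    print(f"r = {r:>2} kpc -> predicted v = {v_circular_kms(r):.0f} km/s")
# Prediction: ~207, ~147, ~104 km/s, a steady decline.
# Observation: measured speeds level off instead, which is the gap that
# dark matter halos (or modified gravity) are invoked to fill.
```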

    Searches for dark matter particles have proliferated — with hypothetical “weakly interacting massive particles” (WIMPs) and lighter-weight “axions” serving as prime candidates — but so far, experiments have found nothing.

    2
    Lucy Reading-Ikkanda for Quanta Magazine

    Meanwhile, in the 1970s and 1980s, some researchers, including Milgrom, took a different tack. Many early attempts at tweaking gravity were easy to rule out, but Milgrom found a winning formula: When the gravitational acceleration felt by a star drops below a certain level — precisely 0.00000000012 meters per second per second, or 100 billion times weaker than we feel on the surface of the Earth — he postulated that gravity somehow switches from an inverse-square law to something close to an inverse-distance law. “There’s this magic scale,” McGaugh said. “Above this scale, everything is normal and Newtonian. Below this scale is where things get strange. But the theory does not really specify how you get from one regime to the other.”
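    In the deep-MOND limit this postulate has a clean consequence: the effective acceleration tends toward the geometric mean of the Newtonian value and Milgrom's scale, so the predicted circular speed levels off at a constant set by v^4 = G*M*a0. A minimal check, reusing the same assumed galaxy mass as in the Newtonian sketch above:

```python
# Milgrom's postulate in its simplest ("deep-MOND") limit: once the Newtonian
# acceleration g_N drops below a0, the effective acceleration tends to
# sqrt(g_N * a0), falling like 1/r instead of 1/r^2. For a circular orbit
# that gives a constant speed, v^4 = G * M * a0.

G = 6.674e-11
M_SUN = 1.989e30
A0 = 1.2e-10                 # m/s^2, Milgrom's acceleration scale
M_visible = 1e11 * M_SUN     # same assumed visible mass as above

v_flat = (G * M_visible * A0) ** 0.25
print(f"asymptotic flat rotation speed = {v_flat / 1e3:.0f} km/s")
# ~200 km/s from visible matter alone, roughly what is measured in the
# outskirts of large spirals -- MOND's signature success.
```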

    Physicists do not like magic; when other cosmological observations seemed far easier to explain with dark matter than with MOND, they left the approach for dead. Verlinde’s theory revitalizes MOND by attempting to reveal the method behind the magic.

    Verlinde, ruddy and fluffy-haired at 54 and lauded for highly technical string theory calculations, first jotted down a back-of-the-envelope version of his idea in 2010. It built on a famous paper he had written months earlier, in which he boldly declared that gravity does not really exist. By weaving together numerous concepts and conjectures at the vanguard of physics, he had concluded that gravity is an emergent thermodynamic effect, related to increasing entropy (or disorder). Then, as now, experts were uncertain what to make of the paper, though it inspired fruitful discussions.

    The particular brand of emergent gravity in Verlinde’s paper turned out not to be quite right, but he was tapping into the same intuition that led other theorists to develop the modern holographic description of emergent gravity and space-time — an approach that Verlinde has now absorbed into his new work.

    In this framework, bendy, curvy space-time and everything in it is a geometric representation of pure quantum information — that is, data stored in qubits. Unlike classical bits, qubits can exist simultaneously in two states (0 and 1) with varying degrees of probability, and they become “entangled” with each other, such that the state of one qubit determines the state of the other, and vice versa, no matter how far apart they are. Physicists have begun to work out the rules by which the entanglement structure of qubits mathematically translates into an associated space-time geometry. An array of qubits entangled with their nearest neighbors might encode flat space, for instance, while more complicated patterns of entanglement give rise to matter particles such as quarks and electrons, whose mass causes the space-time to be curved, producing gravity. “The best way we understand quantum gravity currently is this holographic approach,” said Mark Van Raamsdonk, a physicist at the University of British Columbia in Vancouver who has done influential work on the subject.
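    The entanglement being described can be made concrete with the simplest two-qubit example, a Bell pair: each qubit looks random on its own, yet the two measurement outcomes always agree, no matter the separation. A minimal numpy sketch of the joint probabilities:

```python
import numpy as np

# The simplest entangled two-qubit state (a Bell pair): an equal superposition
# of "both qubits 0" and "both qubits 1". Either qubit alone gives a random
# result, but the two results never disagree.

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)   # amplitudes for |00> and |11>

probabilities = np.abs(bell) ** 2
for outcome, p in zip(["00", "01", "10", "11"], probabilities):
    print(f"P({outcome}) = {p:.2f}")
# P(00) = 0.50, P(01) = 0.00, P(10) = 0.00, P(11) = 0.50:
# individually random, perfectly correlated.
```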

    The mathematical translations are rapidly being worked out for holographic universes with an Escher-esque space-time geometry known as anti-de Sitter (AdS) space, but universes like ours, which have de Sitter geometries, have proved far more difficult. In his new paper, Verlinde speculates that it’s exactly the de Sitter property of our native space-time that leads to the dark matter illusion.

    De Sitter space-times like ours stretch as you look far into the distance. For this to happen, space-time must be infused with a tiny amount of background energy — often called dark energy — which drives space-time apart from itself. Verlinde models dark energy as a thermal energy, as if our universe has been heated to an excited state. (AdS space, by contrast, is like a system in its ground state.) Verlinde associates this thermal energy with long-range entanglement between the underlying qubits, as if they have been shaken up, driving entangled pairs far apart. He argues that this long-range entanglement is disrupted by the presence of matter, which essentially removes dark energy from the region of space-time that it occupied. The dark energy then tries to move back into this space, exerting a kind of elastic response on the matter that is equivalent to a gravitational attraction.

    Because of the long-range nature of the entanglement, the elastic response becomes increasingly important in larger volumes of space-time. Verlinde calculates that it will cause galaxy rotation curves to start deviating from Newton’s inverse-square law at exactly the magic acceleration scale pinpointed by Milgrom in his original MOND theory.
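    Part of what makes a cosmological origin for that scale tempting is an old numerical coincidence: Milgrom's acceleration is close to the speed of light times the Hubble constant divided by 2*pi. A quick arithmetic check, with an assumed round value for the Hubble constant:

```python
import math

# Back-of-the-envelope check of the coincidence that motivates tying
# Milgrom's scale to cosmology: a0 is close to c*H0/(2*pi). The Hubble
# constant here is an assumed round number.

C = 2.998e8                        # m/s
H0 = 70 * 1e3 / 3.086e22           # 70 km/s/Mpc converted to 1/s
a_cosmic = C * H0 / (2 * math.pi)

print(f"c*H0/(2*pi) = {a_cosmic:.2e} m/s^2")   # ~1.1e-10
print("Milgrom's a0 ~ 1.2e-10 m/s^2")
# The two agree to within about 10 percent, which is suggestive but not,
# by itself, an explanation.
```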

    Van Raamsdonk calls Verlinde’s idea “definitely an important direction.” But he says it’s too soon to tell whether everything in the paper — which draws from quantum information theory, thermodynamics, condensed matter physics, holography and astrophysics — hangs together. Either way, Van Raamsdonk said, “I do find the premise interesting, and feel like the effort to understand whether something like that could be right could be enlightening.”

    One problem, said Brian Swingle of Harvard and Brandeis universities, who also works in holography, is that Verlinde lacks a concrete model universe like the ones researchers can construct in AdS space, giving him more wiggle room for making unproven speculations. “To be fair, we’ve gotten further by working in a more limited context, one which is less relevant for our own gravitational universe,” Swingle said, referring to work in AdS space. “We do need to address universes more like our own, so I hold out some hope that his new paper will provide some additional clues or ideas going forward.”


    Access mp4 video here .

    The Case for Dark Matter

    Verlinde could be capturing the zeitgeist the way his 2010 entropic-gravity paper did. Or he could be flat-out wrong. The question is whether his new and improved MOND can reproduce phenomena that foiled the old MOND and bolstered belief in dark matter.

    One such phenomenon is the Bullet cluster, a galaxy cluster in the process of colliding with another.

    4
    X-ray photo by Chandra X-ray Observatory of the Bullet Cluster (1E0657-56). Exposure time was 0.5 million seconds (~140 hours) and the scale is shown in megaparsecs. Redshift (z) = 0.3, meaning its light has wavelengths stretched by a factor of 1.3. Based on today’s theories this shows the cluster to be about 4 billion light years away.
    In this photograph, a rapidly moving galaxy cluster with a shock wave trailing behind it seems to have hit another cluster at high speed. The gases collide, and the gravitational fields of the stars and galaxies interact. When the galaxies collided, based on black-body temperature readings, the temperature reached 160 million degrees and X-rays were emitted with great intensity, earning it the title of the hottest known galaxy cluster.
    Studies of the Bullet cluster, announced in August 2006, provide the best evidence to date for the existence of dark matter.
    http://cxc.harvard.edu/symposium_2005/proceedings/files/markevitch_maxim.pdf
    User:Mac_Davis

    5
    Superimposed mass density contours, caused by gravitational lensing of dark matter. Photograph taken with Hubble Space Telescope.
    Date 22 August 2006
    http://cxc.harvard.edu/symposium_2005/proceedings/files/markevitch_maxim.pdf
    User:Mac_Davis

    The visible matter in the two clusters crashes together, but gravitational lensing suggests that a large amount of dark matter, which does not interact with visible matter, has passed right through the crash site. Some physicists consider this indisputable proof of dark matter. However, Verlinde thinks his theory will be able to handle the Bullet cluster observations just fine. He says dark energy’s gravitational effect is embedded in space-time and is less deformable than matter itself, which would have allowed the two to separate during the cluster collision.

    But the crowning achievement for Verlinde’s theory would be to account for the suspected imprints of dark matter in the cosmic microwave background (CMB), ancient light that offers a snapshot of the infant universe.

    CMB per ESA/Planck
    CMB per ESA/Planck

    The snapshot reveals the way matter at the time repeatedly contracted due to its gravitational attraction and then expanded due to self-collisions, producing a series of peaks and troughs in the CMB data. Because dark matter does not interact, it would only have contracted without ever expanding, and this would modulate the amplitudes of the CMB peaks in exactly the way that scientists observe. One of the biggest strikes against the old MOND was its failure to predict this modulation and match the peaks’ amplitudes. Verlinde expects that his version will work — once again, because matter and the gravitational effect of dark energy can separate from each other and exhibit different behaviors. “Having said this,” he said, “I have not calculated this all through.”

    While Verlinde confronts these and a handful of other challenges, proponents of the dark matter hypothesis have some explaining of their own to do when it comes to McGaugh and his colleagues’ recent findings about the universal relationship between galaxy rotation speeds and their visible matter content.

    In October, responding to a preprint of the paper by McGaugh and his colleagues, two teams of astrophysicists independently argued that the dark matter hypothesis can account for the observations. They say the amount of dark matter in a galaxy’s halo would have precisely determined the amount of visible matter the galaxy ended up with when it formed. In that case, galaxies’ rotation speeds, even though they’re set by dark matter and visible matter combined, will exactly correlate with either their dark matter content or their visible matter content (since the two are not independent). However, computer simulations of galaxy formation do not currently indicate that galaxies’ dark and visible matter contents will always track each other. Experts are busy tweaking the simulations, but Arthur Kosowsky of the University of Pittsburgh, one of the researchers working on them, says it’s too early to tell if the simulations will be able to match all 153 examples of the universal law in McGaugh and his colleagues’ galaxy data set. If not, then the standard dark matter paradigm is in big trouble. “Obviously this is something that the community needs to look at more carefully,” Zurek said.

    Even if the simulations can be made to match the data, McGaugh, for one, considers it an implausible coincidence that dark matter and visible matter would conspire to exactly mimic the predictions of MOND at every location in every galaxy. “If somebody were to come to you and say, ‘The solar system doesn’t work on an inverse-square law, really it’s an inverse-cube law, but there’s dark matter that’s arranged just so that it always looks inverse-square,’ you would say that person is insane,” he said. “But that’s basically what we’re asking to be the case with dark matter here.”

    Given the considerable indirect evidence and near consensus among physicists that dark matter exists, it still probably does, Zurek said. “That said, you should always check that you’re not on a bandwagon,” she added. “Even though this paradigm explains everything, you should always check that there isn’t something else going on.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 7:18 am on September 16, 2016 Permalink | Reply
    Tags: , Quanta Magazine,   

    From Quanta: “The Strange Second Life of String Theory” 

    Quanta Magazine
    Quanta Magazine

    September 15, 2016
    K.C. Cole

    String theory has so far failed to live up to its promise as a way to unite gravity and quantum mechanics.
    At the same time, it has blossomed into one of the most useful sets of tools in science.

    1
    Renee Rominger/Moonrise Whims for Quanta Magazine

    String theory strutted onto the scene some 30 years ago as perfection itself, a promise of elegant simplicity that would solve knotty problems in fundamental physics — including the notoriously intractable mismatch between Einstein’s smoothly warped space-time and the inherently jittery, quantized bits of stuff that made up everything in it.

    It seemed, to paraphrase Michael Faraday, much too wonderful not to be true: Simply replace infinitely small particles with tiny (but finite) vibrating loops of string. The vibrations would sing out quarks, electrons, gluons and photons, as well as their extended families, producing in harmony every ingredient needed to cook up the knowable world. Avoiding the infinitely small meant avoiding a variety of catastrophes. For one, quantum uncertainty couldn’t rip space-time to shreds. At last, it seemed, here was a workable theory of quantum gravity.

    Even more beautiful than the story told in words was the elegance of the math behind it, which had the power to make some physicists ecstatic.

    To be sure, the theory came with unsettling implications. The strings were too small to be probed by experiment and lived in as many as 11 dimensions of space. These dimensions were folded in on themselves — or “compactified” — into complex origami shapes. No one knew just how the dimensions were compactified — the possibilities for doing so appeared to be endless — but surely some configuration would turn out to be just what was needed to produce familiar forces and particles.

    For a time, many physicists believed that string theory would yield a unique way to combine quantum mechanics and gravity. “There was a hope. A moment,” said David Gross, an original player in the so-called Princeton String Quartet, a Nobel Prize winner and permanent member of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. “We even thought for a while in the mid-’80s that it was a unique theory.”

    And then physicists began to realize that the dream of one singular theory was an illusion. The complexities of string theory, all the possible permutations, refused to reduce to a single one that described our world. “After a certain point in the early ’90s, people gave up on trying to connect to the real world,” Gross said. “The last 20 years have really been a great extension of theoretical tools, but very little progress on understanding what’s actually out there.”

    Many, in retrospect, realized they had raised the bar too high. Coming off the momentum of completing the solid and powerful “standard model” of particle physics in the 1970s, they hoped the story would repeat — only this time on a mammoth, all-embracing scale. “We’ve been trying to aim for the successes of the past where we had a very simple equation that captured everything,” said Robbert Dijkgraaf, the director of the Institute for Advanced Study in Princeton, New Jersey. “But now we have this big mess.”

    Like many a maturing beauty, string theory has gotten rich in relationships, complicated, hard to handle and widely influential. Its tentacles have reached so deeply into so many areas in theoretical physics, it’s become almost unrecognizable, even to string theorists. “Things have gotten almost postmodern,” said Dijkgraaf, who is a painter as well as mathematical physicist.

    The mathematics that have come out of string theory have been put to use in fields such as cosmology and condensed matter physics — the study of materials and their properties. It’s so ubiquitous that “even if you shut down all the string theory groups, people in condensed matter, people in cosmology, people in quantum gravity will do it,” Dijkgraaf said.

    “It’s hard to say really where you should draw the boundary around and say: This is string theory; this is not string theory,” said Douglas Stanford, a physicist at the IAS. “Nobody knows whether to say they’re a string theorist anymore,” said Chris Beem, a mathematical physicist at the University of Oxford. “It’s become very confusing.”

    String theory today looks almost fractal. The more closely people explore any one corner, the more structure they find. Some dig deep into particular crevices; others zoom out to try to make sense of grander patterns. The upshot is that string theory today includes much that no longer seems stringy. Those tiny loops of string whose harmonics were thought to breathe form into every particle and force known to nature (including elusive gravity) hardly even appear anymore on chalkboards at conferences. At last year’s big annual string theory meeting, the Stanford University string theorist Eva Silverstein was amused to find she was one of the few giving a talk “on string theory proper,” she said. A lot of the time she works on questions related to cosmology.

    Even as string theory’s mathematical tools get adopted across the physical sciences, physicists have been struggling with how to deal with the central tension of string theory: Can it ever live up to its initial promise? Could it ever give researchers insight into how gravity and quantum mechanics might be reconciled — not in a toy universe, but in our own?

    “The problem is that string theory exists in the landscape of theoretical physics,” said Juan Maldacena, a mathematical physicist at the IAS and perhaps the most prominent figure in the field today. “But we still don’t know yet how it connects to nature as a theory of gravity.” Maldacena now acknowledges the breadth of string theory, and its importance to many fields of physics — even those that don’t require “strings” to be the fundamental stuff of the universe — when he defines string theory as “Solid Theoretical Research in Natural Geometric Structures.”

    An Explosion of Quantum Fields

    One high point for string theory as a theory of everything came in the late 1990s, when Maldacena revealed that a string theory including gravity in five dimensions was equivalent to a quantum field theory in four dimensions. This “AdS/CFT” duality appeared to provide a map for getting a handle on gravity — the most intransigent piece of the puzzle — by relating it to good old well-understood quantum field theory.

    This correspondence was never thought to be a perfect real-world model. The five-dimensional space in which it works has an “anti-de Sitter” geometry, a strange M.C. Escher-ish landscape that is not remotely like our universe.

    But researchers were surprised when they dug deep into the other side of the duality. Most people took for granted that quantum field theories — “bread and butter physics,” Dijkgraaf calls them — were well understood and had been for half a century. As it turned out, Dijkgraaf said, “we only understand them in a very limited way.”

    These quantum field theories were developed in the 1950s to unify special relativity and quantum mechanics. They worked well enough for long enough that it didn’t much matter that they broke down at very small scales and high energies. But today, when physicists revisit “the part you thought you understood 60 years ago,” said Nima Arkani-Hamed, a physicist at the IAS, you find “stunning structures” that came as a complete surprise. “Every aspect of the idea that we understood quantum field theory turns out to be wrong. It’s a vastly bigger beast.”

    Researchers have developed a huge number of quantum field theories in the past decade or so, each used to study different physical systems. Beem suspects there are quantum field theories that can’t be described even in terms of quantum fields. “We have opinions that sound as crazy as that, in large part, because of string theory.”

    This virtual explosion of new kinds of quantum field theories is eerily reminiscent of physics in the 1930s, when the unexpected appearance of a new kind of particle — the muon — led a frustrated I.I. Rabi to ask: “Who ordered that?” The flood of new particles was so overwhelming by the 1950s that it led Enrico Fermi to grumble: “If I could remember the names of all these particles, I would have been a botanist.”

    Physicists began to see their way through the thicket of new particles only when they found the more fundamental building blocks making them up, like quarks and gluons. Now many physicists are attempting to do the same with quantum field theory. In their attempts to make sense of the zoo, many learn all they can about certain exotic species.

    Conformal field theories (the right hand of AdS/CFT) are a starting point. In the simplest type of conformal field theory, you start with a version of quantum field theory where “the interactions between the particles are turned off,” said David Simmons-Duffin, a physicist at the IAS. If these specific kinds of field theories could be understood perfectly, answers to deep questions might become clear. “The idea is that if you understand the elephant’s feet really, really well, you can interpolate in between and figure out what the whole thing looks like.”

    Like many of his colleagues, Simmons-Duffin says he’s a string theorist mostly in the sense that it’s become an umbrella term for anyone doing fundamental physics in underdeveloped corners. He’s currently focusing on a physical system that’s described by a conformal field theory but has nothing to do with strings. In fact, the system is water at its “critical point,” where the distinction between gas and liquid disappears. It’s interesting because water’s behavior at the critical point is a complicated emergent system that arises from something simpler. As such, it could hint at dynamics behind the emergence of quantum field theories.

    Beem focuses on supersymmetric field theories, another toy model, as physicists call these deliberate simplifications. “We’re putting in some unrealistic features to make them easier to handle,” he said. Specifically, they are amenable to tractable mathematics, which “makes it so a lot of things are calculable.”

    Toy models are standard tools in most kinds of research. But there’s always the fear that what one learns from a simplified scenario does not apply to the real world. “It’s a bit of a deal with the devil,” Beem said. “String theory is a much less rigorously constructed set of ideas than quantum field theory, so you have to be willing to relax your standards a bit,” he said. “But you’re rewarded for that. It gives you a nice, bigger context in which to work.”

    It’s the kind of work that makes people such as Sean Carroll, a theoretical physicist at the California Institute of Technology, wonder if the field has strayed too far from its early ambitions — to find, if not a “theory of everything,” at least a theory of quantum gravity. “Answering deep questions about quantum gravity has not really happened,” he said. “They have all these hammers and they go looking for nails.” That’s fine, he said, even acknowledging that generations might be needed to develop a new theory of quantum gravity. “But it isn’t fine if you forget that, ultimately, your goal is describing the real world.”

    It’s a question he has asked his friends. Why are they investigating detailed quantum field theories? “What’s the aspiration?” he asks. Their answers are logical, he says, but steps removed from developing a true description of our universe.

    Instead, he’s looking for a way to “find gravity inside quantum mechanics.” A paper he recently wrote with colleagues claims to take steps toward just that. It does not involve string theory.

    The Broad Power of Strings

    Perhaps the field that has gained the most from the flowering of string theory is mathematics itself. Sitting on a bench beside the IAS pond while watching a blue heron saunter in the reeds, Clay Córdova, a researcher there, explained how what seemed like intractable problems in mathematics were solved by imagining how the question might look to a string. For example, how many spheres could fit inside a Calabi-Yau manifold — the complex folded shape expected to describe how spacetime is compactified? Mathematicians had been stuck. But a two-dimensional string can wiggle around in such a complex space. As it wiggled, it could grasp new insights, like a mathematical multidimensional lasso. This was the kind of physical thinking Einstein was famous for: thought experiments about riding along with a light beam revealed E=mc2. Imagining falling off a building led to his biggest eureka moment of all: Gravity is not a force; it’s a property of space-time.

    2
    The amplituhedron is a multi-dimensional object that can be used to calculate particle interactions. Physicists such as Chris Beem are applying techniques from string theory in special geometries where “the amplituhedron is its best self,” he says. Nima Arkani-Hamed

    Using the physical intuition offered by strings, physicists produced a powerful formula for getting the answer to the embedded sphere question, and much more. “They got at these formulas using tools that mathematicians don’t allow,” Córdova said. Then, after string theorists found an answer, the mathematicians proved it on their own terms. “This is a kind of experiment,” he explained. “It’s an internal mathematical experiment.” Not only was the stringy solution not wrong, it led to Fields Medal-winning mathematics. “This keeps happening,” he said.

    String theory has also made essential contributions to cosmology. The role that string theory has played in thinking about mechanisms behind the inflationary expansion of the universe — the moments immediately after the Big Bang, where quantum effects met gravity head on — is “surprisingly strong,” said Silverstein, even though no strings are attached.

    Still, Silverstein and colleagues have used string theory to discover, among other things, ways to see potentially observable signatures of various inflationary ideas. The same insights could have been found using quantum field theory, she said, but they weren’t. “It’s much more natural in string theory, with its extra structure.”

    Inflationary models get tangled in string theory in multiple ways, not least of which is the multiverse — the idea that ours is one of a perhaps infinite number of universes, each created by the same mechanism that begat our own. Between string theory and cosmology, the idea of an infinite landscape of possible universes became not just acceptable, but even taken for granted by a large number of physicists. The selection effect, Silverstein said, would be one quite natural explanation for why our world is the way it is: In a very different universe, we wouldn’t be here to tell the story.

    This effect could be one answer to a big problem string theory was supposed to solve. As Gross put it: “What picks out this particular theory” — the Standard Model — from the “plethora of infinite possibilities?”

    Silverstein thinks the selection effect is actually a good argument for string theory. The infinite landscape of possible universes can be directly linked to “the rich structure that we find in string theory,” she said — the innumerable ways that string theory’s multidimensional space-time can be folded in upon itself.

    Building the New Atlas

    At the very least, the mature version of string theory — with its mathematical tools that let researchers view problems in new ways — has provided powerful new methods for seeing how seemingly incompatible descriptions of nature can both be true. The discovery of dual descriptions of the same phenomenon pretty much sums up the history of physics. A century and a half ago, James Clerk Maxwell saw that electricity and magnetism were two sides of a coin. Quantum theory revealed the connection between particles and waves. Now physicists have strings.

    “Once the elementary things we’re probing spaces with are strings instead of particles,” said Beem, the strings “see things differently.” If it’s too hard to get from A to B using quantum field theory, reimagine the problem in string theory, and “there’s a path,” Beem said.

    In cosmology, string theory “packages physical models in a way that’s easier to think about,” Silverstein said. It may take centuries to tie together all these loose strings to weave a coherent picture, but young researchers like Beem aren’t bothered a bit. His generation never thought string theory was going to solve everything. “We’re not stuck,” he said. “It doesn’t feel like we’re on the verge of getting it all sorted, but I know more each day than I did the day before – and so presumably we’re getting somewhere.”

    Stanford thinks of it as a big crossword puzzle. “It’s not finished, but as you start solving, you can tell that it’s a valid puzzle,” he said. “It’s passing consistency checks all the time.”

    “Maybe it’s not even possible to capture the universe in one easily defined, self-contained form, like a globe,” Dijkgraaf said, sitting in Robert Oppenheimer’s many-windowed office from when he was Einstein’s boss, looking over the vast lawn at the IAS, the pond and the woods in the distance. Einstein, too, tried and failed to find a theory of everything, and it takes nothing away from his genius.

    “Perhaps the true picture is more like the maps in an atlas, each offering very different kinds of information, each spotty,” Dijkgraaf said. “Using the atlas will require that physics be fluent in many languages, many approaches, all at the same time. Their work will come from many different directions, perhaps far-flung.”

    He finds it “totally disorienting” and also “fantastic.”

    Arkani-Hamed believes we are in the most exciting epoch of physics since quantum mechanics appeared in the 1920s. But nothing will happen quickly. “If you’re excited about responsibly attacking the very biggest existential physics questions ever, then you should be excited,” he said. “But if you want a ticket to Stockholm for sure in the next 15 years, then probably not.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     