Tagged: Quanta Magazine

  • richardmitnick 4:21 pm on August 4, 2017 Permalink | Reply
    Tags: Quanta Magazine

    From Quanta: “Scientists Unveil a New Inventory of the Universe’s Dark Contents” 

    Quanta Magazine

    August 3, 2017
    Natalie Wolchover

    In a much-anticipated analysis of its first year of data, the Dark Energy Survey (DES) telescope experiment has gauged the amount of dark energy and dark matter in the universe by measuring the clumpiness of galaxies — a rich and, so far, barely tapped source of information that many see as the future of cosmology.

    Dark Energy Survey

    Dark Energy Camera [DECam], built at FNAL

    NOAO/CTIO Victor M. Blanco 4-meter Telescope at Cerro Tololo, Chile, which houses DECam at an altitude of 7,200 feet

    The analysis, posted on DES’s website today and based on observations of 26 million galaxies in a large swath of the southern sky, tweaks estimates only a little. It draws the pie chart of the universe as 74 percent dark energy and 21 percent dark matter, with galaxies and all other visible matter — everything currently known to physicists — filling the remaining 5 percent sliver.

    The results are based on data from the telescope’s first observing season, which began in August 2013 and lasted six months. Since then, three more rounds of data collection have passed; the experiment begins its fifth and final planned observing season this month. As the 400-person team analyzes more of this data in the coming years, they’ll begin to test theories about the nature of the two invisible substances that dominate the cosmos — particularly dark energy, “which is what we’re ultimately going after,” said Joshua Frieman, co-founder and director of DES and an astrophysicist at Fermi National Accelerator Laboratory (Fermilab) and the University of Chicago. Already, with their first-year data, the experimenters have incrementally improved the measurement of a key quantity that will reveal what dark energy is.

    Both terms — dark energy and dark matter — are mental placeholders for unknown physics. “Dark energy” refers to whatever is causing the expansion of the universe to accelerate, as astronomers first discovered it to be doing in 1998. And great clouds of missing “dark matter” have been inferred from 80 years of observations of their apparent gravitational effect on visible matter (though whether dark matter consists of actual particles or something else, nobody knows).

    The balance of the two unknown substances sculpts the distribution of galaxies. “As the universe evolves, the gravity of dark matter is making it more clumpy, but dark energy makes it less clumpy because it’s pushing galaxies away from each other,” Frieman said. “So the present clumpiness of the universe is telling us about that cosmic tug-of-war between dark matter and dark energy.”

    The Dark Energy Survey uses a 570-megapixel camera mounted on the Victor M. Blanco Telescope in Chile. The camera is made up of 74 individual light-gathering wafers.

    A Dark Map

    Until now, the best way to inventory the cosmos has been to look at the Cosmic Microwave Background [CMB]: pristine light from the infant universe that has long served as a wellspring of information for cosmologists, but which — after the Planck space telescope mapped it in breathtakingly high resolution in 2013 — has less and less to offer.

    CMB per ESA/Planck


    Cosmic microwaves come from the farthest point that can be seen in every direction, providing a 2-D snapshot of the universe at a single moment in time, 380,000 years after the Big Bang (the cosmos was dark before that). Planck’s map of this light shows an extremely homogeneous young universe, with subtle density variations that grew into the galaxies and voids that fill the universe today.

    Galaxies, after undergoing billions of years of evolution, are more complex and harder to glean information from than the cosmic microwave background, but according to experts, they will ultimately offer a richer picture of the universe’s governing laws since they span the full three-dimensional volume of space. “There’s just a lot more information in a 3-D volume than on a 2-D surface,” said Scott Dodelson, co-chair of the DES science committee and an astrophysicist at Fermilab and the University of Chicago.

    To obtain that information, the DES team scrutinized a section of the universe spanning 1,300 square degrees of sky — the total area of 6,500 full moons — and stretching back 8 billion years (the data were collected by the half-billion-pixel Dark Energy Camera mounted on the Victor M. Blanco Telescope in Chile). They statistically analyzed the separations between galaxies in this cosmic volume. They also examined the distortion in the galaxies’ apparent shapes — an effect known as “weak gravitational lensing” that indicates how much space-warping dark matter lies between the galaxies and Earth. These two probes — galaxy clustering and weak lensing — are two of the four approaches that DES will eventually use to inventory the cosmos. Already, the survey’s measurements are more precise than those of any previous galaxy survey, and for the first time, they rival Planck’s.
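As a quick back-of-the-envelope check (not from the article, and assuming the conventional figure of about half a degree for the full Moon's angular diameter), the full-moon comparison can be verified in a few lines:

```python
import math

# Assumed: the full Moon subtends roughly 0.5 degrees, so its disk covers
# about pi * (0.25)**2 ~ 0.196 square degrees.
moon_area_sq_deg = math.pi * 0.25**2

survey_area_sq_deg = 1300.0
moons = survey_area_sq_deg / moon_area_sq_deg
print(round(moons))  # ~6,600, consistent with the article's "6,500 full moons"
```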


    “This is entering a new era of cosmology from galaxy surveys,” Frieman said. With DES’s first-year data, “galaxy surveys have now caught up to the cosmic microwave background in terms of probing cosmology. That’s really exciting because we’ve got four more years where we’re going to go deeper and cover a larger area of the sky, so we know our error bars are going to shrink.”

    For cosmologists, the key question was whether DES’s new cosmic pie chart based on galaxy surveys would differ from estimates of dark energy and dark matter inferred from Planck’s map of the cosmic microwave background. Comparing the two would reveal whether cosmologists correctly understand how the universe evolved from its early state to its present one. “Planck measures how much dark energy there should be” at present by extrapolating from its state at 380,000 years old, Dodelson said. “We measure how much there is.”

    The DES scientists spent six months processing their data without looking at the results along the way — a safeguard against bias — then “unblinded” the results during a July 7 video conference. After team leaders went through a final checklist, a member of the team ran a computer script to generate the long-awaited plot: DES’s measurement of the fraction of the universe that’s matter (dark and visible combined), displayed together with the older estimate from Planck. “We were all watching his computer screen at the same time; we all saw the answer at the same time. That’s about as dramatic as it gets,” said Gary Bernstein, an astrophysicist at the University of Pennsylvania and co-chair of the DES science committee.

    Planck pegged matter at 33 percent of the cosmos today, plus or minus two or three percentage points. When DES’s plots appeared, applause broke out as the bull’s-eye of the new matter measurement centered on 26 percent, with error bars that were similar to, but barely overlapped with, Planck’s range.

    “We saw they didn’t quite overlap,” Bernstein said. “But everybody was just excited to see that we got an answer, first, that wasn’t insane, and which was an accurate answer compared to before.”

    Statistically speaking, there’s only a slight tension between the two results: Considering their uncertainties, the 26 and 33 percent appraisals are between 1 and 1.5 standard deviations or “sigma” apart, whereas in modern physics you need a five-sigma discrepancy to claim a discovery. The mismatch stands out to the eye, but for now, Frieman and his team consider their galaxy results to be consistent with expectations based on the cosmic microwave background. Whether the hint of a discrepancy strengthens or vanishes as more data accumulate will be worth watching as the DES team embarks on its next analysis, expected to cover its first three years of data.
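A rough sketch of how such a tension between two independent measurements is quantified, in combined standard deviations. The central values follow the article; the error bars here are illustrative assumptions, not the published DES or Planck uncertainties:

```python
import math

def tension_sigma(x1, sigma1, x2, sigma2):
    """Separation of two independent Gaussian measurements, in combined sigma."""
    return abs(x1 - x2) / math.sqrt(sigma1**2 + sigma2**2)

# Planck: ~33% matter; DES: ~26% matter. The sigmas below are assumed
# for illustration only.
t = tension_sigma(33.0, 2.5, 26.0, 4.0)
print(f"{t:.1f} sigma")  # ~1.5 sigma for these assumed uncertainties
```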

    If the possible discrepancy between the cosmic-microwave and galaxy measurements turns out to be real, it could create enough of a tension to lead to the downfall of the “Lambda-CDM model” of cosmology, the standard theory of the universe’s evolution. Lambda-CDM is in many ways a simple model that starts with Albert Einstein’s general theory of relativity, then bolts on dark energy and dark matter. A replacement for Lambda-CDM might help researchers uncover the quantum theory of gravity that presumably underlies everything else.

    What Is Dark Energy?

    According to Lambda-CDM, dark energy is the “cosmological constant,” represented by the Greek letter lambda (Λ) in Einstein’s theory; it’s the energy that infuses space itself, when you get rid of everything else. This energy has negative pressure, which pushes space away and causes it to expand. New dark energy arises in the newly formed spatial fabric, so that the density of dark energy always remains constant, even as the total amount of it relative to dark matter increases over time, causing the expansion of the universe to speed up.

    The universe’s expansion is indeed accelerating, as two teams of astronomers discovered in 1998 by observing light from distant supernovas. The discovery, which earned the leaders of the two teams the 2011 Nobel Prize in physics, suggested that the cosmological constant has a positive but “mystifyingly tiny” value, Bernstein said. “There’s no good theory that explains why it would be so tiny.” (This is the “cosmological constant problem” that has inspired anthropic reasoning and the dreaded multiverse hypothesis.)

    On the other hand, dark energy could be something else entirely. Frieman, whom colleagues jokingly refer to as a “fallen theorist,” studied alternative models of dark energy before co-founding DES in 2003 in hopes of testing his and other researchers’ ideas. The leading alternative theory envisions dark energy as a field that pervades space, similar to the “inflaton field” that most cosmologists think drove the explosive inflation of the universe during the Big Bang. The slowly diluting energy of the inflaton field would have exerted a negative pressure that expanded space, and Frieman and others have argued that dark energy might be a similar field that is dynamically evolving today.

    DES’s new analysis incrementally improves the measurement of a parameter that distinguishes between these two theories — the cosmological constant on the one hand, and a slowly changing energy field on the other. If dark energy is the cosmological constant, then the ratio of its negative pressure and density has to be fixed at −1. Cosmologists call this ratio w. If dark energy is an evolving field, then its density would change over time relative to its pressure, and w would be different from −1.

    Remarkably, DES’s first-year data, when combined with previous measurements, pegs w’s value at −1, plus or minus roughly 0.04. However, the present level of accuracy still isn’t enough to tell if we’re dealing with a cosmological constant rather than a dynamic field, which could have w within a hair of −1. “That means we need to keep going,” Frieman said.
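The distinction can be made concrete with the standard cosmology result that a dark-energy density with constant equation of state w scales with the cosmic scale factor a as ρ ∝ a^(−3(1+w)). A cosmological constant (w = −1) keeps the density fixed forever; any other w makes it evolve. This is a textbook sketch, not a DES calculation:

```python
def dark_energy_density(a, w, rho0=1.0):
    """Dark-energy density vs. scale factor a for constant equation of state w.

    Standard FRW scaling: rho(a) = rho0 * a**(-3 * (1 + w)).
    """
    return rho0 * a ** (-3.0 * (1.0 + w))

# Cosmological constant: the density stays fixed as the universe doubles in size.
print(dark_energy_density(2.0, w=-1.0))   # 1.0

# An evolving field with w just off -1 (within DES's +/-0.04 band) dilutes slowly.
print(dark_energy_density(2.0, w=-0.96))  # ~0.92
```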

    The DES scientists will tighten the error bars around w in their next analysis, slated for release next year; they’ll also measure the change in w over time, by probing its value at different cosmic distances. (Light takes time to reach us, so distant galaxies reveal the universe’s past). If dark energy is the cosmological constant, the change in w will be zero. A nonzero measurement would suggest otherwise.

    Larger galaxy surveys might be needed to definitively measure w and the other cosmological parameters. In the early 2020s, the ambitious Large Synoptic Survey Telescope (LSST) will start collecting light from 20 billion galaxies and other cosmological objects, creating a high-resolution map of the universe’s clumpiness that will yield a big jump in accuracy.


    LSST Camera, built at SLAC

    LSST telescope, currently under construction on Cerro Pachón, a 2,682-meter-high mountain in Chile’s Coquimbo Region, alongside the existing Gemini South and Southern Astrophysical Research telescopes.

    The data might confirm that we occupy a Lambda-CDM universe, infused with an inexplicably tiny cosmological constant and full of dark matter whose nature remains elusive. But Frieman doesn’t discount the possibility of discovering that dark energy is an evolving quantum field, which would invite a deeper understanding by going beyond Einstein’s theory and tying cosmology to quantum physics.

    “With these surveys — DES and LSST that comes after it — the prospects are quite bright,” Dodelson said. “It is more complicated to analyze these things because the cosmic microwave background is simpler, and that is good for young people in the field because there’s a lot of work to do.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 11:07 am on June 11, 2017 Permalink | Reply
    Tags: Bell test, Cosmic Bell test, Experiment Reaffirms Quantum Weirdness, John Bell, Quanta Magazine, Superdeterminism

    From Quanta: “Experiment Reaffirms Quantum Weirdness” 

    Quanta Magazine

    February 7, 2017 [I wonder where this was hiding. It just appeared today in social media.]
    Natalie Wolchover

    Physicists are closing the door on an intriguing loophole around the quantum phenomenon Einstein called “spooky action at a distance.”

    Olena Shmahalo/Quanta Magazine

    There might be no getting around what Albert Einstein called “spooky action at a distance.” With an experiment described today in Physical Review Letters — a feat that involved harnessing starlight to control measurements of particles shot between buildings in Vienna — some of the world’s leading cosmologists and quantum physicists are closing the door on an intriguing alternative to “quantum entanglement.”

    “Technically, this experiment is truly impressive,” said Nicolas Gisin, a quantum physicist at the University of Geneva who has studied this loophole around entanglement.


    According to standard quantum theory, particles have no definite states, only relative probabilities of being one thing or another — at least, until they are measured, when they seem to suddenly roll the dice and jump into formation. Stranger still, when two particles interact, they can become “entangled,” shedding their individual probabilities and becoming components of a more complicated probability function that describes both particles together. This function might specify that two entangled photons are polarized in perpendicular directions, with some probability that photon A is vertically polarized and photon B is horizontally polarized, and some chance of the opposite. The two photons can travel light-years apart, but they remain linked: Measure photon A to be vertically polarized, and photon B instantaneously becomes horizontally polarized, even though B’s state was unspecified a moment earlier and no signal has had time to travel between them. This is the “spooky action” that Einstein was famously skeptical about in his arguments against the completeness of quantum mechanics in the 1930s and ’40s.

    In 1964, the Northern Irish physicist John Bell found a way to put this paradoxical notion to the test. He showed that if particles have definite states even when no one is looking (a concept known as “realism”) and if indeed no signal travels faster than light (“locality”), then there is an upper limit to the amount of correlation that can be observed between the measured states of two particles. But experiments have shown time and again that entangled particles are more correlated than Bell’s upper limit, favoring the radical quantum worldview over local realism.
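Bell's limit is usually tested in its CHSH form: local realism caps a particular combination of correlations, S, at 2, while quantum mechanics predicts values up to 2√2 ≈ 2.83. For polarization-entangled photons the quantum correlation at analyzer angles a and b is cos 2(a − b). This is a textbook sketch of that prediction, not the Vienna experiment's analysis:

```python
import math

def E(a_deg, b_deg):
    """Quantum correlation for polarization-entangled photons at analyzer angles a, b."""
    return math.cos(math.radians(2 * (a_deg - b_deg)))

# Standard CHSH angle choices that maximize the quantum violation.
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(S)  # 2*sqrt(2) ~ 2.828, exceeding the local-realist bound of 2
```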

    Only there’s a hitch: In addition to locality and realism, Bell made another, subtle assumption to derive his formula — one that went largely ignored for decades. “The three assumptions that go into Bell’s theorem that are relevant are locality, realism and freedom,” said Andrew Friedman of the Massachusetts Institute of Technology, a co-author of the new paper. “Recently it’s been discovered that you can keep locality and realism by giving up just a little bit of freedom.” This is known as the “freedom-of-choice” loophole.

    In a Bell test, entangled photons A and B are separated and sent to far-apart optical modulators — devices that either block photons or let them through to detectors, depending on whether the modulators are aligned with or against the photons’ polarization directions. Bell’s inequality puts an upper limit on how often, in a local-realistic universe, photons A and B will both pass through their modulators and be detected. (Researchers find that entangled photons are correlated more often than this, violating the limit.) Crucially, Bell’s formula assumes that the two modulators’ settings are independent of the states of the particles being tested. In experiments, researchers typically use random-number generators to set the devices’ angles of orientation. However, if the modulators are not actually independent — if nature somehow restricts the possible settings that can be chosen, correlating these settings with the states of the particles in the moments before an experiment occurs — this reduced freedom could explain the outcomes that are normally attributed to quantum entanglement.

    The universe might be like a restaurant with 10 menu items, Friedman said. “You think you can order any of the 10, but then they tell you, ‘We’re out of chicken,’ and it turns out only five of the things are really on the menu. You still have the freedom to choose from the remaining five, but you were overcounting your degrees of freedom.” Similarly, he said, “there might be unknowns, constraints, boundary conditions, conservation laws that could end up limiting your choices in a very subtle way” when setting up an experiment, leading to seeming violations of local realism.

    This possible loophole gained traction in 2010, when Michael Hall, now of Griffith University in Australia, developed a quantitative way of reducing freedom of choice [Phys.Rev.Lett.]. In Bell tests, measuring devices have two possible settings (corresponding to one bit of information: either 1 or 0), and so it takes two bits of information to specify their settings when they are truly independent. But Hall showed that if the settings are not quite independent — if only one bit specifies them once in every 22 runs — this halves the number of possible measurement settings available in those 22 runs. This reduced freedom of choice correlates measurement outcomes enough to exceed Bell’s limit, creating the illusion of quantum entanglement.

    The idea that nature might restrict freedom while maintaining local realism has become more attractive in light of emerging connections between information and the geometry of space-time. Research on black holes, for instance, suggests that the stronger the gravity in a volume of space-time, the fewer bits can be stored in that region. Could gravity be reducing the number of possible measurement settings in Bell tests, secretly striking items from the universe’s menu?

    Members of the cosmic Bell test team calibrating the telescope used to choose the settings of one of their two detectors located in far-apart buildings in Vienna. Jason Gallicchio

    Friedman, Alan Guth and colleagues at MIT were entertaining such speculations a few years ago when Anton Zeilinger, a famous Bell test experimenter at the University of Vienna, came for a visit.

    Alan Guth, Highland Park High School and M.I.T., who first proposed cosmic inflation


    Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe). Date: 2010. Credit: Alex Mittelmann, Coldcreation

    Alan Guth’s notes. http://www.bestchinanews.com/Explore/4730.html

    Zeilinger also had his sights on the freedom-of-choice loophole. Together, they and their collaborators developed an idea for how to distinguish between a universe that lacks local realism and one that curbs freedom.

    In the first of a planned series of “cosmic Bell test” experiments, the team sent pairs of photons from the roof of Zeilinger’s lab in Vienna through the open windows of two other buildings and into optical modulators, tallying coincident detections as usual. But this time, they attempted to lower the chance that the modulator settings might somehow become correlated with the states of the photons in the moments before each measurement. They pointed a telescope out of each window, trained each telescope on a bright and conveniently located (but otherwise random) star, and, before each measurement, used the color of an incoming photon from each star to set the angle of the associated modulator. The colors of these photons were decided hundreds of years ago, when they left their stars, increasing the chance that they (and therefore the measurement settings) were independent of the states of the photons being measured.

    And yet, the scientists found that the measurement outcomes still violated Bell’s upper limit, boosting their confidence that the polarized photons in the experiment exhibit spooky action at a distance after all.

    Nature could still exploit the freedom-of-choice loophole, but the universe would have had to delete items from the menu of possible measurement settings at least 600 years before the measurements occurred (when the closer of the two stars sent its light toward Earth). “Now one needs the correlations to have been established even before Shakespeare wrote, ‘Until I know this sure uncertainty, I’ll entertain the offered fallacy,’” Hall said.

    Next, the team plans to use light from increasingly distant quasars to control their measurement settings, probing further back in time and giving the universe an even smaller window to cook up correlations between future device settings and restrict freedoms. It’s also possible (though extremely unlikely) that the team will find a transition point where measurement settings become uncorrelated and violations of Bell’s limit disappear — which would prove that Einstein was right to doubt spooky action.

    “For us it seems like kind of a win-win,” Friedman said. “Either we close the loophole more and more, and we’re more confident in quantum theory, or we see something that could point toward new physics.”

    There’s a final possibility that many physicists abhor. It could be that the universe restricted freedom of choice from the very beginning — that every measurement was predetermined by correlations established at the Big Bang. “Superdeterminism,” as this is called, is “unknowable,” said Jan-Åke Larsson, a physicist at Linköping University in Sweden; the cosmic Bell test crew will never be able to rule out correlations that existed before there were stars, quasars or any other light in the sky. That means the freedom-of-choice loophole can never be completely shut.

    But given the choice between quantum entanglement and superdeterminism, most scientists favor entanglement — and with it, freedom. “If the correlations are indeed set [at the Big Bang], everything is preordained,” Larsson said. “I find it a boring worldview. I cannot believe this would be true.”

    See the full article here.

  • richardmitnick 2:16 pm on May 16, 2017 Permalink | Reply
    Tags: Quanta Magazine, Tim Maudlin

    From Quanta: “A Defense of the Reality of Time” Tim Maudlin 

    Quanta Magazine

    May 16, 2017
    George Musser

    Tim Maudlin. Edwin Tse for Quanta Magazine

    Time isn’t just another dimension, argues Tim Maudlin. To make his case, he’s had to reinvent geometry.

    Physicists and philosophers seem to like nothing more than telling us that everything we thought about the world is wrong. They take a peculiar pleasure in exposing common sense as nonsense. But Tim Maudlin thinks our direct impressions of the world are a better guide to reality than we have been led to believe.

    Not that he thinks they always are. Maudlin, who is a professor at New York University and one of the world’s leading philosophers of physics, made his name studying the strange behavior of “entangled” quantum particles, which display behavior that is as counterintuitive as can be; if anything, he thinks physicists have downplayed how transformative entanglement is.

    Quantum entanglement. ATCA

    At the same time, though, he thinks physicists can be too hasty to claim that our conventional views are misguided, especially when it comes to the nature of time.

    He defends a homey and unfashionable view of time. It has a built-in arrow. It is fundamental rather than derived from some deeper reality. Change is real, as opposed to an illusion or an artifact of perspective. The laws of physics act within time to generate each moment. Mixing mathematics, physics and philosophy, Maudlin bats away the reasons that scientists and philosophers commonly give for denying this folk wisdom.

    The mathematical arguments are the target of his current project, the second volume of New Foundations for Physical Geometry (the first appeared in 2014). Modern physics, he argues, conceptualizes time in essentially the same way as space. Space, as we commonly understand it, has no innate direction — it is isotropic. When we apply spatial intuitions to time, we unwittingly assume that time has no intrinsic direction, either. New Foundations rethinks topology in a way that allows for a clearer distinction between time and space. Conventionally, topology — the first level of geometrical structure — is defined using open sets, which describe the neighborhood of a point in space or time. “Open” means a region has no sharp edge; every point in the set is surrounded by other points in the same set.

    Maudlin proposes instead to base topology on lines. He sees this as closer to our everyday geometrical intuitions, which are formed by thinking about motion. And he finds that, to match the results of standard topology, the lines need to be directed, just as time is. Maudlin’s approach differs from other approaches that extend standard topology to endow geometry with directionality; it is not an extension, but a rethinking that builds in directionality at the ground level.

    Maudlin discussed his ideas with Quanta Magazine in March. Here is a condensed and edited version of the interview.

    Why might one think that time has a direction to it? That seems to go counter to what physicists often say.

    I think that’s a little bit backwards. Go to the man on the street and ask whether time has a direction, whether the future is different from the past, and whether time doesn’t march on toward the future. That’s the natural view. The more interesting view is how the physicists manage to convince themselves that time doesn’t have a direction.

    They would reply that it’s a consequence of Einstein’s special theory of relativity, which holds that time is a fourth dimension.

    This notion that time is just a fourth dimension is highly misleading. In special relativity, the time directions are structurally different from the space directions. In the timelike directions, you have a further distinction into the future and the past, whereas any spacelike direction I can continuously rotate into any other spacelike direction. The two classes of timelike directions can’t be continuously transformed into one another.

    Standard geometry just wasn’t developed for the purpose of doing space-time. It was developed for the purpose of just doing spaces, and spaces have no directedness in them. And then you took this formal tool that you developed for this one purpose and then pushed it to this other purpose.
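Maudlin's point about the light cone can be made concrete with the Minkowski interval. Using the signature (+, −, −, −), a direction is timelike, spacelike or lightlike according to the sign of the interval, and the timelike directions split further into future and past, two classes no continuous rotation connects. This is a standard special-relativity sketch, not something from the interview:

```python
def classify(t, x, y, z):
    """Classify a Minkowski 4-vector with signature (+, -, -, -), in units with c = 1."""
    s2 = t * t - x * x - y * y - z * z
    if s2 > 0:
        # Timelike directions come in two disconnected classes: future and past.
        return "future-timelike" if t > 0 else "past-timelike"
    if s2 < 0:
        # All spacelike directions can be continuously rotated into one another.
        return "spacelike"
    return "lightlike"

print(classify(1, 0.5, 0, 0))    # future-timelike
print(classify(-1, 0.5, 0, 0))   # past-timelike
print(classify(0.5, 1, 0, 0))    # spacelike
```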

    When relativity was developed in the early part of the 20th century, did people begin to see this problem?

    I don’t think they saw it as a problem. The development was highly algebraic, and the more algebraic the technique, the further you get from having a geometrical intuition about what you’re doing. So if you develop the standard account of, say, the metric of space-time, and then you ask, “Well, what happens if I start putting negative numbers in this thing?” That’s a perfectly good algebraic question to ask. It’s not so clear what it means geometrically. And people do the same thing now when they say, “Well, what if time had two dimensions?” As a purely algebraic question, I can say that. But if you ask me what could it mean, physically, for time to have two dimensions, I haven’t the vaguest idea. Is it consistent with the nature of time that it be a two-dimensional thing? Because if you think that what time does is order events, then that order is a linear order, and you’re talking about a fundamentally one-dimensional kind of organization.

    And so you are trying to allow for the directionality of time by rethinking geometry. How does that work?

    I really was not starting from physics. I was starting from just trying to understand topology. When you teach, you’re forced to confront your own ignorance. I was trying to explain standard topology to some students when I was teaching a class on space and time, and I realized that I didn’t understand it. I couldn’t see the connection between the technical machinery and the concepts that I was using.

    Suppose I just hand you a bag of points. It doesn’t have a geometry. So I have to add some structure to give it anything that is recognizably geometrical. In the standard approach, I specify which sets of points are open sets. In my approach, I specify which sets of points are lines.

    How does this differ from ordinary geometry taught in high school?

    In this approach that’s based on lines, a very natural thing to do is to put directionality on the lines. It’s very easy to implement at the level of axioms. If you’re doing Euclidean geometry, this isn’t going to occur to you, because your idea in Euclidean geometry is if I have a continuous line from A to B, it’s just as well a continuous line from B to A — that there’s no directionality in a Euclidean line.

    From the pure mathematical point of view, why might your approach be preferable?

    In my approach, you put down a linear structure on a set of points. If you put down lines according to my axioms, there’s then a natural definition of an open set, and it generates a topology.

    Another important conceptual advantage is that there’s no problem thinking of a line that’s discrete. People form lines where there are only finitely many people, and you can talk about who’s the next person in line, and who’s the person behind them, and so on. The notion of a line is neutral between it being discrete and being continuous. So you have this general approach.
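The discrete case Maudlin mentions can be sketched as a tiny data structure: a directed line over finitely many points, where each point except the last has a well-defined successor, and reversing the order yields a genuinely different line. This is only an illustration of the informal idea, not Maudlin's formal axioms; the class and point names are invented for the example:

```python
class DirectedLine:
    """A finite directed line: an ordered sequence of distinct points."""

    def __init__(self, points):
        self.points = list(points)

    def next_after(self, p):
        """The next point in line after p, or None if p is last."""
        i = self.points.index(p)
        return self.points[i + 1] if i + 1 < len(self.points) else None

    def reversed(self):
        # Directionality is intrinsic: the reverse is a *different* line.
        return DirectedLine(reversed(self.points))

queue = DirectedLine(["Ann", "Bo", "Cy"])
print(queue.next_after("Ann"))              # Bo
print(queue.reversed().next_after("Ann"))   # None: Ann is last in the reversed line
```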

    Why is this kind of modification important for physics?

    As soon as you start talking about space-time, the idea that time has a directionality is obviously something we begin with. There’s a tremendous difference between the past and the future. And so, as soon as you start to think geometrically of space-time, of something that has temporal characteristics, a natural thought is that you are thinking of something that does now have an intrinsic directionality. And if your basic geometrical objects can have directionality, then you can use them to represent this physical directionality.

    Physicists have other arguments for why time doesn’t have a direction.

    Often one will hear that there’s a time-reversal symmetry in the laws. But the normal way you describe a time-reversal symmetry presupposes there’s a direction of time. Someone will say the following: “According to Newtonian physics, if the glass can fall off the table and smash on the floor, then it’s physically possible for the shards on the floor to be pushed by the concerted effort of the floor, recombine into the glass and jump back up on the table.” That’s true. But notice, both of those descriptions are ones that presuppose there’s a direction of time. That is, they presuppose that there’s a difference between the glass falling and the glass jumping, and there’s a difference between the glass shattering and the glass recombining. And the difference between those two is always which direction is the future, and which direction is the past.

    So I’m certainly not denying that there is this time-reversibility. But the time-reversibility doesn’t imply that there isn’t a direction of time. It just says that for every event that the laws of physics allow, there is a corresponding event in which various things have been reversed, velocities have been reversed and so on. But in both of these cases, you think of them as allowing a process that’s running forward in time.

    Now that raises a puzzle: Why do we often see the one kind of thing and not the other kind of thing? And that’s the puzzle about thermodynamics and entropy and so on.

    If time has a direction, is the thermodynamic arrow of time still a problem?

    The problem there isn’t with the arrow. The problem is with understanding why things started out in a low-entropy state. Once you have that it starts in a low-entropy state, the normal thermodynamic arguments lead you to expect that most of the possible initial states are going to yield an increasing entropy. So the question is, why did things start out so low entropy?

    One choice is that the universe is only finite in time and had an initial state, and then there’s the question: “Can you explain why the initial state was low?” which is a subpart of the question, “Can you explain an initial state at all?” It didn’t come out of anything, so what would it mean to explain it in the first place?

    The other possibility is that there was something before the big bang. If you imagine the big bang is the bubbling-off of this universe from some antecedent proto-universe or from chaotically inflating space-time, then there’s going to be the physics of that bubbling-off, and you would hope the physics of the bubbling-off might imply that the bubbles would be of a certain character.

    Given that we still need to explain the initial low-entropy state, why do we need the internal directedness of time? If time didn’t have a direction, wouldn’t specification of a low-entropy state be enough to give it an effective direction?

    If time didn’t have a direction, it seems to me that would make time into just another spatial dimension, and if all we’ve got are spatial dimensions, then it seems to me nothing’s happening in the universe. I can imagine a four-dimensional spatial object, but nothing occurs in it. This is the way people often talk about the, quote, “block universe” as being fixed or rigid or unchanging or something like that, because they’re thinking of it like a four-dimensional spatial object. If you had that, then I don’t see how any initial condition put on it — or any boundary condition put on it; you can’t say “initial” anymore — could create time. How can a boundary condition change the fundamental character of a dimension from spatial to temporal?

    Suppose on one boundary there’s low entropy; from that I then explain everything. You might wonder: “But why that boundary? Why not go from the other boundary, where presumably things are at equilibrium?” The peculiar characteristics at this boundary are not low entropy — there’s high entropy there — but that the microstate is one of the very special ones that leads to a long period of decreasing entropy. Now it seems to me that it has the special microstate because it developed from a low-entropy initial state. But now I’m using “initial” and “final,” and I’m appealing to certain causal notions and productive notions to do the explanatory work. If you don’t have a direction of time to distinguish the initial from the final state and to underwrite these causal locutions, I’m not quite sure how the explanations are supposed to go.

    But all of this seems so — what can I say? It seems so remote from the physical world. We’re sitting here and time is going on, and we know what it means to say that time is going on. I don’t know what it means to say that time really doesn’t pass and it’s only in virtue of entropy increasing that it seems to.

    You don’t sound like much of a fan of the block universe.

    There’s a sense in which I believe a certain understanding of the block universe. I believe that the past is equally real as the present, which is equally real as the future. Things that happened in the past were just as real. Pains in the past were pains, and in the future they’ll be real too, and there was one past and there will be one future. So if that’s all it means to believe in a block universe, fine.

    People often say, “I’m forced into believing in a block universe because of relativity.” The block universe, again, is some kind of rigid structure. The totality of concrete physical reality is specifying that four-dimensional structure and what happens everywhere in it. In Newtonian mechanics, this object is foliated by these planes of absolute simultaneity. And in relativity you don’t have that; you have this light-cone structure instead. So it has a different geometrical character. But I don’t see how that different geometrical character gets rid of time or gets rid of temporality.

    The idea that the block universe is static drives me crazy. What is it to say that something is static? It’s to say that as time goes on, it doesn’t change. But it’s not that the block universe is in time; time is in it. When you say it’s static, it somehow suggests that there is no change, nothing really changes, change is an illusion. It blows your mind. Physics has discovered some really strange things about the world, but it has not discovered that change is an illusion.

    What does it mean for time to pass? Is that synonymous with “time has a direction,” or is there something in addition?

    There’s something in addition. For time to pass means for events to be linearly ordered, by earlier and later. The causal structure of the world depends on its temporal structure. The present state of the universe produces the successive states. To understand the later states, you look at the earlier states and not the other way around. Of course, the later states can give you all kinds of information about the earlier states, and, from the later states and the laws of physics, you can infer the earlier states. But you normally wouldn’t say that the later states explain the earlier states. The direction of causation is also the direction of explanation.

    Am I accurate in getting from you that there’s a generation or production going on here — that there’s a machinery that sits grinding away, one moment giving rise to the next, giving rise to the next?

    Well, that’s certainly a deep part of the picture I have. The machinery is exactly the laws of nature. That gives a constraint on the laws of nature — namely, that they should be laws of temporal evolution. They should be laws that tell you, as time goes on, how will new states succeed old ones. The claim would be there are no fundamental laws that are purely spatial and that where you find spatial regularities, they have temporal explanations.

    Does this lead you to a different view of what a law even is?

    It leads me to a different view than the majority view. I think of laws as having a kind of primitive metaphysical status, that laws are not derivative on anything else. It’s, rather, the other way around: Other things are derivative from, produced by, explained by, derived from the laws operating. And there, the word “operating” has this temporal characteristic.

    Why is yours a minority view? Because it seems to me, if you ask most people on the street what the laws of physics do, they would say, “It’s part of a machinery.”

    I often say my philosophical views are just kind of the naïve views you would have if you took a physics class or a cosmology class and you took seriously what you were being told. In a physics class on Newtonian mechanics, they’ll write down some laws and they’ll say, “Here are the laws of Newtonian mechanics.” That’s really the bedrock from which you begin.

    I don’t think I hold really bizarre views. I take “time doesn’t pass” or “the passage of time is an illusion” to be a pretty bizarre view. Not to say it has to be false, but one that should strike you as not what you thought.

    What does this all have to say about whether time is fundamental or emergent?

    I’ve never been able to quite understand what the emergence of time, in its deeper sense, is supposed to be. The laws are usually differential equations in time. They talk about how things evolve. So if there’s no time, then things can’t evolve. How do we understand — and is the emergence a temporal emergence? It’s like, in a certain phase of the universe, there was no time; and then in other phases, there is time, where it seems as though time emerges temporally out of non-time, which then seems incoherent.

    Where do you stop offering analyses? Where do you stop — where is your spade turned, as Wittgenstein would say? And for me, again, the notion of temporality or of time seems like a very good place to think I’ve hit a fundamental feature of the universe that is not explicable in terms of anything else.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 4:29 pm on January 23, 2017
    Tags: Biophysics, Centrosomes, Earth’s primordial soup, Macromolecules, Protocells?, Quanta Magazine, simple “chemically active” droplets grow to the size of cells and spontaneously divide, The first living cells?, Vestiges of evolutionary history   

    From Quanta: “Dividing Droplets Could Explain Life’s Origin” 

    Quanta Magazine

    January 19, 2017
    Natalie Wolchover

    Researchers have discovered that simple “chemically active” droplets grow to the size of cells and spontaneously divide, suggesting they might have evolved into the first living cells.

    A collaboration of physicists and biologists in Germany has found a simple mechanism that might have enabled liquid droplets to evolve into living cells in early Earth’s primordial soup.

    Origin-of-life researchers have praised the minimalism of the idea. Ramin Golestanian, a professor of theoretical physics at the University of Oxford who was not involved in the research, called it a big achievement that suggests that “the general phenomenology of life formation is a lot easier than one might think.”

    The central question about the origin of life has been how the first cells arose from primitive precursors. What were those precursors, dubbed “protocells,” and how did they come alive? Proponents of the “membrane-first” hypothesis have argued that a fatty-acid membrane was needed to corral the chemicals of life and incubate biological complexity. But how could something as complex as a membrane start to self-replicate and proliferate, allowing evolution to act on it?

    In 1924, Alexander Oparin, the Russian biochemist who first envisioned a hot, briny primordial soup as the source of life’s humble beginnings, proposed that the mystery protocells might have been liquid droplets — naturally forming, membrane-free containers that concentrate chemicals and thereby foster reactions. In recent years, droplets have been found to perform a range of essential functions inside modern cells, reviving Oparin’s long-forgotten speculation about their role in evolutionary history. But neither he nor anyone else could explain how droplets might have proliferated, growing and dividing and, in the process, evolving into the first cells.

    Now, the new work by David Zwicker and collaborators at the Max Planck Institute for the Physics of Complex Systems and the Max Planck Institute of Molecular Cell Biology and Genetics, both in Dresden, suggests an answer. The scientists studied the physics of “chemically active” droplets, which cycle chemicals in and out of the surrounding fluid, and discovered that these droplets tend to grow to cell size and divide, just like cells. This “active droplet” behavior differs from the passive and more familiar tendencies of oil droplets in water, which glom together into bigger and bigger droplets without ever dividing.

    If chemically active droplets can grow to a set size and divide of their own accord, then “it makes it more plausible that there could have been spontaneous emergence of life from nonliving soup,” said Frank Jülicher, a biophysicist in Dresden and a co-author of the new paper.

    The findings, reported in Nature Physics last month, paint a possible picture of life’s start by explaining “how cells made daughters,” said Zwicker, who is now a postdoctoral researcher at Harvard University. “This is, of course, key if you want to think about evolution.”

    Luca Giomi, a theoretical biophysicist at Leiden University in the Netherlands who studies the possible physical mechanisms behind the origin of life, said the new proposal is significantly simpler than other mechanisms of protocell division that have been considered, calling it “a very promising direction.”

    However, David Deamer, a biochemist at the University of California, Santa Cruz, and a longtime champion of the membrane-first hypothesis, argues that while the newfound mechanism of droplet division is interesting, its relevance to the origin of life remains to be seen. The mechanism is a far cry, he noted, from the complicated, multistep process by which modern cells divide.

    Could simple dividing droplets have evolved into the teeming menagerie of modern life, from amoebas to zebras? Physicists and biologists familiar with the new work say it’s plausible. As a next step, experiments are under way in Dresden to try to observe the growth and division of active droplets made of synthetic polymers that are modeled after the droplets found in living cells. After that, the scientists hope to observe biological droplets dividing in the same way.

    Clifford Brangwynne, a biophysicist at Princeton University who was part of the Dresden-based team that identified the first subcellular droplets eight years ago — tiny liquid aggregates of protein and RNA in cells of the worm C. elegans — explained that it would not be surprising if these were vestiges of evolutionary history. Just as mitochondria, organelles that have their own DNA, came from ancient bacteria that infected cells and developed a symbiotic relationship with them, “the condensed liquid phases that we see in living cells might reflect, in a similar sense, a sort of fossil record of the physicochemical driving forces that helped set up cells in the first place,” he said.

    When germline cells in the roundworm C. elegans divide, P granules, shown in green, condense in the daughter cell that will become a viable sperm or egg and dissolve in the other daughter cell. Courtesy of Clifford Brangwynne/Science

    “This Nature Physics paper takes that to the next level,” by revealing the features that droplets would have needed “to play a role as protocells,” Brangwynne added.

    Droplets in Dresden

    The Dresden droplet discoveries began in 2009, when Brangwynne and collaborators demystified the nature of little dots known as “P granules” in C. elegans germline cells, which undergo division into sperm and egg cells. During this division process, the researchers observed that P granules grow, shrink and move across the cells via diffusion. The discovery that they are liquid droplets, reported in Science, prompted a wave of activity as other subcellular structures were also identified as droplets. It didn’t take long for Brangwynne and Tony Hyman, head of the Dresden biology lab where the initial experiments took place, to make the connection to Oparin’s 1924 protocell theory. In a 2012 essay about Oparin’s life and seminal book, The Origin of Life, Brangwynne and Hyman wrote that the droplets he theorized about “may still be alive and well, safe within our cells, like flies in life’s evolving amber.”

    Oparin most famously hypothesized that lightning strikes or geothermal activity on early Earth could have triggered the synthesis of organic macromolecules necessary for life — a conjecture later made independently by the British scientist John Haldane and triumphantly confirmed by the Miller-Urey experiment in the 1950s. Another of Oparin’s ideas, that liquid aggregates of these macromolecules might have served as protocells, was less celebrated, in part because he had no clue as to how the droplets might have reproduced, thereby enabling evolution. The Dresden group studying P granules didn’t know either.

    In the wake of their discovery, Jülicher assigned his new student, Zwicker, the task of unraveling the physics of centrosomes, organelles involved in animal cell division that also seemed to behave like droplets. Zwicker modeled the centrosomes as “out-of-equilibrium” systems that are chemically active, continuously cycling constituent proteins into and out of the surrounding liquid cytoplasm. In his model, these proteins have two chemical states. Proteins in state A dissolve in the surrounding liquid, while those in state B are insoluble, aggregating inside a droplet. Sometimes, proteins in state B spontaneously switch to state A and flow out of the droplet. An energy source can trigger the reverse reaction, causing a protein in state A to overcome a chemical barrier and transform into state B; when this insoluble protein bumps into a droplet, it slinks easily inside, like a raindrop in a puddle. Thus, as long as there’s an energy source, molecules flow in and out of an active droplet. “In the context of early Earth, sunlight would be the driving force,” Jülicher said.

    Zwicker discovered that this chemical influx and efflux will exactly counterbalance each other when an active droplet reaches a certain volume, causing the droplet to stop growing. Typical droplets in Zwicker’s simulations grew to tens or hundreds of microns across depending on their properties — the scale of cells.
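The size selection described above can be illustrated with a toy calculation. This is a deliberately simplified sketch, not the actual reaction-diffusion model from the Nature Physics paper: it assumes only that insoluble B material enters a spherical droplet at a rate proportional to its surface area, while B-to-A conversion removes material throughout the bulk at a rate proportional to its volume. Because volume outgrows area, the two rates cross at a single radius. The coefficients `k_in` and `k_out` are illustrative values, not taken from the paper:

```python
# Toy model of a chemically active droplet: influx scales with surface area,
# efflux (B -> A conversion in the bulk) scales with volume. Setting
# k_in * 4*pi*R^2 = k_out * (4/3)*pi*R^3 gives a stable radius R* = 3*k_in/k_out.
import math

k_in = 3.0   # volume gained per unit surface area per unit time (illustrative)
k_out = 0.1  # fraction of droplet volume converted B -> A per unit time (illustrative)
dt = 0.01    # Euler time step

def step(R):
    """One Euler step of dV/dt = k_in * area - k_out * volume, returned as a radius."""
    area = 4.0 * math.pi * R ** 2
    volume = (4.0 / 3.0) * math.pi * R ** 3
    volume = max(volume + (k_in * area - k_out * volume) * dt, 1e-12)
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

R = 1.0  # start well below the fixed point
for _ in range(50000):
    R = step(R)

print(f"relaxed radius: {R:.1f}, predicted R* = {3.0 * k_in / k_out:.1f}")
```

A droplet below R* grows and one above it shrinks, so the fixed size is stable — which is why, in this picture, it is the separate instability with respect to shape, not size, that ultimately produces division.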

    The next discovery was even more unexpected. Although active droplets have a stable size, Zwicker found that they are unstable with respect to shape: When a surplus of B molecules enters a droplet on one part of its surface, causing it to bulge slightly in that direction, the extra surface area from the bulging further accelerates the droplet’s growth as more molecules can diffuse inside. The droplet elongates further and pinches in at the middle, which has low surface area. Eventually, it splits into a pair of droplets, which then grow to the characteristic size. When Jülicher saw simulations of Zwicker’s equations, “he immediately jumped on it and said, ‘That looks very much like division,’” Zwicker said. “And then this whole protocell idea emerged quickly.”

    Zwicker, Jülicher and their collaborators, Rabea Seyboldt, Christoph Weber and Tony Hyman, developed their theory over the next three years, extending Oparin’s vision. “If you just think about droplets like Oparin did, then it’s not clear how evolution could act on these droplets,” Zwicker said. “For evolution, you have to make copies of yourself with slight modifications, and then natural selection decides how things get more complex.”

    Globule Ancestor

    Last spring, Jülicher began meeting with Dora Tang, head of a biology lab at the Max Planck Institute of Molecular Cell Biology and Genetics, to discuss plans to try to observe active-droplet division in action.

    Tang’s lab synthesizes artificial cells made of polymers, lipids and proteins that resemble biochemical molecules. Over the next few months, she and her team will look for division of liquid droplets made of polymers that are physically similar to the proteins in P granules and centrosomes. The next step, which will be made in collaboration with Hyman’s lab, is to try to observe centrosomes or other biological droplets dividing, and to determine if they utilize the mechanism identified in the paper by Zwicker and colleagues. “That would be a big deal,” said Giomi, the Leiden biophysicist.

    When Deamer, the membrane-first proponent, read the new paper, he recalled having once observed something like the predicted behavior in hydrocarbon droplets he had extracted from a meteorite. When he illuminated the droplets in near-ultraviolet light, they began moving and dividing. (He sent footage of the phenomenon to Jülicher.) Nonetheless, Deamer isn’t convinced of the effect’s significance. “There is no obvious way for the mechanism of division they reported to evolve into the complex process by which living cells actually divide,” he said.

    Other researchers disagree, including Tang. She says that once droplets started to divide, they could easily have gained the ability to transfer genetic information, essentially divvying up a batch of protein-coding RNA or DNA into equal parcels for their daughter cells. If this genetic material coded for useful proteins that increased the rate of droplet division, natural selection would favor the behavior. Protocells, fueled by sunlight and the law of increasing entropy, would gradually have grown more complex.

    Jülicher and colleagues argue that somewhere along the way, protocell droplets could have acquired membranes. Droplets naturally collect crusts of lipids that prefer to lie at the interface between the droplets and the surrounding liquid. Somehow, genes might have started coding for these membranes as a kind of protection. When this idea was put to Deamer, he said, “I can go along with that,” noting that he would define protocells as the first droplets that had membranes.

    The primordial plotline hinges, of course, on the outcome of future experiments, which will determine how robust and relevant the predicted droplet division mechanism really is. Can chemicals be found with the right two states, A and B, to bear out the theory? If so, then a viable path from nonlife to life starts to come into focus.

    The luckiest part of the whole process, in Jülicher’s opinion, was not that droplets turned into cells, but that the first droplet — our globule ancestor — formed to begin with. Droplets require a lot of chemical material to spontaneously arise or “nucleate,” and it’s unclear how so many of the right complex macromolecules could have accumulated in the primordial soup to make it happen. But then again, Jülicher said, there was a lot of soup, and it was stewing for eons.

    “It’s a very rare event. You have to wait a long time for it to happen,” he said. “And once it happens, then the next things happen more easily, and more systematically.”

  • richardmitnick 12:07 pm on December 22, 2016
    Tags: Explorers Find Passage to Earth’s Dark Age, Quanta Magazine

    From Quanta: “Explorers Find Passage to Earth’s Dark Age” 

    Quanta Magazine

    December 22, 2016
    Natalie Wolchover

    Earth scientists hope that their growing knowledge of the planet’s early history will shed light on poorly understood features seen today, from continents to geysers. Eric King

    Geochemical signals from deep inside Earth are beginning to shed light on the planet’s first 50 million years, a formative period long viewed as inaccessible to science.

    In August, the geologist Matt Jackson left California with his wife and 4-year-old daughter for the fjords of northwest Iceland, where they camped as he roamed the outcrops and scree slopes by day in search of little olive-green stones called olivine.

    A sunny young professor at the University of California, Santa Barbara, with a uniform of pearl-snap shirts and well-utilized cargo shorts, Jackson knew all the best hunting grounds, having first explored the Icelandic fjords two years ago. Following sketchy field notes handed down by earlier geologists, he covered 10 or 15 miles a day, past countless sheep and the occasional farmer. “Their whole lives they’ve lived in these beautiful fjords,” he said. “They look up to these black, layered rocks, and I tell them that each one of those is a different volcanic eruption with a lava flow. It blows their minds!” He laughed. “It blows my mind even more that they never realized it!”

    The olivine erupted to Earth’s surface in those very lava flows between 10 and 17 million years ago. Jackson, like many geologists, believes that the source of the eruptions was the Iceland plume, a hypothetical upwelling of solid rock that may rise, like the globules in a lava lamp, from deep inside Earth. The plume, if it exists, would now underlie the active volcanoes of central Iceland. In the past, it would have surfaced here at the fjords, back in the days when here was there — before the puzzle-piece of Earth’s crust upon which Iceland lies scraped to the northwest.

    Other modern findings [Nature] about olivine from the region suggest that it might derive from an ancient reservoir of minerals at the base of the Iceland plume that, over billions of years, never mixed with the rest of Earth’s interior. Jackson hoped the samples he collected would carry a chemical message from the reservoir and prove that it formed during the planet’s infancy — a period that until recently was inaccessible to science.

    After returning to California, he sent his samples to Richard Walker to ferret out that message. Walker, a geochemist at the University of Maryland, is processing the olivine to determine the concentration of the chemical isotope tungsten-182 in the rock relative to the more common isotope, tungsten-184. If Jackson is right, his samples will join a growing collection of rocks from around the world whose abnormal tungsten isotope ratios have completely surprised scientists. These tungsten anomalies reflect processes that could only have occurred within the first 50 million years of the solar system’s history, a formative period long assumed to have been wiped from the geochemical record by cataclysmic collisions that melted Earth and blended its contents.

    The anomalies “are giving us information about some of the earliest Earth processes,” Walker said. “It’s an alternative universe from what geochemists have been working with for the past 50 years.”

    Matt Jackson and his family with a local farmer in northwest Iceland. Courtesy of Matt Jackson.

    The discoveries are sending geologists like Jackson into the field in search of more clues to Earth’s formation — and how the planet works today. Modern Earth, like early Earth, remains poorly understood, with unanswered questions ranging from how volcanoes work and whether plumes really exist to where oceans and continents came from, and what the nature and origin might be of the enormous structures, colloquially known as “blobs,” that seismologists detect deep down near Earth’s core. All aspects of the planet’s form and function are interconnected. They’re also entangled with the rest of the solar system. Any attempt, for instance, to explain why tectonic plates cover Earth’s surface like a jigsaw puzzle must account for the fact that no other planet in the solar system has plates. To understand Earth, scientists must figure out how, in the context of the solar system, it became uniquely earthlike. And that means probing the mystery of the first tens of millions of years.

    “You can think about this as an initial-conditions problem,” said Michael Manga, a geophysicist at the University of California, Berkeley, who studies geysers and volcanoes. “The Earth we see today evolved from something. And there’s lots of uncertainty about what that initial something was.”

    Pieces of the Puzzle

    On one of an unbroken string of 75-degree days in Santa Barbara the week before Jackson left for Iceland, he led a group of earth scientists on a two-mile beach hike to see some tar dikes — places where the sticky black material has oozed out of the cliff face at the back of the beach, forming flabby, voluptuous folds of faux rock that you can dent with a finger. The scientists pressed on the tar’s wrinkles and slammed rocks against it, speculating about its subterranean origin and the ballpark range of its viscosity. When this reporter picked up a small tar boulder to feel how light it was, two or three people nodded approvingly.

    A mix of geophysicists, geologists, mineralogists, geochemists and seismologists, the group was in Santa Barbara for the annual Cooperative Institute for Dynamic Earth Research (CIDER) workshop at the Kavli Institute for Theoretical Physics. Each summer, a rotating cast of representatives from these fields meet for several weeks at CIDER to share their latest results and cross-pollinate ideas — a necessity when the goal is understanding a system as complex as Earth.

    Earth’s complexity, how special it is, and, above all, the black box of its initial conditions have meant that, even as cosmologists map the universe and astronomers scan the galaxy for Earth 2.0, progress in understanding our home planet has been surprisingly slow. As we trudged from one tar dike to another, Jackson pointed out the exposed sedimentary rock layers in the cliff face — some of them horizontal, others buckled and sloped. Amazingly, he said, it took until the 1960s for scientists to even agree that sloped sediment layers are buckled, rather than having piled up on an angle. Only then was consensus reached on a mechanism to explain the buckling and the ruggedness of Earth’s surface in general: the theory of plate tectonics.

    Projecting her voice over the wind and waves, Carolina Lithgow-Bertelloni, a geophysicist from University College London who studies tectonic plates, credited the German meteorologist Alfred Wegener for first floating the notion of continental drift in 1912 to explain why Earth’s landmasses resemble the dispersed pieces of a puzzle. “But he didn’t have a mechanism — well, he did, but it was crazy,” she said.

    Earth scientists on a beach hike in Santa Barbara County, California. Natalie Wolchover/Quanta Magazine

    A few years later, she continued, the British geologist Sir Arthur Holmes convincingly argued that Earth’s solid-rock mantle flows fluidly on geological timescales, driven by heat radiating from Earth’s core; he speculated that this mantle flow in turn drives surface motion. More clues came during World War II. Seafloor magnetism, mapped for the purpose of hiding submarines, suggested that new crust forms at the mid-ocean ridge — the underwater mountain range that lines the world ocean like a seam — and spreads in both directions to the shores of the continents. There, at “subduction zones,” the oceanic plates slide stiffly beneath the continental plates, triggering earthquakes and carrying water downward, where it melts pockets of the mantle. This melting produces magma that rises to the surface in little-understood fits and starts, causing volcanic eruptions. (Volcanoes also exist far from any plate boundaries, such as in Hawaii and Iceland. Scientists currently explain this by invoking the existence of plumes, which researchers like Walker and Jackson are starting to verify and map using isotope studies.)

    The physical description of the plates finally came together in the late 1960s, Lithgow-Bertelloni said, when the British geophysicist Dan McKenzie and the American Jason Morgan separately proposed a quantitative framework for modeling plate tectonics on a sphere.

    The tectonic plates of the world were mapped in 1996, USGS.

    Other than their existence, almost everything about the plates remains in contention. For instance, what drives their lateral motion? Where do subducted plates end up — perhaps these are the blobs? — and how do they affect Earth’s interior dynamics? Why did Earth’s crust shatter into plates in the first place when no other planetary surface in the solar system did? Also completely mysterious is the two-tier architecture of oceanic and continental plates, and how oceans and continents came to ride on them — all possible prerequisites for intelligent life. Knowing more about how Earth became earthlike could help us understand how common earthlike planets are in the universe and thus how likely life is to arise.

    The continents probably formed, Lithgow-Bertelloni said, as part of the early process by which gravity organized Earth’s contents into concentric layers: Iron and other metals sank to the center, forming the core, while rocky silicates stayed in the mantle. Meanwhile, low-density materials buoyed upward, forming a crust on the surface of the mantle like soup scum. Perhaps this scum accumulated in some places to form continents, while elsewhere oceans materialized.

    Figuring out precisely what happened and the sequence of all of these steps is “more difficult,” Lithgow-Bertelloni said, because they predate the rock record and are “part of the melting process that happens early on in Earth’s history — very early on.”

    Until recently, scientists knew of no geochemical traces from so long ago, and they thought they might never crack open the black box from which Earth’s most glorious features emerged. But the subtle anomalies in tungsten and other isotope concentrations are now providing the first glimpses of the planet’s formation and differentiation. These chemical tracers promise to yield a combination timeline-and-map of early Earth, revealing where its features came from, why, and when.

    A Sketchy Timeline

    Humankind’s understanding of early Earth took its first giant leap when Apollo astronauts brought back rocks from the moon: our tectonic-less companion whose origin was, at the time, a complete mystery.

    The rocks “looked gray, very much like terrestrial rocks,” said Fouad Tera, who analyzed lunar samples at the California Institute of Technology between 1969 and 1976. But because they were from the moon, he said, they created “a feeling of euphoria” in their handlers. Some interesting features did eventually show up: “We found glass spherules — colorful, beautiful — under the microscope, green and yellow and orange and everything,” recalled Tera, now 85. The spherules probably came from fountains that gushed from volcanic vents when the moon was young. But for the most part, he said, “the moon is not really made out of a pleasing thing — just regular things.”

    In hindsight, this is not surprising: Chemical analysis at Caltech and other labs indicated that the moon formed from Earth material, which appears to have gotten knocked into orbit when the 60-to-100-million-year-old proto-Earth collided with another protoplanet in the crowded inner solar system. This “giant impact” hypothesis of the moon’s formation [Science Direct], though still hotly debated [Nature] in its particulars, established a key step on the timeline of the Earth, moon and sun that has helped other steps fall into place.

    A panorama of the Taurus-Littrow Valley created from photographs by Apollo 17 astronaut Eugene Cernan. Astronaut Harrison Schmitt is shown using a rake to collect samples. NASA

    Chemical analysis of meteorites is helping scientists outline even earlier stages of our solar system’s timeline, including the moment it all began.

    First, 4.57 billion years ago, a nearby star went supernova, spewing matter and a shock wave into space. The matter included radioactive elements that immediately began decaying, starting the clocks that isotope chemists now measure with great precision. As the shock wave swept through our cosmic neighborhood, it corralled the local cloud of gas and dust like a broom; the increase in density caused the cloud to gravitationally collapse, forming a brand-new star — our sun — surrounded by a placenta of hot debris.

    Over the next tens of millions of years, the rubble field surrounding the sun clumped into bigger and bigger space rocks, then accreted into planet parts called “planetesimals,” which merged into protoplanets, which became Mercury, Venus, Earth and Mars — the four rocky planets of the inner solar system today. Farther out, in colder climes, gas and ice accreted into the giant planets.

    The planets of the solar system as depicted by a NASA computer illustration. Orbits and sizes are not shown to scale.
    Credit: NASA

    Researchers use liquid chromatography to isolate elements for analysis. Rock samples dissolved in acid flow down ion-exchange columns, like the ones in Rick Carlson’s laboratory at the Carnegie Institution in Washington, to separate the elements. Mary Horan.

    As the infant Earth navigated the crowded inner solar system, it would have experienced frequent, white-hot collisions, which were long assumed to have melted the entire planet into a global “magma ocean.” During these melts, gravity differentiated Earth’s liquefied contents into layers — core, mantle and crust. It’s thought that each of the global melts would have destroyed existing rocks, blending their contents and removing any signs of geochemical differences left over from Earth’s initial building blocks.

    The last of the Earth-melting “giant impacts” appears to have been the one that formed the moon; while subtracting the moon’s mass, the impactor was also the last major addition to Earth’s mass. Perhaps, then, this point on the timeline — at least 60 million years after the birth of the solar system and, counting backward from the present, at most 4.51 billion years ago — was when the geochemical record of the planet’s past was allowed to begin. “It’s at least a compelling idea to think that this giant impact that disrupted a lot of the Earth is the starting time for geochronology,” said Rick Carlson, a geochemist at the Carnegie Institution of Washington. In those first 60 million years, “the Earth may have been here, but we don’t have any record of it because it was just erased.”

    Another discovery from the moon rocks came in 1974. Tera, along with his colleague Dimitri Papanastassiou and their boss, Gerry Wasserburg, a towering figure in isotope cosmochemistry who died in June, combined many isotope analyses of rocks from different Apollo missions on a single plot, revealing a straight line called an “isochron” that corresponds to time. “When we plotted our data along with everybody else’s, there was a distinct trend that shows you that around 3.9 billion years ago, something massive imprinted on all the rocks on the moon,” Tera said.

    Wasserburg dubbed the event the “lunar cataclysm” [Science Direct]. Now more often called the “late heavy bombardment,” it was a torrent of asteroids and comets that seems to have battered the moon 3.9 billion years ago, a full 600 million years after its formation, melting and chemically resetting the rocks on its surface. The late heavy bombardment surely would have rained down even more heavily on Earth, considering the planet’s greater size and gravitational pull. Having discovered such a momentous event in solar system history, Wasserburg left his younger, more reserved colleagues behind and “celebrated in Pasadena in some bar,” Tera said.

    As of 1974, no rocks had been found on Earth from the time of the late heavy bombardment. In fact, Earth’s oldest rocks appeared to top out at 3.8 billion years. “That number jumps out at you,” said Bill Bottke, a planetary scientist at the Southwest Research Institute in Boulder, Colorado. It suggests, Bottke said, that the late heavy bombardment might have melted whatever planetary crust existed 3.9 billion years ago, once again destroying the existing geologic record, after which the new crust took 100 million years to harden.

    In 2005, a group of researchers working in Nice, France, conceived of a mechanism to explain the late heavy bombardment — and several other mysteries about the solar system, including the curious configurations of Jupiter, Saturn, Uranus and Neptune, and the sparseness of the asteroid and Kuiper belts. Their “Nice model” [Nature] posits that the gas and ice giants suddenly destabilized in their orbits sometime after formation, causing them to migrate. Simulations by Bottke and others indicate that the planets’ migrations would have sent asteroids and comets scattering, initiating something very much like the late heavy bombardment. Comets that were slung inward from the Kuiper belt during this shake-up might even have delivered water to Earth’s surface, explaining the presence of its oceans.

    With this convergence of ideas, the late heavy bombardment became widely accepted as a major step on the timeline of the early solar system. But it was bad news for earth scientists, suggesting that Earth’s geochemical record began not at the beginning, 4.57 billion years ago, or even at the moon’s beginning, 4.51 billion years ago, but 3.8 billion years ago, and that most or all clues about earlier times were forever lost.

    Extending the Rock Record

    More recently, the late heavy bombardment theory and many other long-standing assumptions about the early history of Earth and the solar system have come into question, and Earth’s dark age has started to come into the light. According to Carlson, “the evidence for this 3.9 [billion-years-ago] event is getting less clear with time.” For instance, when meteorites are analyzed for signs of shock, “they show a lot of impact events at 4.2, 4.4 billion,” he said. “This 3.9 billion event doesn’t show up really strong in the meteorite record.” He and other skeptics of the late heavy bombardment argue that the Apollo samples might have been biased. All the missions landed on the near side of the moon, many in close proximity to the Imbrium basin (the moon’s biggest shadow, as seen from Earth), which formed from a collision 3.9 billion years ago. Perhaps all the Apollo rocks were affected by that one event, which might have dispersed the melt from the impact over a broad swath of the lunar surface. This could create the appearance of a cataclysm that never occurred.

    Lucy Reading-Ikkanda for Quanta Magazine

    Furthermore, the oldest known crust on Earth is no longer 3.8 billion years old. Rocks have been found in two parts of Canada dating to 4 billion and an alleged 4.28 billion years ago, refuting the idea that the late heavy bombardment fully melted Earth’s mantle and crust 3.9 billion years ago. At least some earlier crust survived.

    In 2008, Carlson and collaborators reported evidence of 4.28 billion-year-old rocks in the Nuvvuagittuq greenstone belt in Canada. When Tim Elliott, a geochemist at the University of Bristol, read about the Nuvvuagittuq findings, he was intrigued to see that Carlson had used a dating method also used in earlier work by French researchers that relied on a short-lived radioactive isotope system called samarium-neodymium. Elliott decided to look for traces of an even shorter-lived system — hafnium-tungsten — in ancient rocks, which would point back to even earlier times in Earth’s history.

    The dating method works as follows: Hafnium-182, the “parent” isotope, has a 50 percent chance of decaying into tungsten-182, its “daughter,” every 9 million years (this is the parent’s “half-life”). The halving quickly reduces the parent to almost nothing; by 50 million years after the supernova that sparked the sun, virtually all the hafnium-182 would have become tungsten-182.
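    The decay arithmetic behind this clock is simple to sketch. A minimal calculation, using only the roughly 9-million-year half-life quoted above:

    ```python
    # Minimal sketch of the half-life arithmetic described above, using the
    # roughly 9-million-year half-life of hafnium-182 quoted in the text.
    def remaining_fraction(elapsed_myr: float, half_life_myr: float = 9.0) -> float:
        """Fraction of the parent isotope surviving after elapsed_myr."""
        return 0.5 ** (elapsed_myr / half_life_myr)

    # One half-life leaves exactly half; 50 million years is more than five
    # half-lives, leaving only about 2 percent of the original hafnium-182,
    # which is why the parent is effectively extinct by then.
    print(remaining_fraction(9))            # 0.5
    print(round(remaining_fraction(50), 3))
    ```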

    That’s why the tungsten isotope ratio in rocks like Matt Jackson’s olivine samples can be so revealing: Any variation in the concentration of the daughter isotope, tungsten-182, measured relative to tungsten-184 must reflect processes that affected the parent, hafnium-182, when it was around — processes that occurred during the first 50 million years of solar system history. Elliott knew that this kind of geochemical information was previously believed to have been destroyed by early Earth melts and billions of years of subsequent mantle convection. But what if it wasn’t?

    Elliott contacted Stephen Moorbath, then an emeritus professor of geology at the University of Oxford and “one of the grandfather figures in finding the oldest rocks,” Elliott said. Moorbath “was keen, so I took the train up.” Moorbath led Elliott down to the basement of Oxford’s earth science building, where, as in many such buildings, a large collection of rocks shares the space with the boiler and stacks of chairs. Moorbath dug out specimens from the Isua complex in Greenland, an ancient bit of crust that he had pegged, in the 1970s, at 3.8 billion years old.

    Elliott and his student Matthias Willbold powdered and processed the Isua samples and used painstaking chemical methods to extract the tungsten. They then measured the tungsten isotope ratio using state-of-the-art mass spectrometers. In a 2011 Nature paper, Elliott, Willbold and Moorbath, who died in October, reported that the 3.8 billion-year-old Isua rocks contained 15 parts per million more tungsten-182 than the world average — the first ever detection of a “positive” tungsten anomaly on the face of the Earth.
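    As a rough illustration of what a “15 parts per million” excess means, the anomaly is just the fractional deviation of a sample’s tungsten-182/tungsten-184 ratio from a reference value, scaled to ppm. The ratio numbers below are made-up placeholders, not measured values:

    ```python
    # Hypothetical illustration of a tungsten isotope anomaly in parts per
    # million: the sample's 182W/184W ratio compared against a terrestrial
    # reference ratio. Both ratio values here are placeholders, chosen only
    # to reproduce a +15 ppm offset like the one reported for the Isua rocks.
    def anomaly_ppm(ratio_sample: float, ratio_reference: float) -> float:
        return (ratio_sample / ratio_reference - 1.0) * 1e6

    reference = 0.8649                   # placeholder 182W/184W reference ratio
    isua_like = reference * (1 + 15e-6)  # a sample enriched by 15 ppm
    print(f"{anomaly_ppm(isua_like, reference):+.1f} ppm")  # +15.0 ppm
    ```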

    The paper scooped Richard Walker of Maryland and his colleagues, who months later reported [Science] a positive tungsten anomaly in 2.8 billion-year-old komatiites from Kostomuksha, Russia.

    Although the Isua and Kostomuksha rocks formed on Earth’s surface long after the extinction of hafnium-182, they apparently derive from materials with much older chemical signatures. Walker and colleagues argue that the Kostomuksha rocks must have drawn from hafnium-rich “primordial reservoirs” in the interior that failed to homogenize during Earth’s early mantle melts. The preservation of these reservoirs, which must trace to the first 50 million years and must somehow have survived even the moon-forming impact, “indicates that the mantle may have never been well mixed,” Walker and his co-authors wrote. That raises the possibility of finding many more remnants of Earth’s early history.

    The 60 million-year-old flood basalts of Baffin Bay, Greenland, sampled by the geochemist Hanika Rizo (center) and colleagues, contain isotope traces that originated more than 4.5 billion years ago. Don Francis (left); courtesy of Hanika Rizo (center and right).

    The researchers say they will be able to use tungsten anomalies and other isotope signatures in surface material as tracers of the ancient interior, extrapolating downward and backward into the past to map proto-Earth and reveal how its features took shape. “You’ve got the precision to look and actually see the sequence of events occurring during planetary formation and differentiation,” Carlson said. “You’ve got the ability to interrogate the first tens of millions of years of Earth’s history, unambiguously.”

    Anomalies have continued to show up in rocks of various ages and provenances. In May, Hanika Rizo of the University of Quebec in Montreal, along with Walker, Jackson and collaborators, reported in Science the first positive tungsten anomaly in modern rocks — 62 million-year-old samples from Baffin Bay, Greenland. Rizo hypothesizes that these rocks were brought up by a plume that draws from one of the “blobs” deep down near Earth’s core. If the blobs are indeed rich in tungsten-182, then they are not tectonic-plate graveyards as many geophysicists suspect, but instead date to the planet’s infancy. Rizo speculates that they are chunks of the planetesimals that collided to form Earth, and that the chunks somehow stayed intact in the process. “If you have many collisions,” she said, “then you have the potential to create this patchy mantle.” Early Earth’s interior, in that case, looked nothing like the primordial magma ocean pictured in textbooks.

    More evidence for the patchiness of the interior has surfaced. At the American Geophysical Union meeting earlier this month, Walker’s group reported [2016 AGU Fall Meeting] a negative tungsten anomaly — that is, a deficit of tungsten-182 relative to tungsten-184 — in basalts from Hawaii and Samoa. This and other isotope concentrations in the rocks suggest the hypothetical plumes that produced them might draw from a primordial pocket of metals, including tungsten-184. Perhaps these metals failed to get sucked into the core during planet differentiation.

    Tim Elliott collecting samples of ancient crust rock in Yilgarn Craton in Western Australia. Tony Kemp

    Meanwhile, Elliott explains the positive tungsten anomalies in ancient crust rocks like his 3.8 billion-year-old Isua samples by hypothesizing that these rocks might have hardened on the surface before the final half-percent of Earth’s mass — delivered to the planet in a long tail of minor impacts — mixed into them. These late impacts, known as the “late veneer,” would have added metals like gold, platinum and tungsten (mostly tungsten-184) to Earth’s mantle, reducing the relative concentration of tungsten-182. Rocks that got to the surface early might therefore have ended up with positive tungsten anomalies.

    Other evidence complicates this hypothesis, however — namely, the concentrations of gold and platinum in the Isua rocks match world averages, suggesting at least some late veneer material did mix into them. So far, there’s no coherent framework that accounts for all the data. But this is the “discovery phase,” Carlson said, rather than a time for grand conclusions. As geochemists gradually map the plumes and primordial reservoirs throughout Earth from core to crust, hypotheses will be tested and a narrative about Earth’s formation will gradually crystallize.

    Elliott is working to test his late-veneer hypothesis. Temporarily trading his mass spectrometer for a sledgehammer, he collected a series of crust rocks in Australia that range from 3 billion to 3.75 billion years old. By tracking the tungsten isotope ratio through the ages, he hopes to pinpoint the time when the mantle that produced the crust became fully mixed with late-veneer material.

    “These things never work out that simply,” Elliott said. “But you always start out with the simplest idea and see how it goes.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 7:05 pm on November 30, 2016 Permalink | Reply
    Tags: Quanta Magazine

    From Quanta: “The Case Against Dark Matter” 

    Quanta Magazine

    November 29, 2016
    Natalie Wolchover

    Erik Verlinde
    Ilvy Njiokiktjien for Quanta Magazine

    For 80 years, scientists have puzzled over the way galaxies and other cosmic structures appear to gravitate toward something they cannot see. This hypothetical “dark matter” seems to outweigh all visible matter by a startling ratio of five to one, suggesting that we barely know our own universe. Thousands of physicists are doggedly searching for these invisible particles.

    But the dark matter hypothesis assumes scientists know how matter in the sky ought to move in the first place. This month, a series of developments has revived a long-disfavored argument that dark matter doesn’t exist after all. In this view, no missing matter is needed to explain the errant motions of the heavenly bodies; rather, on cosmic scales, gravity itself works in a different way than either Isaac Newton or Albert Einstein predicted.

    The latest attempt to explain away dark matter is a much-discussed proposal by Erik Verlinde, a theoretical physicist at the University of Amsterdam who is known for bold and prescient, if sometimes imperfect, ideas. In a dense 51-page paper posted online on Nov. 7, Verlinde casts gravity as a byproduct of quantum interactions and suggests that the extra gravity attributed to dark matter is an effect of “dark energy” — the background energy woven into the space-time fabric of the universe.

    Instead of hordes of invisible particles, “dark matter is an interplay between ordinary matter and dark energy,” Verlinde said.

    To make his case, Verlinde has adopted a radical perspective on the origin of gravity that is currently in vogue among leading theoretical physicists. Einstein defined gravity as the effect of curves in space-time created by the presence of matter. According to the new approach, gravity is an emergent phenomenon. Space-time and the matter within it are treated as a hologram that arises from an underlying network of quantum bits (called “qubits”), much as the three-dimensional environment of a computer game is encoded in classical bits on a silicon chip. Working within this framework, Verlinde traces dark energy to a property of these underlying qubits that supposedly encode the universe. On large scales in the hologram, he argues, dark energy interacts with matter in just the right way to create the illusion of dark matter.

    In his calculations, Verlinde rediscovered the equations of “modified Newtonian dynamics,” or MOND. This 30-year-old theory makes an ad hoc tweak to the famous “inverse-square” law of gravity in Newton’s and Einstein’s theories in order to explain some of the phenomena attributed to dark matter. That this ugly fix works at all has long puzzled physicists. “I have a way of understanding the MOND success from a more fundamental perspective,” Verlinde said.

    Many experts have called Verlinde’s paper compelling but hard to follow. While it remains to be seen whether his arguments will hold up to scrutiny, the timing is fortuitous. In a new analysis of galaxies published on Nov. 9 in Physical Review Letters, three astrophysicists led by Stacy McGaugh of Case Western Reserve University in Cleveland, Ohio, have strengthened MOND’s case against dark matter.

    The researchers analyzed a diverse set of 153 galaxies, and for each one they compared the rotation speed of visible matter at any given distance from the galaxy’s center with the amount of visible matter contained within that galactic radius. Remarkably, these two variables were tightly linked in all the galaxies by a universal law, dubbed the “radial acceleration relation.” This makes perfect sense in the MOND paradigm, since visible matter is the exclusive source of the gravity driving the galaxy’s rotation (even if that gravity does not take the form prescribed by Newton or Einstein). With such a tight relationship between gravity felt by visible matter and gravity given by visible matter, there would seem to be no room, or need, for dark matter.
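    The radial acceleration relation can be sketched numerically. The fitting function below is the single-parameter form the McGaugh team reported, with a characteristic acceleration scale of about 1.2 × 10⁻¹⁰ m/s² (treat both the functional form and the scale as their reported values, not something derived here):

    ```python
    import math

    # Sketch of the radial acceleration relation: the observed acceleration
    # as a universal function of the acceleration expected from visible
    # (baryonic) matter alone. This is the fitting function the McGaugh team
    # reported, with their fitted scale g_dagger of about 1.2e-10 m/s^2.
    G_DAGGER = 1.2e-10  # m/s^2

    def g_observed(g_baryonic: float) -> float:
        return g_baryonic / (1.0 - math.exp(-math.sqrt(g_baryonic / G_DAGGER)))

    # High accelerations (inner galaxy) stay essentially Newtonian:
    print(g_observed(1e-8) / 1e-8)    # ~1.0001
    # Low accelerations (outskirts) are boosted, mimicking dark matter:
    print(g_observed(1e-12) / 1e-12)  # ~11.5
    ```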

    Even as dark matter proponents rise to its defense, a third challenge has materialized. In new research that has been presented at seminars and is under review by the Monthly Notices of the Royal Astronomical Society, a team of Dutch astronomers has conducted what they call the first test of Verlinde’s theory: In comparing his formulas to data from more than 30,000 galaxies, Margot Brouwer of Leiden University in the Netherlands and her colleagues found that Verlinde correctly predicts the gravitational distortion or “lensing” of light from the galaxies — another phenomenon that is normally attributed to dark matter. This is somewhat to be expected, as MOND’s original developer, the Israeli astrophysicist Mordehai Milgrom, showed years ago that MOND accounts for gravitational lensing data. Verlinde’s theory will need to succeed at reproducing dark matter phenomena in cases where the old MOND failed.

    Kathryn Zurek, a dark matter theorist at Lawrence Berkeley National Laboratory, said Verlinde’s proposal at least demonstrates how something like MOND might be right after all. “One of the challenges with modified gravity is that there was no sensible theory that gives rise to this behavior,” she said. “If [Verlinde’s] paper ends up giving that framework, then that by itself could be enough to breathe more life into looking at [MOND] more seriously.”

    The New MOND

    In Newton’s and Einstein’s theories, the gravitational attraction of a massive object drops in proportion to the square of the distance away from it. This means stars orbiting around a galaxy should feel less gravitational pull — and orbit more slowly — the farther they are from the galactic center. Stars’ velocities do drop as predicted by the inverse-square law in the inner galaxy, but instead of continuing to drop as they get farther away, their velocities level off beyond a certain point. The “flattening” of galaxy rotation speeds, discovered by the astronomer Vera Rubin in the 1970s, is widely considered to be Exhibit A in the case for dark matter — explained, in that paradigm, by dark matter clouds or “halos” that surround galaxies and give an extra gravitational acceleration to their outlying stars.
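    The Newtonian expectation described above is easy to sketch: outside most of a galaxy’s visible mass, circular speed should fall as 1/√r rather than flatten. The enclosed mass below is an illustrative round number, not a real galaxy’s:

    ```python
    import math

    # Sketch of the Newtonian prediction for a galaxy rotation curve: for a
    # star orbiting outside most of the enclosed visible mass M, the circular
    # speed is v = sqrt(G*M/r), falling as 1/sqrt(r). The flattening that
    # Vera Rubin observed is the departure from this. M is illustrative only.
    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    M = 1.5e41       # illustrative enclosed visible mass, kg (~10^11 suns)
    KPC = 3.086e19   # meters per kiloparsec

    def circular_speed_kms(r_kpc: float) -> float:
        return math.sqrt(G * M / (r_kpc * KPC)) / 1000.0

    # Each doubling of radius should slow the stars by a factor of sqrt(2):
    for r in (5, 10, 20, 40):
        print(f"{r:2d} kpc: {circular_speed_kms(r):5.0f} km/s")
    ```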

    Searches for dark matter particles have proliferated — with hypothetical “weakly interacting massive particles” (WIMPs) and lighter-weight “axions” serving as prime candidates — but so far, experiments have found nothing.

    Lucy Reading-Ikkanda for Quanta Magazine

    Meanwhile, in the 1970s and 1980s, some researchers, including Milgrom, took a different tack. Many early attempts at tweaking gravity were easy to rule out, but Milgrom found a winning formula: When the gravitational acceleration felt by a star drops below a certain level — precisely 0.00000000012 meters per second per second, or 100 billion times weaker than we feel on the surface of the Earth — he postulated that gravity somehow switches from an inverse-square law to something close to an inverse-distance law. “There’s this magic scale,” McGaugh said. “Above this scale, everything is normal and Newtonian. Below this scale is where things get strange. But the theory does not really specify how you get from one regime to the other.”
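    Milgrom’s prescription, as described here, can be sketched with a crude hard switch between the two regimes. The hard cutoff is a simplification (as McGaugh notes above, the theory itself does not specify the transition, and real MOND models use a smooth interpolating function); only the acceleration scale comes from the text:

    ```python
    import math

    # Crude sketch of Milgrom's MOND prescription as described above: below
    # the acceleration scale a0, gravity switches from an inverse-square law
    # toward an inverse-distance law, i.e. the effective acceleration becomes
    # sqrt(g_newton * a0). The hard cutoff here is a simplification; real
    # MOND models use a smooth interpolation between the two regimes.
    A0 = 1.2e-10  # m/s^2, Milgrom's "magic" acceleration scale

    def mond_acceleration(g_newton: float) -> float:
        if g_newton >= A0:
            return g_newton              # normal Newtonian regime
        return math.sqrt(g_newton * A0)  # deep-MOND regime

    # A star whose Newtonian pull is 100 times weaker than a0 gets a
    # 10-fold boost, which is what keeps outer rotation curves flat:
    print(mond_acceleration(A0 / 100) / (A0 / 100))  # ~10
    ```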

    Physicists do not like magic; when other cosmological observations seemed far easier to explain with dark matter than with MOND, they left the approach for dead. Verlinde’s theory revitalizes MOND by attempting to reveal the method behind the magic.

    Verlinde, ruddy and fluffy-haired at 54 and lauded for highly technical string theory calculations, first jotted down a back-of-the-envelope version of his idea in 2010. It built on a famous paper he had written months earlier, in which he boldly declared that gravity does not really exist. By weaving together numerous concepts and conjectures at the vanguard of physics, he had concluded that gravity is an emergent thermodynamic effect, related to increasing entropy (or disorder). Then, as now, experts were uncertain what to make of the paper, though it inspired fruitful discussions.

    The particular brand of emergent gravity in Verlinde’s paper turned out not to be quite right, but he was tapping into the same intuition that led other theorists to develop the modern holographic description of emergent gravity and space-time — an approach that Verlinde has now absorbed into his new work.

    In this framework, bendy, curvy space-time and everything in it is a geometric representation of pure quantum information — that is, data stored in qubits. Unlike classical bits, qubits can exist simultaneously in two states (0 and 1) with varying degrees of probability, and they become “entangled” with each other, such that the state of one qubit determines the state of the other, and vice versa, no matter how far apart they are. Physicists have begun to work out the rules by which the entanglement structure of qubits mathematically translates into an associated space-time geometry. An array of qubits entangled with their nearest neighbors might encode flat space, for instance, while more complicated patterns of entanglement give rise to matter particles such as quarks and electrons, whose mass causes the space-time to be curved, producing gravity. “The best way we understand quantum gravity currently is this holographic approach,” said Mark Van Raamsdonk, a physicist at the University of British Columbia in Vancouver who has done influential work on the subject.

    The mathematical translations are rapidly being worked out for holographic universes with an Escher-esque space-time geometry known as anti-de Sitter (AdS) space, but universes like ours, which have de Sitter geometries, have proved far more difficult. In his new paper, Verlinde speculates that it’s exactly the de Sitter property of our native space-time that leads to the dark matter illusion.

    De Sitter space-times like ours stretch as you look far into the distance. For this to happen, space-time must be infused with a tiny amount of background energy — often called dark energy — which drives space-time apart from itself. Verlinde models dark energy as a thermal energy, as if our universe has been heated to an excited state. (AdS space, by contrast, is like a system in its ground state.) Verlinde associates this thermal energy with long-range entanglement between the underlying qubits, as if they have been shaken up, driving entangled pairs far apart. He argues that this long-range entanglement is disrupted by the presence of matter, which essentially removes dark energy from the region of space-time that it occupied. The dark energy then tries to move back into this space, exerting a kind of elastic response on the matter that is equivalent to a gravitational attraction.

    Because of the long-range nature of the entanglement, the elastic response becomes increasingly important in larger volumes of space-time. Verlinde calculates that it will cause galaxy rotation curves to start deviating from Newton’s inverse-square law at exactly the magic acceleration scale pinpointed by Milgrom in his original MOND theory.
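For reference, the acceleration scale in question comes from Milgrom's original proposal, which can be summarized in standard notation (not taken from the article) as:

```latex
a \;=\;
\begin{cases}
a_N = \dfrac{GM}{r^2}, & a_N \gg a_0,\\[2mm]
\sqrt{a_N\,a_0}, & a_N \ll a_0,
\end{cases}
\qquad a_0 \approx 1.2\times10^{-10}\ \mathrm{m\,s^{-2}}.
```

In the deep-MOND regime, setting the centripetal acceleration $a = v^2/r$ gives $v^4 = GMa_0$: the rotation speed stops falling with radius, which is the flat rotation curve observed in galaxy outskirts.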

    Van Raamsdonk calls Verlinde’s idea “definitely an important direction.” But he says it’s too soon to tell whether everything in the paper — which draws from quantum information theory, thermodynamics, condensed matter physics, holography and astrophysics — hangs together. Either way, Van Raamsdonk said, “I do find the premise interesting, and feel like the effort to understand whether something like that could be right could be enlightening.”

    One problem, said Brian Swingle of Harvard and Brandeis universities, who also works in holography, is that Verlinde lacks a concrete model universe like the ones researchers can construct in AdS space, giving him more wiggle room for making unproven speculations. “To be fair, we’ve gotten further by working in a more limited context, one which is less relevant for our own gravitational universe,” Swingle said, referring to work in AdS space. “We do need to address universes more like our own, so I hold out some hope that his new paper will provide some additional clues or ideas going forward.”



    The Case for Dark Matter

    Verlinde could be capturing the zeitgeist the way his 2010 entropic-gravity paper did. Or he could be flat-out wrong. The question is whether his new and improved MOND can reproduce phenomena that foiled the old MOND and bolstered belief in dark matter.

    One such phenomenon is the Bullet cluster, a galaxy cluster in the process of colliding with another.

    X-ray photo of the Bullet Cluster (1E0657-56), taken by the Chandra X-ray Observatory with an exposure of 0.5 million seconds (~140 hours); the scale is shown in megaparsecs. Its redshift of z = 0.3 means its light has been stretched in wavelength by a factor of 1.3, placing the cluster about 4 billion light-years away.
    In this photograph, a rapidly moving galaxy cluster, trailing a shock wave, appears to have struck another cluster at high speed. The gas clouds collide, and the gravitational fields of the stars and galaxies interact. Black-body temperature readings show the collision heated the gas to 160 million degrees, emitting X-rays intense enough to make this the hottest known galaxy cluster.
    Studies of the Bullet cluster, announced in August 2006, provide the best evidence to date for the existence of dark matter.

    Superimposed mass-density contours, derived from gravitational lensing by dark matter; photograph taken with the Hubble Space Telescope, 22 August 2006.

    The visible matter in the two clusters crashes together, but gravitational lensing suggests that a large amount of dark matter, which does not interact with visible matter, has passed right through the crash site. Some physicists consider this indisputable proof of dark matter. However, Verlinde thinks his theory will be able to handle the Bullet cluster observations just fine. He says dark energy’s gravitational effect is embedded in space-time and is less deformable than matter itself, which would have allowed the two to separate during the cluster collision.

    But the crowning achievement for Verlinde’s theory would be to account for the suspected imprints of dark matter in the cosmic microwave background (CMB), ancient light that offers a snapshot of the infant universe.

    CMB per ESA/Planck

    The snapshot reveals the way matter at the time repeatedly contracted due to its gravitational attraction and then expanded due to self-collisions, producing a series of peaks and troughs in the CMB data. Because dark matter does not interact, it would only have contracted without ever expanding, and this would modulate the amplitudes of the CMB peaks in exactly the way that scientists observe. One of the biggest strikes against the old MOND was its failure to predict this modulation and match the peaks’ amplitudes. Verlinde expects that his version will work — once again, because matter and the gravitational effect of dark energy can separate from each other and exhibit different behaviors. “Having said this,” he said, “I have not calculated this all through.”

    While Verlinde confronts these and a handful of other challenges, proponents of the dark matter hypothesis have some explaining of their own to do when it comes to McGaugh and his colleagues’ recent findings about the universal relationship between galaxy rotation speeds and their visible matter content.
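For context, the relationship in question, often called the radial acceleration relation, was reported by McGaugh and his colleagues in 2016 in the empirical form:

```latex
g_{\mathrm{obs}} \;=\; \frac{g_{\mathrm{bar}}}{1 - e^{-\sqrt{g_{\mathrm{bar}}/g_\dagger}}},
\qquad g_\dagger \approx 1.2\times10^{-10}\ \mathrm{m\,s^{-2}},
```

where $g_{\mathrm{obs}}$ is the observed centripetal acceleration at a given point in a galaxy and $g_{\mathrm{bar}}$ is the acceleration expected from the visible (baryonic) matter alone; $g_\dagger$ is a single fitted scale shared by all the galaxies in their sample.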

    In October, responding to a preprint of the paper by McGaugh and his colleagues, two teams of astrophysicists independently argued that the dark matter hypothesis can account for the observations. They say the amount of dark matter in a galaxy’s halo would have precisely determined the amount of visible matter the galaxy ended up with when it formed. In that case, galaxies’ rotation speeds, even though they’re set by dark matter and visible matter combined, will exactly correlate with either their dark matter content or their visible matter content (since the two are not independent). However, computer simulations of galaxy formation do not currently indicate that galaxies’ dark and visible matter contents will always track each other. Experts are busy tweaking the simulations, but Arthur Kosowsky of the University of Pittsburgh, one of the researchers working on them, says it’s too early to tell if the simulations will be able to match all 153 examples of the universal law in McGaugh and his colleagues’ galaxy data set. If not, then the standard dark matter paradigm is in big trouble. “Obviously this is something that the community needs to look at more carefully,” Zurek said.

    Even if the simulations can be made to match the data, McGaugh, for one, considers it an implausible coincidence that dark matter and visible matter would conspire to exactly mimic the predictions of MOND at every location in every galaxy. “If somebody were to come to you and say, ‘The solar system doesn’t work on an inverse-square law, really it’s an inverse-cube law, but there’s dark matter that’s arranged just so that it always looks inverse-square,’ you would say that person is insane,” he said. “But that’s basically what we’re asking to be the case with dark matter here.”

    Given the considerable indirect evidence and near consensus among physicists that dark matter exists, it still probably does, Zurek said. “That said, you should always check that you’re not on a bandwagon,” she added. “Even though this paradigm explains everything, you should always check that there isn’t something else going on.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 7:18 am on September 16, 2016 Permalink | Reply
    Tags: , Quanta Magazine,   

    From Quanta: “The Strange Second Life of String Theory” 

    Quanta Magazine

    September 15, 2016
    K.C. Cole

    String theory has so far failed to live up to its promise as a way to unite gravity and quantum mechanics.
    At the same time, it has blossomed into one of the most useful sets of tools in science.

    Renee Rominger/Moonrise Whims for Quanta Magazine

    String theory strutted onto the scene some 30 years ago as perfection itself, a promise of elegant simplicity that would solve knotty problems in fundamental physics — including the notoriously intractable mismatch between Einstein’s smoothly warped space-time and the inherently jittery, quantized bits of stuff that made up everything in it.

    It seemed, to paraphrase Michael Faraday, much too wonderful not to be true: Simply replace infinitely small particles with tiny (but finite) vibrating loops of string. The vibrations would sing out quarks, electrons, gluons and photons, as well as their extended families, producing in harmony every ingredient needed to cook up the knowable world. Avoiding the infinitely small meant avoiding a variety of catastrophes. For one, quantum uncertainty couldn’t rip space-time to shreds. At last, it seemed, here was a workable theory of quantum gravity.

    Even more beautiful than the story told in words was the elegance of the math behind it, which had the power to make some physicists ecstatic.

    To be sure, the theory came with unsettling implications. The strings were too small to be probed by experiment and lived in as many as 11 dimensions of space-time. These dimensions were folded in on themselves — or “compactified” — into complex origami shapes. No one knew just how the dimensions were compactified — the possibilities for doing so appeared to be endless — but surely some configuration would turn out to be just what was needed to produce familiar forces and particles.

    For a time, many physicists believed that string theory would yield a unique way to combine quantum mechanics and gravity. “There was a hope. A moment,” said David Gross, an original player in the so-called Princeton String Quartet, a Nobel Prize winner and permanent member of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. “We even thought for a while in the mid-’80s that it was a unique theory.”

    And then physicists began to realize that the dream of one singular theory was an illusion. The complexities of string theory, all the possible permutations, refused to reduce to a single one that described our world. “After a certain point in the early ’90s, people gave up on trying to connect to the real world,” Gross said. “The last 20 years have really been a great extension of theoretical tools, but very little progress on understanding what’s actually out there.”

    Many, in retrospect, realized they had raised the bar too high. Coming off the momentum of completing the solid and powerful “standard model” of particle physics in the 1970s, they hoped the story would repeat — only this time on a mammoth, all-embracing scale. “We’ve been trying to aim for the successes of the past where we had a very simple equation that captured everything,” said Robbert Dijkgraaf, the director of the Institute for Advanced Study in Princeton, New Jersey. “But now we have this big mess.”

    Like many a maturing beauty, string theory has gotten rich in relationships, complicated, hard to handle and widely influential. Its tentacles have reached so deeply into so many areas in theoretical physics, it’s become almost unrecognizable, even to string theorists. “Things have gotten almost postmodern,” said Dijkgraaf, who is a painter as well as mathematical physicist.

    The mathematics that have come out of string theory have been put to use in fields such as cosmology and condensed matter physics — the study of materials and their properties. It’s so ubiquitous that “even if you shut down all the string theory groups, people in condensed matter, people in cosmology, people in quantum gravity will do it,” Dijkgraaf said.

    “It’s hard to say really where you should draw the boundary around and say: This is string theory; this is not string theory,” said Douglas Stanford, a physicist at the IAS. “Nobody knows whether to say they’re a string theorist anymore,” said Chris Beem, a mathematical physicist at the University of Oxford. “It’s become very confusing.”

    String theory today looks almost fractal. The more closely people explore any one corner, the more structure they find. Some dig deep into particular crevices; others zoom out to try to make sense of grander patterns. The upshot is that string theory today includes much that no longer seems stringy. Those tiny loops of string whose harmonics were thought to breathe form into every particle and force known to nature (including elusive gravity) hardly even appear anymore on chalkboards at conferences. At last year’s big annual string theory meeting, the Stanford University string theorist Eva Silverstein was amused to find she was one of the few giving a talk “on string theory proper,” she said. A lot of the time she works on questions related to cosmology.

    Even as string theory’s mathematical tools get adopted across the physical sciences, physicists have been struggling with how to deal with the central tension of string theory: Can it ever live up to its initial promise? Could it ever give researchers insight into how gravity and quantum mechanics might be reconciled — not in a toy universe, but in our own?

    “The problem is that string theory exists in the landscape of theoretical physics,” said Juan Maldacena, a mathematical physicist at the IAS and perhaps the most prominent figure in the field today. “But we still don’t know yet how it connects to nature as a theory of gravity.” Maldacena now acknowledges the breadth of string theory, and its importance to many fields of physics — even those that don’t require “strings” to be the fundamental stuff of the universe — when he defines string theory as “Solid Theoretical Research in Natural Geometric Structures.”

    An Explosion of Quantum Fields

    One high point for string theory as a theory of everything came in the late 1990s, when Maldacena revealed that a string theory including gravity in five dimensions was equivalent to a quantum field theory in four dimensions. This “AdS/CFT” duality appeared to provide a map for getting a handle on gravity — the most intransigent piece of the puzzle — by relating it to good old well-understood quantum field theory.

    This correspondence was never thought to be a perfect real-world model. The five-dimensional space in which it works has an “anti-de Sitter” geometry, a strange M.C. Escher-ish landscape that is not remotely like our universe.

    But researchers were surprised when they dug deep into the other side of the duality. Most people took for granted that quantum field theories — “bread and butter physics,” Dijkgraaf calls them — were well understood and had been for half a century. As it turned out, Dijkgraaf said, “we only understand them in a very limited way.”

    These quantum field theories were developed in the 1950s to unify special relativity and quantum mechanics. They worked well enough for long enough that it didn’t much matter that they broke down at very small scales and high energies. But today, when physicists revisit “the part you thought you understood 60 years ago,” said Nima Arkani-Hamed, a physicist at the IAS, you find “stunning structures” that came as a complete surprise. “Every aspect of the idea that we understood quantum field theory turns out to be wrong. It’s a vastly bigger beast.”

    Researchers have developed a huge number of quantum field theories in the past decade or so, each used to study different physical systems. Beem suspects there are quantum field theories that can’t be described even in terms of quantum fields. “We have opinions that sound as crazy as that, in large part, because of string theory.”

    This virtual explosion of new kinds of quantum field theories is eerily reminiscent of physics in the 1930s, when the unexpected appearance of a new kind of particle — the muon — led a frustrated I.I. Rabi to ask: “Who ordered that?” The flood of new particles was so overwhelming by the 1950s that it led Enrico Fermi to grumble: “If I could remember the names of all these particles, I would have been a botanist.”

    Physicists began to see their way through the thicket of new particles only when they found the more fundamental building blocks making them up, like quarks and gluons. Now many physicists are attempting to do the same with quantum field theory. In their attempts to make sense of the zoo, many learn all they can about certain exotic species.

    Conformal field theories (the right hand of AdS/CFT) are a starting point. In the simplest type of conformal field theory, you start with a version of quantum field theory where “the interactions between the particles are turned off,” said David Simmons-Duffin, a physicist at the IAS. If these specific kinds of field theories could be understood perfectly, answers to deep questions might become clear. “The idea is that if you understand the elephant’s feet really, really well, you can interpolate in between and figure out what the whole thing looks like.”

    Like many of his colleagues, Simmons-Duffin says he’s a string theorist mostly in the sense that it’s become an umbrella term for anyone doing fundamental physics in underdeveloped corners. He’s currently focusing on a physical system that’s described by a conformal field theory but has nothing to do with strings. In fact, the system is water at its “critical point,” where the distinction between gas and liquid disappears. It’s interesting because water’s behavior at the critical point is a complicated emergent system that arises from something simpler. As such, it could hint at dynamics behind the emergence of quantum field theories.

    Beem focuses on supersymmetric field theories, another toy model, as physicists call these deliberate simplifications. “We’re putting in some unrealistic features to make them easier to handle,” he said. Specifically, they are amenable to tractable mathematics, which “makes it so a lot of things are calculable.”

    Toy models are standard tools in most kinds of research. But there’s always the fear that what one learns from a simplified scenario does not apply to the real world. “It’s a bit of a deal with the devil,” Beem said. “String theory is a much less rigorously constructed set of ideas than quantum field theory, so you have to be willing to relax your standards a bit,” he said. “But you’re rewarded for that. It gives you a nice, bigger context in which to work.”

    It’s the kind of work that makes people such as Sean Carroll, a theoretical physicist at the California Institute of Technology, wonder if the field has strayed too far from its early ambitions — to find, if not a “theory of everything,” at least a theory of quantum gravity. “Answering deep questions about quantum gravity has not really happened,” he said. “They have all these hammers and they go looking for nails.” That’s fine, he said, even acknowledging that generations might be needed to develop a new theory of quantum gravity. “But it isn’t fine if you forget that, ultimately, your goal is describing the real world.”

    It’s a question he has asked his friends. Why are they investigating detailed quantum field theories? “What’s the aspiration?” he asks. Their answers are logical, he says, but steps removed from developing a true description of our universe.

    Instead, he’s looking for a way to “find gravity inside quantum mechanics.” A paper he recently wrote with colleagues claims to take steps toward just that. It does not involve string theory.

    The Broad Power of Strings

    Perhaps the field that has gained the most from the flowering of string theory is mathematics itself. Sitting on a bench beside the IAS pond while watching a blue heron saunter in the reeds, Clay Córdova, a researcher there, explained how seemingly intractable problems in mathematics were solved by imagining how the question might look to a string. For example, how many spheres can fit inside a Calabi-Yau manifold — the complex folded shape expected to describe how space-time is compactified? Mathematicians had been stuck. But a two-dimensional string can wiggle around in such a complex space, and as it wiggles it can grasp new insights, like a mathematical multidimensional lasso. This was the kind of physical thinking Einstein was famous for: A thought experiment about riding alongside a light beam helped reveal special relativity and E = mc². Imagining falling off a building led to his biggest eureka moment of all: Gravity is not a force; it’s a property of space-time.

    The amplituhedron is a multi-dimensional object that can be used to calculate particle interactions. Physicists such as Chris Beem are applying techniques from string theory in special geometries where “the amplituhedron is its best self,” he says. Nima Arkani-Hamed

    Using the physical intuition offered by strings, physicists produced a powerful formula for getting the answer to the embedded sphere question, and much more. “They got at these formulas using tools that mathematicians don’t allow,” Córdova said. Then, after string theorists found an answer, the mathematicians proved it on their own terms. “This is a kind of experiment,” he explained. “It’s an internal mathematical experiment.” Not only was the stringy solution not wrong, it led to Fields Medal-winning mathematics. “This keeps happening,” he said.

    String theory has also made essential contributions to cosmology. The role that string theory has played in thinking about mechanisms behind the inflationary expansion of the universe — the moments immediately after the Big Bang, where quantum effects met gravity head on — is “surprisingly strong,” said Silverstein, even though no strings are attached.

    Still, Silverstein and colleagues have used string theory to discover, among other things, ways to see potentially observable signatures of various inflationary ideas. The same insights could have been found using quantum field theory, she said, but they weren’t. “It’s much more natural in string theory, with its extra structure.”

    Inflationary models get tangled in string theory in multiple ways, not least of which is the multiverse — the idea that ours is one of a perhaps infinite number of universes, each created by the same mechanism that begat our own. Between string theory and cosmology, the idea of an infinite landscape of possible universes became not just acceptable, but even taken for granted by a large number of physicists. The selection effect, Silverstein said, would be one quite natural explanation for why our world is the way it is: In a very different universe, we wouldn’t be here to tell the story.

    This effect could be one answer to a big problem string theory was supposed to solve. As Gross put it: “What picks out this particular theory” — the Standard Model — from the “plethora of infinite possibilities?”

    Silverstein thinks the selection effect is actually a good argument for string theory. The infinite landscape of possible universes can be directly linked to “the rich structure that we find in string theory,” she said — the innumerable ways that string theory’s multidimensional space-time can be folded in upon itself.

    Building the New Atlas

    At the very least, the mature version of string theory — with its mathematical tools that let researchers view problems in new ways — has provided powerful new methods for seeing how seemingly incompatible descriptions of nature can both be true. The discovery of dual descriptions of the same phenomenon pretty much sums up the history of physics. A century and a half ago, James Clerk Maxwell saw that electricity and magnetism were two sides of a coin. Quantum theory revealed the connection between particles and waves. Now physicists have strings.

    “Once the elementary things we’re probing spaces with are strings instead of particles,” said Beem, the strings “see things differently.” If it’s too hard to get from A to B using quantum field theory, reimagine the problem in string theory, and “there’s a path,” Beem said.

    In cosmology, string theory “packages physical models in a way that’s easier to think about,” Silverstein said. It may take centuries to tie together all these loose strings to weave a coherent picture, but young researchers like Beem aren’t bothered a bit. His generation never thought string theory was going to solve everything. “We’re not stuck,” he said. “It doesn’t feel like we’re on the verge of getting it all sorted, but I know more each day than I did the day before – and so presumably we’re getting somewhere.”

    Stanford thinks of it as a big crossword puzzle. “It’s not finished, but as you start solving, you can tell that it’s a valid puzzle,” he said. “It’s passing consistency checks all the time.”

    “Maybe it’s not even possible to capture the universe in one easily defined, self-contained form, like a globe,” Dijkgraaf said, sitting in the many-windowed office that Robert Oppenheimer occupied when he was Einstein’s boss, looking over the vast lawn at the IAS, the pond and the woods in the distance. Einstein, too, tried and failed to find a theory of everything, and it takes nothing away from his genius.

    “Perhaps the true picture is more like the maps in an atlas, each offering very different kinds of information, each spotty,” Dijkgraaf said. “Using the atlas will require that physicists be fluent in many languages, many approaches, all at the same time. Their work will come from many different directions, perhaps far-flung.”

    He finds it “totally disorienting” and also “fantastic.”

    Arkani-Hamed believes we are in the most exciting epoch of physics since quantum mechanics appeared in the 1920s. But nothing will happen quickly. “If you’re excited about responsibly attacking the very biggest existential physics questions ever, then you should be excited,” he said. “But if you want a ticket to Stockholm for sure in the next 15 years, then probably not.”


  • richardmitnick 3:00 pm on September 9, 2016 Permalink | Reply
    Tags: , , , Quanta Magazine   

    From Quanta: “Colliding Black Holes Tell New Story of Stars” 

    Quanta Magazine

    September 6, 2016
    Natalie Wolchover

    Just months after their discovery, gravitational waves coming from the mergers of black holes are shaking up astrophysics.

    Ana Kova for Quanta Magazine

    At a talk last month in Santa Barbara, California, addressing some of the world’s leading astrophysicists, Selma de Mink cut to the chase. “How did they form?” she began.

    “They,” as everybody knew, were the two massive black holes that, more than 1 billion years ago and in a remote corner of the cosmos, spiraled together and merged, making waves in the fabric of space and time. These “gravitational waves” rippled outward and, on Sept. 14, 2015, swept past Earth, strumming the ultrasensitive detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO).

    LSC LIGO Scientific Collaboration
    Caltech/MIT Advanced aLigo Hanford, WA, USA installation
    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    LIGO’s discovery, announced in February, triumphantly vindicated Albert Einstein’s 1916 prediction that gravitational waves exist.

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    By tuning in to these tiny tremors in space-time and revealing for the first time the invisible activity of black holes — objects so dense that not even light can escape their gravitational pull — LIGO promised to open a new window on the universe, akin, some said, to when Galileo first pointed a telescope at the sky.

    Already, the new gravitational-wave data has shaken up the field of astrophysics. In response, three dozen experts spent two weeks in August sorting through the implications at the Kavli Institute for Theoretical Physics (KITP) in Santa Barbara.

    Jump-starting the discussions, de Mink, an assistant professor of astrophysics at the University of Amsterdam, explained that of the two — and possibly more — black-hole mergers that LIGO has detected so far, the first and mightiest event, labeled GW150914, presented the biggest puzzle. LIGO was expected to spot pairs of black holes weighing in the neighborhood of 10 times the mass of the sun, but these packed roughly 30 solar masses apiece. “They are there — massive black holes, much more massive than we thought they were,” de Mink said to the room. “So, how did they form?”

    The mystery, she explained, is twofold: How did the black holes get so massive, considering that stars, some of which collapse to form black holes, typically blow off most of their mass before they die, and how did they get so close to each other — close enough to merge within the lifetime of the universe? “These are two things that are sort of mutually exclusive,” de Mink said. A pair of stars that are born huge and close together will normally mingle and then merge before ever collapsing into black holes, failing to kick up detectable gravitational waves.
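For a rough sense of why proximity matters, the standard Peters (1964) inspiral-time formula for a circular binary, t = 5c⁵a⁴ / (256 G³m₁m₂(m₁ + m₂)), can be evaluated for two 30-solar-mass black holes. This is an illustrative back-of-the-envelope sketch, not a calculation from the article:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m
YEAR = 3.156e7     # seconds per year

def inspiral_time_years(a_au, m1_msun=30.0, m2_msun=30.0):
    """Peters (1964) coalescence time for a circular binary:
    t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))."""
    m1, m2 = m1_msun * M_SUN, m2_msun * M_SUN
    a = a_au * AU
    t = 5 * C**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))
    return t / YEAR

age_of_universe = 13.8e9  # years

# Two 30-solar-mass black holes separated by 1 AU take far
# longer than the age of the universe to merge ...
print(f"1 AU:   {inspiral_time_years(1.0):.1e} yr")

# ... but at a fifth of an AU they merge well within it.
print(f"0.2 AU: {inspiral_time_years(0.2):.1e} yr")
```

The a⁴ dependence is the crux: widening the orbit by a factor of 5 lengthens the merger time 625-fold, so black holes that form at typical stellar-binary separations of an AU or more never merge within a Hubble time, while stars born close enough tend to merge with each other before becoming black holes at all.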

    Nailing down the story behind GW150914 “is challenging all our understanding,” said Matteo Cantiello, an astrophysicist at KITP. Experts must retrace the uncertain steps from the moment of the merger back through the death, life and birth of a pair of stars — a sequence that involves much unresolved astrophysics. “This will really reinvigorate certain old questions in our understanding of stars,” said Eliot Quataert, a professor of astronomy at the University of California, Berkeley, and one of the organizers of the KITP program. Understanding LIGO’s data will demand a reckoning of when and why stars go supernova; which ones turn into which kinds of stellar remnants; how stars’ composition, mass and rotation affect their evolution; how their magnetic fields operate; and more.

    The work has just begun, but already LIGO’s first few detections have pushed two theories of binary black-hole formation to the front of the pack. Over the two weeks in Santa Barbara, a rivalry heated up between the new “chemically homogeneous” model for the formation of black-hole binaries, proposed by de Mink and colleagues earlier this year, and the classic “common envelope” model espoused by many other experts. Both theories (and a cluster of competitors) might be true somewhere in the cosmos, but probably only one of them accounts for the vast majority of black-hole mergers. “In science,” said Daniel Holz of the University of Chicago, a common-envelope proponent, “there’s usually only one dominant process — for anything.”

    Star Stories

    The R136 star cluster at the heart of the Tarantula Nebula gives rise to many massive stars, which are thought to be the progenitors of black-hole binaries. NASA, ESA, F. Paresce, R. O’Connell and the Wide Field Camera 3 Science Oversight Committee

    The story of GW150914 almost certainly starts with massive stars — those that are at least eight times as heavy as the sun and which, though rare, play a starring role in galaxies. Massive stars are the ones that explode as supernovas, spewing matter into space to be recycled as new stars; only their cores then collapse into black holes and neutron stars, which drive exotic and influential phenomena such as gamma-ray bursts, pulsars and X-ray binaries. De Mink and collaborators showed in 2012 that most known massive stars live in binary systems. Binary massive stars, in her telling, “dance” and “kiss” and suck each other’s hydrogen fuel “like vampires,” depending on the circumstances. But which circumstances lead them to shrink down to points that recede behind veils of darkness, and then collide?

    The conventional common-envelope story, developed over decades starting with the 1970s work of the Soviet scientists Aleksandr Tutukov and Lev Yungelson, tells of a pair of massive stars that are born in a wide orbit. As the first star runs out of fuel in its core, its outer layers of hydrogen puff up, forming a “red supergiant.” Much of this hydrogen gas gets sucked away by the second star, vampire-style, and the core of the first star eventually collapses into a black hole. The interaction draws the pair closer, so that when the second star puffs up into a supergiant, it engulfs the two of them in a common envelope. The companions sink ever closer as they wade through the hydrogen gas. Eventually, the envelope is lost to space, and the core of the second star, like the first, collapses into a black hole. The two black holes are close enough to someday merge.

    Because the stars shed so much mass, this model is expected to yield pairs of black holes on the lighter side, weighing in the ballpark of 10 solar masses. LIGO’s second signal, from the merger of eight- and 14-solar-mass black holes, is a home run for the model. But some experts say that the first event, GW150914, is a stretch.

In a June paper in Nature, Holz and collaborators Krzysztof Belczynski, Tomasz Bulik and Richard O’Shaughnessy argued that common envelopes can theoretically produce mergers of 30-solar-mass black holes if the progenitor stars weigh something like 90 solar masses and contain almost no metal, since metal content accelerates a star’s mass loss. Such heavy binary systems are likely to be relatively rare in the universe, raising doubts in some minds about whether LIGO would have observed such an outlier so soon. In Santa Barbara, scientists agreed that if LIGO detects many very heavy mergers relative to lighter ones, this will weaken the case for the common-envelope scenario.

    Lucy Reading-Ikkanda for Quanta Magazine

    This weakness of the conventional theory has created an opening for new ideas. One such idea began brewing in 2014, when de Mink and Ilya Mandel, an astrophysicist at the University of Birmingham and a member of the LIGO collaboration, realized that a type of binary-star system that de Mink has studied for years might be just the ticket to forming massive binary black holes.

    The chemically homogeneous model begins with a pair of massive stars that are rotating around each other extremely rapidly and so close together that they become “tidally locked,” like tango dancers. In tango, “you are extremely close, so your bodies face each other all the time,” said de Mink, a dancer herself. “And that means you are spinning around each other, but it also forces you to spin around your own axis as well.” This spinning stirs the stars, making them hot and homogeneous throughout. And this process might allow the stars to undergo fusion throughout their whole interiors, rather than just their cores, until both stars use up all their fuel. Because the stars never expand, they do not intermingle or shed mass. Instead, each collapses wholesale under its own weight into a massive black hole. The black holes dance for a few billion years, gradually spiraling closer and closer until, in a space-time-buckling split second, they coalesce.

    De Mink and Mandel made their case for the chemically homogeneous model in a paper posted online in January. Another paper proposing the same idea, by researchers at the University of Bonn led by the graduate student Pablo Marchant, appeared days later. When LIGO announced the detection of GW150914 the following month, the chemically homogeneous theory shot to prominence. “What I’m discussing was a pretty crazy story up to the moment that it made, very nicely, black holes of the right mass,” de Mink said.

    However, aside from some provisional evidence, the existence of stirred stars is speculative. And some experts question the model’s efficacy. Simulations suggest that the chemically homogeneous model struggles to explain smaller black-hole binaries like those in LIGO’s second signal. Worse, doubt has arisen as to how well the theory really accounts for GW150914, which is supposed to be its main success story. “It’s a very elegant model,” Holz said. “It’s very compelling. The problem is that it doesn’t seem to fully work.”

    All Spun Up

Along with the masses of the colliding black holes, LIGO’s gravitational-wave signals also reveal whether the black holes were spinning. At first, researchers paid less attention to the spin measurement, in part because gravitational waves register only the component of each black hole’s spin that is aligned with the axis of the orbit, saying nothing about spin in other directions. However, in a May paper, researchers at the Institute for Advanced Study in Princeton, N.J., and the Hebrew University of Jerusalem argued that the kind of spin that LIGO measures is exactly the kind black holes would be expected to have if they formed via the chemically homogeneous channel. (Tango dancers spin and orbit each other in the same direction.) And yet, the 30-solar-mass black holes in GW150914 were measured to have very low spin, if any, seemingly striking a blow against the tango scenario.
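The aligned-spin quantity LIGO constrains best is conventionally summarized as an effective spin, the mass-weighted average of each hole's spin component along the orbital axis. A minimal sketch, with illustrative values rather than LIGO's published measurements:

```python
# Effective aligned spin: the mass-weighted average of each black hole's
# dimensionless spin component along the orbital axis. Spin pointing
# within the orbital plane contributes nothing to this quantity.
# All numbers are illustrative, not LIGO's published measurements.

def chi_eff(m1, spin1_along_axis, m2, spin2_along_axis):
    """Mass-weighted aligned spin; each spin component lies in [-1, 1]."""
    return (m1 * spin1_along_axis + m2 * spin2_along_axis) / (m1 + m2)

# Tango scenario: tidal locking leaves both holes spinning rapidly along
# the orbital axis, so a large positive value is expected.
print(chi_eff(30, 0.8, 30, 0.8))   # 0.8

# Rapid spins tilted entirely into the orbital plane register as zero,
# which is why a low measured value only "seemingly" strikes a blow.
print(chi_eff(30, 0.0, 30, 0.0))   # 0.0
```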

    “Is spin a problem for the chemically homogeneous channel?” Sterl Phinney, a professor of astrophysics at the California Institute of Technology, prompted the Santa Barbara group one afternoon. After some debate, the scientists agreed that the answer was yes.

However, mere days later, de Mink, Marchant, and Cantiello found a possible way out for the theory. Cantiello, who has recently made strides in studying stellar magnetic fields, realized that the tangoing stars in the chemically homogeneous channel are essentially spinning balls of charge that would have powerful magnetic fields, and these magnetic fields are likely to cause the stars’ outer layers to stream into strong poles. In the same way that a spinning figure skater slows down when she extends her arms, these poles would act like brakes, gradually reducing the stars’ spin. The trio has since been working to see if their simulations bear out this picture. Quataert called the idea “plausible but perhaps a little weaselly.”

    Lucy Reading-Ikkanda for Quanta Magazine; Source: LIGO

    On the last day of the program, setting the stage for an eventful autumn as LIGO comes back online with higher sensitivity and more gravitational-wave signals roll in, the scientists signed “Phinney’s Declaration,” a list of concrete statements about what their various theories predict. “Though all models for black hole binaries may be created equal (except those inferior ones proposed by our competitors),” begins the declaration, drafted by Phinney, “we hope that observational data will soon make them decidedly unequal.”

As the data pile up, an underdog theory of black-hole binary formation could conceivably gain traction — for instance, the notion that binaries form through dynamical interactions inside dense clusters of stars called “globular clusters.” LIGO’s first run suggested that black-hole mergers are more common than the globular-cluster model predicts. But perhaps the experiment just got lucky last time and the estimated merger rate will drop.

    Adding to the mix, a group of cosmologists recently theorized that GW150914 might have come from the merger of primordial black holes, which were never stars to begin with but rather formed shortly after the Big Bang from the collapse of energetic patches of space-time. Intriguingly, the researchers argued in a recent paper in Physical Review Letters that such 30-solar-mass primordial black holes could comprise some or all of the missing “dark matter” that pervades the cosmos. There’s a way of testing the idea against astrophysical signals called fast radio bursts.

    It’s perhaps too soon to dwell on such an enticing possibility; astrophysicists point out that it would require suspiciously good luck for black holes from the Big Bang to happen to merge at just the right time for us to detect them, 13.8 billion years later. This is another example of the new logic that researchers must confront at the dawn of gravitational-wave astronomy. “We’re at a really fun stage,” de Mink said. “This is the first time we’re thinking in these pictures.”

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 8:57 am on September 9, 2016 Permalink | Reply
Tags: Genetic Engineering to Clash With Evolution, Quanta Magazine

    From Quanta: “Genetic Engineering to Clash With Evolution” 

Quanta Magazine

    September 8, 2016
    Brooke Borel

    In a crowded auditorium at New York’s Cold Spring Harbor Laboratory in August, Philipp Messer, a population geneticist at Cornell University, took the stage to discuss a powerful and controversial new application for genetic engineering: gene drives.

    Gene drives can force a trait through a population, defying the usual rules of inheritance. A specific trait ordinarily has a 50-50 chance of being passed along to the next generation. A gene drive could push that rate to nearly 100 percent. The genetic dominance would then continue in all future generations. You want all the fruit flies in your lab to have light eyes? Engineer a drive for eye color, and soon enough, the fruit flies’ offspring will have light eyes, as will their offspring, and so on for all future generations. Gene drives may work in any species that reproduces sexually, and they have the potential to revolutionize disease control, agriculture, conservation and more. Scientists might be able to stop mosquitoes from spreading malaria, for example, or eradicate an invasive species.
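The arithmetic behind "nearly 100 percent" can be sketched with a toy allele-frequency model. Assuming random mating, no fitness cost, and a drive that converts the partner chromosome with efficiency c (all simplifications of mine, not the article's), a heterozygote transmits the drive with probability (1 + c)/2, which gives the recursion p' = p + c·p·(1 − p):

```python
# Toy model of drive inheritance vs. ordinary Mendelian inheritance.
# Assumptions (mine, not the article's): random mating, no fitness cost,
# and a drive converting the partner allele with efficiency c, so a
# heterozygote transmits the drive with probability (1 + c) / 2.
# That yields the allele-frequency recursion p' = p + c * p * (1 - p).

def spread(p0, c, generations):
    """Drive allele frequency trajectory; c=0 is normal inheritance."""
    freqs = [p0]
    for _ in range(generations):
        p = freqs[-1]
        freqs.append(p + c * p * (1 - p))
    return freqs

mendelian = spread(p0=0.01, c=0.0, generations=10)
perfect_drive = spread(p0=0.01, c=1.0, generations=10)
print(f"no drive:      {mendelian[-1]:.3f}")       # stays at 0.010
print(f"perfect drive: {perfect_drive[-1]:.3f}")   # essentially 1.000
```

A 1 percent release with perfect conversion sweeps to near-fixation in about ten generations, while under ordinary inheritance (c = 0) the allele frequency never moves at all.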

    The technology represents the first time in history that humans have the ability to engineer the genes of a wild population. As such, it raises intense ethical and practical concerns, not only from critics but from the very scientists who are working with it.

Messer’s presentation highlighted a potential snag for plans to engineer wild ecosystems: Nature usually finds a way around our meddling. Pathogens evolve antibiotic resistance; insects and weeds evolve to thwart pesticides. Mosquitoes and invasive species reprogrammed with gene drives can be expected to adapt as well, especially if the gene drive is harmful to the organism — populations will tend to survive by breaking the drive.

    “In the long run, even with a gene drive, evolution wins in the end,” said Kevin Esvelt, an evolutionary engineer at the Massachusetts Institute of Technology. “On an evolutionary timescale, nothing we do matters. Except, of course, extinction. Evolution doesn’t come back from that one.”

Gene drives are a young technology, and none have been released into the wild. A handful of laboratory studies show that gene drives work in practice — in fruit flies, mosquitoes and yeast. Most of these experiments have found that the organisms begin to develop evolutionary resistance that should hinder the gene drives. But these proof-of-concept studies follow small populations of organisms. Large populations with more genetic diversity — like the millions of insects swarming in the wild — offer the most opportunities for resistance to emerge.

    It’s impossible — and unethical — to test a gene drive in a vast wild population to sort out the kinks. Once a gene drive has been released, there may be no way to take it back. (Some researchers have suggested the possibility of releasing a second gene drive to shut down a rogue one. But that approach is hypothetical, and even if it worked, the ecological damage done in the meantime would remain unchanged.)

    The next best option is to build models to approximate how wild populations might respond to the introduction of a gene drive. Messer and other researchers are doing just that. “For us, it was clear that there was this discrepancy — a lot of geneticists have done a great job at trying to build these systems, but they were not concerned that much with what is happening on a population level,” Messer said. Instead, he wants to learn “what will happen on the population level, if you set these things free and they can evolve for many generations — that’s where resistance comes into play.”

    At the meeting at Cold Spring Harbor Laboratory, Messer discussed a computer model his team developed, which they described in a paper posted in June on the scientific preprint site biorxiv.org. The work is one of three theoretical papers on gene drive resistance submitted to biorxiv.org in the last five months — the others are from a researcher at the University of Texas, Austin, and a joint team from Harvard University and MIT. (The authors are all working to publish their research through traditional peer-reviewed journals.) According to Messer, his model suggests “resistance will evolve almost inevitably in standard gene drive systems.”

    It’s still unclear where all this interplay between resistance and gene drives will end up. It could be that resistance will render the gene drive impotent. On the one hand, this may mean that releasing the drive was a pointless exercise; on the other hand, some researchers argue, resistance could be an important natural safety feature. Evolution is unpredictable by its very nature, but a handful of biologists are using mathematical models and careful lab experiments to try to understand how this powerful genetic tool will behave when it’s set loose in the wild.

    Lucy Reading-Ikkanda for Quanta Magazine

    Resistance Isn’t Futile

Gene drives aren’t exclusively a human technology. They occasionally appear in nature. Researchers first thought of harnessing the natural versions of gene drives decades ago, proposing to re-create them with “crude means, like radiation” or chemicals, said Anna Buchman, a postdoctoral researcher in molecular biology at the University of California, Riverside. These genetic oddities, she added, “could be manipulated to spread genes through a population or suppress a population.”

    In 2003, Austin Burt, an evolutionary geneticist at Imperial College London, proposed a more finely tuned approach called a homing endonuclease gene drive, which would zero in on a specific section of DNA and alter it.

    Burt mentioned the potential problem of resistance — and suggested some solutions — both in his seminal paper and in subsequent work. But for years, it was difficult to engineer a drive in the lab, because the available technology was cumbersome.

    With the advent of genetic engineering, Burt’s idea became reality. In 2012, scientists unveiled CRISPR, a gene-editing tool that has been described as a molecular word processor. It has given scientists the power to alter genetic information in every organism they have tried it on. CRISPR locates a specific bit of genetic code and then breaks both strands of the DNA at that site, allowing genes to be deleted, added or replaced.

    CRISPR provides a relatively easy way to release a gene drive. First, researchers insert a CRISPR-powered gene drive into an organism. When the organism mates, its CRISPR-equipped chromosome cleaves the matching chromosome coming from the other parent. The offspring’s genetic machinery then attempts to sew up this cut. When it does, it copies over the relevant section of DNA from the first parent — the section that contains the CRISPR gene drive. In this way, the gene drive duplicates itself so that it ends up on both chromosomes, and this will occur with nearly every one of the original organism’s offspring.

    Just three years after CRISPR’s unveiling, scientists at the University of California, San Diego, used CRISPR to insert inheritable gene drives into the DNA of fruit flies, thus building the system Burt had proposed. Now scientists can order the essential biological tools on the internet and build a working gene drive in mere weeks. “Anyone with some genetics knowledge and a few hundred dollars can do it,” Messer said. “That makes it even more important that we really study this technology.”

    Although there are many different ways gene drives could work in practice, two approaches have garnered the most attention: replacement and suppression. A replacement gene drive alters a specific trait. For example, an anti-malaria gene drive might change a mosquito’s genome so that the insect no longer had the ability to pick up the malaria parasite. In this situation, the new genes would quickly spread through a wild population so that none of the mosquitoes could carry the parasite, effectively stopping the spread of the disease.

    A suppression gene drive would wipe out an entire population. For example, a gene drive that forced all offspring to be male would make reproduction impossible.

    But wild populations may resist gene drives in unpredictable ways. “We know from past experiences that mosquitoes, especially the malaria mosquitoes, have such peculiar biology and behavior,” said Flaminia Catteruccia, a molecular entomologist at the Harvard T.H. Chan School of Public Health. “Those mosquitoes are much more resilient than we make them. And engineering them will prove more difficult than we think.” In fact, such unpredictability could likely be found in any species.

    A sample of malaria-infected blood contains two Plasmodium falciparum parasites. CDC/PHIL

    The three new biorxiv.org papers use different models to try to understand this unpredictability, at least at its simplest level.

    The Cornell group used a basic mathematical model to map how evolutionary resistance will emerge in a replacement gene drive. It focuses on how DNA heals itself after CRISPR breaks it (the gene drive pushes a CRISPR construct into each new organism, so it can cut, copy and paste itself again). The DNA repairs itself automatically after a break. Exactly how it does so is determined by chance. One option is called nonhomologous end joining, in which the two ends that were broken get stitched back together in a random way. The result is similar to what you would get if you took a sentence, deleted a phrase, and then replaced it with an arbitrary set of words from the dictionary — you might still have a sentence, but it probably wouldn’t make sense. The second option is homology-directed repair, which uses a genetic template to heal the broken DNA. This is like deleting a phrase from a sentence, but then copying a known phrase as a replacement — one that you know will fit the context.

    Nonhomologous end joining is a recipe for resistance. Because the CRISPR system is designed to locate a specific stretch of DNA, it won’t recognize a section that has the equivalent of a nonsensical word in the middle. The gene drive won’t get into the DNA, and it won’t get passed on to the next generation. With homology-directed repair, the template could include the gene drive, ensuring that it would carry on.

    The Cornell model tested both scenarios. “What we found was it really is dependent on two things: the nonhomologous end-joining rate and the population size,” said Robert Unckless, an evolutionary geneticist at the University of Kansas who co-authored the paper as a postdoctoral researcher at Cornell. “If you can’t get nonhomologous end joining under control, resistance is inevitable. But resistance could take a while to spread, which means you might be able to achieve whatever goal you want to achieve.” For example, if the goal is to create a bubble of disease-proof mosquitoes around a city, the gene drive might do its job before resistance sets in.
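A stochastic toy version of that dependence can be sketched in a few lines. Everything here is invented for illustration (the release frequency, the NHEJ rate q, a fitness cost s on the drive, simple random mating with drift), and it is not the published Cornell model: each generation, wild-type alleles paired with a drive allele are converted to drive by homology-directed repair, or mis-repaired into uncuttable resistant alleles at rate q, and the next generation's allele pool is resampled to mimic drift.

```python
# Toy stochastic sketch of resistance via nonhomologous end joining (NHEJ).
# Parameters (q, s, release frequency) are invented for illustration;
# this is a simplification, not the published Cornell model.
import random

def simulate(n_individuals, q, s, gens, seed=0):
    """Return final frequencies of drive (D), wild-type (w), resistant (r)."""
    rng = random.Random(seed)
    n = 2 * n_individuals                    # allele count, diploid population
    p = {"D": 0.05, "w": 0.95, "r": 0.0}     # 5% drive release
    for _ in range(gens):
        # Cutting in D/w heterozygotes: the wild allele is either converted
        # to drive (homology-directed repair) or mis-repaired into an
        # uncuttable, cost-free resistant allele (NHEJ, probability q).
        converted = p["w"] * p["D"]
        weights = {
            "D": (p["D"] + converted * (1 - q)) * (1 - s),  # drive pays cost s
            "w": p["w"] - converted,
            "r": p["r"] + converted * q,
        }
        # Genetic drift: resample the next generation's allele pool.
        draws = rng.choices(list(weights), weights=list(weights.values()), k=n)
        p = {a: draws.count(a) / n for a in p}
    return p

final = simulate(n_individuals=10_000, q=0.1, s=0.2, gens=150)
print(final)   # the resistant allele ends up dominating
```

With NHEJ switched off (q = 0) the same function sends the drive to fixation; with q = 0.1 the cost-free resistant allele takes over once the wild-type alleles are exhausted, echoing Messer's "resistance will evolve almost inevitably."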

    The team from Harvard and MIT also looked at nonhomologous end joining, but they took it a step further by suggesting a way around it: by designing a gene drive that targets multiple sites in the same gene. “If any of them cut at their sites, then it’ll be fine — the gene drive will copy,” said Charleston Noble, a doctoral student at Harvard and the first author of the paper. “You have a lot of chances for it to work.”
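The logic of the multiple-site strategy is simple probability: independent failure chances multiply. A sketch, with an invented per-site cutting probability:

```python
# Multiple target sites as a hedge against mis-repair: if each site is
# independently cut with probability p, the drive copies itself whenever
# at least one cut succeeds. The per-site probability is illustrative.

def p_drive_copies(p_per_site, n_sites):
    """Chance that at least one of n independent target sites is cut."""
    return 1 - (1 - p_per_site) ** n_sites

for n in (1, 2, 3, 4):
    print(f"{n} site(s): {p_drive_copies(0.9, n):.4f}")
# the chance of total failure falls from 10% with one site to 0.01% with four
```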

    The gene drive could also target an essential gene, Noble said — one that the organism can’t afford to lose. The organism may want to kick out the gene drive, but not at the cost of altering a gene that’s essential to life.

    The third biorxiv.org paper, from the UT Austin team, took a different approach. It looked at how resistance could emerge at the population level through behavior, rather than within the target sequence of DNA. The target population could simply stop breeding with the engineered individuals, for example, thus stopping the gene drive.

    “The math works out that if a population is inbred, at least to some degree, the gene drive isn’t going to work out as well as in a random population,” said James Bull, the author of the paper and an evolutionary biologist at Austin. “It’s not just sequence evolution. There could be all kinds of things going on here, by which populations block [gene drives],” Bull added. “I suspect this is the tip of the iceberg.”

    Resistance is constrained only by the limits of evolutionary creativity. It could emerge from any spot along the target organism’s genome. And it extends to the surrounding environment as well. For example, if a mosquito is engineered to withstand malaria, the parasite itself may grow resistant and mutate into a newly infectious form, Noble said.

    Not a Bug, but a Feature?

If the point of a gene drive is to push a desired trait through a population, then resistance would seem to be a bad thing. If a drive stops working before an entire population of mosquitoes is malaria-proof, for example, then the disease will still spread. But at the Cold Spring Harbor Laboratory meeting, Messer suggested the opposite: “Let’s embrace resistance. It could provide a valuable safety control mechanism.” It’s possible that the drive could spread just far enough to stop a disease in a particular region, but then stall before it overtook all of the mosquitoes worldwide, with whatever unforeseen environmental consequences a global spread might bring.

    Not everyone is convinced that this optimistic view is warranted. “It’s a false security,” said Ethan Bier, a geneticist at the University of California, San Diego. He said that while such a strategy is important to study, he worries that researchers will be fooled into thinking that forms of resistance offer “more of a buffer and safety net than they do.”

    And while mathematical models are helpful, researchers stress that models can’t replace actual experimentation. Ecological systems are just too complicated. “We have no experience engineering systems that are going to evolve outside of our control. We have never done that before,” Esvelt said. “So that’s why a lot of these modeling studies are important — they can give us a handle on what might happen. But I’m also hesitant to rely on modeling and trying to predict in advance when systems are so complicated.”

    Messer hopes to put his theoretical work into a real-world setting, at least in the lab. He is currently directing a gene drive experiment at Cornell that tracks multiple cages of around 5,000 fruit flies each — more animals than past studies have used to research gene drive resistance. The gene drive is designed to distribute a fluorescent protein through the population. The proteins will glow red under a special light, a visual cue showing how far the drive gets before resistance weeds it out.

    Others are also working on resistance experiments: Esvelt and Catteruccia, for example, are working with George Church, a geneticist at Harvard Medical School, to develop a gene drive in mosquitoes that they say will be immune to resistance. They plan to insert multiple drives in the same gene — the strategy suggested by the Harvard/MIT paper.

    Such experiments will likely guide the next generation of computer models, to help tailor them more precisely to a large wild population.

    “I think it’s been interesting because there is this sort of going back and forth between theory and empirical work,” Unckless said. “We’re still in the early days of it, but hopefully it’ll be worthwhile for both sides, and we’ll make some informed and ethically correct decisions about what to do.”


  • richardmitnick 3:40 pm on August 29, 2016 Permalink | Reply
Tags: Jammed Cells Expose the Physics of Cancer, Quanta Magazine

    From Quanta: “Jammed Cells Expose the Physics of Cancer” 

Quanta Magazine

    August 16, 2016
    Gabriel Popkin

    The subtle mechanics of densely packed cells may help explain why some cancerous tumors stay put while others break off and spread through the body.

    Ashley Mackenzie for Quanta Magazine

    In 1995, while he was a graduate student at McGill University in Montreal, the biomedical scientist Peter Friedl saw something so startling it kept him awake for several nights. Coordinated groups of cancer cells he was growing in his adviser’s lab started moving through a network of fibers meant to mimic the spaces between cells in the human body.

    For more than a century, scientists had known that individual cancer cells can metastasize, leaving a tumor and migrating through the bloodstream and lymph system to distant parts of the body. But no one had seen what Friedl had caught in his microscope: a phalanx of cancer cells moving as one. It was so new and strange that at first he had trouble getting it published. “It was rejected because the relevance [to metastasis] wasn’t clear,” he said. Friedl and his co-authors eventually published a short paper in the journal Cancer Research.

Two decades later, biologists have become increasingly convinced that mobile clusters of tumor cells, though rarer than individual circulating cells, are seeding many — perhaps most — of the deadly metastatic invasions that cause 90 percent of all cancer deaths. But it wasn’t until 2013 that Friedl, now at Radboud University in the Netherlands, really felt that he understood what he and his colleagues were seeing. Things finally fell into place for him when he read a paper by Jeffrey Fredberg, a professor of bioengineering and physiology at Harvard University, which proposed that cells could be “jammed” — packed together so tightly that they become a unit, like coffee beans stuck in a hopper.

    Fredberg’s research focused on lung cells, but Friedl thought his own migrating cancer cells might also be jammed. “I realized we had exactly the same thing, in 3-D and in motion,” he said. “That got me very excited, because it was an available concept that we could directly put onto our finding.” He soon published one of the first papers applying the concept of jamming to experimental measurements of cancer cells.

    Physicists have long provided doctors with tumor-fighting tools such as radiation and proton beams. But only recently has anyone seriously considered the notion that purely physical concepts might help us understand the basic biology of one of the world’s deadliest phenomena. In the past few years, physicists studying metastasis have generated surprisingly precise predictions of cell behavior. Though it’s early days, proponents are optimistic that phase transitions such as jamming will play an increasingly important role in the fight against cancer. “Certainly in the physics community there’s momentum,” Fredberg said. “If the physicists are on board with it, the biologists are going to have to. Cells obey the rules of physics — there’s no choice.”

    The Jam Index

    In the broadest sense, physical principles have been applied to cancer since long before physics existed as a discipline. The ancient Greek physician Hippocrates gave cancer its name when he referred to it as a “crab,” comparing the shape of a tumor and its surrounding veins to a carapace and legs.

But solid tumors that stay put are not what kill more than 8 million people annually. Once tumor cells strike out on their own and metastasize to new sites in the body, drugs and other therapies rarely do more than prolong a patient’s life for a few years.

    Biologists often view cancer primarily as a genetic program gone wrong, with mutations and epigenetic changes producing cells that don’t behave the way they should: Genes associated with cell division and growth may be turned up, and genes for programmed cell death may be turned down. To a small but growing number of physicists, however, the shape-shifting and behavior changes in cancer cells evoke not an errant genetic program but a phase transition.

    The phase transition — a change in a material’s internal organization between ordered and disordered states — is a bedrock concept in physics. Anyone who has watched ice melt or water boil has witnessed a phase transition. Physicists have also identified such transitions in magnets, crystals, flocking birds and even cells (and cellular components) placed in artificial environments.

    But compared to a homogeneous material like water or a magnet — or even a collection of identical cells in a dish — cancer is a hot mess. Cancers vary widely depending on the individual and the organ they develop in. Even a single tumor comprises a mind-boggling jumble of cells with different shapes, sizes and protein compositions. Such complexities can make biologists wary of a general theoretical framework. But they don’t daunt physicists. “Biologists are more trained to look at complexity and differences,” said the physicist Krastan Blagoev, who directs a National Science Foundation program that funds work on theoretical physics in living systems. “Physicists try to look at what’s common and extract behaviors from the commonness.”

    In a demonstration of this approach, the physicists Andrea Liu, now of the University of Pennsylvania, and Sidney Nagel of the University of Chicago published a brief commentary in Nature in 1998 about the process of jamming. They described familiar examples: traffic jams, piles of sand, and coffee beans stuck together in a grocery-store hopper. In each case, individual objects are packed so tightly that the collection locks into place and resembles a solid. Liu and Nagel put forward the provocative suggestion that jamming could be a previously unrecognized phase transition, a notion that physicists, after more than a decade of debate, have now accepted.

    Though not the first mention of jamming in the scientific literature, Liu and Nagel’s paper set off what Fredberg calls “a deluge” among physicists. (The paper has been cited more than 1,400 times.) Fredberg realized that cells in lung tissue, which he had spent much of his career studying, are closely packed in a similar way to coffee beans and sand. In 2009 he and colleagues published [Nature Physics] the first paper suggesting that jamming could hold cells in tissues in place, and that an unjamming transition could mobilize some of those cells, a possibility that could have implications for asthma and other diseases.


    The paper appeared amid a growing recognition of the importance of mechanics, and not just genetics, in directing cell behavior, Fredberg said. “People had always thought that the mechanical implications were at the most downstream end of the causal cascade, and at the most upstream end are genetic and epigenetic factors,” he said. “Then people discovered that physical forces and mechanical events actually can be upstream of genetic events — that cells are very aware of their mechanical microenvironments.”

    Lisa Manning, a physicist at Syracuse University, read Fredberg’s paper and decided to put his idea into action. She and colleagues used a two-dimensional model of cells that are connected along edges and at vertices, filling all space. The model yielded an order parameter — a measurable number that quantifies a material’s internal order — that they called the “shape index.” The shape index of a cell is the perimeter of its two-dimensional cross-section divided by the square root of that cross-section’s area. “We made what I would consider a ridiculously strict prediction: When that number is equal to 3.81 or below, the tissue is a solid, and when that number is above 3.81, that tissue is a fluid,” Manning said. “I asked Jeff Fredberg to go look at this, and he did [Nature Materials], and it worked perfectly.”

    Fredberg saw that lung cells with a shape index above 3.81 started to mobilize and squeeze past each other. Manning’s prediction “came out of pure theory, pure thought,” he said. “It’s really an astounding validation of a physical theory.” A program officer with the Physical Sciences in Oncology program at the National Cancer Institute learned about the results and encouraged Fredberg to do a similar analysis using cancer cells. The program has given him funding to look for signatures of jamming in breast-cancer cells.
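    Manning’s criterion is simple enough to apply directly to segmented cell images. Here is a minimal sketch in Python, assuming the standard definition of the shape index (perimeter divided by the square root of area); the cell geometries below are illustrative stand-ins, not measurements from Fredberg’s data:

    ```python
    import math

    # Critical shape index from Manning's 2-D vertex model.
    JAMMING_THRESHOLD = 3.81

    def shape_index(perimeter: float, area: float) -> float:
        """Dimensionless shape index of a 2-D cell cross-section: P / sqrt(A)."""
        return perimeter / math.sqrt(area)

    def tissue_state(perimeter: float, area: float) -> str:
        """Classify a cell shape as solid-like (jammed) or fluid-like (unjammed)."""
        return "fluid" if shape_index(perimeter, area) > JAMMING_THRESHOLD else "solid"

    # A compact cell, approximated as a regular hexagon with unit sides:
    hex_p, hex_a = 6.0, 3 * math.sqrt(3) / 2
    print(round(shape_index(hex_p, hex_a), 2), tissue_state(hex_p, hex_a))  # 3.72 solid

    # An elongated cell, approximated as a 4.0 x 0.5 rectangle:
    rect_p, rect_a = 9.0, 2.0
    print(round(shape_index(rect_p, rect_a), 2), tissue_state(rect_p, rect_a))  # 6.36 fluid
    ```

    The intuition behind the threshold: compact, roundish cells (low shape index) are wedged in place by their neighbors, while elongated, wiggly-edged cells (high shape index) have enough excess perimeter to slide past one another, letting the tissue flow.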

    Meanwhile, Josef Käs, a physicist at Leipzig University in Germany, wondered if jamming could help explain puzzling behavior in cancer cells. He knew from his own studies and those of others that breast and cervical tumors, while mostly stiff, also contain soft, mobile cells that stream into the surrounding environment. If an unjamming transition was fluidizing these cancer cells, Käs immediately envisioned a potential response: Perhaps an analysis of biopsies based on measurements of tumor cells’ state of jamming, rather than a nearly century-old visual inspection procedure, could determine whether a tumor is about to metastasize.

    Käs is now using a laser-based tool to look for signatures of jamming in tumors, and he hopes to have results later this year. In a separate study that is just beginning, he is working with Manning and her colleagues at Syracuse to look for phase transitions not just in cancer cells themselves, but also in the matrix of fibers that surrounds tumors.

    More speculatively, Käs thinks the idea could also yield new avenues for therapies that are gentler than the shock-and-awe approach clinicians typically use to subdue a tumor. “If you can jam a whole tumor, then you have a benign tumor — that I believe,” he said. “If you find something which basically jams cancer cells efficiently and buys you another 20 years, that might be better than very disruptive chemotherapies.” Yet Käs is quick to clarify that he is not sure how a clinician would induce jamming.

    Castaway Cooperators

    Beyond the clinic, jamming could help resolve a growing conceptual debate in cancer biology, proponents say. Oncologists have suspected for several decades that metastasis usually requires a transition between sticky epithelial cells, which make up the bulk of solid tumors, and thinner, more mobile mesenchymal cells that are often found circulating solo in cancer patients’ bloodstreams. As more and more studies deliver results showing activity similar to that of Friedl’s migrating cell clusters, however, researchers have begun to question [Science] whether go-it-alone mesenchymal cells, which Friedl calls “lonely riders,” could really be the main culprits behind the metastatic disease that kills millions.

    Some believe jamming could help get oncology out of this conceptual jam. A phase transition between jammed and unjammed states could fluidize and mobilize tumor cells as a group, without requiring them to transform from one cell type to a drastically different one, Friedl said. This could allow metastasizing cells to cooperate with one another, potentially giving them an advantage in colonizing a new site.

    The key to developing this idea is to allow for a range of intermediate cell states between two extremes. “In the past, theories for how cancer might behave mechanically have either been theories for solids or theories for fluids,” Manning said. “Now we need to take into account the fact that they’re right on the edge.”

    Hints of intermediate states between epithelial and mesenchymal are also emerging from physics research not motivated by phase-transition concepts. Herbert Levine, a biophysicist at Rice University, and his late colleague Eshel Ben-Jacob of Tel Aviv University recently created a model of metastasis based on concepts borrowed from nonlinear dynamics. It predicts the existence of clusters of circulating cells that have traits of both epithelial and mesenchymal cells. Cancer biologists have never seen such transitional cell states, but some are now seeking them in lab studies. “We wouldn’t have thought about it” on our own, said Kenneth Pienta, a prostate cancer specialist at Johns Hopkins University. “We have been directly affected by theoretical physics.”

    Biology’s Phase Transition

    Models of cell jamming, while useful, remain imperfect. For example, Manning’s models have been confined to two dimensions until now, even though tumors are three-dimensional. Manning is currently working on a 3-D version of her model of cellular motility. So far it seems to predict a fluid-to-solid transition similar to that of the 2-D model, she said.

    In addition, cells are not as simple as coffee beans. Cells in a tumor or tissue can change their own mechanical properties in often complex ways, using genetic programs and other feedback loops, and if jamming is to provide a solid conceptual foundation for aspects of cancer, it will need to account for this ability. “Cells are not passive,” said Valerie Weaver, the director of the Center for Bioengineering and Tissue Regeneration at the University of California, San Francisco. “Cells are responding.”

    Weaver also said that the predictions made by jamming models resemble what biologists call extrusion, a process by which dead epithelial cells are squeezed out of crowded tissue — the dysfunction of which has recently been implicated in certain types of cancer. Manning believes that cell jamming likely provides an overarching mechanical explanation for many of the cell behaviors involved in cancer, including extrusion.

    Space-filling tissue models like the one Manning uses, which produce the jamming behavior, also have trouble accounting for all the details of how cells interact with their neighbors and with their environment, Levine said. He has taken a different approach, modeling some of the differences in the ways cells can react when they’re being crowded by other cells. “Jamming will take you some distance,” he said, adding, “I think we will get stuck if we just limit ourselves to thinking of these physics transitions.”

    Manning acknowledges that jamming alone cannot describe everything going on in cancer, but at least in certain types of cancer, it may play an important role, she said. “The message we’re not trying to put out there is that mechanics is the only game in town,” she said. “In some instances we might do a better job than traditional biochemical markers [in determining whether a particular cancer is dangerous]; in some cases we might not. But for something like cancer we want to have all hands on deck.”

    With this in mind, physicists have suggested other novel approaches to understanding cancer. A number of physicists, including Ricard Solé of Pompeu Fabra University in Barcelona, Jack Tuszynski of the University of Alberta, and Salvatore Torquato of Princeton University, have published theory papers suggesting ways that phase transitions could help explain aspects of cancer, and how experimentalists could test such predictions.

    Others, however, feel that phase transitions may not be the right tool. Robert Austin, a biological physicist at Princeton University, cautions that phase transitions can be surprisingly complex. Even for a seemingly elementary case such as freezing water, physicists have yet to compute exactly when a transition will occur, he notes — and cancer is far more complicated than water.

    And from a practical point of view, all the theory papers in the world won’t make a difference if physicists cannot get biologists and clinicians interested in their ideas. Jamming is a hot topic in physics, but most biologists have not yet heard of it, Fredberg said. The two communities can talk to each other at physics-and-cancer workshops during meetings hosted by the American Physical Society, the American Association for Cancer Research or the National Cancer Institute. But language and culture gaps remain. “I can come up with some phase diagrams, but in the end you have to translate it into a language which is relevant to oncologists,” Käs said.

    Those gaps will narrow if jamming and phase transition theory continue to successfully explain what researchers see in cells and tissues, Fredberg said. “If there’s really increasing evidence that the way cells move collectively revolves around jamming, it’s just a matter of time until that works its way into the biological literature.”

    And that, Friedl said, will give biologists a powerful new conceptual tool. “The challenge, but also the fascination, comes from identifying how living biology hijacks the physical principle and brings it to life and reinvents it using molecular strategies of cells.”


    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.
