Tagged: Quanta Magazine

  • richardmitnick 4:29 pm on January 23, 2017
    Tags: Biophysics, Centrosomes, Earth’s primordial soup, Macromolecules, Protocells?, Quanta Magazine, simple “chemically active” droplets grow to the size of cells and spontaneously divide, The first living cells?, Vestiges of evolutionary history   

    From Quanta: “Dividing Droplets Could Explain Life’s Origin” 

    Quanta Magazine

    January 19, 2017
    Natalie Wolchover

    Researchers have discovered that simple “chemically active” droplets grow to the size of cells and spontaneously divide, suggesting they might have evolved into the first living cells.

    davidope for Quanta Magazine

    A collaboration of physicists and biologists in Germany has found a simple mechanism that might have enabled liquid droplets to evolve into living cells in early Earth’s primordial soup.

    Origin-of-life researchers have praised the minimalism of the idea. Ramin Golestanian, a professor of theoretical physics at the University of Oxford who was not involved in the research, called it a big achievement that suggests that “the general phenomenology of life formation is a lot easier than one might think.”

    The central question about the origin of life has been how the first cells arose from primitive precursors. What were those precursors, dubbed “protocells,” and how did they come alive? Proponents of the “membrane-first” hypothesis have argued that a fatty-acid membrane was needed to corral the chemicals of life and incubate biological complexity. But how could something as complex as a membrane start to self-replicate and proliferate, allowing evolution to act on it?

    In 1924, Alexander Oparin, the Russian biochemist who first envisioned a hot, briny primordial soup as the source of life’s humble beginnings, proposed that the mystery protocells might have been liquid droplets — naturally forming, membrane-free containers that concentrate chemicals and thereby foster reactions. In recent years, droplets have been found to perform a range of essential functions inside modern cells, reviving Oparin’s long-forgotten speculation about their role in evolutionary history. But neither he nor anyone else could explain how droplets might have proliferated, growing and dividing and, in the process, evolving into the first cells.

    Now, the new work by David Zwicker and collaborators at the Max Planck Institute for the Physics of Complex Systems and the Max Planck Institute of Molecular Cell Biology and Genetics, both in Dresden, suggests an answer. The scientists studied the physics of “chemically active” droplets, which cycle chemicals in and out of the surrounding fluid, and discovered that these droplets tend to grow to cell size and divide, just like cells. This “active droplet” behavior differs from the passive and more familiar tendencies of oil droplets in water, which glom together into bigger and bigger droplets without ever dividing.

    If chemically active droplets can grow to a set size and divide of their own accord, then “it makes it more plausible that there could have been spontaneous emergence of life from nonliving soup,” said Frank Jülicher, a biophysicist in Dresden and a co-author of the new paper.

    The findings, reported in Nature Physics last month, paint a possible picture of life’s start by explaining “how cells made daughters,” said Zwicker, who is now a postdoctoral researcher at Harvard University. “This is, of course, key if you want to think about evolution.”

    Luca Giomi, a theoretical biophysicist at Leiden University in the Netherlands who studies the possible physical mechanisms behind the origin of life, said the new proposal is significantly simpler than other mechanisms of protocell division that have been considered, calling it “a very promising direction.”

    However, David Deamer, a biochemist at the University of California, Santa Cruz, and a longtime champion of the membrane-first hypothesis, argues that while the newfound mechanism of droplet division is interesting, its relevance to the origin of life remains to be seen. The mechanism is a far cry, he noted, from the complicated, multistep process by which modern cells divide.

    Could simple dividing droplets have evolved into the teeming menagerie of modern life, from amoebas to zebras? Physicists and biologists familiar with the new work say it’s plausible. As a next step, experiments are under way in Dresden to try to observe the growth and division of active droplets made of synthetic polymers that are modeled after the droplets found in living cells. After that, the scientists hope to observe biological droplets dividing in the same way.

    Clifford Brangwynne, a biophysicist at Princeton University who was part of the Dresden-based team that identified the first subcellular droplets eight years ago — tiny liquid aggregates of protein and RNA in cells of the worm C. elegans — explained that it would not be surprising if these were vestiges of evolutionary history. Just as mitochondria, organelles that have their own DNA, came from ancient bacteria that infected cells and developed a symbiotic relationship with them, “the condensed liquid phases that we see in living cells might reflect, in a similar sense, a sort of fossil record of the physicochemical driving forces that helped set up cells in the first place,” he said.

    When germline cells in the roundworm C. elegans divide, P granules, shown in green, condense in the daughter cell that will become a viable sperm or egg and dissolve in the other daughter cell. Courtesy of Clifford Brangwynne/Science

    “This Nature Physics paper takes that to the next level,” by revealing the features that droplets would have needed “to play a role as protocells,” Brangwynne added.

    Droplets in Dresden

    The Dresden droplet discoveries began in 2009, when Brangwynne and collaborators demystified the nature of little dots known as “P granules” in C. elegans germline cells, which undergo division into sperm and egg cells. During this division process, the researchers observed that P granules grow, shrink and move across the cells via diffusion. The discovery that they are liquid droplets, reported in Science, prompted a wave of activity as other subcellular structures were also identified as droplets. It didn’t take long for Brangwynne and Tony Hyman, head of the Dresden biology lab where the initial experiments took place, to make the connection to Oparin’s 1924 protocell theory. In a 2012 essay about Oparin’s life and seminal book, The Origin of Life, Brangwynne and Hyman wrote that the droplets he theorized about “may still be alive and well, safe within our cells, like flies in life’s evolving amber.”

    Oparin most famously hypothesized that lightning strikes or geothermal activity on early Earth could have triggered the synthesis of organic macromolecules necessary for life — a conjecture later made independently by the British scientist John Haldane and triumphantly confirmed by the Miller-Urey experiment in the 1950s. Another of Oparin’s ideas, that liquid aggregates of these macromolecules might have served as protocells, was less celebrated, in part because he had no clue as to how the droplets might have reproduced, thereby enabling evolution. The Dresden group studying P granules didn’t know either.

    In the wake of their discovery, Jülicher assigned his new student, Zwicker, the task of unraveling the physics of centrosomes, organelles involved in animal cell division that also seemed to behave like droplets. Zwicker modeled the centrosomes as “out-of-equilibrium” systems that are chemically active, continuously cycling constituent proteins into and out of the surrounding liquid cytoplasm. In his model, these proteins have two chemical states. Proteins in state A dissolve in the surrounding liquid, while those in state B are insoluble, aggregating inside a droplet. Sometimes, proteins in state B spontaneously switch to state A and flow out of the droplet. An energy source can trigger the reverse reaction, causing a protein in state A to overcome a chemical barrier and transform into state B; when this insoluble protein bumps into a droplet, it slinks easily inside, like a raindrop in a puddle. Thus, as long as there’s an energy source, molecules flow in and out of an active droplet. “In the context of early Earth, sunlight would be the driving force,” Jülicher said.

    Zwicker discovered that this chemical influx and efflux will exactly counterbalance each other when an active droplet reaches a certain volume, causing the droplet to stop growing. Typical droplets in Zwicker’s simulations grew to tens or hundreds of microns across depending on their properties — the scale of cells.
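
    The logic of that size selection is easy to sketch numerically. The toy model below is illustrative only, with made-up rates rather than anything from the Nature Physics paper: material enters a spherical droplet in proportion to its surface area, while the B-to-A conversion that drains it scales with its volume, so small droplets grow, oversized ones shrink, and every droplet relaxes to the same fixed-point size.

    ```python
    # Toy model of an "active droplet" relaxing to a stable size.
    # Hypothetical rates (arbitrary units), not parameters from Zwicker et al.:
    # influx of insoluble B material scales with the droplet's surface area,
    # while efflux (B -> A conversion inside the droplet) scales with its volume.
    import numpy as np

    K_IN = 1.0   # influx per unit surface area
    K_OUT = 0.1  # efflux per unit volume

    def surface_area(volume):
        """Surface area of a sphere with the given volume."""
        return (36 * np.pi) ** (1 / 3) * volume ** (2 / 3)

    def grow(volume, dt=0.01, steps=20000):
        """Integrate dV/dt = K_IN * A(V) - K_OUT * V with forward Euler."""
        for _ in range(steps):
            volume += dt * (K_IN * surface_area(volume) - K_OUT * volume)
        return volume

    # Droplets starting at very different sizes converge to the same volume:
    # the fixed point V* = (K_IN * (36 * pi)**(1/3) / K_OUT)**3.
    v_star = (K_IN * (36 * np.pi) ** (1 / 3) / K_OUT) ** 3
    for v0 in (1.0, 100.0, 1e6):
        print(f"start {v0:>9.1f} -> final {grow(v0):9.1f} (fixed point {v_star:9.1f})")
    ```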

    Lucy Reading-Ikkanda/Quanta Magazine

    The next discovery was even more unexpected. Although active droplets have a stable size, Zwicker found that they are unstable with respect to shape: When a surplus of B molecules enters a droplet on one part of its surface, causing it to bulge slightly in that direction, the extra surface area from the bulging further accelerates the droplet’s growth as more molecules can diffuse inside. The droplet elongates further and pinches in at the middle, which has low surface area. Eventually, it splits into a pair of droplets, which then grow to the characteristic size. When Jülicher saw simulations of Zwicker’s equations, “he immediately jumped on it and said, ‘That looks very much like division,’” Zwicker said. “And then this whole protocell idea emerged quickly.”

    Zwicker, Jülicher and their collaborators, Rabea Seyboldt, Christoph Weber and Tony Hyman, developed their theory over the next three years, extending Oparin’s vision. “If you just think about droplets like Oparin did, then it’s not clear how evolution could act on these droplets,” Zwicker said. “For evolution, you have to make copies of yourself with slight modifications, and then natural selection decides how things get more complex.”

    Globule Ancestor

    Last spring, Jülicher began meeting with Dora Tang, head of a biology lab at the Max Planck Institute of Molecular Cell Biology and Genetics, to discuss plans to try to observe active-droplet division in action.

    Tang’s lab synthesizes artificial cells made of polymers, lipids and proteins that resemble biochemical molecules. Over the next few months, she and her team will look for division of liquid droplets made of polymers that are physically similar to the proteins in P granules and centrosomes. The next step, to be carried out in collaboration with Hyman’s lab, is to try to observe centrosomes or other biological droplets dividing, and to determine whether they use the mechanism identified in the paper by Zwicker and colleagues. “That would be a big deal,” said Giomi, the Leiden biophysicist.

    When Deamer, the membrane-first proponent, read the new paper, he recalled having once observed something like the predicted behavior in hydrocarbon droplets he had extracted from a meteorite. When he illuminated the droplets in near-ultraviolet light, they began moving and dividing. (He sent footage of the phenomenon to Jülicher.) Nonetheless, Deamer isn’t convinced of the effect’s significance. “There is no obvious way for the mechanism of division they reported to evolve into the complex process by which living cells actually divide,” he said.

    Other researchers disagree, including Tang. She says that once droplets started to divide, they could easily have gained the ability to transfer genetic information, essentially divvying up a batch of protein-coding RNA or DNA into equal parcels for their daughter cells. If this genetic material coded for useful proteins that increased the rate of droplet division, natural selection would favor the behavior. Protocells, fueled by sunlight and the law of increasing entropy, would gradually have grown more complex.

    Jülicher and colleagues argue that somewhere along the way, protocell droplets could have acquired membranes. Droplets naturally collect crusts of lipids that prefer to lie at the interface between the droplets and the surrounding liquid. Somehow, genes might have started coding for these membranes as a kind of protection. When this idea was put to Deamer, he said, “I can go along with that,” noting that he would define protocells as the first droplets that had membranes.

    The primordial plotline hinges, of course, on the outcome of future experiments, which will determine how robust and relevant the predicted droplet division mechanism really is. Can chemicals be found with the right two states, A and B, to bear out the theory? If so, then a viable path from nonlife to life starts to come into focus.

    The luckiest part of the whole process, in Jülicher’s opinion, was not that droplets turned into cells, but that the first droplet — our globule ancestor — formed to begin with. Droplets require a lot of chemical material to spontaneously arise or “nucleate,” and it’s unclear how so many of the right complex macromolecules could have accumulated in the primordial soup to make it happen. But then again, Jülicher said, there was a lot of soup, and it was stewing for eons.

    “It’s a very rare event. You have to wait a long time for it to happen,” he said. “And once it happens, then the next things happen more easily, and more systematically.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 12:07 pm on December 22, 2016
    Tags: Explorers Find Passage to Earth’s Dark Age, Quanta Magazine

    From Quanta: “Explorers Find Passage to Earth’s Dark Age” 

    Quanta Magazine

    December 22, 2016
    Natalie Wolchover

    Earth scientists hope that their growing knowledge of the planet’s early history will shed light on poorly understood features seen today, from continents to geysers. Eric King

    Geochemical signals from deep inside Earth are beginning to shed light on the planet’s first 50 million years, a formative period long viewed as inaccessible to science.

    In August, the geologist Matt Jackson left California with his wife and 4-year-old daughter for the fjords of northwest Iceland, where they camped as he roamed the outcrops and scree slopes by day in search of little olive-green stones called olivine.

    A sunny young professor at the University of California, Santa Barbara, with a uniform of pearl-snap shirts and well-utilized cargo shorts, Jackson knew all the best hunting grounds, having first explored the Icelandic fjords two years ago. Following sketchy field notes handed down by earlier geologists, he covered 10 or 15 miles a day, past countless sheep and the occasional farmer. “Their whole lives they’ve lived in these beautiful fjords,” he said. “They look up to these black, layered rocks, and I tell them that each one of those is a different volcanic eruption with a lava flow. It blows their minds!” He laughed. “It blows my mind even more that they never realized it!”

    The olivine erupted to Earth’s surface in those very lava flows between 10 and 17 million years ago. Jackson, like many geologists, believes that the source of the eruptions was the Iceland plume, a hypothetical upwelling of solid rock that may rise, like the globules in a lava lamp, from deep inside Earth. The plume, if it exists, would now underlie the active volcanoes of central Iceland. In the past, it would have surfaced here at the fjords, back in the days when here was there — before the puzzle-piece of Earth’s crust upon which Iceland lies scraped to the northwest.

    Other modern findings [Nature] about olivine from the region suggest that it might derive from an ancient reservoir of minerals at the base of the Iceland plume that, over billions of years, never mixed with the rest of Earth’s interior. Jackson hoped the samples he collected would carry a chemical message from the reservoir and prove that it formed during the planet’s infancy — a period that until recently was inaccessible to science.

    After returning to California, he sent his samples to Richard Walker to ferret out that message. Walker, a geochemist at the University of Maryland, is processing the olivine to determine the concentration of the chemical isotope tungsten-182 in the rock relative to the more common isotope, tungsten-184. If Jackson is right, his samples will join a growing collection of rocks from around the world whose abnormal tungsten isotope ratios have completely surprised scientists. These tungsten anomalies reflect processes that could only have occurred within the first 50 million years of the solar system’s history, a formative period long assumed to have been wiped from the geochemical record by cataclysmic collisions that melted Earth and blended its contents.

    The anomalies “are giving us information about some of the earliest Earth processes,” Walker said. “It’s an alternative universe from what geochemists have been working with for the past 50 years.”

    Matt Jackson and his family with a local farmer in northwest Iceland. Courtesy of Matt Jackson.

    The discoveries are sending geologists like Jackson into the field in search of more clues to Earth’s formation — and how the planet works today. Modern Earth, like early Earth, remains poorly understood, with unanswered questions ranging from how volcanoes work and whether plumes really exist to where oceans and continents came from, and what the nature and origin might be of the enormous structures, colloquially known as “blobs,” that seismologists detect deep down near Earth’s core. All aspects of the planet’s form and function are interconnected. They’re also entangled with the rest of the solar system. Any attempt, for instance, to explain why tectonic plates cover Earth’s surface like a jigsaw puzzle must account for the fact that no other planet in the solar system has plates. To understand Earth, scientists must figure out how, in the context of the solar system, it became uniquely earthlike. And that means probing the mystery of the first tens of millions of years.

    “You can think about this as an initial-conditions problem,” said Michael Manga, a geophysicist at the University of California, Berkeley, who studies geysers and volcanoes. “The Earth we see today evolved from something. And there’s lots of uncertainty about what that initial something was.”

    Pieces of the Puzzle

    On one of an unbroken string of 75-degree days in Santa Barbara the week before Jackson left for Iceland, he led a group of earth scientists on a two-mile beach hike to see some tar dikes — places where the sticky black material has oozed out of the cliff face at the back of the beach, forming flabby, voluptuous folds of faux rock that you can dent with a finger. The scientists pressed on the tar’s wrinkles and slammed rocks against it, speculating about its subterranean origin and the ballpark range of its viscosity. When this reporter picked up a small tar boulder to feel how light it was, two or three people nodded approvingly.

    A mix of geophysicists, geologists, mineralogists, geochemists and seismologists, the group was in Santa Barbara for the annual Cooperative Institute for Dynamic Earth Research (CIDER) workshop at the Kavli Institute for Theoretical Physics. Each summer, a rotating cast of representatives from these fields meet for several weeks at CIDER to share their latest results and cross-pollinate ideas — a necessity when the goal is understanding a system as complex as Earth.

    Earth’s complexity, how special it is, and, above all, the black box of its initial conditions have meant that, even as cosmologists map the universe and astronomers scan the galaxy for Earth 2.0, progress in understanding our home planet has been surprisingly slow. As we trudged from one tar dike to another, Jackson pointed out the exposed sedimentary rock layers in the cliff face — some of them horizontal, others buckled and sloped. Amazingly, he said, it took until the 1960s for scientists to even agree that sloped sediment layers are buckled, rather than having piled up on an angle. Only then was consensus reached on a mechanism to explain the buckling and the ruggedness of Earth’s surface in general: the theory of plate tectonics.

    Projecting her voice over the wind and waves, Carolina Lithgow-Bertelloni, a geophysicist from University College London who studies tectonic plates, credited the German meteorologist Alfred Wegener for first floating the notion of continental drift in 1912 to explain why Earth’s landmasses resemble the dispersed pieces of a puzzle. “But he didn’t have a mechanism — well, he did, but it was crazy,” she said.

    Earth scientists on a beach hike in Santa Barbara County, California. Natalie Wolchover/Quanta Magazine

    A few years later, she continued, the British geologist Sir Arthur Holmes convincingly argued that Earth’s solid-rock mantle flows fluidly on geological timescales, driven by heat radiating from Earth’s core; he speculated that this mantle flow in turn drives surface motion. More clues came during World War II. Seafloor magnetism, mapped for the purpose of hiding submarines, suggested that new crust forms at the mid-ocean ridge — the underwater mountain range that lines the world ocean like a seam — and spreads in both directions to the shores of the continents. There, at “subduction zones,” the oceanic plates slide stiffly beneath the continental plates, triggering earthquakes and carrying water downward, where it melts pockets of the mantle. This melting produces magma that rises to the surface in little-understood fits and starts, causing volcanic eruptions. (Volcanoes also exist far from any plate boundaries, such as in Hawaii and Iceland. Scientists currently explain this by invoking the existence of plumes, which researchers like Walker and Jackson are starting to verify and map using isotope studies.)

    The physical description of the plates finally came together in the late 1960s, Lithgow-Bertelloni said, when the British geophysicist Dan McKenzie and the American Jason Morgan separately proposed a quantitative framework for modeling plate tectonics on a sphere.

    The tectonic plates of the world were mapped in 1996. USGS

    Other than their existence, almost everything about the plates remains in contention. For instance, what drives their lateral motion? Where do subducted plates end up — perhaps these are the blobs? — and how do they affect Earth’s interior dynamics? Why did Earth’s crust shatter into plates in the first place when no other planetary surface in the solar system did? Also completely mysterious is the two-tier architecture of oceanic and continental plates, and how oceans and continents came to ride on them — all possible prerequisites for intelligent life. Knowing more about how Earth became earthlike could help us understand how common earthlike planets are in the universe and thus how likely life is to arise.

    The continents probably formed, Lithgow-Bertelloni said, as part of the early process by which gravity organized Earth’s contents into concentric layers: Iron and other metals sank to the center, forming the core, while rocky silicates stayed in the mantle. Meanwhile, low-density materials buoyed upward, forming a crust on the surface of the mantle like soup scum. Perhaps this scum accumulated in some places to form continents, while elsewhere oceans materialized.

    Figuring out precisely what happened and the sequence of all of these steps is “more difficult,” Lithgow-Bertelloni said, because they predate the rock record and are “part of the melting process that happens early on in Earth’s history — very early on.”

    Until recently, scientists knew of no geochemical traces from so long ago, and they thought they might never crack open the black box from which Earth’s most glorious features emerged. But the subtle anomalies in tungsten and other isotope concentrations are now providing the first glimpses of the planet’s formation and differentiation. These chemical tracers promise to yield a combination timeline-and-map of early Earth, revealing where its features came from, why, and when.

    A Sketchy Timeline

    Humankind’s understanding of early Earth took its first giant leap when Apollo astronauts brought back rocks from the moon: our tectonic-less companion whose origin was, at the time, a complete mystery.

    The rocks “looked gray, very much like terrestrial rocks,” said Fouad Tera, who analyzed lunar samples at the California Institute of Technology between 1969 and 1976. But because they were from the moon, he said, they created “a feeling of euphoria” in their handlers. Some interesting features did eventually show up: “We found glass spherules — colorful, beautiful — under the microscope, green and yellow and orange and everything,” recalled Tera, now 85. The spherules probably came from fountains that gushed from volcanic vents when the moon was young. But for the most part, he said, “the moon is not really made out of a pleasing thing — just regular things.”

    In hindsight, this is not surprising: Chemical analysis at Caltech and other labs indicated that the moon formed from Earth material, which appears to have gotten knocked into orbit when the 60 to 100 million-year-old proto-Earth collided with another protoplanet in the crowded inner solar system. This “giant impact” hypothesis of the moon’s formation [Science Direct], though still hotly debated [Nature] in its particulars, established a key step on the timeline of the Earth, moon and sun that has helped other steps fall into place.

    A panorama of the Taurus-Littrow Valley created from photographs by Apollo 17 astronaut Eugene Cernan. Astronaut Harrison Schmitt is shown using a rake to collect samples. NASA

    Chemical analysis of meteorites is helping scientists outline even earlier stages of our solar system’s timeline, including the moment it all began.

    First, 4.57 billion years ago, a nearby star went supernova, spewing matter and a shock wave into space. The matter included radioactive elements that immediately began decaying, starting the clocks that isotope chemists now measure with great precision. As the shock wave swept through our cosmic neighborhood, it corralled the local cloud of gas and dust like a broom; the increase in density caused the cloud to gravitationally collapse, forming a brand-new star — our sun — surrounded by a placenta of hot debris.

    Over the next tens of millions of years, the rubble field surrounding the sun clumped into bigger and bigger space rocks, then accreted into planet parts called “planetesimals,” which merged into protoplanets, which became Mercury, Venus, Earth and Mars — the four rocky planets of the inner solar system today. Farther out, in colder climes, gas and ice accreted into the giant planets.

    The planets of the solar system as depicted by a NASA computer illustration. Orbits and sizes are not shown to scale. Credit: NASA

    Researchers use liquid chromatography to isolate elements for analysis. Rock samples dissolved in acid flow down ion-exchange columns, like the ones in Rick Carlson’s laboratory at the Carnegie Institution in Washington, to separate the elements. Mary Horan.

    As the infant Earth navigated the crowded inner solar system, it would have experienced frequent, white-hot collisions, which were long assumed to have melted the entire planet into a global “magma ocean.” During these melts, gravity differentiated Earth’s liquefied contents into layers — core, mantle and crust. It’s thought that each of the global melts would have destroyed existing rocks, blending their contents and removing any signs of geochemical differences left over from Earth’s initial building blocks.

    The last of the Earth-melting “giant impacts” appears to have been the one that formed the moon; while subtracting the moon’s mass, the impactor was also the last major addition to Earth’s mass. Perhaps, then, this point on the timeline — at least 60 million years after the birth of the solar system and, counting backward from the present, at most 4.51 billion years ago — was when the geochemical record of the planet’s past was allowed to begin. “It’s at least a compelling idea to think that this giant impact that disrupted a lot of the Earth is the starting time for geochronology,” said Rick Carlson, a geochemist at the Carnegie Institution of Washington. In those first 60 million years, “the Earth may have been here, but we don’t have any record of it because it was just erased.”

    Another discovery from the moon rocks came in 1974. Tera, along with his colleague Dimitri Papanastassiou and their boss, Gerry Wasserburg, a towering figure in isotope cosmochemistry who died in June, combined many isotope analyses of rocks from different Apollo missions on a single plot, revealing a straight line called an “isochron” that corresponds to time. “When we plotted our data along with everybody else’s, there was a distinct trend that shows you that around 3.9 billion years ago, something massive imprinted on all the rocks on the moon,” Tera said.

    Wasserburg dubbed the event the “lunar cataclysm” [Science Direct]. Now more often called the “late heavy bombardment,” it was a torrent of asteroids and comets that seems to have battered the moon 3.9 billion years ago, a full 600 million years after its formation, melting and chemically resetting the rocks on its surface. The late heavy bombardment surely would have rained down even more heavily on Earth, considering the planet’s greater size and gravitational pull. Having discovered such a momentous event in solar system history, Wasserburg left his younger, more reserved colleagues behind and “celebrated in Pasadena in some bar,” Tera said.

    As of 1974, no rocks had been found on Earth from the time of the late heavy bombardment. In fact, Earth’s oldest rocks appeared to top out at 3.8 billion years. “That number jumps out at you,” said Bill Bottke, a planetary scientist at the Southwest Research Institute in Boulder, Colorado. It suggests, Bottke said, that the late heavy bombardment might have melted whatever planetary crust existed 3.9 billion years ago, once again destroying the existing geologic record, after which the new crust took 100 million years to harden.

    In 2005, a group of researchers working in Nice, France, conceived of a mechanism to explain the late heavy bombardment — and several other mysteries about the solar system, including the curious configurations of Jupiter, Saturn, Uranus and Neptune, and the sparseness of the asteroid and Kuiper belts. Their “Nice model” [Nature] posits that the gas and ice giants suddenly destabilized in their orbits sometime after formation, causing them to migrate. Simulations by Bottke and others indicate that the planets’ migrations would have sent asteroids and comets scattering, initiating something very much like the late heavy bombardment. Comets that were slung inward from the Kuiper belt during this shake-up might even have delivered water to Earth’s surface, explaining the presence of its oceans.

    With this convergence of ideas, the late heavy bombardment became widely accepted as a major step on the timeline of the early solar system. But it was bad news for earth scientists, suggesting that Earth’s geochemical record began not at the beginning, 4.57 billion years ago, or even at the moon’s beginning, 4.51 billion years ago, but 3.8 billion years ago, and that most or all clues about earlier times were forever lost.

    Extending the Rock Record

    More recently, the late heavy bombardment theory and many other long-standing assumptions about the early history of Earth and the solar system have come into question, and Earth’s dark age has started to come into the light. According to Carlson, “the evidence for this 3.9 [billion-years-ago] event is getting less clear with time.” For instance, when meteorites are analyzed for signs of shock, “they show a lot of impact events at 4.2, 4.4 billion,” he said. “This 3.9 billion event doesn’t show up really strong in the meteorite record.” He and other skeptics of the late heavy bombardment argue that the Apollo samples might have been biased. All the missions landed on the near side of the moon, many in close proximity to the Imbrium basin (the moon’s biggest shadow, as seen from Earth), which formed from a collision 3.9 billion years ago. Perhaps all the Apollo rocks were affected by that one event, which might have dispersed the melt from the impact over a broad swath of the lunar surface, creating the appearance of a cataclysm that never actually occurred.

    Lucy Reading-Ikkanda for Quanta Magazine

    Furthermore, the oldest known crust on Earth is no longer 3.8 billion years old. Rocks have been found in two parts of Canada dating to 4 billion and an alleged 4.28 billion years ago, refuting the idea that the late heavy bombardment fully melted Earth’s mantle and crust 3.9 billion years ago. At least some earlier crust survived.

    In 2008, Carlson and collaborators reported the evidence of 4.28 billion-year-old rocks in the Nuvvuagittuq greenstone belt in Canada. When Tim Elliott, a geochemist at the University of Bristol, read about the Nuvvuagittuq findings, he was intrigued to see that Carlson had used a dating method also used in earlier work by French researchers that relied on a short-lived radioactive isotope system called samarium-neodymium. Elliott decided to look for traces of an even shorter-lived system — hafnium-tungsten — in ancient rocks, which would point back to even earlier times in Earth’s history.

    The dating method works as follows: Hafnium-182, the “parent” isotope, has a 50 percent chance of decaying into tungsten-182, its “daughter,” every 9 million years (this is the parent’s “half-life”). The halving quickly reduces the parent to almost nothing; by 50 million years after the supernova that sparked the sun, virtually all the hafnium-182 would have become tungsten-182.
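
    A one-line calculation confirms the arithmetic behind that claim, assuming simple exponential decay with the quoted half-life:

    ```python
    # Fraction of hafnium-182 surviving after t million years, given the
    # ~9-million-year half-life quoted in the article.
    HALF_LIFE_MYR = 9.0

    def hf182_remaining(t_myr):
        """Fraction of the original Hf-182 still undecayed after t_myr."""
        return 0.5 ** (t_myr / HALF_LIFE_MYR)

    for t in (9, 18, 50, 100):
        print(f"after {t:>3} Myr: {hf182_remaining(t):6.1%} of Hf-182 remains")
    # After 50 Myr only ~2% remains, so any tungsten-182 excess or deficit
    # must record processes from the solar system's first ~50 million years.
    ```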

    That’s why the tungsten isotope ratio in rocks like Matt Jackson’s olivine samples can be so revealing: Any variation in the concentration of the daughter isotope, tungsten-182, measured relative to tungsten-184 must reflect processes that affected the parent, hafnium-182, when it was around — processes that occurred during the first 50 million years of solar system history. Elliott knew that this kind of geochemical information was previously believed to have been destroyed by early Earth melts and billions of years of subsequent mantle convection. But what if it wasn’t?

    Elliott contacted Stephen Moorbath, then an emeritus professor of geology at the University of Oxford and “one of the grandfather figures in finding the oldest rocks,” Elliott said. Moorbath “was keen, so I took the train up.” Moorbath led Elliott down to the basement of Oxford’s earth science building, where, as in many such buildings, a large collection of rocks shares the space with the boiler and stacks of chairs. Moorbath dug out specimens from the Isua complex in Greenland, an ancient bit of crust that he had pegged, in the 1970s, at 3.8 billion years old.

    Elliott and his student Matthias Willbold powdered and processed the Isua samples and used painstaking chemical methods to extract the tungsten. They then measured the tungsten isotope ratio using state-of-the-art mass spectrometers. In a 2011 Nature paper, Elliott, Willbold and Moorbath, who died in October, reported that the 3.8 billion-year-old Isua rocks contained 15 parts per million more tungsten-182 than the world average — the first ever detection of a “positive” tungsten anomaly on the face of the Earth.

    The paper scooped Richard Walker of Maryland and his colleagues, who months later reported [Science] a positive tungsten anomaly in 2.8 billion-year-old komatiites from Kostomuksha, Russia.

    Although the Isua and Kostomuksha rocks formed on Earth’s surface long after the extinction of hafnium-182, they apparently derive from materials with much older chemical signatures. Walker and colleagues argue that the Kostomuksha rocks must have drawn from hafnium-rich “primordial reservoirs” in the interior that failed to homogenize during Earth’s early mantle melts. The preservation of these reservoirs, which must trace to the first 50 million years and must somehow have survived even the moon-forming impact, “indicates that the mantle may have never been well mixed,” Walker and his co-authors wrote. That raises the possibility of finding many more remnants of Earth’s early history.

    The 60 million-year-old flood basalts of Baffin Bay, Greenland, sampled by the geochemist Hanika Rizo (center) and colleagues, contain isotope traces that originated more than 4.5 billion years ago. Don Francis (left); courtesy of Hanika Rizo (center and right).

    The researchers say they will be able to use tungsten anomalies and other isotope signatures in surface material as tracers of the ancient interior, extrapolating downward and backward into the past to map proto-Earth and reveal how its features took shape. “You’ve got the precision to look and actually see the sequence of events occurring during planetary formation and differentiation,” Carlson said. “You’ve got the ability to interrogate the first tens of millions of years of Earth’s history, unambiguously.”

    Anomalies have continued to show up in rocks of various ages and provenances. In May, Hanika Rizo of the University of Quebec in Montreal, along with Walker, Jackson and collaborators, reported in Science the first positive tungsten anomaly in modern rocks — 62 million-year-old samples from Baffin Bay, Greenland. Rizo hypothesizes that these rocks were brought up by a plume that draws from one of the “blobs” deep down near Earth’s core. If the blobs are indeed rich in tungsten-182, then they are not tectonic-plate graveyards as many geophysicists suspect, but instead date to the planet’s infancy. Rizo speculates that they are chunks of the planetesimals that collided to form Earth, and that the chunks somehow stayed intact in the process. “If you have many collisions,” she said, “then you have the potential to create this patchy mantle.” Early Earth’s interior, in that case, looked nothing like the primordial magma ocean pictured in textbooks.

    More evidence for the patchiness of the interior has surfaced. At the American Geophysical Union meeting earlier this month, Walker’s group reported [2016 AGU Fall Meeting] a negative tungsten anomaly — that is, a deficit of tungsten-182 relative to tungsten-184 — in basalts from Hawaii and Samoa. This and other isotope concentrations in the rocks suggest the hypothetical plumes that produced them might draw from a primordial pocket of metals, including tungsten-184. Perhaps these metals failed to get sucked into the core during planet differentiation.

    Tim Elliott collecting samples of ancient crust rock in Yilgarn Craton in Western Australia. Tony Kemp

    Meanwhile, Elliott explains the positive tungsten anomalies in ancient crust rocks like his 3.8 billion-year-old Isua samples by hypothesizing that these rocks might have hardened on the surface before the final half-percent of Earth’s mass — delivered to the planet in a long tail of minor impacts — mixed into them. These late impacts, known as the “late veneer,” would have added metals like gold, platinum and tungsten (mostly tungsten-184) to Earth’s mantle, reducing the relative concentration of tungsten-182. Rocks that got to the surface early might therefore have ended up with positive tungsten anomalies.

    Other evidence complicates this hypothesis, however — namely, the concentrations of gold and platinum in the Isua rocks match world averages, suggesting at least some late veneer material did mix into them. So far, there’s no coherent framework that accounts for all the data. But this is the “discovery phase,” Carlson said, rather than a time for grand conclusions. As geochemists gradually map the plumes and primordial reservoirs throughout Earth from core to crust, hypotheses will be tested and a narrative about Earth’s formation will gradually crystallize.

    Elliott is working to test his late-veneer hypothesis. Temporarily trading his mass spectrometer for a sledgehammer, he collected a series of crust rocks in Australia that range from 3 billion to 3.75 billion years old. By tracking the tungsten isotope ratio through the ages, he hopes to pinpoint the time when the mantle that produced the crust became fully mixed with late-veneer material.

    “These things never work out that simply,” Elliott said. “But you always start out with the simplest idea and see how it goes.”

    See the full article here.


     
  • richardmitnick 7:05 pm on November 30, 2016
    Tags: Quanta Magazine

    From Quanta: “The Case Against Dark Matter” 

    Quanta Magazine

    November 29, 2016
    Natalie Wolchover

    Erik Verlinde. Ilvy Njiokiktjien for Quanta Magazine

    For 80 years, scientists have puzzled over the way galaxies and other cosmic structures appear to gravitate toward something they cannot see. This hypothetical “dark matter” seems to outweigh all visible matter by a startling ratio of five to one, suggesting that we barely know our own universe. Thousands of physicists are doggedly searching for these invisible particles.

    But the dark matter hypothesis assumes scientists know how matter in the sky ought to move in the first place. This month, a series of developments has revived a long-disfavored argument that dark matter doesn’t exist after all. In this view, no missing matter is needed to explain the errant motions of the heavenly bodies; rather, on cosmic scales, gravity itself works in a different way than either Isaac Newton or Albert Einstein predicted.

    The latest attempt to explain away dark matter is a much-discussed proposal by Erik Verlinde, a theoretical physicist at the University of Amsterdam who is known for bold and prescient, if sometimes imperfect, ideas. In a dense 51-page paper posted online on Nov. 7, Verlinde casts gravity as a byproduct of quantum interactions and suggests that the extra gravity attributed to dark matter is an effect of “dark energy” — the background energy woven into the space-time fabric of the universe.

    Instead of hordes of invisible particles, “dark matter is an interplay between ordinary matter and dark energy,” Verlinde said.

    To make his case, Verlinde has adopted a radical perspective on the origin of gravity that is currently in vogue among leading theoretical physicists. Einstein defined gravity as the effect of curves in space-time created by the presence of matter. According to the new approach, gravity is an emergent phenomenon. Space-time and the matter within it are treated as a hologram that arises from an underlying network of quantum bits (called “qubits”), much as the three-dimensional environment of a computer game is encoded in classical bits on a silicon chip. Working within this framework, Verlinde traces dark energy to a property of these underlying qubits that supposedly encode the universe. On large scales in the hologram, he argues, dark energy interacts with matter in just the right way to create the illusion of dark matter.

    In his calculations, Verlinde rediscovered the equations of “modified Newtonian dynamics,” or MOND. This 30-year-old theory makes an ad hoc tweak to the famous “inverse-square” law of gravity in Newton’s and Einstein’s theories in order to explain some of the phenomena attributed to dark matter. That this ugly fix works at all has long puzzled physicists. “I have a way of understanding the MOND success from a more fundamental perspective,” Verlinde said.

    Many experts have called Verlinde’s paper compelling but hard to follow. While it remains to be seen whether his arguments will hold up to scrutiny, the timing is fortuitous. In a new analysis of galaxies published on Nov. 9 in Physical Review Letters, three astrophysicists led by Stacy McGaugh of Case Western Reserve University in Cleveland, Ohio, have strengthened MOND’s case against dark matter.

    The researchers analyzed a diverse set of 153 galaxies, and for each one they compared the rotation speed of visible matter at any given distance from the galaxy’s center with the amount of visible matter contained within that galactic radius. Remarkably, these two variables were tightly linked in all the galaxies by a universal law, dubbed the “radial acceleration relation.” This makes perfect sense in the MOND paradigm, since visible matter is the exclusive source of the gravity driving the galaxy’s rotation (even if that gravity does not take the form prescribed by Newton or Einstein). With such a tight relationship between gravity felt by visible matter and gravity given by visible matter, there would seem to be no room, or need, for dark matter.
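
    The relation itself is compact. The sketch below uses the single-parameter fitting function reported by McGaugh and colleagues, with the fitted acceleration scale of roughly 1.2e-10 meters per second per second; treat it as an illustration of the published formula rather than a substitute for the paper.

    ```python
    # The radial acceleration relation: observed acceleration g_obs as a
    # function of the acceleration g_bar implied by visible (baryonic) matter,
    # per the fitting function in McGaugh et al. (2016). G_DAGGER is the
    # fitted acceleration scale, about 1.2e-10 m/s^2.
    import numpy as np

    G_DAGGER = 1.2e-10  # m/s^2

    def g_obs(g_bar):
        """g_obs = g_bar / (1 - exp(-sqrt(g_bar / G_DAGGER)))."""
        return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / G_DAGGER)))

    # High accelerations (inner galaxy): g_obs ~ g_bar, i.e. plain Newton.
    # Low accelerations (outskirts): g_obs ~ sqrt(g_bar * G_DAGGER), the
    # regime where "extra" gravity appears.
    for g in (1e-8, 1e-10, 1e-12):
        print(f"g_bar = {g:.0e} m/s^2 -> g_obs = {g_obs(g):.2e} m/s^2")
    ```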

    Even as dark matter proponents rise to its defense, a third challenge has materialized. In new research that has been presented at seminars and is under review by the Monthly Notices of the Royal Astronomical Society, a team of Dutch astronomers has conducted what they call the first test of Verlinde’s theory: In comparing his formulas to data from more than 30,000 galaxies, Margot Brouwer of Leiden University in the Netherlands and her colleagues found that Verlinde correctly predicts the gravitational distortion or “lensing” of light from the galaxies — another phenomenon that is normally attributed to dark matter. This is somewhat to be expected, as MOND’s original developer, the Israeli astrophysicist Mordehai Milgrom, showed years ago that MOND accounts for gravitational lensing data. Verlinde’s theory will need to succeed at reproducing dark matter phenomena in cases where the old MOND failed.

    Kathryn Zurek, a dark matter theorist at Lawrence Berkeley National Laboratory, said Verlinde’s proposal at least demonstrates how something like MOND might be right after all. “One of the challenges with modified gravity is that there was no sensible theory that gives rise to this behavior,” she said. “If [Verlinde’s] paper ends up giving that framework, then that by itself could be enough to breathe more life into looking at [MOND] more seriously.”

    The New MOND

    In Newton’s and Einstein’s theories, the gravitational attraction of a massive object drops in proportion to the square of the distance away from it. This means stars orbiting around a galaxy should feel less gravitational pull — and orbit more slowly — the farther they are from the galactic center. Stars’ velocities do drop as predicted by the inverse-square law in the inner galaxy, but instead of continuing to drop as they get farther away, their velocities level off beyond a certain point. The “flattening” of galaxy rotation speeds, discovered by the astronomer Vera Rubin in the 1970s, is widely considered to be Exhibit A in the case for dark matter — explained, in that paradigm, by dark matter clouds or “halos” that surround galaxies and give an extra gravitational acceleration to their outlying stars.

    Searches for dark matter particles have proliferated — with hypothetical “weakly interacting massive particles” (WIMPs) and lighter-weight “axions” serving as prime candidates — but so far, experiments have found nothing.

    Lucy Reading-Ikkanda for Quanta Magazine

    Meanwhile, in the 1970s and 1980s, some researchers, including Milgrom, took a different tack. Many early attempts at tweaking gravity were easy to rule out, but Milgrom found a winning formula: When the gravitational acceleration felt by a star drops below a certain level — precisely 0.00000000012 meters per second per second, or 100 billion times weaker than we feel on the surface of the Earth — he postulated that gravity somehow switches from an inverse-square law to something close to an inverse-distance law. “There’s this magic scale,” McGaugh said. “Above this scale, everything is normal and Newtonian. Below this scale is where things get strange. But the theory does not really specify how you get from one regime to the other.”
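
    To see concretely how an inverse-distance law levels off a rotation curve, here is a rough sketch. It uses one common choice of interpolation between the two regimes (the “simple” function; as McGaugh notes, the theory itself does not specify the transition), and the enclosed mass and radii are arbitrary illustrative values:

    ```python
    # Why a MOND-like force law flattens galaxy rotation curves.
    # Illustrative values only; the interpolation function is one common
    # choice, not something the theory itself prescribes.
    import numpy as np

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    A0 = 1.2e-10    # Milgrom's acceleration scale, m/s^2
    M = 1e41        # enclosed visible mass, kg (roughly 5e10 solar masses)
    KPC = 3.086e19  # meters per kiloparsec

    def v_newton(r):
        """Circular speed from Newtonian gravity alone: v = sqrt(G*M/r)."""
        return np.sqrt(G * M / r)

    def v_mond(r):
        """Circular speed using the 'simple' interpolation mu(x) = x/(1+x)."""
        g_n = G * M / r**2                               # Newtonian acceleration
        g = 0.5 * g_n * (1 + np.sqrt(1 + 4 * A0 / g_n))  # solves mu(g/A0)*g = g_n
        return np.sqrt(g * r)

    for r_kpc in (2, 10, 30, 60):
        r = r_kpc * KPC
        print(f"r = {r_kpc:>2} kpc: Newton {v_newton(r)/1e3:6.1f} km/s, "
              f"MOND {v_mond(r)/1e3:6.1f} km/s")
    # The Newtonian speeds keep falling as 1/sqrt(r); the MOND curve levels
    # off near v = (G * M * A0)**0.25, mimicking observed flat rotation curves.
    ```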

    Physicists do not like magic; when other cosmological observations seemed far easier to explain with dark matter than with MOND, they left the approach for dead. Verlinde’s theory revitalizes MOND by attempting to reveal the method behind the magic.

    Verlinde, ruddy and fluffy-haired at 54 and lauded for highly technical string theory calculations, first jotted down a back-of-the-envelope version of his idea in 2010. It built on a famous paper he had written months earlier, in which he boldly declared that gravity does not really exist. By weaving together numerous concepts and conjectures at the vanguard of physics, he had concluded that gravity is an emergent thermodynamic effect, related to increasing entropy (or disorder). Then, as now, experts were uncertain what to make of the paper, though it inspired fruitful discussions.

    The particular brand of emergent gravity in Verlinde’s paper turned out not to be quite right, but he was tapping into the same intuition that led other theorists to develop the modern holographic description of emergent gravity and space-time — an approach that Verlinde has now absorbed into his new work.

    In this framework, bendy, curvy space-time and everything in it is a geometric representation of pure quantum information — that is, data stored in qubits. Unlike classical bits, qubits can exist simultaneously in two states (0 and 1) with varying degrees of probability, and they become “entangled” with each other, such that the state of one qubit determines the state of the other, and vice versa, no matter how far apart they are. Physicists have begun to work out the rules by which the entanglement structure of qubits mathematically translates into an associated space-time geometry. An array of qubits entangled with their nearest neighbors might encode flat space, for instance, while more complicated patterns of entanglement give rise to matter particles such as quarks and electrons, whose mass causes the space-time to be curved, producing gravity. “The best way we understand quantum gravity currently is this holographic approach,” said Mark Van Raamsdonk, a physicist at the University of British Columbia in Vancouver who has done influential work on the subject.

    The mathematical translations are rapidly being worked out for holographic universes with an Escher-esque space-time geometry known as anti-de Sitter (AdS) space, but universes like ours, which have de Sitter geometries, have proved far more difficult. In his new paper, Verlinde speculates that it’s exactly the de Sitter property of our native space-time that leads to the dark matter illusion.

    De Sitter space-times like ours stretch as you look far into the distance. For this to happen, space-time must be infused with a tiny amount of background energy — often called dark energy — which drives space-time apart from itself. Verlinde models dark energy as a thermal energy, as if our universe has been heated to an excited state. (AdS space, by contrast, is like a system in its ground state.) Verlinde associates this thermal energy with long-range entanglement between the underlying qubits, as if they have been shaken up, driving entangled pairs far apart. He argues that this long-range entanglement is disrupted by the presence of matter, which essentially removes dark energy from the region of space-time that it occupied. The dark energy then tries to move back into this space, exerting a kind of elastic response on the matter that is equivalent to a gravitational attraction.

    Because of the long-range nature of the entanglement, the elastic response becomes increasingly important in larger volumes of space-time. Verlinde calculates that it will cause galaxy rotation curves to start deviating from Newton’s inverse-square law at exactly the magic acceleration scale pinpointed by Milgrom in his original MOND theory.

    Van Raamsdonk calls Verlinde’s idea “definitely an important direction.” But he says it’s too soon to tell whether everything in the paper — which draws from quantum information theory, thermodynamics, condensed matter physics, holography and astrophysics — hangs together. Either way, Van Raamsdonk said, “I do find the premise interesting, and feel like the effort to understand whether something like that could be right could be enlightening.”

    One problem, said Brian Swingle of Harvard and Brandeis universities, who also works in holography, is that Verlinde lacks a concrete model universe like the ones researchers can construct in AdS space, giving him more wiggle room for making unproven speculations. “To be fair, we’ve gotten further by working in a more limited context, one which is less relevant for our own gravitational universe,” Swingle said, referring to work in AdS space. “We do need to address universes more like our own, so I hold out some hope that his new paper will provide some additional clues or ideas going forward.”


    Access mp4 video here.

    The Case for Dark Matter

    Verlinde could be capturing the zeitgeist the way his 2010 entropic-gravity paper did. Or he could be flat-out wrong. The question is whether his new and improved MOND can reproduce phenomena that foiled the old MOND and bolstered belief in dark matter.

    One such phenomenon is the Bullet cluster, a galaxy cluster in the process of colliding with another.

    4
    X-ray image of the Bullet Cluster (1E 0657-56) from the Chandra X-ray Observatory. The exposure time was 0.5 million seconds (~140 hours), and the scale is shown in megaparsecs. The cluster's redshift is z = 0.3, meaning its light reaches us with wavelengths stretched by a factor of 1.3, which places it about 4 billion light-years away.
    In this image, a rapidly moving galaxy cluster with a shock wave trailing behind it appears to have struck another cluster at high speed. The gases collide, and the gravitational fields of the stars and galaxies interact. Based on black-body temperature readings, the collision heated the gas to 160 million degrees, emitting X-rays of such intensity that the system claimed the title of hottest known galaxy cluster.
    Studies of the Bullet cluster, announced in August 2006, provide the best evidence to date for the existence of dark matter.
    http://cxc.harvard.edu/symposium_2005/proceedings/files/markevitch_maxim.pdf
    User:Mac_Davis

    5
    Superimposed mass-density contours, inferred from the gravitational lensing of background light, trace the dark matter. Image taken with the Hubble Space Telescope.
    Date 22 August 2006
    http://cxc.harvard.edu/symposium_2005/proceedings/files/markevitch_maxim.pdf
    User:Mac_Davis

    The visible matter in the two clusters crashes together, but gravitational lensing suggests that a large amount of dark matter, which does not interact with visible matter, has passed right through the crash site. Some physicists consider this indisputable proof of dark matter. However, Verlinde thinks his theory will be able to handle the Bullet cluster observations just fine. He says dark energy’s gravitational effect is embedded in space-time and is less deformable than matter itself, which would have allowed the two to separate during the cluster collision.

    But the crowning achievement for Verlinde’s theory would be to account for the suspected imprints of dark matter in the cosmic microwave background (CMB), ancient light that offers a snapshot of the infant universe.

    CMB per ESA/Planck

    The snapshot reveals the way matter at the time repeatedly contracted due to its gravitational attraction and then expanded due to self-collisions, producing a series of peaks and troughs in the CMB data. Because dark matter does not interact with light, it would only have contracted without ever expanding, and this would modulate the amplitudes of the CMB peaks in exactly the way that scientists observe. One of the biggest strikes against the old MOND was its failure to predict this modulation and match the peaks’ amplitudes. Verlinde expects that his version will work — once again, because matter and the gravitational effect of dark energy can separate from each other and exhibit different behaviors. “Having said this,” he said, “I have not calculated this all through.”

    While Verlinde confronts these and a handful of other challenges, proponents of the dark matter hypothesis have some explaining of their own to do when it comes to McGaugh and his colleagues’ recent findings about the universal relationship between galaxy rotation speeds and their visible matter content.
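
    That universal relationship can be stated compactly. Below is a sketch of the one-parameter curve McGaugh's team fit across their galaxy sample; the functional form and the fitted acceleration scale follow their reported result, while the sample accelerations are arbitrary inputs for illustration.

        import numpy as np

        G_DAG = 1.2e-10   # fitted acceleration scale, m/s^2

        def g_observed(g_bar):
            # Fitting function relating the acceleration implied by
            # visible (baryonic) matter alone to the acceleration observed
            return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / G_DAG)))

        for g_bar in (1e-9, 1e-10, 1e-11, 1e-12):
            print(g_bar, g_observed(g_bar))
        # High accelerations come out Newtonian (g_obs ~ g_bar); low ones
        # approach sqrt(g_bar * G_DAG), the regime where the extra pull
        # usually credited to dark matter shows up.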

    In October, responding to a preprint of the paper by McGaugh and his colleagues, two teams of astrophysicists independently argued that the dark matter hypothesis can account for the observations. They say the amount of dark matter in a galaxy’s halo would have precisely determined the amount of visible matter the galaxy ended up with when it formed. In that case, galaxies’ rotation speeds, even though they’re set by dark matter and visible matter combined, will exactly correlate with either their dark matter content or their visible matter content (since the two are not independent). However, computer simulations of galaxy formation do not currently indicate that galaxies’ dark and visible matter contents will always track each other. Experts are busy tweaking the simulations, but Arthur Kosowsky of the University of Pittsburgh, one of the researchers working on them, says it’s too early to tell if the simulations will be able to match all 153 examples of the universal law in McGaugh and his colleagues’ galaxy data set. If not, then the standard dark matter paradigm is in big trouble. “Obviously this is something that the community needs to look at more carefully,” Zurek said.

    Even if the simulations can be made to match the data, McGaugh, for one, considers it an implausible coincidence that dark matter and visible matter would conspire to exactly mimic the predictions of MOND at every location in every galaxy. “If somebody were to come to you and say, ‘The solar system doesn’t work on an inverse-square law, really it’s an inverse-cube law, but there’s dark matter that’s arranged just so that it always looks inverse-square,’ you would say that person is insane,” he said. “But that’s basically what we’re asking to be the case with dark matter here.”

    Given the considerable indirect evidence and near consensus among physicists that dark matter exists, it still probably does, Zurek said. “That said, you should always check that you’re not on a bandwagon,” she added. “Even though this paradigm explains everything, you should always check that there isn’t something else going on.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 7:18 am on September 16, 2016 Permalink | Reply
    Tags: Quanta Magazine

    From Quanta: “The Strange Second Life of String Theory” 

    Quanta Magazine
    Quanta Magazine

    September 15, 2016
    K.C. Cole

    String theory has so far failed to live up to its promise as a way to unite gravity and quantum mechanics.
    At the same time, it has blossomed into one of the most useful sets of tools in science.

    1
    Renee Rominger/Moonrise Whims for Quanta Magazine

    String theory strutted onto the scene some 30 years ago as perfection itself, a promise of elegant simplicity that would solve knotty problems in fundamental physics — including the notoriously intractable mismatch between Einstein’s smoothly warped space-time and the inherently jittery, quantized bits of stuff that made up everything in it.

    It seemed, to paraphrase Michael Faraday, much too wonderful not to be true: Simply replace infinitely small particles with tiny (but finite) vibrating loops of string. The vibrations would sing out quarks, electrons, gluons and photons, as well as their extended families, producing in harmony every ingredient needed to cook up the knowable world. Avoiding the infinitely small meant avoiding a variety of catastrophes. For one, quantum uncertainty couldn’t rip space-time to shreds. At last, it seemed, here was a workable theory of quantum gravity.

    Even more beautiful than the story told in words was the elegance of the math behind it, which had the power to make some physicists ecstatic.

    To be sure, the theory came with unsettling implications. The strings were too small to be probed by experiment and lived in as many as 11 dimensions of space. These dimensions were folded in on themselves — or “compactified” — into complex origami shapes. No one knew just how the dimensions were compactified — the possibilities for doing so appeared to be endless — but surely some configuration would turn out to be just what was needed to produce familiar forces and particles.

    For a time, many physicists believed that string theory would yield a unique way to combine quantum mechanics and gravity. “There was a hope. A moment,” said David Gross, an original player in the so-called Princeton String Quartet, a Nobel Prize winner and permanent member of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. “We even thought for a while in the mid-’80s that it was a unique theory.”

    And then physicists began to realize that the dream of one singular theory was an illusion. The complexities of string theory, all the possible permutations, refused to reduce to a single one that described our world. “After a certain point in the early ’90s, people gave up on trying to connect to the real world,” Gross said. “The last 20 years have really been a great extension of theoretical tools, but very little progress on understanding what’s actually out there.”

    Many, in retrospect, realized they had raised the bar too high. Coming off the momentum of completing the solid and powerful “standard model” of particle physics in the 1970s, they hoped the story would repeat — only this time on a mammoth, all-embracing scale. “We’ve been trying to aim for the successes of the past where we had a very simple equation that captured everything,” said Robbert Dijkgraaf, the director of the Institute for Advanced Study in Princeton, New Jersey. “But now we have this big mess.”

    Like many a maturing beauty, string theory has gotten rich in relationships, complicated, hard to handle and widely influential. Its tentacles have reached so deeply into so many areas in theoretical physics, it’s become almost unrecognizable, even to string theorists. “Things have gotten almost postmodern,” said Dijkgraaf, who is a painter as well as mathematical physicist.

    The mathematics that have come out of string theory have been put to use in fields such as cosmology and condensed matter physics — the study of materials and their properties. It’s so ubiquitous that “even if you shut down all the string theory groups, people in condensed matter, people in cosmology, people in quantum gravity will do it,” Dijkgraaf said.

    “It’s hard to say really where you should draw the boundary around and say: This is string theory; this is not string theory,” said Douglas Stanford, a physicist at the IAS. “Nobody knows whether to say they’re a string theorist anymore,” said Chris Beem, a mathematical physicist at the University of Oxford. “It’s become very confusing.”

    String theory today looks almost fractal. The more closely people explore any one corner, the more structure they find. Some dig deep into particular crevices; others zoom out to try to make sense of grander patterns. The upshot is that string theory today includes much that no longer seems stringy. Those tiny loops of string whose harmonics were thought to breathe form into every particle and force known to nature (including elusive gravity) hardly even appear anymore on chalkboards at conferences. At last year’s big annual string theory meeting, the Stanford University string theorist Eva Silverstein was amused to find she was one of the few giving a talk “on string theory proper,” she said. A lot of the time she works on questions related to cosmology.

    Even as string theory’s mathematical tools get adopted across the physical sciences, physicists have been struggling with how to deal with the central tension of string theory: Can it ever live up to its initial promise? Could it ever give researchers insight into how gravity and quantum mechanics might be reconciled — not in a toy universe, but in our own?

    “The problem is that string theory exists in the landscape of theoretical physics,” said Juan Maldacena, a mathematical physicist at the IAS and perhaps the most prominent figure in the field today. “But we still don’t know yet how it connects to nature as a theory of gravity.” Maldacena now acknowledges the breadth of string theory, and its importance to many fields of physics — even those that don’t require “strings” to be the fundamental stuff of the universe — when he defines string theory as “Solid Theoretical Research in Natural Geometric Structures.”

    An Explosion of Quantum Fields

    One high point for string theory as a theory of everything came in the late 1990s, when Maldacena revealed that a string theory including gravity in five dimensions was equivalent to a quantum field theory in four dimensions. This “AdS/CFT” duality appeared to provide a map for getting a handle on gravity — the most intransigent piece of the puzzle — by relating it to good old well-understood quantum field theory.

    This correspondence was never thought to be a perfect real-world model. The five-dimensional space in which it works has an “anti-de Sitter” geometry, a strange M.C. Escher-ish landscape that is not remotely like our universe.

    But researchers were surprised when they dug deep into the other side of the duality. Most people took for granted that quantum field theories — “bread and butter physics,” Dijkgraaf calls them — were well understood and had been for half a century. As it turned out, Dijkgraaf said, “we only understand them in a very limited way.”

    These quantum field theories were developed in the 1950s to unify special relativity and quantum mechanics. They worked well enough for long enough that it didn’t much matter that they broke down at very small scales and high energies. But today, when physicists revisit “the part you thought you understood 60 years ago,” said Nima Arkani-Hamed, a physicist at the IAS, you find “stunning structures” that came as a complete surprise. “Every aspect of the idea that we understood quantum field theory turns out to be wrong. It’s a vastly bigger beast.”

    Researchers have developed a huge number of quantum field theories in the past decade or so, each used to study different physical systems. Beem suspects there are quantum field theories that can’t be described even in terms of quantum fields. “We have opinions that sound as crazy as that, in large part, because of string theory.”

    This virtual explosion of new kinds of quantum field theories is eerily reminiscent of physics in the 1930s, when the unexpected appearance of a new kind of particle — the muon — led a frustrated I.I. Rabi to ask: “Who ordered that?” The flood of new particles was so overwhelming by the 1950s that it led Enrico Fermi to grumble: “If I could remember the names of all these particles, I would have been a botanist.”

    Physicists began to see their way through the thicket of new particles only when they found the more fundamental building blocks making them up, like quarks and gluons. Now many physicists are attempting to do the same with quantum field theory. In their attempts to make sense of the zoo, many learn all they can about certain exotic species.

    Conformal field theories (the right hand of AdS/CFT) are a starting point. In the simplest type of conformal field theory, you start with a version of quantum field theory where “the interactions between the particles are turned off,” said David Simmons-Duffin, a physicist at the IAS. If these specific kinds of field theories could be understood perfectly, answers to deep questions might become clear. “The idea is that if you understand the elephant’s feet really, really well, you can interpolate in between and figure out what the whole thing looks like.”

    Like many of his colleagues, Simmons-Duffin says he’s a string theorist mostly in the sense that it’s become an umbrella term for anyone doing fundamental physics in underdeveloped corners. He’s currently focusing on a physical system that’s described by a conformal field theory but has nothing to do with strings. In fact, the system is water at its “critical point,” where the distinction between gas and liquid disappears. It’s interesting because water’s behavior at the critical point is a complicated emergent system that arises from something simpler. As such, it could hint at dynamics behind the emergence of quantum field theories.

    Beem focuses on supersymmetric field theories, another toy model, as physicists call these deliberate simplifications. “We’re putting in some unrealistic features to make them easier to handle,” he said. Specifically, they are amenable to tractable mathematics, which “makes it so a lot of things are calculable.”

    Toy models are standard tools in most kinds of research. But there’s always the fear that what one learns from a simplified scenario does not apply to the real world. “It’s a bit of a deal with the devil,” Beem said. “String theory is a much less rigorously constructed set of ideas than quantum field theory, so you have to be willing to relax your standards a bit,” he said. “But you’re rewarded for that. It gives you a nice, bigger context in which to work.”

    It’s the kind of work that makes people such as Sean Carroll, a theoretical physicist at the California Institute of Technology, wonder if the field has strayed too far from its early ambitions — to find, if not a “theory of everything,” at least a theory of quantum gravity. “Answering deep questions about quantum gravity has not really happened,” he said. “They have all these hammers and they go looking for nails.” That’s fine, he said, even acknowledging that generations might be needed to develop a new theory of quantum gravity. “But it isn’t fine if you forget that, ultimately, your goal is describing the real world.”

    It’s a question he has asked his friends. Why are they investigating detailed quantum field theories? “What’s the aspiration?” he asks. Their answers are logical, he says, but steps removed from developing a true description of our universe.

    Instead, he’s looking for a way to “find gravity inside quantum mechanics.” A paper he recently wrote with colleagues claims to take steps toward just that. It does not involve string theory.

    The Broad Power of Strings

    Perhaps the field that has gained the most from the flowering of string theory is mathematics itself. Sitting on a bench beside the IAS pond while watching a blue heron saunter in the reeds, Clay Córdova, a researcher there, explained how seemingly intractable problems in mathematics were solved by imagining how the question might look to a string. For example, how many spheres could fit inside a Calabi-Yau manifold — the complex folded shape expected to describe how spacetime is compactified? Mathematicians had been stuck. But a string, which sweeps out a two-dimensional surface as it moves, can wiggle around in such a complex space and grasp new insights, like a mathematical multidimensional lasso. This was the kind of physical thinking Einstein was famous for: A thought experiment about riding alongside a light beam helped lead him to special relativity and E = mc². Imagining falling off a building led to his biggest eureka moment of all: Gravity is not a force; it’s a property of space-time.

    2
    The amplituhedron is a multi-dimensional object that can be used to calculate particle interactions. Physicists such as Chris Beem are applying techniques from string theory in special geometries where “the amplituhedron is its best self,” he says. Nima Arkani-Hamed

    Using the physical intuition offered by strings, physicists produced a powerful formula for getting the answer to the embedded sphere question, and much more. “They got at these formulas using tools that mathematicians don’t allow,” Córdova said. Then, after string theorists found an answer, the mathematicians proved it on their own terms. “This is a kind of experiment,” he explained. “It’s an internal mathematical experiment.” Not only was the stringy solution not wrong, it led to Fields Medal-winning mathematics. “This keeps happening,” he said.

    String theory has also made essential contributions to cosmology. The role that string theory has played in thinking about mechanisms behind the inflationary expansion of the universe — the moments immediately after the Big Bang, when quantum effects met gravity head-on — is “surprisingly strong,” said Silverstein, even though no strings are attached.

    Still, Silverstein and colleagues have used string theory to discover, among other things, ways to see potentially observable signatures of various inflationary ideas. The same insights could have been found using quantum field theory, she said, but they weren’t. “It’s much more natural in string theory, with its extra structure.”

    Inflationary models get tangled in string theory in multiple ways, not least of which is the multiverse — the idea that ours is one of a perhaps infinite number of universes, each created by the same mechanism that begat our own. Between string theory and cosmology, the idea of an infinite landscape of possible universes became not just acceptable, but even taken for granted by a large number of physicists. The selection effect, Silverstein said, would be one quite natural explanation for why our world is the way it is: In a very different universe, we wouldn’t be here to tell the story.

    This effect could be one answer to a big problem string theory was supposed to solve. As Gross put it: “What picks out this particular theory” — the Standard Model — from the “plethora of infinite possibilities?”

    Silverstein thinks the selection effect is actually a good argument for string theory. The infinite landscape of possible universes can be directly linked to “the rich structure that we find in string theory,” she said — the innumerable ways that string theory’s multidimensional space-time can be folded in upon itself.

    Building the New Atlas

    At the very least, the mature version of string theory — with its mathematical tools that let researchers view problems in new ways — has provided powerful new methods for seeing how seemingly incompatible descriptions of nature can both be true. The discovery of dual descriptions of the same phenomenon pretty much sums up the history of physics. A century and a half ago, James Clerk Maxwell saw that electricity and magnetism were two sides of the same coin. Quantum theory revealed the connection between particles and waves. Now physicists have strings.

    “Once the elementary things we’re probing spaces with are strings instead of particles,” said Beem, the strings “see things differently.” If it’s too hard to get from A to B using quantum field theory, reimagine the problem in string theory, and “there’s a path,” Beem said.

    In cosmology, string theory “packages physical models in a way that’s easier to think about,” Silverstein said. It may take centuries to tie together all these loose strings to weave a coherent picture, but young researchers like Beem aren’t bothered a bit. His generation never thought string theory was going to solve everything. “We’re not stuck,” he said. “It doesn’t feel like we’re on the verge of getting it all sorted, but I know more each day than I did the day before – and so presumably we’re getting somewhere.”

    Stanford thinks of it as a big crossword puzzle. “It’s not finished, but as you start solving, you can tell that it’s a valid puzzle,” he said. “It’s passing consistency checks all the time.”

    “Maybe it’s not even possible to capture the universe in one easily defined, self-contained form, like a globe,” Dijkgraaf said, sitting in Robert Oppenheimer’s many-windowed office, the one Oppenheimer occupied when he was Einstein’s boss, looking over the vast lawn at the IAS, the pond and the woods in the distance. Einstein, too, tried and failed to find a theory of everything, and it takes nothing away from his genius.

    “Perhaps the true picture is more like the maps in an atlas, each offering very different kinds of information, each spotty,” Dijkgraaf said. “Using the atlas will require that physics be fluent in many languages, many approaches, all at the same time. Their work will come from many different directions, perhaps far-flung.”

    He finds it “totally disorienting” and also “fantastic.”

    Arkani-Hamed believes we are in the most exciting epoch of physics since quantum mechanics appeared in the 1920s. But nothing will happen quickly. “If you’re excited about responsibly attacking the very biggest existential physics questions ever, then you should be excited,” he said. “But if you want a ticket to Stockholm for sure in the next 15 years, then probably not.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 3:00 pm on September 9, 2016 Permalink | Reply
    Tags: Quanta Magazine

    From Quanta: “Colliding Black Holes Tell New Story of Stars” 

    Quanta Magazine
    Quanta Magazine

    September 6, 2016
    Natalie Wolchover

    Just months after their discovery, gravitational waves coming from the mergers of black holes are shaking up astrophysics.

    1
    Ana Kova for Quanta Magazine

    At a talk last month in Santa Barbara, California, addressing some of the world’s leading astrophysicists, Selma de Mink cut to the chase. “How did they form?” she began.

    “They,” as everybody knew, were the two massive black holes that, more than 1 billion years ago and in a remote corner of the cosmos, spiraled together and merged, making waves in the fabric of space and time. These “gravitational waves” rippled outward and, on Sept. 14, 2015, swept past Earth, strumming the ultrasensitive detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO).

    LSC LIGO Scientific Collaboration
    Caltech/MIT Advanced aLigo Hanford, WA, USA installation
    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    LIGO’s discovery, announced in February, triumphantly vindicated Albert Einstein’s 1916 prediction that gravitational waves exist.

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    By tuning in to these tiny tremors in space-time and revealing for the first time the invisible activity of black holes — objects so dense that not even light can escape their gravitational pull — LIGO promised to open a new window on the universe, akin, some said, to when Galileo first pointed a telescope at the sky.

    Already, the new gravitational-wave data has shaken up the field of astrophysics. In response, three dozen experts spent two weeks in August sorting through the implications at the Kavli Institute for Theoretical Physics (KITP) in Santa Barbara.

    Jump-starting the discussions, de Mink, an assistant professor of astrophysics at the University of Amsterdam, explained that of the two — and possibly more — black-hole mergers that LIGO has detected so far, the first and mightiest event, labeled GW150914, presented the biggest puzzle. LIGO was expected to spot pairs of black holes weighing in the neighborhood of 10 times the mass of the sun, but these packed roughly 30 solar masses apiece. “They are there — massive black holes, much more massive than we thought they were,” de Mink said to the room. “So, how did they form?”

    The mystery, she explained, is twofold: How did the black holes get so massive, considering that stars, some of which collapse to form black holes, typically blow off most of their mass before they die, and how did they get so close to each other — close enough to merge within the lifetime of the universe? “These are two things that are sort of mutually exclusive,” de Mink said. A pair of stars that are born huge and close together will normally mingle and then merge before ever collapsing into black holes, failing to kick up detectable gravitational waves.

    Nailing down the story behind GW150914 “is challenging all our understanding,” said Matteo Cantiello, an astrophysicist at KITP. Experts must retrace the uncertain steps from the moment of the merger back through the death, life and birth of a pair of stars — a sequence that involves much unresolved astrophysics. “This will really reinvigorate certain old questions in our understanding of stars,” said Eliot Quataert, a professor of astronomy at the University of California, Berkeley, and one of the organizers of the KITP program. Understanding LIGO’s data will demand a reckoning of when and why stars go supernova; which ones turn into which kinds of stellar remnants; how stars’ composition, mass and rotation affect their evolution; how their magnetic fields operate; and more.

    The work has just begun, but already LIGO’s first few detections have pushed two theories of binary black-hole formation to the front of the pack. Over the two weeks in Santa Barbara, a rivalry heated up between the new “chemically homogeneous” model for the formation of black-hole binaries, proposed by de Mink and colleagues earlier this year, and the classic “common envelope” model espoused by many other experts. Both theories (and a cluster of competitors) might be true somewhere in the cosmos, but probably only one of them accounts for the vast majority of black-hole mergers. “In science,” said Daniel Holz of the University of Chicago, a common-envelope proponent, “there’s usually only one dominant process — for anything.”

    Star Stories

    2
    The R136 star cluster at the heart of the Tarantula Nebula gives rise to many massive stars, which are thought to be the progenitors of black-hole binaries. NASA, ESA, F. Paresce, R. O’Connell and the Wide Field Camera 3 Science Oversight Committee

    The story of GW150914 almost certainly starts with massive stars — those that are at least eight times as heavy as the sun and which, though rare, play a starring role in galaxies. Massive stars are the ones that explode as supernovas, spewing matter into space to be recycled as new stars, while their cores collapse into black holes and neutron stars, which drive exotic and influential phenomena such as gamma-ray bursts, pulsars and X-ray binaries. De Mink and collaborators showed in 2012 that most known massive stars live in binary systems. Binary massive stars, in her telling, “dance” and “kiss” and suck each other’s hydrogen fuel “like vampires,” depending on the circumstances. But which circumstances lead them to shrink down to points that recede behind veils of darkness, and then collide?

    The conventional common-envelope story, developed over decades starting with the 1970s work of the Soviet scientists Aleksandr Tutukov and Lev Yungelson, tells of a pair of massive stars that are born in a wide orbit. As the first star runs out of fuel in its core, its outer layers of hydrogen puff up, forming a “red supergiant.” Much of this hydrogen gas gets sucked away by the second star, vampire-style, and the core of the first star eventually collapses into a black hole. The interaction draws the pair closer, so that when the second star puffs up into a supergiant, it engulfs the two of them in a common envelope. The companions sink ever closer as they wade through the hydrogen gas. Eventually, the envelope is lost to space, and the core of the second star, like the first, collapses into a black hole. The two black holes are close enough to someday merge.

    Because the stars shed so much mass, this model is expected to yield pairs of black holes on the lighter side, weighing in the ballpark of 10 solar masses. LIGO’s second signal, from the merger of eight- and 14-solar-mass black holes, is a home run for the model. But some experts say that the first event, GW150914, is a stretch.

    In a June paper in Nature, Holz and collaborators Krzysztof Belczynski, Tomasz Bulik and Richard O’Shaughnessy argued that common envelopes can theoretically produce mergers of 30-solar-mass black holes if the progenitor stars weigh something like 90 solar masses and contain almost no metal (which accelerates mass loss). Such heavy binary systems are likely to be relatively rare in the universe, raising doubts in some minds about whether LIGO would have observed such an outlier so soon. In Santa Barbara, scientists agreed that if LIGO detects many very heavy mergers relative to lighter ones, this will weaken the case for the common-envelope scenario.

    3
    Lucy Reading-Ikkanda for Quanta Magazine

    This weakness of the conventional theory has created an opening for new ideas. One such idea began brewing in 2014, when de Mink and Ilya Mandel, an astrophysicist at the University of Birmingham and a member of the LIGO collaboration, realized that a type of binary-star system that de Mink has studied for years might be just the ticket to forming massive binary black holes.

    The chemically homogeneous model begins with a pair of massive stars that are rotating around each other extremely rapidly and so close together that they become “tidally locked,” like tango dancers. In tango, “you are extremely close, so your bodies face each other all the time,” said de Mink, a dancer herself. “And that means you are spinning around each other, but it also forces you to spin around your own axis as well.” This spinning stirs the stars, making them hot and homogeneous throughout. And this process might allow the stars to undergo fusion throughout their whole interiors, rather than just their cores, until both stars use up all their fuel. Because the stars never expand, they do not intermingle or shed mass. Instead, each collapses wholesale under its own weight into a massive black hole. The black holes dance for a few billion years, gradually spiraling closer and closer until, in a space-time-buckling split second, they coalesce.

    De Mink and Mandel made their case for the chemically homogeneous model in a paper posted online in January. Another paper proposing the same idea, by researchers at the University of Bonn led by the graduate student Pablo Marchant, appeared days later. When LIGO announced the detection of GW150914 the following month, the chemically homogeneous theory shot to prominence. “What I’m discussing was a pretty crazy story up to the moment that it made, very nicely, black holes of the right mass,” de Mink said.

    However, aside from some provisional evidence, the existence of stirred stars is speculative. And some experts question the model’s efficacy. Simulations suggest that the chemically homogeneous model struggles to explain smaller black-hole binaries like those in LIGO’s second signal. Worse, doubt has arisen as to how well the theory really accounts for GW150914, which is supposed to be its main success story. “It’s a very elegant model,” Holz said. “It’s very compelling. The problem is that it doesn’t seem to fully work.”

    All Spun Up

    Along with the masses of the colliding black holes, LIGO’s gravitational-wave signals also reveal whether the black holes were spinning. At first, researchers paid less attention to the spin measurement, in part because gravitational waves register only the component of each black hole’s spin that is aligned with the orbital axis, saying nothing about spin in other directions. However, in a May paper, researchers at the Institute for Advanced Study in Princeton, N.J., and the Hebrew University of Jerusalem argued that the kind of spin that LIGO measures is exactly the kind black holes would be expected to have if they formed via the chemically homogeneous channel. (Tango dancers spin and orbit each other in the same direction.) And yet, the 30-solar-mass black holes in GW150914 were measured to have very low spin, if any, seemingly striking a blow against the tango scenario.
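
    The aligned-spin quantity in question is conventionally summarized as a single mass-weighted number, often called the effective spin. Here is a minimal sketch; the two masses are the published GW150914 estimates, while the spin magnitudes and tilt angles are invented for illustration.

        import math

        def chi_eff(m1, m2, chi1, chi2, tilt1, tilt2):
            """Mass-weighted spin along the orbital axis. Masses in solar
            masses; chi are dimensionless spin magnitudes (0 to 1); tilts
            are angles from the orbital axis, in radians."""
            return (m1 * chi1 * math.cos(tilt1) +
                    m2 * chi2 * math.cos(tilt2)) / (m1 + m2)

        # Aligned, rapidly spinning holes give a large positive value...
        print(chi_eff(36, 29, 0.6, 0.6, 0.0, 0.0))              # ~0.6
        # ...while the same spins tipped into the orbital plane vanish
        # from the measurement, as do slowly spinning holes:
        print(chi_eff(36, 29, 0.6, 0.6, math.pi/2, math.pi/2))  # ~0.0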

    “Is spin a problem for the chemically homogeneous channel?” Sterl Phinney, a professor of astrophysics at the California Institute of Technology, prompted the Santa Barbara group one afternoon. After some debate, the scientists agreed that the answer was yes.

    However, mere days later, de Mink, Marchant, and Cantiello found a possible way out for the theory. Cantiello, who has recently made strides in studying stellar magnetic fields, realized that the tangoing stars in the chemically homogeneous channel are essentially spinning balls of charge that would have powerful magnetic fields, and these magnetic fields are likely to cause the star’s outer layers to stream into strong poles. In the same way that a spinning figure skater slows down when she extends her arms, these poles would act like brakes, gradually reducing the stars’ spin. The trio has since been working to see if their simulations bear out this picture. Quataert called the idea “plausible but perhaps a little weaselly.”

    5
    Lucy Reading-Ikkanda for Quanta Magazine; Source: LIGO

    On the last day of the program, setting the stage for an eventful autumn as LIGO comes back online with higher sensitivity and more gravitational-wave signals roll in, the scientists signed “Phinney’s Declaration,” a list of concrete statements about what their various theories predict. “Though all models for black hole binaries may be created equal (except those inferior ones proposed by our competitors),” begins the declaration, drafted by Phinney, “we hope that observational data will soon make them decidedly unequal.”

    As the data pile up, an underdog theory of black-hole binary formation could conceivably gain traction — for instance, the notion that binaries form through dynamical interactions inside dense star-forming regions called “globular clusters.” LIGO’s first run suggested that black-hole mergers are more common than the globular-cluster model predicts. But perhaps the experiment just got lucky last time and the estimated merger rate will drop.

    Adding to the mix, a group of cosmologists recently theorized that GW150914 might have come from the merger of primordial black holes, which were never stars to begin with but rather formed shortly after the Big Bang from the collapse of energetic patches of space-time. Intriguingly, the researchers argued in a recent paper in Physical Review Letters that such 30-solar-mass primordial black holes could comprise some or all of the missing “dark matter” that pervades the cosmos. There’s a way of testing the idea against astrophysical signals called fast radio bursts.

    It’s perhaps too soon to dwell on such an enticing possibility; astrophysicists point out that it would require suspiciously good luck for black holes from the Big Bang to happen to merge at just the right time for us to detect them, 13.8 billion years later. This is another example of the new logic that researchers must confront at the dawn of gravitational-wave astronomy. “We’re at a really fun stage,” de Mink said. “This is the first time we’re thinking in these pictures.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 8:57 am on September 9, 2016 Permalink | Reply
    Tags: Genetic Engineering to Clash With Evolution, Quanta Magazine

    From Quanta: “Genetic Engineering to Clash With Evolution” 

    Quanta Magazine
    Quanta Magazine

    September 8, 2016
    Brooke Borel

    In a crowded auditorium at New York’s Cold Spring Harbor Laboratory in August, Philipp Messer, a population geneticist at Cornell University, took the stage to discuss a powerful and controversial new application for genetic engineering: gene drives.

    Gene drives can force a trait through a population, defying the usual rules of inheritance. A specific trait ordinarily has a 50-50 chance of being passed along to the next generation. A gene drive could push that rate to nearly 100 percent. The genetic dominance would then continue in all future generations. You want all the fruit flies in your lab to have light eyes? Engineer a drive for eye color, and soon enough, the fruit flies’ offspring will have light eyes, as will their offspring, and so on for all future generations. Gene drives may work in any species that reproduces sexually, and they have the potential to revolutionize disease control, agriculture, conservation and more. Scientists might be able to stop mosquitoes from spreading malaria, for example, or eradicate an invasive species.
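
    The arithmetic of that takeover is simple enough to sketch. In the toy recursion below (deterministic, random mating, no fitness costs, and the 95 percent transmission rate is an assumed figure), heterozygous parents pass the drive allele on with probability d rather than the Mendelian 0.5.

        def next_freq(q, d):
            # After random mating: a fraction q*q of gametes come from
            # drive/drive parents; heterozygous parents (frequency 2q(1-q))
            # transmit the drive with probability d instead of 0.5.
            return q * q + 2 * q * (1 - q) * d

        for d in (0.5, 0.95):
            q = 0.01                    # release the drive at 1% frequency
            for _ in range(20):
                q = next_freq(q, d)
            print(d, round(q, 3))
        # Mendelian transmission (d = 0.5) leaves the allele parked at 1%;
        # at d = 0.95 it sweeps to essentially 100% within ~20 generations.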

    The technology represents the first time in history that humans have the ability to engineer the genes of a wild population. As such, it raises intense ethical and practical concerns, not only from critics but from the very scientists who are working with it.

    Messer’s presentation highlighted a potential snag for plans to engineer wild ecosystems: Nature usually finds a way around our meddling. Pathogens evolve antibiotic resistance; insects and weeds evolve to thwart pesticides. Mosquitoes and invasive species reprogrammed with gene drives can be expected to adapt as well, especially if the gene drive is harmful to the organism — it’ll try to survive by breaking the drive.

    “In the long run, even with a gene drive, evolution wins in the end,” said Kevin Esvelt, an evolutionary engineer at the Massachusetts Institute of Technology. “On an evolutionary timescale, nothing we do matters. Except, of course, extinction. Evolution doesn’t come back from that one.”

    Gene drives are a young technology, and none have been released into the wild. A handful of laboratory studies show that gene drives work in practice — in fruit flies, mosquitoes and yeast. Most of these experiments have found that the organisms begin to develop evolutionary resistance that should hinder the gene drives. But these proof-of-concept studies follow small populations of organisms. Large populations with more genetic diversity — like the swarms of millions of insects in the wild — offer the most opportunities for resistance to emerge.

    It’s impossible — and unethical — to test a gene drive in a vast wild population to sort out the kinks. Once a gene drive has been released, there may be no way to take it back. (Some researchers have suggested the possibility of releasing a second gene drive to shut down a rogue one. But that approach is hypothetical, and even if it worked, the ecological damage done in the meantime would remain unchanged.)

    The next best option is to build models to approximate how wild populations might respond to the introduction of a gene drive. Messer and other researchers are doing just that. “For us, it was clear that there was this discrepancy — a lot of geneticists have done a great job at trying to build these systems, but they were not concerned that much with what is happening on a population level,” Messer said. Instead, he wants to learn “what will happen on the population level, if you set these things free and they can evolve for many generations — that’s where resistance comes into play.”

    At the meeting at Cold Spring Harbor Laboratory, Messer discussed a computer model his team developed, which they described in a paper posted in June on the scientific preprint site biorxiv.org. The work is one of three theoretical papers on gene drive resistance submitted to biorxiv.org in the last five months — the others are from a researcher at the University of Texas, Austin, and a joint team from Harvard University and MIT. (The authors are all working to publish their research through traditional peer-reviewed journals.) According to Messer, his model suggests “resistance will evolve almost inevitably in standard gene drive systems.”

    It’s still unclear where all this interplay between resistance and gene drives will end up. It could be that resistance will render the gene drive impotent. On the one hand, this may mean that releasing the drive was a pointless exercise; on the other hand, some researchers argue, resistance could be an important natural safety feature. Evolution is unpredictable by its very nature, but a handful of biologists are using mathematical models and careful lab experiments to try to understand how this powerful genetic tool will behave when it’s set loose in the wild.

    1
    Lucy Reading-Ikkanda for Quanta Magazine

    Resistance Isn’t Futile

    Gene drives aren’t exclusively a human technology. They occasionally appear in nature. Researchers first thought of harnessing the natural versions of gene drives decades ago, proposing to re-create them with “crude means, like radiation” or chemicals, said Anna Buchman, a postdoctoral researcher in molecular biology at the University of California, Riverside. These genetic oddities, she adds, “could be manipulated to spread genes through a population or suppress a population.”

    In 2003, Austin Burt, an evolutionary geneticist at Imperial College London, proposed a more finely tuned approach called a homing endonuclease gene drive, which would zero in on a specific section of DNA and alter it.

    Burt mentioned the potential problem of resistance — and suggested some solutions — both in his seminal paper and in subsequent work. But for years, it was difficult to engineer a drive in the lab, because the available technology was cumbersome.

    With the advent of genetic engineering, Burt’s idea became reality. In 2012, scientists unveiled CRISPR, a gene-editing tool that has been described as a molecular word processor. It has given scientists the power to alter genetic information in every organism they have tried it on. CRISPR locates a specific bit of genetic code and then breaks both strands of the DNA at that site, allowing genes to be deleted, added or replaced.

    CRISPR provides a relatively easy way to release a gene drive. First, researchers insert a CRISPR-powered gene drive into an organism. When the organism mates, its CRISPR-equipped chromosome cleaves the matching chromosome coming from the other parent. The offspring’s genetic machinery then attempts to sew up this cut. When it does, it copies over the relevant section of DNA from the first parent — the section that contains the CRISPR gene drive. In this way, the gene drive duplicates itself so that it ends up on both chromosomes, and this will occur with nearly every one of the original organism’s offspring.

    Just three years after CRISPR’s unveiling, scientists at the University of California, San Diego, used CRISPR to insert inheritable gene drives into the DNA of fruit flies, thus building the system Burt had proposed. Now scientists can order the essential biological tools on the internet and build a working gene drive in mere weeks. “Anyone with some genetics knowledge and a few hundred dollars can do it,” Messer said. “That makes it even more important that we really study this technology.”

    Although there are many different ways gene drives could work in practice, two approaches have garnered the most attention: replacement and suppression. A replacement gene drive alters a specific trait. For example, an anti-malaria gene drive might change a mosquito’s genome so that the insect no longer had the ability to pick up the malaria parasite. In this situation, the new genes would quickly spread through a wild population so that none of the mosquitoes could carry the parasite, effectively stopping the spread of the disease.

    A suppression gene drive would wipe out an entire population. For example, a gene drive that forced all offspring to be male would make reproduction impossible.

    But wild populations may resist gene drives in unpredictable ways. “We know from past experiences that mosquitoes, especially the malaria mosquitoes, have such peculiar biology and behavior,” said Flaminia Catteruccia, a molecular entomologist at the Harvard T.H. Chan School of Public Health. “Those mosquitoes are much more resilient than we make them. And engineering them will prove more difficult than we think.” In fact, such unpredictability could likely be found in any species.

    2
    A sample of malaria-infected blood contains two Plasmodium falciparum parasites. CDC/PHIL

    The three new biorxiv.org papers use different models to try to understand this unpredictability, at least at its simplest level.

    The Cornell group used a basic mathematical model to map how evolutionary resistance will emerge in a replacement gene drive. It focuses on how DNA heals itself after CRISPR breaks it (the gene drive pushes a CRISPR construct into each new organism, so it can cut, copy and paste itself again). The DNA repairs itself automatically after a break. Exactly how it does so is determined by chance. One option is called nonhomologous end joining, in which the two ends that were broken get stitched back together in a random way. The result is similar to what you would get if you took a sentence, deleted a phrase, and then replaced it with an arbitrary set of words from the dictionary — you might still have a sentence, but it probably wouldn’t make sense. The second option is homology-directed repair, which uses a genetic template to heal the broken DNA. This is like deleting a phrase from a sentence, but then copying a known phrase as a replacement — one that you know will fit the context.

    Nonhomologous end joining is a recipe for resistance. Because the CRISPR system is designed to locate a specific stretch of DNA, it won’t recognize a section that has the equivalent of a nonsensical word in the middle. The gene drive won’t get into the DNA, and it won’t get passed on to the next generation. With homology-directed repair, the template could include the gene drive, ensuring that it would carry on.

    The Cornell model tested both scenarios. “What we found was it really is dependent on two things: the nonhomologous end-joining rate and the population size,” said Robert Unckless, an evolutionary geneticist at the University of Kansas who co-authored the paper as a postdoctoral researcher at Cornell. “If you can’t get nonhomologous end joining under control, resistance is inevitable. But resistance could take a while to spread, which means you might be able to achieve whatever goal you want to achieve.” For example, if the goal is to create a bubble of disease-proof mosquitoes around a city, the gene drive might do its job before resistance sets in.
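
    Here is a stochastic sketch in the spirit of that result, though not the Cornell group's actual model; the 1 percent end-joining rate and the 10 percent fitness cost per drive copy are assumptions chosen for illustration. Homology-directed repair copies the drive into the cut chromosome, while nonhomologous end joining leaves an uncuttable resistant allele.

        import random

        def gamete(a, b, nhej):
            """One allele transmitted by a parent with genotype (a, b)."""
            if {a, b} == {'D', 'W'}:            # the drive cuts the wild-type copy
                if random.random() > nhej:
                    return 'D'                  # homology-directed repair: drive copied
                return random.choice('DR')      # NHEJ scar: resistant 'R', uncuttable
            return random.choice((a, b))        # ordinary Mendelian coin flip

        def evolve(n=500, nhej=0.01, cost=0.10, gens=40):
            pop = [('D', 'W')] * (n // 10) + [('W', 'W')] * (n - n // 10)
            for _ in range(gens):
                new = []
                while len(new) < n:
                    p1, p2 = random.sample(pop, 2)
                    child = (gamete(*p1, nhej), gamete(*p2, nhej))
                    # viability selection: each drive copy costs 10% fitness
                    if random.random() < (1 - cost) ** child.count('D'):
                        new.append(child)
                pop = new
            alleles = [a for ind in pop for a in ind]
            return {x: alleles.count(x) / len(alleles) for x in 'DWR'}

        random.seed(0)
        print(evolve())   # e.g. {'D': ..., 'W': ..., 'R': ...}: 'R' arises and,
                          # costing nothing, can ultimately displace the drive

    Running it a few times with different seeds makes Messer's point: the drive spreads first, but whether resistance catches it depends on the end-joining rate and the population size, the same two knobs the Cornell model identifies.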

    The team from Harvard and MIT also looked at nonhomologous end joining, but they took it a step further by suggesting a way around it: by designing a gene drive that targets multiple sites in the same gene. “If any of them cut at their sites, then it’ll be fine — the gene drive will copy,” said Charleston Noble, a doctoral student at Harvard and the first author of the paper. “You have a lot of chances for it to work.”
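
    The multiple-site logic is multiplicative, as a quick back-of-envelope calculation shows; the per-site probability here is an arbitrary illustrative number.

        # If a resistant scar forms independently at each target site with
        # probability p, only an organism scarred at all n sites blocks the drive.
        p = 0.01                      # assumed per-site resistance probability
        for n in (1, 2, 3):
            print(n, p ** n)          # ~1e-2, 1e-4, 1e-6: every added site
                                      # suppresses resistance geometrically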

    The gene drive could also target an essential gene, Noble said — one that the organism can’t afford to lose. The organism may want to kick out the gene drive, but not at the cost of altering a gene that’s essential to life.

    The third biorxiv.org paper, from the UT Austin team, took a different approach. It looked at how resistance could emerge at the population level through behavior, rather than within the target sequence of DNA. The target population could simply stop breeding with the engineered individuals, for example, thus stopping the gene drive.

    “The math works out that if a population is inbred, at least to some degree, the gene drive isn’t going to work out as well as in a random population,” said James Bull, the author of the paper and an evolutionary biologist at Austin. “It’s not just sequence evolution. There could be all kinds of things going on here, by which populations block [gene drives],” Bull added. “I suspect this is the tip of the iceberg.”

    Resistance is constrained only by the limits of evolutionary creativity. It could emerge from any spot along the target organism’s genome. And it extends to the surrounding environment as well. For example, if a mosquito is engineered to withstand malaria, the parasite itself may grow resistant and mutate into a newly infectious form, Noble said.

    Not a Bug, but a Feature?

    If the point of a gene drive is to push a desired trait through a population, then resistance would seem to be a bad thing. If a drive stops working before an entire population of mosquitoes is malaria-proof, for example, then the disease will still spread. But at the Cold Spring Harbor Laboratory meeting, Messer suggested the opposite: “Let’s embrace resistance. It could provide a valuable safety control mechanism.” It’s possible that the drive could move just far enough to stop a disease in a particular region, but then stop before it spread to all of the mosquitoes worldwide, carrying with it an unknowable probability of unforeseen environmental ruin.

    Not everyone is convinced that this optimistic view is warranted. “It’s a false security,” said Ethan Bier, a geneticist at the University of California, San Diego. He said that while such a strategy is important to study, he worries that researchers will be fooled into thinking that forms of resistance offer “more of a buffer and safety net than they do.”

    And while mathematical models are helpful, researchers stress that models can’t replace actual experimentation. Ecological systems are just too complicated. “We have no experience engineering systems that are going to evolve outside of our control. We have never done that before,” Esvelt said. “So that’s why a lot of these modeling studies are important — they can give us a handle on what might happen. But I’m also hesitant to rely on modeling and trying to predict in advance when systems are so complicated.”

    Messer hopes to put his theoretical work into a real-world setting, at least in the lab. He is currently directing a gene drive experiment at Cornell that tracks multiple cages of around 5,000 fruit flies each — more animals than past studies have used to research gene drive resistance. The gene drive is designed to distribute a fluorescent protein through the population. The proteins will glow red under a special light, a visual cue showing how far the drive gets before resistance weeds it out.

    Others are also working on resistance experiments: Esvelt and Catteruccia, for example, are working with George Church, a geneticist at Harvard Medical School, to develop a gene drive in mosquitoes that they say will be immune to resistance. They plan to insert multiple drives in the same gene — the strategy suggested by the Harvard/MIT paper.

    Such experiments will likely guide the next generation of computer models, to help tailor them more precisely to a large wild population.

    “I think it’s been interesting because there is this sort of going back and forth between theory and empirical work,” Unckless said. “We’re still in the early days of it, but hopefully it’ll be worthwhile for both sides, and we’ll make some informed and ethically correct decisions about what to do.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 3:40 pm on August 29, 2016 Permalink | Reply
    Tags: Jammed Cells Expose the Physics of Cancer, Quanta Magazine

    From Quanta: “Jammed Cells Expose the Physics of Cancer” 

    Quanta Magazine
    Quanta Magazine

    August 16, 2016
    Gabriel Popkin

    The subtle mechanics of densely packed cells may help explain why some cancerous tumors stay put while others break off and spread through the body.

    1
    Ashley Mackenzie for Quanta Magazine

    In 1995, while he was a graduate student at McGill University in Montreal, the biomedical scientist Peter Friedl saw something so startling it kept him awake for several nights. Coordinated groups of cancer cells he was growing in his adviser’s lab started moving through a network of fibers meant to mimic the spaces between cells in the human body.

    For more than a century, scientists had known that individual cancer cells can metastasize, leaving a tumor and migrating through the bloodstream and lymph system to distant parts of the body. But no one had seen what Friedl had caught in his microscope: a phalanx of cancer cells moving as one. It was so new and strange that at first he had trouble getting it published. “It was rejected because the relevance [to metastasis] wasn’t clear,” he said. Friedl and his co-authors eventually published a short paper in the journal Cancer Research.

    Two decades later, biologists have become increasingly convinced that mobile clusters of tumor cells, though rarer than individual circulating cells, are seeding many — perhaps most — of the deadly metastatic invasions that cause 90 percent of all cancer deaths. But it wasn’t until 2013 that Friedl, now at Radboud University in the Netherlands, really felt that he understood what he and his colleagues were seeing. Things finally fell into place for him when he read a paper [Science Direct] by Jeffrey Fredberg, a professor of bioengineering and physiology at Harvard University, which proposed that cells could be “jammed” — packed together so tightly that they become a unit, like coffee beans stuck in a hopper.

    Fredberg’s research focused on lung cells, but Friedl thought his own migrating cancer cells might also be jammed. “I realized we had exactly the same thing, in 3-D and in motion,” he said. “That got me very excited, because it was an available concept that we could directly put onto our finding.” He soon published one of the first papers applying the concept of jamming to experimental measurements of cancer cells.

    Physicists have long provided doctors with tumor-fighting tools such as radiation and proton beams. But only recently has anyone seriously considered the notion that purely physical concepts might help us understand the basic biology of one of the world’s deadliest phenomena. In the past few years, physicists studying metastasis have generated surprisingly precise predictions of cell behavior. Though it’s early days, proponents are optimistic that phase transitions such as jamming will play an increasingly important role in the fight against cancer. “Certainly in the physics community there’s momentum,” Fredberg said. “If the physicists are on board with it, the biologists are going to have to. Cells obey the rules of physics — there’s no choice.”

    The Jam Index

    In the broadest sense, physical principles have been applied to cancer since long before physics existed as a discipline. The ancient Greek physician Hippocrates gave cancer its name when he referred to it as a “crab,” comparing the shape of a tumor and its surrounding veins to a carapace and legs.

    But solid tumors themselves are not what kill more than 8 million people annually. Once tumor cells strike out on their own and metastasize to new sites in the body, drugs and other therapies rarely do more than prolong a patient’s life for a few years.

    Biologists often view cancer primarily as a genetic program gone wrong, with mutations and epigenetic changes producing cells that don’t behave the way they should: Genes associated with cell division and growth may be turned up, and genes for programmed cell death may be turned down. To a small but growing number of physicists, however, the shape-shifting and behavior changes in cancer cells evoke not an errant genetic program but a phase transition.

    The phase transition — a change in a material’s internal organization between ordered and disordered states — is a bedrock concept in physics. Anyone who has watched ice melt or water boil has witnessed a phase transition. Physicists have also identified such transitions in magnets, crystals, flocking birds and even cells (and cellular components) placed in artificial environments.

    But compared to a homogeneous material like water or a magnet — or even a collection of identical cells in a dish — cancer is a hot mess. Cancers vary widely depending on the individual and the organ they develop in. Even a single tumor comprises a mind-boggling jumble of cells with different shapes, sizes and protein compositions. Such complexities can make biologists wary of a general theoretical framework. But they don’t daunt physicists. “Biologists are more trained to look at complexity and differences,” said the physicist Krastan Blagoev, who directs a National Science Foundation program that funds work on theoretical physics in living systems. “Physicists try to look at what’s common and extract behaviors from the commonness.”

    In a demonstration of this approach, the physicists Andrea Liu, now of the University of Pennsylvania, and Sidney Nagel of the University of Chicago published a brief commentary in Nature in 1998 about the process of jamming. They described familiar examples: traffic jams, piles of sand, and coffee beans stuck together in a grocery-store hopper. These are all individual items held together by an external force so that they resemble a solid. Liu and Nagel put forward the provocative suggestion that jamming could be a previously unrecognized phase transition, a notion that physicists, after more than a decade of debate, have now accepted.

    Though not the first mention of jamming in the scientific literature, Liu and Nagel’s paper set off what Fredberg calls “a deluge” among physicists. (The paper has been cited more than 1,400 times.) Fredberg realized that cells in lung tissue, which he had spent much of his career studying, are closely packed in a similar way to coffee beans and sand. In 2009 he and colleagues published [Nature Physics] the first paper suggesting that jamming could hold cells in tissues in place, and that an unjamming transition could mobilize some of those cells, a possibility that could have implications for asthma and other diseases.

    Image: Lucy Reading-Ikkanda for Quanta Magazine

    The paper appeared amid a growing recognition of the importance of mechanics, and not just genetics, in directing cell behavior, Fredberg said. “People had always thought that the mechanical implications were at the most downstream end of the causal cascade, and at the most upstream end are genetic and epigenetic factors,” he said. “Then people discovered that physical forces and mechanical events actually can be upstream of genetic events — that cells are very aware of their mechanical microenvironments.”

    Lisa Manning, a physicist at Syracuse University, read Fredberg’s paper and decided to put his idea into action. She and colleagues used a two-dimensional model of cells that are connected along edges and at vertices, filling all space. The model yielded an order parameter — a measurable number that quantifies a material’s internal order — that they called the “shape index”: the perimeter of a two-dimensional slice of the cell divided by the square root of its cross-sectional area. “We made what I would consider a ridiculously strict prediction: When that number is equal to 3.81 or below, the tissue is a solid, and when that number is above 3.81, that tissue is a fluid,” Manning said. “I asked Jeff Fredberg to go look at this, and he did [Nature Materials], and it worked perfectly.”
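    Here is a minimal sketch of that calculation in Python; the polygon input and the helper names are illustrative, not from Manning’s paper.

```python
import math

def shape_index(vertices):
    """Perimeter divided by the square root of area for a 2-D cell outline.

    vertices: list of (x, y) points tracing the outline in order.
    """
    n = len(vertices)
    perimeter = sum(math.dist(vertices[i], vertices[(i + 1) % n])
                    for i in range(n))
    # Shoelace formula for the enclosed area.
    area = 0.5 * abs(sum(
        vertices[i][0] * vertices[(i + 1) % n][1]
        - vertices[(i + 1) % n][0] * vertices[i][1]
        for i in range(n)))
    return perimeter / math.sqrt(area)

def tissue_state(q, threshold=3.81):
    """Classify a cell by Manning's predicted solid/fluid boundary."""
    return "solid (jammed)" if q <= threshold else "fluid (unjammed)"

# A regular hexagon, the cross-section of a honeycomb-like packing.
hexagon = [(math.cos(math.pi / 3 * k), math.sin(math.pi / 3 * k))
           for k in range(6)]
q = shape_index(hexagon)
print(f"hexagon: q = {q:.2f} -> {tissue_state(q)}")  # q = 3.72 -> solid (jammed)
```

    A regular hexagon lands at about 3.72, on the solid side of the boundary; floppier, more elongated outlines push the index above 3.81 and into the fluid regime.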

    Fredberg saw that lung cells with a shape index above 3.81 started to mobilize and squeeze past each other. Manning’s prediction “came out of pure theory, pure thought,” he said. “It’s really an astounding validation of a physical theory.” A program officer with the Physical Sciences in Oncology program at the National Cancer Institute learned about the results and encouraged Fredberg to do a similar analysis using cancer cells. The program has given him funding to look for signatures of jamming in breast-cancer cells.

    Meanwhile, Josef Käs, a physicist at Leipzig University in Germany, wondered if jamming could help explain puzzling behavior in cancer cells. He knew from his own studies and those of others that breast and cervical tumors, while mostly stiff, also contain soft, mobile cells that stream into the surrounding environment. If an unjamming transition was fluidizing these cancer cells, Käs realized, there could be a practical payoff: Perhaps an analysis of biopsies based on measurements of tumor cells’ state of jamming, rather than on a nearly century-old visual inspection procedure, could determine whether a tumor is about to metastasize.

    Käs is now using a laser-based tool to look for signatures of jamming in tumors, and he hopes to have results later this year. In a separate study that is just beginning, he is working with Manning and her colleagues at Syracuse to look for phase transitions not just in cancer cells themselves, but also in the matrix of fibers that surrounds tumors.

    More speculatively, Käs thinks the idea could also yield new avenues for therapies that are gentler than the shock-and-awe approach clinicians typically use to subdue a tumor. “If you can jam a whole tumor, then you have a benign tumor — that I believe,” he said. “If you find something which basically jams cancer cells efficiently and buys you another 20 years, that might be better than very disruptive chemotherapies.” Yet Käs is quick to clarify that he is not sure how a clinician would induce jamming.

    Castaway Cooperators

    Beyond the clinic, jamming could help resolve a growing conceptual debate in cancer biology, proponents say. Oncologists have suspected for several decades that metastasis usually requires a transition between sticky epithelial cells, which make up the bulk of solid tumors, and thinner, more mobile mesenchymal cells that are often found circulating solo in cancer patients’ bloodstreams. As more and more studies deliver results showing activity similar to that of Friedl’s migrating cell clusters, however, researchers have begun to question [Science] whether go-it-alone mesenchymal cells, which Friedl calls “lonely riders,” could really be the main culprits behind the metastatic disease that kills millions.

    Some believe jamming could help get oncology out of this conceptual jam. A phase transition between jammed and unjammed states could fluidize and mobilize tumor cells as a group, without requiring them to transform from one cell type to a drastically different one, Friedl said. This could allow metastasizing cells to cooperate with one another, potentially giving them an advantage in colonizing a new site.

    The key to developing this idea is to allow for a range of intermediate cell states between two extremes. “In the past, theories for how cancer might behave mechanically have either been theories for solids or theories for fluids,” Manning said. “Now we need to take into account the fact that they’re right on the edge.”

    Hints of intermediate states between epithelial and mesenchymal are also emerging from physics research not motivated by phase-transition concepts. Herbert Levine, a biophysicist at Rice University, and his late colleague Eshel Ben-Jacob of Tel Aviv University recently created a model of metastasis based on concepts borrowed from nonlinear dynamics. It predicts the existence of clusters of circulating cells that have traits of both epithelial and mesenchymal cells. Cancer biologists have never seen such transitional cell states, but some are now seeking them in lab studies. “We wouldn’t have thought about it” on our own, said Kenneth Pienta, a prostate cancer specialist at Johns Hopkins University. “We have been directly affected by theoretical physics.”

    Biology’s Phase Transition

    Models of cell jamming, while useful, remain imperfect. For example, Manning’s models have been confined to two dimensions until now, even though tumors are three-dimensional. Manning is currently working on a 3-D version of her model of cellular motility. So far it seems to predict a fluid-to-solid transition similar to that of the 2-D model, she said.

    In addition, cells are not as simple as coffee beans. Cells in a tumor or tissue can change their own mechanical properties in often complex ways, using genetic programs and other feedback loops, and if jamming is to provide a solid conceptual foundation for aspects of cancer, it will need to account for this ability. “Cells are not passive,” said Valerie Weaver, the director of the Center for Bioengineering and Tissue Regeneration at the University of California, San Francisco. “Cells are responding.”

    Weaver also said that the predictions made by jamming models resemble what biologists call extrusion, a process by which dead epithelial cells are squeezed out of crowded tissue — the dysfunction of which has recently been implicated in certain types of cancer. Manning believes that cell jamming likely provides an overarching mechanical explanation for many of the cell behaviors involved in cancer, including extrusion.

    Space-filling tissue models like the one Manning uses, which produce the jamming behavior, also have trouble accounting for all the details of how cells interact with their neighbors and with their environment, Levine said. He has taken a different approach, modeling some of the differences in the ways cells can react when they’re being crowded by other cells. “Jamming will take you some distance,” he said, adding, “I think we will get stuck if we just limit ourselves to thinking of these physics transitions.”

    Manning acknowledges that jamming alone cannot describe everything going on in cancer, but at least in certain types of cancer, it may play an important role, she said. “The message we’re not trying to put out there is that mechanics is the only game in town,” she said. “In some instances we might do a better job than traditional biochemical markers [in determining whether a particular cancer is dangerous]; in some cases we might not. But for something like cancer we want to have all hands on deck.”

    With this in mind, physicists have suggested other novel approaches to understanding cancer. A number of physicists, including Ricard Solé of Pompeu Fabra University in Barcelona, Jack Tuszynski of the University of Alberta, and Salvatore Torquato of Princeton University, have published theory papers suggesting ways that phase transitions could help explain aspects of cancer, and how experimentalists could test such predictions.

    Others, however, feel that phase transitions may not be the right tool. Robert Austin, a biological physicist at Princeton University, cautions that phase transitions can be surprisingly complex. Even for a seemingly elementary case such as freezing water, physicists have yet to compute exactly when a transition will occur, he notes — and cancer is far more complicated than water.

    And from a practical point of view, all the theory papers in the world won’t make a difference if physicists cannot get biologists and clinicians interested in their ideas. Jamming is a hot topic in physics, but most biologists have not yet heard of it, Fredberg said. The two communities can talk to each other at physics-and-cancer workshops during meetings hosted by the American Physical Society, the American Association for Cancer Research or the National Cancer Institute. But language and culture gaps remain. “I can come up with some phase diagrams, but in the end you have to translate it into a language which is relevant to oncologists,” Käs said.

    Those gaps will narrow if jamming and phase transition theory continue to successfully explain what researchers see in cells and tissues, Fredberg said. “If there’s really increasing evidence that the way cells move collectively revolves around jamming, it’s just a matter of time until that works its way into the biological literature.”

    And that, Friedl said, will give biologists a powerful new conceptual tool. “The challenge, but also the fascination, comes from identifying how living biology hijacks the physical principle and brings it to life and reinvents it using molecular strategies of cells.”

    See the full article here.


     
  • richardmitnick 7:17 am on August 13, 2016 Permalink | Reply
    Tags: Quanta Magazine

    From Quanta: “What No New Particles Means for Physics” 

    Quanta Magazine

    August 9, 2016
    Natalie Wolchover

    Image: Olena Shmahalo/Quanta Magazine

    Physicists at the Large Hadron Collider (LHC) in Europe have explored the properties of nature at higher energies than ever before, and they have found something profound: nothing new.

    It’s perhaps the one thing that no one predicted 30 years ago when the project was first conceived.

    The infamous “diphoton bump” that arose in data plots in December has disappeared, indicating that it was a fleeting statistical fluctuation rather than a revolutionary new fundamental particle. And in fact, the machine’s collisions have so far conjured up no particles at all beyond those catalogued in the long-reigning but incomplete “Standard Model” of particle physics.

    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    In the collision debris, physicists have found no particles that could comprise dark matter, no siblings or cousins of the Higgs boson, no sign of extra dimensions, no leptoquarks — and above all, none of the desperately sought supersymmetry particles that would round out equations and satisfy “naturalness,” a deep principle about how the laws of nature ought to work.

    Image: CERN ATLAS Higgs Event

    Image: CERN CMS Higgs Event

    “It’s striking that we’ve thought about these things for 30 years and we have not made one correct prediction that they have seen,” said Nima Arkani-Hamed, a professor of physics at the Institute for Advanced Study in Princeton, N.J.

    The news has emerged at the International Conference on High Energy Physics in Chicago over the past few days in presentations by the ATLAS and CMS experiments, whose cathedral-like detectors sit at 6 and 12 o’clock on the LHC’s 17-mile ring.

    Image: CERN/ATLAS detector

    Image: CERN/CMS Detector

    Both teams, each with over 3,000 members, have been working feverishly for the past three months analyzing a glut of data from a machine that is finally running at full throttle after being upgraded to nearly double its previous operating energy. It now collides protons with 13 trillion electron volts (TeV) of energy — more than 13,000 times the protons’ individual masses — providing enough raw material to beget gargantuan elementary particles, should any exist.
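    A quick back-of-the-envelope check of that comparison, using the textbook proton rest energy of roughly 938 MeV (a number not quoted in the article):

```python
# Ratio of the LHC's collision energy to the proton's rest-mass energy.
collision_energy_tev = 13.0     # total proton-proton collision energy
proton_rest_tev = 938.3e-6      # proton rest energy, ~938.3 MeV, in TeV
print(f"{collision_energy_tev / proton_rest_tev:,.0f}")  # ~13,855
```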

    Image: Lucy Reading-Ikkanda for Quanta Magazine

    So far, none have materialized. Especially heartbreaking for many is the loss of the diphoton bump, an excess of pairs of photons that cropped up in last year’s teaser batch of 13-TeV data, and whose origin has been the speculation of some 500 papers by theorists. Rumors about the bump’s disappearance in this year’s data began leaking in June, triggering a community-wide “diphoton hangover.”

    “It would have single-handedly pointed to a very exciting future for particle experiments,” said Raman Sundrum, a theoretical physicist at the University of Maryland. “Its absence puts us back to where we were.”

    The lack of new physics deepens a crisis that started in 2012 during the LHC’s first run, when it became clear that its 8-TeV collisions would not generate any new physics beyond the Standard Model. (The Higgs boson, discovered that year, was the Standard Model’s final puzzle piece, rather than an extension of it.) A white-knight particle could still show up later this year or next year, or, as statistics accrue over a longer time scale, subtle surprises in the behavior of the known particles could indirectly hint at new physics. But theorists are increasingly bracing themselves for their “nightmare scenario,” in which the LHC offers no path at all toward a more complete theory of nature.

    Some theorists argue that the time has already come for the whole field to start reckoning with the message of the null results. The absence of new particles almost certainly means that the laws of physics are not natural in the way physicists long assumed they are. “Naturalness is so well-motivated,” Sundrum said, “that its actual absence is a major discovery.”

    Missing Pieces

    The main reason physicists felt sure that the Standard Model could not be the whole story is that its linchpin, the Higgs boson, has a highly unnatural-seeming mass. In the equations of the Standard Model, the Higgs is coupled to many other particles. This coupling endows those particles with mass, allowing them in turn to drive the value of the Higgs mass to and fro, like competitors in a tug-of-war. Some of the competitors are extremely strong — hypothetical particles associated with gravity might contribute (or deduct) as much as 10 million billion TeV to the Higgs mass — yet somehow its mass ends up as 0.125 TeV, as if the competitors in the tug-of-war finish in a near-perfect tie. This seems absurd — unless there is some reasonable explanation for why the competing teams are so evenly matched.
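    A rough sense of how finely the tug-of-war must balance, using only the numbers quoted above; this is a cartoon of the arithmetic, since the real cancellation happens among quantum corrections to the Higgs mass squared:

```python
# Degree of cancellation implied by the article's numbers.
pull_tev = 1e16          # "10 million billion TeV" per heavyweight competitor
higgs_mass_tev = 0.125   # the observed Higgs mass

print(f"cancellation to ~1 part in {pull_tev / higgs_mass_tev:.0e}")
# -> cancellation to ~1 part in 8e+16
```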

    Maria Spiropulu of the California Institute of Technology, pictured in the LHC’s CMS control room, brushed aside talk of a nightmare scenario, saying, “Experimentalists have no religion.” Courtesy of Maria Spiropulu

    Supersymmetry, as theorists realized in the early 1980s, does the trick. It says that for every “fermion” that exists in nature — a particle of matter, such as an electron or quark, that adds to the Higgs mass — there is a supersymmetric “boson,” or force-carrying particle, that subtracts from the Higgs mass. This way, every participant in the tug-of-war game has a rival of equal strength, and the Higgs is naturally stabilized. Theorists devised alternative proposals for how naturalness might be achieved, but supersymmetry had additional arguments in its favor: It caused the strengths of the three quantum forces to exactly converge at high energies, suggesting they were unified at the beginning of the universe. And it supplied an inert, stable particle of just the right mass to be dark matter.

    “We had figured it all out,” said Maria Spiropulu, a particle physicist at the California Institute of Technology and a member of CMS. “If you ask people of my generation, we were almost taught that supersymmetry is there even if we haven’t discovered it. We believed it.”

    Image: Standard model of Supersymmetry (DESY)

    Hence the surprise when the supersymmetric partners of the known particles didn’t show up — first at the Large Electron-Positron Collider in the 1990s, then at the Tevatron in the 1990s and early 2000s, and now at the LHC. As the colliders have searched ever-higher energies, the gap has widened between the known particles and their hypothetical superpartners, which must be much heavier in order to have avoided detection. Ultimately, supersymmetry becomes so “broken” that the effects of the particles and their superpartners on the Higgs mass no longer cancel out, and supersymmetry fails as a solution to the naturalness problem. Some experts argue that we’ve passed that point already. Others, allowing for more freedom in how certain factors are arranged, say it is happening right now, with ATLAS and CMS excluding the stop quark — the hypothetical superpartner of the 0.173-TeV top quark — up to a mass of 1 TeV. That’s already a nearly sixfold imbalance between the top and the stop in the Higgs tug-of-war. Even if a stop heavier than 1 TeV exists, it would be pulling too hard on the Higgs to solve the problem it was invented to address.
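    The “nearly sixfold” figure is simply the ratio of the excluded stop mass to the top mass:

```python
# The "nearly sixfold imbalance" between the stop bound and the top mass.
top_mass_tev = 0.173
stop_bound_tev = 1.0
print(f"{stop_bound_tev / top_mass_tev:.1f}x")  # 5.8x
```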

    “I think 1 TeV is a psychological limit,” said Albert de Roeck, a senior research scientist at CERN, the laboratory that houses the LHC, and a professor at the University of Antwerp in Belgium.

    Some will say that enough is enough, but for others there are still loopholes to cling to. Among the myriad supersymmetric extensions of the Standard Model, there are more complicated versions in which stop quarks heavier than 1 TeV conspire with additional supersymmetric particles to counterbalance the top quark, tuning the Higgs mass. The theory has so many variants, or individual “models,” that killing it outright is almost impossible. Joe Incandela, a physicist at the University of California, Santa Barbara, who announced the discovery of the Higgs boson on behalf of the CMS collaboration in 2012, and who now leads one of the stop-quark searches, said, “If you see something, you can make a model-independent statement that you see something. Seeing nothing is a little more complicated.”

    Particles can hide in nooks and crannies. If, for example, the stop quark and the lightest neutralino (supersymmetry’s candidate for dark matter) happen to have nearly the same mass, they might have stayed hidden so far. The reason for this is that, when a stop quark is created in a collision and decays, producing a neutralino, very little energy will be freed up to take the form of motion. “When the stop decays, there’s a dark-matter particle just kind of sitting there,” explained Kyle Cranmer of New York University, a member of ATLAS. “You don’t see it. So in those regions it’s very difficult to look for.” In that case, a stop quark with a mass as low as 0.6 TeV could still be hiding in the data.

    Experimentalists will strive to close these loopholes in the coming years, or to dig out the hidden particles. Meanwhile, theorists who are ready to move on face the fact that they have no signposts from nature about which way to go. “It’s a very muddled and uncertain situation,” Arkani-Hamed said.

    New Hope

    Many particle theorists now acknowledge a long-looming possibility: that the mass of the Higgs boson is simply unnatural — its small value resulting from an accidental, fine-tuned cancellation in a cosmic game of tug-of-war — and that we observe such a peculiar property because our lives depend on it. In this scenario, there are many, many universes, each shaped by different chance combinations of effects. Out of all these universes, only the ones with accidentally lightweight Higgs bosons will allow atoms to form and thus give rise to living beings. But this “anthropic” argument is widely disliked for being seemingly untestable.

    In the past two years, some theoretical physicists have started to devise totally new natural explanations for the Higgs mass that avoid the fatalism of anthropic reasoning and do not rely on new particles showing up at the LHC. Last week at CERN, while their experimental colleagues elsewhere in the building busily crunched data in search of such particles, theorists held a workshop to discuss nascent ideas such as the relaxion hypothesis — which supposes that the Higgs mass, rather than being shaped by symmetry, was sculpted dynamically by the birth of the cosmos — and possible ways to test these ideas. Nathaniel Craig of the University of California, Santa Barbara, who works on an idea called neutral naturalness, said in a phone call from the CERN workshop, “Now that everyone is past their diphoton hangover, we’re going back to these questions that are really aimed at coping with the lack of apparent new physics at the LHC.”

    Arkani-Hamed, who, along with several colleagues, recently proposed another new approach called Nnaturalness, said, “There are many theorists, myself included, who feel that we’re in a totally unique time, where the questions on the table are the really huge, structural ones, not the details of the next particle. We’re very lucky to get to live in a period like this — even if there may not be major, verified progress in our lifetimes.”

    As theorists return to their blackboards, the 6,000 experimentalists with CMS and ATLAS are reveling in their exploration of a previously uncharted realm. “Nightmare, what does it mean?” said Spiropulu, referring to theorists’ angst about the nightmare scenario. “We are exploring nature. Maybe we don’t have time to think about nightmares like that, because we are being flooded in data and we are extremely excited.”

    There’s still hope that new physics will show up. But discovering nothing, in Spiropulu’s view, is a discovery all the same — especially when it heralds the death of cherished ideas. “Experimentalists have no religion,” she said.

    Some theorists agree. Talk of disappointment is “crazy talk,” Arkani-Hamed said. “It’s actually nature! We’re learning the answer! These 6,000 people are busting their butts and you’re pouting like a little kid because you didn’t get the lollipop you wanted?”

    See the full article here.


     
  • richardmitnick 3:21 pm on August 11, 2016 Permalink | Reply
    Tags: Deuteron, New Measurement Deepens Proton Puzzle, Quanta Magazine

    From Quanta: “New Measurement Deepens Proton Puzzle” 

    Quanta Magazine

    August 11, 2016
    Natalie Wolchover

    Researchers fired a laser at a gas of muonic deuterium in order to measure the size of its nucleus. Courtesy of Randolf Pohl

    The same group that discovered a curious discrepancy in measurements of the size of the proton, giving rise to the “proton radius puzzle,” has now found a matching discrepancy in measurements of a nuclear particle called the deuteron. The new finding, to appear on August 12 in Science, increases the slim chance that something is truly amiss, rather than simply mismeasured, in the heart of atoms.

    The puzzle is that the proton — the positively charged particle found in atomic nuclei, which is actually a fuzzy ball of quarks and gluons — is measured to be ever so slightly larger when it is orbited by an electron than when it is orbited by a muon, a sibling of the electron that’s 207 times as heavy but otherwise identical. It’s as if the proton tightens its belt in the muon’s presence. And yet, according to the reigning theory of particle physics, the proton should interact with the muon and the electron in exactly the same way. As hundreds of papers have pointed out since the proton radius puzzle was born in 2010, a shrinking of the proton in the presence of a muon would most likely signify the existence of a previously unknown fundamental force — one that acts between protons and muons, but not between protons and electrons. (Interestingly, this new physics could also explain a long-standing discrepancy in the measurement of the muon’s anomalous magnetic moment.)

    This “would, of course, be fantastic,” said Randolf Pohl of the Max Planck Institute of Quantum Optics in Garching, Germany, who led both the 2010 experiment and the new study. “But the most realistic thing is that it’s not new physics.”

    The harsh reality is that the proton radius is extremely hard to measure, making such a measurement error-prone. It’s especially tough in the typical case where a proton is orbited by an electron, as in a regular hydrogen atom. Numerous groups have attempted this measurement over many decades; their average value for the proton radius is just shy of 0.88 femtometers. But Pohl’s group, seeking greater precision, set out in 1998 to measure the proton radius in “muonic hydrogen,” since the muon’s heft makes the proton’s size easier to probe. Twelve years later, the scientists reported in Nature a value for the proton radius that was far more precise than any single previous measurement using regular hydrogen, but which, at 0.84 femtometers, fell stunningly short of the average.
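    In round numbers, the gap between the two kinds of measurements works out as follows (a sketch using the article’s rounded values; the published figures carry more digits):

```python
# Rough size of the proton-radius discrepancy.
r_electronic_fm = 0.88   # average from electron-orbited hydrogen
r_muonic_fm = 0.84       # Pohl's muonic-hydrogen result
shrinkage = 100 * (r_electronic_fm - r_muonic_fm) / r_electronic_fm
print(f"muonic value is ~{shrinkage:.1f}% smaller")  # ~4.5% smaller
```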

    The question is: Were all the measurements using regular hydrogen simply off — all accidentally too large? When I first corresponded with Pohl in 2013, the year he and his colleagues reported an updated muonic hydrogen measurement in Science, he emailed me a plot showing how, historically, measurements of physical constants have often drifted dramatically as techniques change and improve before converging on their correct values. “Quite instructive, no?” Pohl wrote. He was keeping things in perspective.

    Examples of how the measured values of constants can vary dramatically before converging on their correct values. Particle Data Group

    But he and his group were also keeping at it. Already, they had begun the study that is finally being published this week.

    This time, they measured the radius of the deuteron, the nucleus of a deuterium atom (an isotope of hydrogen) that is composed of a proton and a neutron. They measured it in muonic deuterium, in which a muon orbits a deuteron. The scientists then compared their measurement to the deuteron radius as measured in regular, electron-orbited deuterium, and asked: Is there a deuteron radius puzzle to match the proton’s?

    Their experiment probes the deuteron radius as follows: When electrons or muons orbit the deuteron in a certain energy level, they actually spend much of their time inside the deuteron, which, like a solar system, has a lot of empty space. Being inside the deuteron reduces the attraction that the electron or muon feels to it, since the deuteron’s charge pulls in different directions, partly canceling out. And so, paradoxically, the more time an electron or muon spends inside the deuteron, the less strongly bound it is, and the more easily it can jump away. The muon, because it’s so much heavier, orbits the deuteron much more tightly than the electron does, and so it is far more likely to be found inside. The reduction in the pull it feels, caused by the deuteron’s structure, is therefore much larger, which is why the muon is a more precise probe of the deuteron’s radius.
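    The standard scaling argument behind that last point, which the article does not spell out, is that for S states the lepton’s probability of sitting inside the nucleus grows as the cube of the lepton-nucleus reduced mass. A minimal estimate with textbook masses:

```python
# For S states, the finite-size energy shift scales with the lepton's
# probability of being at the nucleus, |psi(0)|^2, which grows as the
# cube of the lepton-nucleus reduced mass.
m_e, m_mu, m_d = 0.511, 105.66, 1875.6   # electron, muon, deuteron; MeV/c^2

def reduced_mass(m_lepton, m_nucleus):
    return m_lepton * m_nucleus / (m_lepton + m_nucleus)

enhancement = (reduced_mass(m_mu, m_d) / reduced_mass(m_e, m_d)) ** 3
print(f"muon/electron overlap with the deuteron: ~{enhancement:.1e}")
# -> ~7.5e+06: the muon is millions of times more sensitive to nuclear size
```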

    To actually measure that radius, the researchers fire a laser at a gas of muonic deuterium, causing muons to jump to a higher energy level that does not overlap with the nucleus. The team can pinpoint the energy required for the muon to undergo the transition, revealing how weakly bound the muon was when residing partly inside the deuteron. From this they can figure out where “inside the deuteron” begins — that is, its radius.

    When they did this, Pohl and company found that the deuteron radius is smaller when measured in muonic deuterium compared to the average value using electronic deuterium, just as with the proton radius discrepancy. The size difference scales from proton to deuteron exactly as they would expect if both effects come from a new force. “So now there are two discrepancies, and they are completely independent,” aside from being measured by the same group, Pohl said.

    Still, Pohl is highly skeptical that the puzzle is evidence of new fundamental physics.

    His personal guess is that physicists have misgauged the Rydberg constant, a factor that goes into calculating the expected differences between atomic energy levels. While it is considered one of the most accurately measured constants, a small error could account for the proton and deuteron radius puzzles.

    To test this possibility, physicists in Toronto are attempting to measure the proton radius in a way that sidesteps the Rydberg constant. Other experiments are under way to test alternative hypotheses, mundane and exciting alike. Pohl’s group is diving into muonic helium, a system in which the effects of a new force, if it exists, should be enhanced, since there are two protons. We’ll keep you posted.

    See the full article here.


     
  • richardmitnick 1:37 pm on August 4, 2016 Permalink | Reply
    Tags: Miranda Cheng, Monstrous moonshine, Quanta Magazine

    From Quanta: “Moonshine Master Toys With String Theory” 

    Quanta Magazine

    August 4, 2016
    Natalie Wolchover

    The physicist-mathematician Miranda Cheng is working to harness a mysterious connection between string theory, algebra and number theory.

    Image: Ilvy Njiokiktjien for Quanta Magazine

    After the Eyjafjallajökull volcano erupted in Iceland in 2010, flight cancellations left Miranda Cheng stranded in Paris. While waiting for the ash to clear, Cheng, then a postdoctoral researcher at Harvard University studying string theory, got to thinking about a paper that had recently been posted online. Its three coauthors had pointed out a numerical coincidence connecting far-flung mathematical objects. “That smells like another moonshine,” Cheng recalled thinking. “Could it be another moonshine?”

    She happened to have read a book about the “monstrous moonshine,” a mathematical structure that unfolded out of a similar bit of numerology: In the late 1970s, the mathematician John McKay noticed that 196,884, the first important coefficient of an object called the j-function, was the sum of one and 196,883, the first two dimensions in which a giant collection of symmetries called the monster group could be represented. By 1992, researchers had traced this farfetched (hence “moonshine”) correspondence to its unlikely source: string theory, a candidate for the fundamental theory of physics that casts elementary particles as tiny oscillating strings. The j-function describes the strings’ oscillations in a particular string theory model, and the monster group captures the symmetries of the space-time fabric that these strings inhabit.
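    Written out, the coincidence McKay noticed is the following (the j-function’s expansion coefficients are standard facts, though the article quotes only the first):

```latex
j(\tau) = q^{-1} + 744 + 196{,}884\,q + 21{,}493{,}760\,q^{2} + \cdots,
\qquad q = e^{2\pi i \tau},
```

    with 196,884 = 1 + 196,883, the sum of the dimensions of the monster group’s two smallest irreducible representations. The pattern persists: 21,493,760 = 1 + 196,883 + 21,296,876.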

    By the time of Eyjafjallajökull’s eruption, “this was ancient stuff,” Cheng said — a mathematical volcano that, as far as physicists were concerned, had gone dormant. The string theory model underlying monstrous moonshine was nothing like the particles or space-time geometry of the real world. But Cheng sensed that the new moonshine, if it was one, might be different. It involved K3 surfaces — the geometric objects that she and many other string theorists study as possible toy models of real space-time.

    By the time she flew home from Paris, Cheng had uncovered more evidence that the new moonshine existed. She and collaborators John Duncan and Jeff Harvey gradually teased out evidence of not one but 23 new moonshines: mathematical structures that connect symmetry groups on the one hand and fundamental objects in number theory called mock modular forms (a class that includes the j-function) on the other. The existence of these 23 moonshines, posited in their Umbral Moonshine Conjecture in 2012, was proved by Duncan and coworkers late last year.

    Meanwhile, Cheng, 37, is on the trail of the K3 string theory underlying the 23 moonshines — a particular version of the theory in which space-time has the geometry of a K3 surface. She and other string theorists hope to be able to use the mathematical ideas of umbral moonshine to study the properties of the K3 model in detail. This in turn could be a powerful means for understanding the physics of the real world where it can’t be probed directly — such as inside black holes. An assistant professor at the University of Amsterdam on leave from France’s National Center for Scientific Research, Cheng spoke with Quanta Magazine about the mysteries of moonshines, her hopes for string theory, and her improbable path from punk-rock high school dropout to a researcher who explores some of the most abstruse ideas in math and physics. An edited and condensed version of the conversation follows.

    Image: Ilvy Njiokiktjien for Quanta Magazine

    QUANTA MAGAZINE: You do string theory on so-called K3 surfaces. What are they, and why are they important?

    MIRANDA CHENG: String theory says there are 10 space-time dimensions. Since we only perceive four, the other six must be curled up or “compactified” too small to see, like the circumference of a very thin wire. There’s a plethora of possibilities — something like 10^500 — for how the extra dimensions might be compactified, and it’s almost impossible to say which compactification is more likely to describe reality than the rest. We can’t possibly study the physical properties of all of them. So you look for a toy model. And if you like having exact results instead of approximated results, which I like, then you often end up with a K3 compactification, which is a middle ground for compactifications between too simple and too complicated. It also captures the key properties of Calabi-Yau manifolds [the most highly studied class of compactifications] and how string theory behaves when it’s compactified on them. K3 also has the feature that you can often do direct and exact computations with it.

    What does K3 actually look like?

    You can think of a flat torus, then you fold it so that there’s a line or corner of sharp edges. Mathematicians have a way to smooth it, and the result of smoothing a folded flat torus is a K3 surface.

    So you can figure out what the physics is in this setup, with strings moving through this space-time geometry?

    Yes. In the context of my Ph.D., I explored how black holes behave in this theory. Once you have the curled-up dimensions being K3-related Calabi-Yaus, black holes can form. How do these black holes behave — especially their quantum properties?

    So you could try to solve the information paradox—the long-standing puzzle of what happens to quantum information when it falls inside a black hole.

    Absolutely. You can ask about the information paradox or properties of various types of black holes, like realistic astrophysical black holes or supersymmetric black holes that come out of string theory. Studying the second type can shed light on your realistic problems because they share the same paradox. That’s why trying to understand string theory in K3 and the black holes that arise in that compactification should also shed light on other problems. At least, that’s the hope, and I think it’s a reasonable hope.

    Do you think string theory definitely describes reality? Or is it something you study purely for its own sake?

    I personally always have the real world at the back of my mind — but really, really, really back. I use it as sort of an inspiration for determining roughly the big directions I’m going in. But my day-to-day research is not aimed at solving the real world. I see it as differences in taste and style and personal capabilities. New ideas are needed in fundamental high-energy physics, and it’s hard to say where those new ideas will come from. Understanding the basic, fundamental structures of string theory is needed and helpful. You’ve got to start somewhere where you can compute things, and that leads, often, to very mathematical corners. The payoff to understanding the real world might be really long term, but that’s necessary at this stage.

    Have you always had a knack for physics and math?

    As a child in Taiwan I was more into literature — that was my big thing. And then I got into music when I was 12 or so — pop music, rock, punk. I was always very good at math and physics, but I wasn’t really interested in it. And I always found school insufferable and was always trying to find a way around it. I tried to make a deal with the teacher that I wouldn’t need to go into the class. Or I had months of sick leave while I wasn’t sick at all. Or I skipped a year here and there. I just don’t know how to deal with authority, I guess.

    And the material was probably too easy. I skipped two years, but that didn’t help. So then they moved me to a special class and that made it even worse, because everybody was very competitive, and I just couldn’t deal with the competition at all. Eventually I was super depressed, and I decided either I would kill myself or not go to school. So I stopped going to school when I was 16, and I also left home because I was convinced that my parents would ask me to go back to school and I really didn’t want to do that. So I started working in a record shop, and by that time I also played in a band, and I loved it.

    How did you get from there to string theory?

    Long story short, I got a little bit discouraged or bored. I wanted to do something else aside from music. So I tried to go back to university, but then I had the problem that I hadn’t graduated from high school. But before I quit school I was in a special class for kids who are really good in science. I could get in the university with this. So I thought, OK, great, I’ll just get into university first by majoring in physics or math, and then I can switch to literature. So I enrolled in the physics department, having a very on- and off-again relationship to it, going to class every now and then, and then trying to study literature, while still playing in the band. Then I realized I’m not good enough in literature. And also there was a very good teacher teaching quantum mechanics. Just once I went to his class and thought, that’s actually pretty cool. I started paying a bit more attention to my studies of math and physics, and I started to find peace in it. That’s what started to attract me about math and physics, because my other life in the band playing music was more chaotic somehow. It sucks a lot of emotions out of you. You’re always working with people, and the music is too much about life, about emotions — you have to give a lot of yourself to it. Math and physics seems to have this peaceful quiet beauty. This space of serenity.

    Then at the end of university I thought, well, let me just have one more year to study physics, then I’m really done with it and can move on with my life. So I decided to go to Holland to see the world and study some physics, and I got really into it there.

    You got your master’s at Utrecht under Nobel Prize-winning physicist Gerard ’t Hooft, and then you did your Ph.D. in Amsterdam. What drew you in?

    Working with [’t Hooft] was a big factor. But just learning more is also a big factor — to realize that there are so many interesting questions. That’s the big-picture part. But for me the day-to-day part is also important. The learning process, the thinking process, really the beauty of it. Every day you encounter some equations or some way of thinking, or this fact leads to that fact — I thought, well, this is pretty. Gerard is not a string theorist — he’s very open-minded about what the correct area of quantum gravity should be — so I got exposed to a few different options. I got attracted by string theory because it’s mathematically rigorous, and pretty.

    With the work you’re doing now, aside from the beauty, are you also drawn to the mystery of these connections between seemingly different parts of math and physics?

    The mystery part connects to the bad side of my character, which is the obsessive side. That’s one of the driving forces that I would call slightly negative from the human point of view, though not the scientist point of view. But there’s also the positive driving force, which is that I really enjoy learning different stuff and feeling how ignorant I am. I enjoy that frustration, like, “I know nothing about this subject; I really want to learn!” So that’s one motivation — to be at this boundary place between math and physics. Moonshine is a puzzle that might require inspirations from everywhere and knowledge from everywhere. And the beauty, certainly — it’s a beautiful story. It’s kind of hard to say why it is beautiful. It’s beautiful not the same way as a song is beautiful or a picture is beautiful.

    What’s the difference?

    Typically a song is beautiful because it triggers certain emotions. It resonates with part of your life. Mathematical beauty is not that. It’s something much more structured. It gives you a feeling of something much more permanent, and independent of you. It makes me feel small, and I like that.

    What is a moonshine, exactly?

    A moonshine relates representations of a finite symmetry group to a function with special symmetries [ways that you can transform the function without affecting its output]. Underlying this relationship, at least in the case of monstrous moonshine, is a string theory. String theory has two geometries. One is the “worldsheet” geometry. If you have a string — essentially a circle — moving in time, then you get a cylinder. That’s what we call the worldsheet geometry; it’s the geometry of the string itself. If you roll the cylinder and connect the two ends, you get a torus. The torus gives you the symmetry of the j-function. The other geometry in string theory is space-time itself, and its symmetry gives you the monster group.
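    Concretely, the torus symmetry described here is the j-function’s invariance under the modular group; this standard formula is added for reference and is not part of Cheng’s answer:

```latex
j\!\left(\frac{a\tau + b}{c\tau + d}\right) = j(\tau),
\qquad a, b, c, d \in \mathbb{Z}, \quad ad - bc = 1,
```

    where τ encodes the shape of the torus and the integer matrices form the modular group SL(2, ℤ), the group of changes of τ that describe the same torus.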

    We don’t know yet, but these are educated guesses: To have a moonshine tells you that this theory has to have an algebraic structure [you have to be able to do algebra with its elements]. If you look at a theory and you ask what kind of particles you have at a certain energy level, this question is infinite, because you can go to higher and higher energies, and then this question goes on and on. In monstrous moonshine, this is manifested in the fact that if you look at the j-function, there are infinitely many terms that basically capture the energy of the particles. But we know there’s an algebraic structure underlying it — there’s a mechanism for how the lower energy states can be related to higher energy states. So this infinite question has a structure; it’s not just random.

    As you can imagine, having an algebraic structure helps you understand what the structure is that captures a theory — how, if you look at the lower energy states, they will tell you something about the higher energy states. And then it also gives you more tools to do computations. If you want to understand something at a high-energy level [such as inside black holes], then I have more information about it. I can compute what I want to compute for high-energy states using this low-energy data I already have in hand. That’s the hope.

    Umbral moonshine tells you that there should be a structure like this that we don’t understand yet. Understanding it more generally will force us to understand this algebraic structure. And that will lead to a much deeper understanding of the theory. That’s the hope.

    See the full article here.


     