Tagged: Quanta Magazine

  • richardmitnick 8:19 am on July 9, 2018
    Tags: Quanta Magazine

    From Quanta Magazine: “Physicists Find a Way to See the ‘Grin’ of Quantum Gravity” 

    From Quanta Magazine

    March 6, 2018
    Natalie Wolchover

    Re-released 7.8.18

    A recently proposed experiment would confirm that gravity is a quantum force.

    Two microdiamonds would be used to test the quantum nature of gravity. Olena Shmahalo/Quanta Magazine

    In 1935, when both quantum mechanics and Albert Einstein’s general theory of relativity were young, a little-known Soviet physicist named Matvei Bronstein, just 28 himself, made the first detailed study of the problem of reconciling the two in a quantum theory of gravity. This “possible theory of the world as a whole,” as Bronstein called it, would supplant Einstein’s classical description of gravity, which casts it as curves in the space-time continuum, and rewrite it in the same quantum language as the rest of physics.

    Bronstein figured out how to describe gravity in terms of quantized particles, now called gravitons, but only when the force of gravity is weak — that is (in general relativity), when the space-time fabric is so weakly curved that it can be approximated as flat. When gravity is strong, “the situation is quite different,” he wrote. “Without a deep revision of classical notions, it seems hardly possible to extend the quantum theory of gravity also to this domain.”

    His words were prophetic. Eighty-three years later, physicists are still trying to understand how space-time curvature emerges on macroscopic scales from a more fundamental, presumably quantum picture of gravity; it’s arguably the deepest question in physics.

    To Solve the Biggest Mystery in Physics, Join Two Kinds of Law, by Robbert Dijkgraaf. James O’Brien for Quanta Magazine. Reductionism breaks the world into elementary building blocks. Emergence finds the simple laws that arise out of complexity. These two complementary ways of viewing the universe come together in modern theories of quantum gravity. September 7, 2017

    Perhaps, given the chance, the whip-smart Bronstein might have helped to speed things along. Aside from quantum gravity, he contributed to astrophysics and cosmology, semiconductor theory, and quantum electrodynamics, and he also wrote several science books for children, before being caught up in Stalin’s Great Purge and executed in 1938, at the age of 31.

    The search for the full theory of quantum gravity has been stymied by the fact that gravity’s quantum properties never seem to manifest in actual experience. Physicists never get to see how Einstein’s description of the smooth space-time continuum, or Bronstein’s quantum approximation of it when it’s weakly curved, goes wrong.

    The problem is gravity’s extreme weakness. Whereas the quantized particles that convey the strong, weak and electromagnetic forces are so powerful that they tightly bind matter into atoms, and can be studied in tabletop experiments, gravitons are individually so weak that laboratories have no hope of detecting them. To detect a graviton with high probability, a particle detector would have to be so huge and massive that it would collapse into a black hole. This weakness is why it takes an astronomical accumulation of mass to gravitationally influence other massive bodies, and why we only see gravity writ large.
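
    One way to make that weakness concrete is to compare the gravitational and electrostatic forces between two protons; both fall off as the inverse square of distance, so the ratio is separation-independent. A quick check in Python:

    ```python
    # Ratio of gravitational to electrostatic force between two protons.
    # Both forces scale as 1/r^2, so the ratio is distance-independent.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    k = 8.988e9          # Coulomb constant, N m^2 C^-2
    m_p = 1.673e-27      # proton mass, kg
    e = 1.602e-19        # elementary charge, C

    ratio = (G * m_p**2) / (k * e**2)
    print(f"F_gravity / F_electric for two protons: {ratio:.1e}")   # ~8e-37
    ```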

    Not only that, but the universe appears to be governed by a kind of cosmic censorship: Regions of extreme gravity — where space-time curves so sharply that Einstein’s equations malfunction and the true, quantum nature of gravity and space-time must be revealed — always hide behind the horizons of black holes.

    Mike Zeng for Quanta Magazine. Where Gravity Is Weak and Naked Singularities Are Verboten, by Natalie Wolchover: recent calculations tie together two conjectures about gravity, potentially revealing new truths about its elusive quantum nature.

    “Even a few years ago it was a generic consensus that, most likely, it’s not even conceivably possible to measure quantization of the gravitational field in any way,” said Igor Pikovski, a theoretical physicist at Harvard University.

    Now, a pair of papers recently published in Physical Review Letters has changed the calculus.

    Spin Entanglement Witness for Quantum Gravity https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.240401
    Gravitationally Induced Entanglement between Two Massive Particles is Sufficient Evidence of Quantum Effects in Gravity https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.240402

    The papers contend that it’s possible to access quantum gravity after all — while learning nothing about it. The papers, written by Sougato Bose at University College London and nine collaborators and by Chiara Marletto and Vlatko Vedral at the University of Oxford, propose a technically challenging, but feasible, tabletop experiment that could confirm that gravity is a quantum force like all the rest, without ever detecting a graviton. Miles Blencowe, a quantum physicist at Dartmouth College who was not involved in the work, said the experiment would detect a sure sign of otherwise invisible quantum gravity — the “grin of the Cheshire cat.”

    A levitating microdiamond (green dot) in Gavin Morley’s lab at the University of Warwick, in front of the lens used to trap the diamond with light. Gavin W Morley

    The proposed experiment will determine whether two objects — Bose’s group plans to use a pair of microdiamonds — can become quantum-mechanically entangled with each other through their mutual gravitational attraction. Entanglement is a quantum phenomenon in which particles become inseparably entwined, sharing a single physical description that specifies their possible combined states. (The coexistence of different possible states, called a “superposition,” is the hallmark of quantum systems.) For example, an entangled pair of particles might exist in a superposition in which there’s a 50 percent chance that the “spin” of particle A points upward and B’s points downward, and a 50 percent chance of the reverse. There’s no telling in advance which outcome you’ll get when you measure the particles’ spin directions, but you can be sure they’ll point opposite ways.
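
    In standard notation, such a state is written as a superposition of the two anticorrelated outcomes (a textbook example of the kind of state described above, not the specific state prepared in the proposed experiment):

    $$|\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|\uparrow\rangle_A|\downarrow\rangle_B + |\downarrow\rangle_A|\uparrow\rangle_B\right)$$

    Measuring A instantly fixes the outcome for B, even though neither spin has a definite direction beforehand; no assignment of separate states to the two particles reproduces these statistics.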

    The authors argue that the two objects in their proposed experiment can become entangled with each other in this way only if the force that acts between them — in this case, gravity — is a quantum interaction, mediated by gravitons that can maintain quantum superpositions. “If you can do the experiment and you get entanglement, then according to those papers, you have to conclude that gravity is quantized,” Blencowe explained.

    To Entangle a Diamond

    Quantum gravity is so imperceptible that some researchers have questioned whether it even exists. The venerable mathematical physicist Freeman Dyson, 94, has argued since 2001 that the universe might sustain a kind of “dualistic” description, where “the gravitational field described by Einstein’s theory of general relativity is a purely classical field without any quantum behavior,” as he wrote that year in The New York Review of Books, even though all the matter within this smooth space-time continuum is quantized into particles that obey probabilistic rules.

    Dyson, who helped develop quantum electrodynamics (the theory of interactions between matter and light) and is professor emeritus at the Institute for Advanced Study in Princeton, New Jersey, where he overlapped with Einstein, disagrees with the argument that quantum gravity is needed to describe the unreachable interiors of black holes. And he wonders whether detecting the hypothetical graviton might be impossible, even in principle. In that case, he argues, quantum gravity is metaphysical, rather than physics.

    He is not the only skeptic. The renowned British physicist Sir Roger Penrose and, independently, the Hungarian researcher Lajos Diósi have hypothesized that space-time cannot maintain superpositions. They argue that its smooth, solid, fundamentally classical nature prevents it from curving in two different possible ways at once — and that its rigidity is exactly what causes superpositions of quantum systems like electrons and photons to collapse. This “gravitational decoherence,” in their view, gives rise to the single, rock-solid, classical reality experienced at macroscopic scales.

    The ability to detect the “grin” of quantum gravity would seem to refute Dyson’s argument. It would also kill the gravitational decoherence theory, by showing that gravity and space-time do maintain quantum superpositions.

    Bose’s and Marletto’s proposals appeared simultaneously mostly by chance, though experts said they reflect the zeitgeist. Experimental quantum physics labs around the world are putting ever-larger microscopic objects into quantum superpositions and streamlining protocols for testing whether two quantum systems are entangled. The proposed experiment will have to combine these procedures while requiring further improvements in scale and sensitivity; it could take a decade or more to pull it off. “But there are no physical roadblocks,” said Pikovski, who also studies how laboratory experiments might probe gravitational phenomena. “I think it’s challenging, but I don’t think it’s impossible.”

    The plan is laid out in greater detail in the paper by Bose and co-authors — an Ocean’s Eleven cast of experts for different steps of the proposal. In his lab at the University of Warwick, for instance, co-author Gavin Morley is working on step one, attempting to put a microdiamond in a quantum superposition of two locations. To do this, he’ll embed a nitrogen atom in the microdiamond, next to a vacancy in the diamond’s structure, and zap it with a microwave pulse. An electron orbiting the nitrogen-vacancy system both absorbs the light and doesn’t, and the system enters a quantum superposition of two spin directions — up and down — like a spinning top that has some probability of spinning clockwise and some chance of spinning counterclockwise. The microdiamond, laden with this superposed spin, is subjected to a magnetic field, which makes up-spins move left while down-spins go right. The diamond itself therefore splits into a superposition of two trajectories.
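
    Schematically, suppressing normalization and the details of the pulse sequence, the magnetic-field gradient converts the spin superposition into a superposition of paths:

    $$\left(|\uparrow\rangle + |\downarrow\rangle\right)|x_0\rangle \;\longrightarrow\; |\uparrow\rangle|x_{\mathrm{left}}\rangle + |\downarrow\rangle|x_{\mathrm{right}}\rangle$$

    Because the spin is now perfectly correlated with the path, later spin measurements carry information about which trajectory the diamond took.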

    In the full experiment, the researchers must do all this to two diamonds — a blue one and a red one, say — suspended next to each other inside an ultracold vacuum. When the trap holding them is switched off, the two microdiamonds, each in a superposition of two locations, fall vertically through the vacuum. As they fall, the diamonds feel each other’s gravity. But how strong is their gravitational attraction?

    If gravity is a quantum interaction, then the answer is: It depends. Each component of the blue diamond’s superposition will experience a stronger or weaker gravitational attraction to the red diamond, depending on whether the latter is in the branch of its superposition that’s closer or farther away. And the gravity felt by each component of the red diamond’s superposition similarly depends on where the blue diamond is.

    In each case, the different degrees of gravitational attraction affect the evolving components of the diamonds’ superpositions. The two diamonds become interdependent, meaning that their states can only be specified in combination — if this, then that — so that, in the end, the spin directions of their two nitrogen-vacancy systems will be correlated.
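
    The entanglement builds up because each pair of branches sits at a slightly different separation $d$ and therefore accumulates a slightly different quantum phase, $\phi = G m^2 t / (\hbar d)$ in the Newtonian approximation. A back-of-the-envelope estimate in Python, with illustrative numbers of the right order of magnitude (not the exact parameters in the papers):

    ```python
    # Back-of-the-envelope entangling phase between superposition branches.
    # All numbers are illustrative placeholders of roughly the right order,
    # not the exact values in the Bose et al. proposal.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    hbar = 1.055e-34     # reduced Planck constant, J s

    m = 1e-14            # microdiamond mass, kg (~5 x 10^11 carbon atoms)
    d_near = 200e-6      # branch separation, closest configuration, m
    d_far = 250e-6       # branch separation, farthest configuration, m
    t = 3.0              # free-fall time, s

    phi = lambda d: G * m**2 * t / (hbar * d)   # Newtonian phase for one branch pair

    # Entanglement becomes detectable when branch phases differ by ~1 radian
    print(f"phase difference: {phi(d_near) - phi(d_far):.2f} rad")   # ~0.19 rad
    ```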

    Lucy Reading-Ikkanda/Quanta Magazine

    After the microdiamonds have fallen side by side for about three seconds — enough time to become entangled by each other’s gravity — they then pass through another magnetic field that brings the branches of each superposition back together. The last step of the experiment is an “entanglement witness” protocol developed by the Dutch physicist Barbara Terhal and others: The blue and red diamonds enter separate devices that measure the spin directions of their nitrogen-vacancy systems. (Measurement causes superpositions to collapse into definite states.) The two outcomes are then compared. By running the whole experiment over and over and comparing many pairs of spin measurements, the researchers can determine whether the spins of the two quantum systems are correlated with each other more often than a known upper bound for objects that aren’t quantum-mechanically entangled. In that case, it would follow that gravity does entangle the diamonds and can sustain superpositions.
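
    The specific witness in the paper is built from the nitrogen-vacancy spin operators; as a generic illustration of the same logic, the sketch below evaluates the CHSH correlation combination, for which any non-entangled (separable) pair of spins obeys $S \le 2$, while a maximally entangled pair reaches $2\sqrt{2}$:

    ```python
    import numpy as np

    # Pauli operators and two-spin states
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])
    singlet = np.array([0., 1., -1., 0.]) / np.sqrt(2)   # maximally entangled
    product = np.array([0., 1., 0., 0.])                 # separable |up,down>

    def corr(a_op, b_op, state):
        """Expectation value of a joint spin measurement on a two-spin state."""
        return float(state.conj() @ np.kron(a_op, b_op) @ state)

    # Measurement settings that maximize the quantum CHSH value
    a0, a1 = sz, sx
    b0, b1 = (sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)

    def chsh(state):
        return abs(corr(a0, b0, state) + corr(a0, b1, state)
                   + corr(a1, b0, state) - corr(a1, b1, state))

    print(f"entangled singlet: S = {chsh(singlet):.3f}")   # ~2.828 > 2
    print(f"separable state:   S = {chsh(product):.3f}")   # <= 2
    ```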

    “What’s beautiful about the arguments is that you don’t really need to know what the quantum theory is, specifically,” Blencowe said. “All you have to say is there has to be some quantum aspect to this field that mediates the force between the two particles.”

    Technical challenges abound. The largest object that’s been put in a superposition of two locations before is an 800-atom molecule. Each microdiamond contains more than 100 billion carbon atoms — enough to muster a sufficient gravitational force. Unearthing its quantum-mechanical character will require colder temperatures, a higher vacuum and finer control. “So much of the work is getting this initial superposition up and running,” said Peter Barker, a member of the experimental team based at UCL who is improving methods for laser-cooling and trapping the microdiamonds. If it can be done with one diamond, Bose added, “then two doesn’t make much of a difference.”

    Why Gravity Is Unique

    Quantum gravity researchers do not doubt that gravity is a quantum interaction, capable of inducing entanglement. Certainly, gravity is special in some ways, and there’s much to figure out about the origin of space and time, but quantum mechanics must be involved, they say. “It doesn’t really make much sense to try to have a theory in which the rest of physics is quantum and gravity is classical,” said Daniel Harlow, a quantum gravity researcher at the Massachusetts Institute of Technology. The theoretical arguments against mixed quantum-classical models are strong (though not conclusive).

    On the other hand, theorists have been wrong before, Harlow noted: “So if you can check, why not? If that will shut up these people” — meaning people who question gravity’s quantumness — “that’s great.”

    Dyson wrote in an email, after reading the PRL papers, “The proposed experiment is certainly of great interest and worth performing with real quantum systems.” However, he said the authors’ way of thinking about quantum fields differs from his. “It is not clear to me whether [the experiment] would settle the question whether quantum gravity exists,” he wrote. “The question that I have been asking, whether a single graviton is observable, is a different question and may turn out to have a different answer.”

    In fact, the way Bose, Marletto and their co-authors think about quantized gravity derives from how Bronstein first conceived of it in 1935. (Dyson called Bronstein’s paper “a beautiful piece of work” that he had not seen before.) In particular, Bronstein showed that the weak gravity produced by a small mass can be approximated by Newton’s law of gravity. (This is the force that acts between the microdiamond superpositions.) According to Blencowe, weak quantized-gravity calculations haven’t been developed much, despite being arguably more physically relevant than the physics of black holes or the Big Bang. He hopes the new experimental proposal will spur theorists to find out whether there are any subtle corrections to the Newtonian approximation that future tabletop experiments might be able to probe.

    Leonard Susskind, a prominent quantum gravity and string theorist at Stanford University, saw value in carrying out the proposed experiment because “it provides an observation of gravity in a new range of masses and distances.” But he and other researchers emphasized that microdiamonds cannot reveal anything about the full theory of quantum gravity or space-time. He and his colleagues want to understand what happens at the center of a black hole, and at the moment of the Big Bang.

    Perhaps one clue as to why it is so much harder to quantize gravity than everything else is that other force fields in nature exhibit a feature called “locality”: The quantum particles in one region of the field (photons in the electromagnetic field, for instance) are “independent of the physical entities in some other region of space,” said Mark Van Raamsdonk, a quantum gravity theorist at the University of British Columbia. But “there’s at least a bunch of theoretical evidence that that’s not how gravity works.”

    In the best toy models of quantum gravity (which have space-time geometries that are simpler than those of the real universe), it isn’t possible to assume that the bendy space-time fabric subdivides into independent 3-D pieces, Van Raamsdonk said. Instead, modern theory suggests that the underlying, fundamental constituents of space “are organized more in a 2-D way.” The space-time fabric might be like a hologram, or a video game: “Even though the picture is three-dimensional, the information is stored in some two-dimensional computer chip,” he said. In that case, the 3-D world is illusory in the sense that different parts of it aren’t all that independent. In the video-game analogy, a handful of bits stored in the 2-D chip might encode global features of the game’s universe.

    The distinction matters when you try to construct a quantum theory of gravity. The usual approach to quantizing something is to identify its independent parts — particles, say — and then apply quantum mechanics to them. But if you don’t identify the correct constituents, you get the wrong equations. Directly quantizing 3-D space, as Bronstein did, works to some extent for weak gravity, but the method fails when space-time is highly curved.

    Witnessing the “grin” of quantum gravity would help motivate these abstract lines of reasoning, some experts said. After all, even the most sensible theoretical arguments for the existence of quantum gravity lack the gravitas of experimental facts. When Van Raamsdonk explains his research in a colloquium or conversation, he said, he usually has to start by saying that gravity needs to be reconciled with quantum mechanics because the classical space-time description fails for black holes and the Big Bang, and in thought experiments about particles colliding at unreachably high energies. “But if you could just do this simple experiment and get the result that shows you that the gravitational field was actually in a superposition,” he said, then the reason the classical description falls short would be self-evident: “Because there’s this experiment that suggests gravity is quantum.”

    Correction March 6, 2018: An earlier version of this article referred to Dartmouth University. Despite the fact that Dartmouth has multiple individual schools, including an undergraduate college as well as academic and professional graduate schools, the institution refers to itself as Dartmouth College for historical reasons.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 2:27 pm on May 20, 2018
    Tags: Quanta Magazine

    From Quanta Magazine: “A New World’s Extraordinary Orbit Points to Planet Nine” 

    From Quanta Magazine

    May 15, 2018
    Shannon Hall

    Astronomers argue that there’s an undiscovered giant planet far beyond the orbit of Neptune. A newly discovered rocky body has added evidence to the circumstantial case for it.

    Olena Shmahalo/Quanta Magazine

    In early 2016, two planetary scientists declared that a ghost planet is hiding in the depths of the solar system, well beyond the orbit of Pluto. Their claim, which they made based on the curious orbits of distant icy worlds, quickly sparked a race to find this so-called Planet Nine — a planet that is estimated to be about 10 times the mass of Earth. “It has a real magnetism to it,” said Gregory Laughlin, an astronomer at Yale University. “I mean, finding a 10-Earth-mass planet in our own solar system would be a discovery of unrivaled scientific magnitude.”

    Now, astronomers are reporting [The Astronomical Journal] that they have spotted another distant world — perhaps as large as a dwarf planet — whose orbit is so odd that it is likely to have been shepherded by Planet Nine.

    The Extreme Trans-Neptunian object orbits
    Six original and eight new trans-Neptunian object orbits, with current positions near perihelion shown in purple and the hypothetical Planet Nine orbit in green. https://en.wikipedia.org/wiki/Planet_Nine. No image credit found.

    The object confirms a specific prediction made by Konstantin Batygin and Michael Brown, the astronomers at the California Institute of Technology who first argued for Planet Nine’s existence. “It’s not proof that Planet Nine exists,” said David Gerdes, an astronomer at the University of Michigan and a co-author on the new paper. “But I would say the presence of an object like this in our solar system bolsters the case for Planet Nine.”

    Lucy Reading-Ikkanda/Quanta Magazine

    Gerdes and his colleagues spotted the new object in data from the Dark Energy Survey, a project that probes the acceleration in the expansion of the universe by surveying a region well above the plane of the solar system.

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M. Blanco 4m Telescope at Cerro Tololo, Chile, which houses DECam

    This makes it an unlikely tool for finding objects inside the solar system, since they mostly orbit within the plane. But that is exactly what makes the new object unique: Its orbit is tilted 54 degrees with respect to the plane of the solar system. It’s something Gerdes did not expect to see. Batygin and Brown, however, predicted it.

    Two years ago, Batygin and Brown made a case [The Astronomical Journal] for Planet Nine’s existence based on the peculiar orbits of a handful of distant worlds known as Kuiper belt objects.

    Kuiper Belt. Minor Planet Center

    That small population loops outward toward the same quadrant of the solar system, a phenomenon that would be extremely unlikely to happen by chance. Batygin and Brown argued that a ninth planet must be shepherding those worlds into their strange orbits.

    What’s more, Batygin and Brown also predicted that over time, Planet Nine’s gravity would push these Kuiper belt objects out of their current plane and into ever-higher orbital inclinations. Although astronomers have already spotted a bizarre population of worlds that orbit the sun perpendicularly to the plane of the solar system, they had never caught an object transitioning between the two populations. “There’s no real way to put something on an orbit like that — except that it’s exactly what we predicted from Planet Nine,” Brown said. Batygin notes that the new object fits so perfectly with their model that it almost looks like one of the data points in their simulations. “A good theory reproduces data — but a great theory predicts new data,” he said.

    The Dark Energy Survey first detected evidence for the new object in late 2014. Gerdes and his colleagues have spent the years since then tracking its orbit and trying to understand its origins. In the new paper, they describe how they ran many simulations of the object within the known solar system, letting the clock run forward and backward 4.5 billion years at a time. Nothing could explain how the object landed in such a tilted orbit. It wasn’t until they added in a ninth planet — a planet with characteristics that perfectly match Batygin and Brown’s predictions — that the wacky orbit finally made sense. “The second you put Planet Nine in the simulations, not only can you form objects like this object, but you absolutely do,” said Juliette Becker, a graduate student at Michigan and the lead author on the new paper. A strong and sustained interaction with Planet Nine appears to be the only way to pump up the object’s inclination, pushing it away from the plane of the solar system. “There is no other reasonable way to populate the Kuiper belt with such highly inclined bodies,” Batygin said. “I think the case for the existence of Planet Nine is now genuinely excellent.”
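
    A toy version of this kind of simulation — a sun, a hypothetical distant perturber, and a massless test body, integrated with a leapfrog scheme — can be sketched in a few dozen lines of Python. Every mass, orbital element, and timescale below is an illustrative placeholder; the published simulations are far more careful and run for billions of years:

    ```python
    import numpy as np

    # Toy secular experiment: sun + hypothetical distant perturber + massless
    # test body, integrated with kick-drift-kick leapfrog. Illustrative only.
    G = 4 * np.pi**2                       # G in AU^3 / (Msun yr^2)
    masses = np.array([1.0, 3e-5, 0.0])    # sun, ~10 Earth-mass planet, test body

    pos = np.array([[0., 0., 0.],
                    [450., 0., 0.],        # perturber on a distant circular orbit
                    [0., 320., 320.]])     # test body on an inclined (~45 deg) orbit
    v_p = np.sqrt(G / 450.)                # circular speed at 450 AU
    v_t = np.sqrt(G / np.linalg.norm(pos[2]))
    vel = np.array([[0., 0., 0.],
                    [0., v_p, 0.],
                    [-v_t, 0., 0.]])

    def accelerations(pos):
        acc = np.zeros_like(pos)
        for i in range(3):
            for j in range(3):
                if i != j and masses[j] > 0:
                    r = pos[j] - pos[i]
                    acc[i] += G * masses[j] * r / np.linalg.norm(r)**3
        return acc

    def inclination_deg(r, v):
        h = np.cross(r, v)                 # specific orbital angular momentum
        return np.degrees(np.arccos(h[2] / np.linalg.norm(h)))

    dt, acc = 20.0, accelerations(pos)     # 20-yr steps: coarse, fine for a sketch
    print(f"initial inclination: {inclination_deg(pos[2], vel[2]):.1f} deg")
    for _ in range(50_000):                # ~1 Myr of evolution
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos)
        vel += 0.5 * dt * acc
    print(f"final inclination:   {inclination_deg(pos[2], vel[2]):.1f} deg")
    ```

    Tracking the test body's inclination over such an integration shows how a distant perturber's tugs can slowly reorient an orbit — the secular effect invoked for 2015 BP519 — though the real effect accumulates over far longer timescales than this sketch covers.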

    Other astronomers aren’t so certain — in part because the early solar system remains a mystery. Scientists suspect that the sun was born within a cluster of stars, meaning that the early planets might have had many close encounters with other stars that sent them on paths that seem impossible today. And even once the stars dispersed, the early solar system likely contained tens of thousands of dwarf planets that could have provided the gravitational nudges needed to push 2015 BP519, as the new object is called, into such an odd orbit. “To me, Planet Nine is one of a number of ways that the solar system could have unfolded,” said Michele Bannister, an astronomer at Queen’s University Belfast who was not involved in the study. “It’s a potential idea.” But at the moment it is just that — an idea.

    Yet when astronomers examine the larger universe, the idea doesn’t seem all that surprising. Planets between two and 10 times the mass of Earth are incredibly common throughout the galaxy, which makes it odd that our solar system doesn’t harbor one. “If it wasn’t in our own solar system — if the stakes weren’t so high — I think that the hypothesis would almost certainly be correct,” Laughlin said. “It’s only the fact that it’s so amazing that tends to give me pause.” Finding a ninth planet within our solar system would be both transformative and extraordinarily inspiring, he said. “It would be this dramatic confirmation of the scientific method, which would be pretty refreshing in the current age where the truth is on trial.”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 4:30 pm on March 22, 2018
    Tags: Quanta Magazine, Squishy or Solid? A Neutron Star’s Insides Open to Debate

    From Quanta Magazine: “Squishy or Solid? A Neutron Star’s Insides Open to Debate” 

    From Quanta Magazine

    October 30, 2017 [Just now in social media]
    Joshua Sokol

    The core of a neutron star is such an extreme environment that physicists can’t agree on what happens inside. But a new space-based experiment — and a few more colliding neutron stars — should reveal whether neutrons themselves break down.

    Maciej Rebisz for Quanta Magazine

    The alerts started in the early morning of Aug. 17. Gravitational waves produced by the wreck of two neutron stars — dense cores of dead stars — had washed over Earth. The thousand-plus physicists of the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) rushed to decode the space-time vibrations that rolled across the detectors like a drawn-out peal of thunder. Thousands of astronomers scrambled to witness the afterglow. But officially, all this activity was kept secret. The data had to be collected and analyzed, the papers written. The outside world wouldn’t know for two more months.

    See https://sciencesprings.wordpress.com/2017/10/20/from-ucsc-neutron-stars-gravitational-waves-and-all-the-gold-in-the-universe/

    The strict ban put Jocelyn Read and Katerina Chatziioannou, two members of the LIGO collaboration, in a bit of an awkward situation. In the afternoon on the 17th, the two were scheduled to lead a panel at a conference dedicated to the question of what happens under the almost unfathomable conditions in a neutron star’s interior. Their panel’s topic? What a neutron-star merger would look like. “We sort of went off at the coffee break and sat around just staring at each other,” said Read, a professor at California State University, Fullerton. “OK, how are we going to do this?”

    Physicists have spent decades debating whether or not neutron stars contain new forms of matter, created when the stars break down the familiar world of protons and neutrons into new interactions between quarks or other exotic particles. Answering this question would also illuminate astronomical mysteries surrounding supernovas and the production of the universe’s heavy elements, such as gold.

    In addition to watching for collisions using LIGO, astrophysicists have been busy developing creative ways to probe neutron stars from the outside. The challenge is then to infer something about the hidden layers within. But this LIGO signal and those like it — emitted as two neutron stars pirouette around their center of mass, pull on each other like taffy, and finally smash together — offers a whole new handle on the problem.

    Strange Matter

    A neutron star is the compressed core of a massive star — the super dense cinders left over after a supernova. It has the mass of the sun, but squeezed into a space the width of a city. As such, neutron stars are the densest reservoirs of matter in the universe — the “last stuff on the line before a black hole,” said Mark Alford, a physicist at Washington University in St. Louis.

    To drill into one would bring us to the edge of modern physics. A centimeter or two of normal atoms — iron and silicon, mostly — encrusts the surface like the shiny red veneer on the universe’s densest Gobstopper. Then the atoms squeeze so close together that they lose their electrons, which fall into a shared sea. Deeper, the protons inside nuclei start turning into neutrons, which cluster so close together that they start to overlap.

    Lucy Reading-Ikkanda/Quanta Magazine; Source: Feryal Özel

    But theorists argue about what happens farther in, when densities creep past two or three times the density of a normal atomic nucleus. From the perspective of nuclear physics, neutron stars could just be protons and neutrons — collectively called nucleons — all the way in. “Everything can be explained by variations of nucleons,” said James Lattimer, an astrophysicist at Stony Brook University.

    Other astrophysicists suspect otherwise. Nucleons aren’t elementary particles. They’re made up of three quarks. Under immense pressure, these quarks might form a new state of quark matter. “Nucleons are not billiard balls,” said David Blaschke, a physicist at the University of Wroclaw in Poland. “They are like cherries. So you can compress them a little bit, but at some point you smash them.”

    But to some, the prospect of a quark jam like this is a relatively vanilla scenario. Theorists have long speculated that layers of other weird particles might arise inside a neutron star. As neutrons are jostled closer together, all that extra energy might go into creating heavier particles that contain not just the “up” and “down” quarks that exclusively make up protons and neutrons, but heavier and more exotic “strange” quarks.

    For example, neutrons might be replaced by hyperons, three-quark particles that include at least one strange quark. Laboratory experiments can make hyperons, but they vanish almost immediately. Deep inside neutron stars, they might be stable for millions of years.

    Alternatively, the hidden depths of neutron stars might be filled with kaons — also made with strange quarks — that collect into a single lump of matter sharing the same quantum state.

    For decades, though, the field has been stuck. Theorists invent ideas about what might be going on inside neutron stars, but that environment is so extreme and unfamiliar that experiments here on Earth can’t reach the right conditions. At Brookhaven National Laboratory and CERN, for example, physicists smash together heavy nuclei like those of gold and lead.

    That creates a soupy state of matter made up of released quarks, known as a quark-gluon plasma. But this stuff is rarefied, not dense, and at billions or trillions of degrees, it’s far hotter than the inside of a neutron star, which sits in the comparatively chilly millions.

    Quark gluon plasma. Duke University

    Even the decades-old theory of quarks and nuclei — “quantum chromodynamics,” or QCD — can’t really provide answers. The computations needed to study QCD in relatively cold, dense environments are so devastatingly difficult that not even computers can calculate the results. Researchers are forced to resort to oversimplification and shortcuts.

    The only other option is for astronomers to study neutron stars themselves. Unfortunately, neutron stars are distant, thus dim, and difficult to measure for anything but the very basic bulk properties. Even worse, the truly interesting physics is happening under the surface. “It’s a bit like there’s this lab that’s doing amazing things,” Alford said, “but all you’re allowed to do is see the light coming out of the window.”

    With a new generation of experiments coming online, though, theorists might soon get their best look yet.

    The NICER instrument, shown here before it was launched to the International Space Station, monitors the X-ray emissions of neutron stars. NASA/Goddard/Keith Gendreau

    Squishy or Hard?

    Whatever might be inside the core of a neutron star — loose quarks, or kaon condensates, or hyperons, or just regular old nucleons — the material must be able to hold up to the crushing weight of more than a sun’s worth of gravity. Otherwise, the star would collapse into a black hole. But different materials will compress to different degrees when squeezed by gravity’s vise, determining how heavy the star can be at a given physical size.
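
    This relationship is made quantitative by the Tolman-Oppenheimer-Volkoff (TOV) equations of relativistic stellar structure: pick an equation of state, integrate outward from an assumed central density until the pressure drops to zero, and read off one mass-radius pair. A minimal sketch with a simple polytropic equation of state standing in for the unknown nuclear physics (the constants are illustrative, tuned only to land in the right ballpark):

    ```python
    import numpy as np

    # Integrate the TOV equations outward from the center for a polytropic
    # equation of state P = K * rho^Gamma -- a crude stand-in for the unknown
    # nuclear equation of state. SI units; Euler stepping for simplicity.
    G, c = 6.674e-11, 2.998e8
    Gamma, K = 2.0, 1.0e-2          # polytrope exponent and constant (illustrative)

    rho_c = 1.0e18                  # central density, kg/m^3 (~a few times nuclear)
    Pc = K * rho_c**Gamma
    dr = 1.0                        # radial step, m
    r, P, m = dr, Pc, 0.0

    while P > 1e-10 * Pc:           # march outward until the pressure vanishes
        rho = (P / K) ** (1.0 / Gamma)
        dPdr = -G * (rho + P / c**2) * (m + 4 * np.pi * r**3 * P / c**2) \
               / (r * (r - 2 * G * m / c**2))
        dmdr = 4 * np.pi * r**2 * rho
        P, m, r = P + dr * dPdr, m + dr * dmdr, r + dr

    print(f"R = {r/1e3:.1f} km,  M = {m/1.989e30:.2f} Msun")   # one M-R point
    ```

    Sweeping the central density traces out the mass-radius curve for that equation of state; a stiffer equation of state gives larger radii and higher maximum masses, which is exactly what the observations described below try to discriminate.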

    Stuck on the outside, astronomers work backwards to figure out what neutron stars are made of. For this purpose, it helps to know how squishy or stiff they are when squeezed. And for that, astronomers need to measure the masses and radii of various neutron stars.

    In terms of mass, the most easily weighed neutron stars are pulsars: neutron stars that rotate quickly, sweeping a radio beam across Earth with each spin. About 10 percent of the 2,500 known pulsars belong to binary systems. As these pulsars move with their partners, what should be a constant tick-tock of pulses hitting Earth will vary, betraying the pulsar’s motion and its location in its orbit. And from the orbit, astronomers can use Kepler’s laws and the additional rules imposed by Einstein’s general relativity to solve for the masses of the pair.
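
    In the simplest Newtonian version of that bookkeeping, the orbital period and the pulsar's line-of-sight velocity amplitude combine into the binary "mass function", which already sets a hard lower bound on the companion's mass. The numbers below are illustrative; real pulsar timing goes further, using relativistic effects to pin down both masses:

    ```python
    import numpy as np

    # Binary mass function: f = P * v1^3 / (2*pi*G) = (m2*sin i)^3 / (m1+m2)^2.
    # Since sin(i) <= 1, f is a hard lower bound on the companion mass m2.
    G, Msun = 6.674e-11, 1.989e30

    P = 8.7 * 3600        # orbital period, s (illustrative)
    v1 = 280e3            # pulsar line-of-sight velocity amplitude, m/s (illustrative)

    f = P * v1**3 / (2 * np.pi * G) / Msun
    print(f"mass function: {f:.2f} Msun  (companion mass must exceed this)")
    ```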

    So far, the biggest breakthrough has been the discovery of surprisingly hefty neutron stars. In 2010, a team led by Scott Ransom at the National Radio Astronomy Observatory in Virginia announced that they had measured a pulsar weighing about two solar masses — making it far bigger than any previously seen. Some people doubted whether such a neutron star could exist; that it does has had immense consequences for our understanding of how nuclei behave. “Now it’s like the most cited observational pulsar paper ever, because of the nuclear physicists,” Ransom said.

    According to some neutron-star models, which hold that gravity should strongly compress neutron stars, an object at that mass should collapse all the way into a black hole. That would be bad news for kaon condensates, which would be especially squishy, and it bodes poorly for some versions of quark matter and hyperons that would also compress too much. The measurement has been confirmed with the discovery of another neutron star of two solar masses in 2013.

    Radii are trickier. Astrophysicists like Feryal Özel at the University of Arizona have devised various tricks to calculate the physical size of neutron stars by observing the X-rays emitted at their surfaces. Here’s one way: You can look at the overall X-ray emission, use it to estimate the temperature of the surface, and then figure out how big the neutron star needs to be to emit the observed light (correcting for how the light bends through space-time warped by gravity). Or you can look for hot spots on the neutron star’s surface that spin in and out of view. The neutron star’s strong gravitational field will modify the pulses of light from these hot spots. And once you understand the star’s gravitational field, you can reconstruct its mass and radius.
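
    The first method is essentially blackbody fitting. For a uniformly emitting sphere of temperature $T$ at distance $D$, the observed flux is $F = \sigma T^4 (R/D)^2$, which inverts to give the radius. A deliberately simplified sketch with illustrative numbers, omitting the gravitational-redshift and light-bending corrections that real analyses must apply:

    ```python
    import numpy as np

    # Blackbody radius: F = sigma * T^4 * (R/D)^2  =>  R = D * sqrt(F/(sigma*T^4))
    sigma = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    kpc = 3.086e19        # meters per kiloparsec

    T = 1.0e6             # surface temperature from the X-ray spectrum, K
    F = 1.0e-15           # observed bolometric flux, W m^-2
    D = 3.0 * kpc         # distance to the neutron star, m

    R = D * np.sqrt(F / (sigma * T**4))
    print(f"inferred radius: {R/1e3:.1f} km")   # ~12 km for these inputs
    ```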

    Taken at face value, these X-ray measurements suggest that even though neutron stars can be heavy, they are on the small end of predictions: only about 20 to 22 kilometers wide, according to Özel.

    Accepting that neutron stars are both small and massive “kind of locks you in, in a good way,” Özel said. Neutron stars stuffed with interacting quarks would look like this, she said, while neutron stars made up of only nucleons would have larger radii.

    But Lattimer, among other critics, has reservations about the assumptions that go into the X-ray measurements, which he calls flawed. He thinks they make the radii look smaller than they really are.

    Both sides expect that a resolution to the dispute will soon arrive. This past June, SpaceX’s 11th resupply mission to the International Space Station brought with it a 372-kilogram box containing an X-ray telescope called the Neutron Star Interior Composition Explorer (NICER).

    NICER before launch.

    Now taking data, NICER is designed to find the size of neutron stars by watching for hot spots on their surfaces. The experiment should produce better radii measurements of neutron stars, including pulsars that have already had their masses measured.

    “We look so much forward to it,” Blaschke said. A well-measured mass and radius for even a single neutron star would knock out many possible theories of their interior structure, keeping in play only the ones that could produce that particular combination of size and weight.

    And now, finally chiming in, there’s LIGO.


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    Skymap showing how adding Virgo to LIGO helps shrink the likely source region on the sky. Credit: Giuseppe Greco (Virgo Urbino group)

    As a first pass, the signal that Read huddled over coffee to discuss on Aug. 17 had been processed as if it were a merger of two black holes, not two neutron stars. This wasn’t unreasonable. LIGO’s previous signals had all come from black holes, which are more tractable beasts from a computational standpoint. But this signal involved lighter objects and went on for much longer than the black hole mergers. “It’s immediately obvious that this was not the same kind of system that we were practiced on,” Read said.

    When two black holes spiral together, they bleed orbital energy into space-time as gravitational waves. But in the final second or so of the new 90-second-long LIGO signal, each object did something black holes don’t do: It deformed. The pair started to stretch and squeeze each other’s matter, generating tides that stole energy from their orbits. This drove them to collide faster than they would have otherwise.

    After a frantic few months of running computer simulations, Read’s group inside LIGO has released their first measurement of the effect of those tides on the signal. So far, the team can set only an upper limit — meaning the tides have a weak or even unnoticeable effect. In turn, that means that neutron stars are physically small, with their matter held very tightly around their centers and thus more resistant to getting yanked by tides. “I think the first gravitational-wave measurement is in a sense really kind of confirming the kinds of things that X-ray observations have been saying,” Read said. But this isn’t the last word. She expects that more sophisticated modeling of the same signal will yield a more precise estimate.
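
    The tidal imprint LIGO constrains is usually expressed as the dimensionless tidal deformability $\Lambda = \tfrac{2}{3}\,k_2\,(c^2 R / G m)^5$, where $k_2$ is the tidal Love number. The steep $R^5$ scaling is why a weak tidal signature implies a small star — a quick comparison with representative values (not numbers from the LIGO analysis):

    ```python
    G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

    def tidal_deformability(m_msun, R_km, k2=0.1):
        """Lambda = (2/3) * k2 / C^5, with compactness C = G*m / (R*c^2)."""
        C = G * m_msun * Msun / (c**2 * R_km * 1e3)
        return (2.0 / 3.0) * k2 / C**5

    # The R^5 scaling: ~20% in radius is roughly a factor 2.3 in Lambda
    for R in (11.0, 13.0):
        print(f"R = {R:.0f} km:  Lambda ~ {tidal_deformability(1.4, R):.0f}")
    ```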

    With NICER and LIGO both offering new ways to look at neutron-star stuff, many experts are optimistic that the next few years will provide unambiguous answers to the question of how the material stands up to gravity. But theorists like Alford caution that measuring neutron-star matter’s squishiness alone won’t fully reveal what it is.

    Perhaps other signatures can say more. Ongoing observations of the rate at which neutron stars cool, for example, should let astrophysicists speculate about the particles inside them and their ability to radiate away energy. Or observations of how their spins slow over time could help determine the viscosity of their insides.

    Ultimately, just knowing when dense matter changes phase and what it changes into is a worthy goal, Alford argues. “Mapping the properties of matter under different conditions,” he said, “kind of is physics.”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 2:10 pm on March 21, 2018
    Tags: Quanta Magazine

    From Quanta Magazine: “Science’s Path From Myth to Multiverse” 

    From Quanta Magazine

    In his latest book, the Nobel Prize winner Steven Weinberg explores how science made the modern world, and where it might take us from here.

    March 17, 2015 [Just found this in social media.]
    Dan Falk

    Steven Weinberg, U Texas

    Steven Weinberg, a physicist at the University of Texas, Austin, won a Nobel Prize in 1979 for work that became a cornerstone of particle physics.

    We can think of the history of physics as an attempt to unify the world around us: Gradually, over many centuries, we’ve come to see that seemingly unrelated phenomena are intimately connected. The physicist Steven Weinberg of the University of Texas, Austin, received his Nobel Prize in 1979 for a major breakthrough in that quest — showing how electromagnetism and the weak nuclear force are manifestations of the same underlying theory (he shared the prize with Abdus Salam and Sheldon Glashow). That work became a cornerstone of the Standard Model of particle physics, which describes how the fundamental building blocks of the universe come together to create the world we see.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.


    Standard Model of Particle Physics from Symmetry Magazine

    In his new book To Explain the World: The Discovery of Modern Science, Weinberg examines how modern science was born.


    By tracing the development of what we now call the “scientific method” — an approach, developed over centuries, that emphasizes experiments and observations rather than reasoning from first principles — he makes the argument that science, unlike other ways of interpreting the world around us, can offer true progress. Through science, our understanding of the world improves over time, building on what has come before. Mistakes can happen, but are eventually corrected. Weinberg spoke with Quanta Magazine about the past and future of physics, the role of philosophy within science, and the startling possibility that the universe we see around us is a tiny sliver of a much larger multiverse. An edited and condensed version of the interview follows.

    QUANTA MAGAZINE: As a physicist, how is your perspective on the history of science different from that of a historian?

    STEVEN WEINBERG: One difference, of course, is that they know more than I do — at least, in their particular field of specialization. Real historians have a much better grasp of the original sources than I could possibly have. If they’re historians of the ancient world, they’ll be experts in Greek and Latin, which I’m not even remotely knowledgeable about.

    But there’s also a difference in attitude. Many historians are strongly opposed to the so-called “Whig interpretation” of history, in which you look at the past and try to pick out the threads that lead to the present. They feel it’s much more important to get into the frame of mind of the people who lived at the time you’re writing about. And they have a point. But I would argue that, when it comes to the history of science, a Whig interpretation is much more justifiable. The reason is that science, unlike, say, politics or religion, is a cumulative branch of knowledge. You can say, not merely as a matter of taste, but with sober judgment, that Newton knew more about the world than Aristotle did, and Einstein knew more than Newton did. There really has been progress. And to trace that progress, it makes sense to look at the science of the past and try to pick out modes of thought that either led to progress, or impeded progress.

    Why did you focus on the history of physics and astronomy?

    Well, that’s what I know about; that’s where I have some competence. But there’s another reason: It’s in physics and astronomy that science first became “modern.” Actually, it’s physics as applied to astronomy. Newton gave us the modern approach to physics in the late 17th century. Other branches of science became modern only more recently: chemistry in the early 19th century; biology in the mid-19th century, or perhaps the early 20th century. So if you want to understand the discovery of modern science — which is the subtitle of my book — that discovery was made in the context of physics, especially as applied to astronomy.

    Theoretical physics is often seen as a quest for unification — we think of Newton, unifying terrestrial and celestial physics, or James Clerk Maxwell, unifying electricity, magnetism, and light. And of course your own work. Where does this quest for unification stand today?

    It hasn’t advanced very much, except for the fact that the theories we speculated about in the 1960s have been confirmed by observation. In the theory I developed in 1967 — Abdus Salam developed essentially the same theory, independently, in 1968 — a symmetry-breaking field played a fundamental role, manifest in a particle called the Higgs boson, whose properties we predicted, except for its mass. Now, thanks to experiments performed at CERN, the Higgs has been verified.

    CERN CMS Higgs Event


    CERN ATLAS Higgs Event

    LHC

    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles

    So we’re on much more solid ground. But we haven’t gone any further. There have been enormous efforts to take further steps, especially in the context of string theory. String theory would unify all of the forces — the strong and weak nuclear forces, and the electromagnetic force, together with gravity. String theory has provided some deep mathematical ideas about how that might work. But we’re far from being able to verify the theory — much further than we were from verifying the electroweak theory 40 years ago.

    The Large Hadron Collider (LHC) is scheduled to start up again this year [2015], with twice the power it had during its initial run. What do you hope it’ll find — I’m not sure if “hope” is the right word — when it’s turned on?

    “The Standard Model is so complex that it would be hard to put it on a T-shirt.”

    Hope is exactly the right word! It depends on what new particles might have masses in the range that the LHC can probe. There are certainly things to look for. The most obvious thing is the dark-matter particle. We know from astronomy that five-sixths of the matter in the universe is something that doesn’t fit in the Standard Model of particle physics. But we have no idea what its mass is. Astronomers can tell us the total mass of this dark matter, but not the mass carried by each particle. If it’s a conventional dark-matter particle, known as a WIMP — “weakly interacting massive particle” — then the LHC might find it. It depends on how heavy it is, and on how it decays, because you never see the particle itself, you only see the products of its decay.

    The LHC might also find signs of supersymmetry, a theory positing that known particles each have a partner particle — but again, we don’t know what the mass of those partner particles would be. And here, there’s an even deeper uncertainty: We don’t know if supersymmetry has anything to do with the real world. There could also be heavier quarks, perhaps even heavier versions of the Higgs particle.

    It’s sometimes said that supersymmetry would be a kind of thumbs-up for string theory, which has been impossible to test in any direct way. If the LHC finds no evidence for supersymmetry, what happens to string theory?

    Standard model of Supersymmetry DESY

    Damned if I know! Unfortunately, string theory doesn’t make very specific predictions about physics at the energies that are accessible to us. The kind of energies of the structures that string theory deals with are so high, we’ll probably never be able to reproduce them in the lab. But those energies were common in the very early universe. So by making cosmological observations, we may get a handle on the physics of those incredibly high energies. For example, if the matter-energy density at the time of inflation was of the order of magnitude that is characteristic of string theory, then a great deal of gravitational radiation would have been produced at that time, and it would have left an imprint on the cosmic microwave background. Last year, scientists working with the BICEP2 telescope announced that they had found these gravitational waves; now it seems they were actually measuring interstellar dust. Further observations with the Planck satellite may be able to settle this question. I think that’s one of the most exciting things going on in all of physical science right now.

    BICEP 2

    Gravitational-wave background claim from BICEP2, which ultimately proved incorrect; the Planck team determined that the culprit was cosmic dust.

    For theorists, is the ultimate goal a set of equations we could put on a T-shirt?

    That’s the aim. The Standard Model is so complex that it would be hard to put it on a T-shirt — though not impossible; you’d just have to write kind of small. Now, it wouldn’t take gravity into account, so it wouldn’t be a “theory of everything.” But it would be a theory of all the other things we study in our physics laboratories. The Standard Model is sufficiently complicated, and has so many arbitrary features, that we know it’s not the final answer. The goal would be to have a much simpler theory with fewer arbitrary features — maybe even none at all — that would fit on a T-shirt. We’re not there yet.

    Some physicists suggest that we may have to settle for an array of different theories, perhaps representing different solutions to string theory’s equations. Maybe each solution represents a different universe — part of some larger “multiverse.”

    I am not a proponent of the idea that our Big Bang universe is just part of a larger multiverse. It has to be taken seriously as a possibility, though. And it does lead to interesting consequences. For example, it would explain why some constants of nature, particularly the dark energy, have values that seem to be very favorable to the appearance of life.

    Dark energy depiction. Image: Volker Springel/Max Planck Institute for Astrophysics

    Suppose you have a multiverse in which constants like dark energy vary from one big bang to another. Then, if you ask why it takes the value it does in our Big Bang, you have to take into account that there’s a selection effect: It’s only in big bangs where the dark energy takes a value favorable to the appearance of life that there’s anybody around to ask the question.

    “You don’t have to verify every prediction to know that a theory is correct.”

    This is very closely analogous to a question that astronomers have discussed for thousands of years, concerning the Earth and the sun. Why is the sun the distance that it is from us? If it were closer, the Earth would be too hot to harbor life; if it were further away, the Earth would be too cold. Why is it at just the right distance? Most people, like Galen, the Roman physician, thought that it was due to the benevolence of the gods, that it was all arranged for our benefit. A much better answer — the answer we would give today — is that there are billions of planets in our galaxy, and billions of galaxies in the universe. And it’s not surprising that a few of them, out of all those billions, are positioned in a way that’s favorable for life.

    But at least we can see some of those other planets. That’s not the case with the universes that are said to make up the multiverse.

    It’s not part of the requirement of a successful physical theory that everything it describes be observable, or that all possible predictions of the theory be verifiable. For example, we have a very successful theory of the strong nuclear forces, called quantum chromodynamics [QCD], which is based on the idea that quarks are bound together by forces that increase with distance, so that we will never, even in principle, be able to observe a quark in isolation. All we can observe are other successful predictions of QCD. We can’t actually detect quarks, but it doesn’t matter; we know QCD is correct, because it makes predictions that we can verify.

    Similarly, string theory, which predicts a multiverse, can’t be verified by detecting the other parts of the multiverse. But it might make other predictions that can be verified. For example, it may say that in all of the big bangs within the multiverse, certain things will always be true, and those things may be verifiable. It may say that certain symmetries will always be observed, or that they’ll always be broken according to a certain pattern that we can observe. If it made enough predictions like that, then we would say that string theory is correct. And if the theory predicted a multiverse, then we’d say that that’s correct too. You don’t have to verify every prediction to know that a theory is correct.

    When we talk about the multiverse, it seems as though physics is brushing up against philosophy. A number of physicists, including Stephen Hawking and Lawrence Krauss, have angered philosophers by describing philosophy as useless. In your new book, it sounds as if you agree with them. Is that right?

    I think academic philosophy is helpful only in a negative sense — that is, sometimes physicists get impressed with philosophical ideas, so that it can be helpful to hear from experts that those ideas have been challenged within the philosophical community. One example is positivism, which decrees that you should only talk about things that are directly detectable or observable. I think philosophers themselves have challenged that, and it’s good to know that.

    On the other hand, a kind of philosophical discussion does go on among physicists themselves. For example, the discussion we were having earlier about the multiverse raised the issue of what we expect from a scientific theory — when do we reject it as being outside of science; when do we accept it as being confirmed. Those are meta-scientific questions; they’re philosophical questions. The scientists never seem to reach an agreement about those things — like in the case of the multiverse — but then, neither do the professional philosophers.

    And sometimes, as with the example of positivism, the work of professional philosophers actually stands in the way of progress. That’s also the case with the approach known as constructivism — the idea that every society’s scientific theories are a social construct, like its political institutions, and have to be understood as coming out of a particular cultural milieu. I don’t know whether you’d call it a philosophical theory or a historical theory, but at any rate, I think that view is wrong, and I also think it could impede the work of science, because it takes away one of science’s great motivations, which is to discover something that, in an absolute sense, divorced from any cultural milieu, is actually true.

    You’re 81. Many people would be thinking about retirement, but you’re very active. What are you working on now?

    There’s something I’ve been working on for more than a year — maybe it’s just an old man’s obsession, but I’m trying to find an approach to quantum mechanics that makes more sense than existing approaches. I’ve just finished editing the second edition of my book, Lectures on Quantum Mechanics, in which I think I strengthen the argument that none of the existing interpretations of quantum mechanics are entirely satisfactory.

    I don’t intend to retire, because I enjoy doing what I’m doing. I enjoy teaching; I enjoy following research; and I enjoy doing a little research on my own. The year before last, before I got onto this quantum mechanics kick, I was writing papers about down-to-earth problems in elementary particle theory; I was also working on cosmology. I hope I go back to that.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

     
  • richardmitnick 7:14 am on March 20, 2018 Permalink | Reply
    Tags: Albert Einstein’s general theory of relativity, , , , Cosmological-constant problem, , , In 1998 astronomers discovered that the expansion of the cosmos is in fact gradually accelerating, , Quanta Magazine, , , Saul Perlmutter UC Berkeley Nobel laureate, , Why Does the Universe Need to Be So Empty?, Zero-point energy of the field   

    From The Atlantic Magazine and Quanta: “Why Does the Universe Need to Be So Empty?” 

    Quanta Magazine
    Quanta Magazine

    Atlantic Magazine

    The Atlantic Magazine

    Mar 19, 2018
    Natalie Wolchover

    Physicists have long grappled with the perplexingly small weight of empty space.

    The controversial idea that our universe is just a random bubble in an endless, frothing multiverse arises logically from nature’s most innocuous-seeming feature: empty space. Specifically, the seed of the multiverse hypothesis is the inexplicably tiny amount of energy infused in empty space—energy known as the vacuum energy, dark energy, or the cosmological constant. Each cubic meter of empty space contains only enough of this energy to light a light bulb for 11 trillionths of a second. “The bone in our throat,” as the Nobel laureate Steven Weinberg once put it [http://hetdex.org/dark_energy.html], is that the vacuum ought to be at least a trillion trillion trillion trillion trillion times more energetic, because of all the matter and force fields coursing through it.
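
    As a quick sanity check on that light-bulb figure, here is the arithmetic in Python (a sketch; the 60-watt bulb and the approximate dark-energy density of 6 × 10⁻¹⁰ joules per cubic meter are assumptions, not values from the article):

        # Rough check of the light-bulb claim; both numbers are approximate.
        DARK_ENERGY_DENSITY = 6e-10   # joules per cubic meter of empty space
        BULB_POWER = 60.0             # watts, i.e. joules per second

        seconds = DARK_ENERGY_DENSITY / BULB_POWER
        print(f"{seconds:.0e} s")     # ~1e-11 s, roughly ten trillionths of a second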

    Somehow the effects of all these fields on the vacuum almost equalize, producing placid stillness. Why is empty space so empty?

    While we don’t know the answer to this question—the infamous “cosmological-constant problem”—the extreme vacuity of our vacuum appears necessary for our existence. In a universe imbued with even slightly more of this gravitationally repulsive energy, space would expand too quickly for structures like galaxies, planets, or people to form. This fine-tuned situation suggests that there might be a huge number of universes, all with different doses of vacuum energy, and that we happen to inhabit an extraordinarily low-energy universe because we couldn’t possibly find ourselves anywhere else.

    Some scientists bristle at the tautology of “anthropic reasoning” and dislike the multiverse for being untestable. Even those open to the multiverse idea would love to have alternative solutions to the cosmological constant problem to explore. But so far it has proved nearly impossible to solve without a multiverse. “The problem of dark energy [is] so thorny, so difficult, that people have not got one or two solutions,” says Raman Sundrum, a theoretical physicist at the University of Maryland.

    To understand why, consider what the vacuum energy actually is. Albert Einstein’s general theory of relativity says that matter and energy tell space-time how to curve, and space-time curvature tells matter and energy how to move. An automatic feature of the equations is that space-time can possess its own energy—the constant amount that remains when nothing else is there, which Einstein dubbed the cosmological constant. For decades, cosmologists assumed its value was exactly zero, given the universe’s reasonably steady rate of expansion, and they wondered why. But then, in 1998, astronomers discovered that the expansion of the cosmos is in fact gradually accelerating, implying the presence of a repulsive energy permeating space. Dubbed dark energy by the astronomers, it’s almost certainly equivalent to Einstein’s cosmological constant. Its presence causes the cosmos to expand ever more quickly, since, as it expands, new space forms, and the total amount of repulsive energy in the cosmos increases.
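
    In the textbook presentation (sketched here for concreteness, in LaTeX notation; the article itself gives no equations), the field equations with Einstein's extra term read

        G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}

    where the left side encodes the curvature of space-time, the right side its matter and energy content, and the cosmological constant \Lambda survives even when T_{\mu\nu} = 0.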

    However, the inferred density of this vacuum energy contradicts what quantum-field theory, the language of particle physics, has to say about empty space. A quantum field is empty when there are no particle excitations rippling through it. But because of the uncertainty principle in quantum physics, the state of a quantum field is never certain, so its energy can never be exactly zero. Think of a quantum field as consisting of little springs at each point in space. The springs are always wiggling, because they’re only ever within some uncertain range of their most relaxed length. They’re always a bit too compressed or stretched, and therefore always in motion, possessing energy. This is called the zero-point energy of the field. Force fields have positive zero-point energies while matter fields have negative ones, and these energies add to and subtract from the total energy of the vacuum.
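
    The "little springs" are quantum harmonic oscillators, whose textbook energy levels (quoted here as a sketch) never reach zero:

        E_n = \hbar\omega\,\left(n + \tfrac{1}{2}\right), \qquad E_0 = \tfrac{1}{2}\hbar\omega > 0

    Summing one such half-quantum over every mode of every field, with a plus sign for force fields and a minus sign for matter fields, yields the naive vacuum energy described above.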

    The total vacuum energy should roughly equal the largest of these contributing factors. (Say you receive a gift of $10,000; even after spending $100, or finding $3 in the couch, you’ll still have about $10,000.) Yet the observed rate of cosmic expansion indicates that its value is between 60 and 120 orders of magnitude smaller than some of the zero-point energy contributions to it, as if all the different positive and negative terms have somehow canceled out. Coming up with a physical mechanism for this equalization is extremely difficult for two main reasons.
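
    To see where numbers like 120 orders of magnitude come from, here is a rough comparison in Python (a sketch; cutting the zero-point sum off at the Planck scale, and the 6 × 10⁻¹⁰ J/m³ figure for dark energy, are assumptions):

        import math

        hbar, G, c = 1.055e-34, 6.674e-11, 2.998e8   # SI units

        # Naive vacuum energy density with a Planck-scale cutoff.
        planck_energy_density = c**7 / (hbar * G**2)  # ~4.6e113 J/m^3
        observed = 6e-10                              # J/m^3, approximate

        print(math.log10(planck_energy_density / observed))  # ~123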

    First, the vacuum energy’s only effect is gravitational, and so dialing it down would seem to require a gravitational mechanism. But in the universe’s first few moments, when such a mechanism might have operated, the universe was so physically small that its total vacuum energy was negligible compared to the amount of matter and radiation. The gravitational effect of the vacuum energy would have been completely dwarfed by the gravity of everything else. “This is one of the greatest difficulties in solving the cosmological-constant problem,” the physicist Raphael Bousso wrote in 2007. A gravitational feedback mechanism precisely adjusting the vacuum energy amid the conditions of the early universe, he said, “can be roughly compared to an airplane following a prescribed flight path to atomic precision, in a storm.”

    Compounding the difficulty, quantum-field theory calculations indicate that the vacuum energy would have shifted in value in response to phase changes in the cooling universe shortly after the Big Bang. This raises the question of whether the hypothetical mechanism that equalized the vacuum energy kicked in before or after these shifts took place. And how could the mechanism know how big their effects would be, to compensate for them?

    So far, these obstacles have thwarted attempts to explain the tiny weight of empty space without resorting to a multiverse lottery. But recently, some researchers have been exploring one possible avenue: If the universe did not bang into existence, but bounced instead, following an earlier contraction phase, then the contracting universe in the distant past would have been huge and dominated by vacuum energy. Perhaps some gravitational mechanism could have acted on the plentiful vacuum energy then, diluting it in a natural way over time. This idea motivated the physicists Peter Graham, David Kaplan, and Surjeet Rajendran to discover a new cosmic bounce model, though they’ve yet to show how the vacuum dilution in the contracting universe might have worked.

    In an email, Bousso called their approach “a very worthy attempt” and “an informed and honest struggle with a significant problem.” But he added that huge gaps in the model remain, and “the technical obstacles to filling in these gaps and making it work are significant. The construction is already a Rube Goldberg machine, and it will at best get even more convoluted by the time these gaps are filled.” He and other multiverse adherents see their answer as simpler by comparison.

    See the full article here.

  • richardmitnick 8:22 pm on March 18, 2018 Permalink | Reply
    Tags: , , Mathematics vs Physics, , , Quanta Magazine, Shake a Black Hole, , The black hole stability conjecture   

    From Quanta: “To Test Einstein’s Equations, Poke a Black Hole” 

    Quanta Magazine
    Quanta Magazine

    mathematical physics
    https://sciencesprings.wordpress.com/2018/03/17/from-ethan-siegel-where-is-the-line-between-mathematics-and-physics/

    March 8, 2018
    Kevin Hartnett

    1
    Fantastic animation. Olena Shmahalo/Quanta Magazine

    In November 1915, in a lecture before the Prussian Academy of Sciences, Albert Einstein described an idea that upended humanity’s view of the universe. Rather than accepting the geometry of space and time as fixed, Einstein explained that we actually inhabit a four-dimensional reality called space-time whose form fluctuates in response to matter and energy.

    Einstein elaborated this dramatic insight in several equations, referred to as his “field equations,” that form the core of his theory of general relativity. That theory has been vindicated by every experimental test thrown at it in the century since.

    Yet even as Einstein’s theory seems to describe the world we observe, the mathematics underpinning it remain largely mysterious. Mathematicians have been able to prove very little about the equations themselves. We know they work, but we can’t say exactly why. Even Einstein had to fall back on approximations, rather than exact solutions, to see the universe through the lens he’d created.

    Over the last year, however, mathematicians have brought the mathematics of general relativity into sharper focus. Two groups have come up with proofs related to an important problem in general relativity called the black hole stability conjecture. Their work proves that Einstein’s equations match a physical intuition for how space-time should behave: If you jolt it, it shakes like Jell-O, then settles down into a stable form like the one it began with.

    “If these solutions were unstable, that would imply they’re not physical. They’d be a mathematical ghost that exists mathematically and has no significance from a physical point of view,” said Sergiu Klainerman, a mathematician at Princeton University and co-author, with Jérémie Szeftel, of one of the two new results [https://arxiv.org/abs/1711.07597].

    To complete the proofs, the mathematicians had to resolve a central difficulty with Einstein’s equations. To describe how the shape of space-time evolves, you need a coordinate system — like lines of latitude and longitude — that tells you which points are where. And in space-time, as on Earth, it’s hard to find a coordinate system that works everywhere.

    Shake a Black Hole

    General relativity famously describes space-time as something like a rubber sheet. Absent any matter, the sheet is flat. But start dropping balls onto it — stars and planets — and the sheet deforms. The balls roll toward one another. And as the objects move around, the shape of the rubber sheet changes in response.

    Einstein’s field equations describe the evolution of the shape of space-time. You give the equations information about curvature and energy at each point, and the equations tell you the shape of space-time in the future. In this way, Einstein’s equations are like equations that model any physical phenomenon: This is where the ball is at time zero, this is where it is five seconds later.

    “They’re a mathematically precise quantitative version of the statement that space-time curves in the presence of matter,” said Peter Hintz, a Clay research fellow at the University of California, Berkeley, and co-author, with András Vasy, of the other recent result [https://arxiv.org/abs/1606.04014].

    In 1916, almost immediately after Einstein released his theory of general relativity, the German physicist Karl Schwarzschild found an exact solution to the equations that describes what we now know as a black hole (the term wouldn’t be invented for another five decades). Later, physicists found exact solutions that describe a rotating black hole and one with an electrical charge.
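
    Schwarzschild's solution can be written in closed form; in the usual coordinates (a textbook sketch, not taken from the article), the space-time interval is

        ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right) c^2\, dt^2 + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2 + r^2\, d\Omega^2

    The factor 1 - 2GM/(c^2 r) vanishes at r = 2GM/c^2, the event horizon of a black hole of mass M.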

    These remain the only exact solutions that describe a black hole. If you add even a second black hole, the interplay of forces becomes too complicated for present-day mathematical techniques to handle in all but the most special situations.

    Yet you can still ask important questions about this limited group of solutions. One such question developed out of work in 1952 by the French mathematician Yvonne Choquet-Bruhat. It asks, in effect: What happens when you shake a black hole?

    2
    Lucy Reading-Ikkanda/Quanta Magazine

    This problem is now known as the black hole stability conjecture. The conjecture predicts that solutions to Einstein’s equations will be “stable under perturbation.” Informally, this means that if you wiggle a black hole, space-time will shake at first, before eventually settling down into a form that looks a lot like the form you started with. “Roughly, stability means if I take special solutions and perturb them a little bit, change data a little bit, then the resulting dynamics will be very close to the original solution,” Klainerman said.
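
    Schematically (an informal paraphrase, not the conjecture's formal statement), stability asserts that initial data close to a known black hole solution evolve back toward a nearby member of the same family:

        g(0) = g_{M,a} + \varepsilon\, h \quad \Longrightarrow \quad g(t) \longrightarrow g_{M',a'} \ \text{ as } t \to \infty

    where g_{M,a} is a black hole solution with mass M and spin a, and the final mass and spin may differ slightly from the initial ones.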

    So-called “stability” results are an important test of any physical theory. To understand why, it’s useful to consider an example that’s more familiar than a black hole.

    Imagine a pond. Now imagine that you perturb the pond by tossing in a stone. The pond will slosh around for a bit and then become still again. Mathematically, the solutions to whatever equations you use to describe the pond (in this case, the Navier-Stokes equations) should describe that basic physical picture. If the initial and long-term solutions don’t match, you might question the validity of your equations.

    “This equation might have whatever properties, it might be perfectly fine mathematically, but if it goes against what you expect physically, it can’t be the right equation,” Vasy said.

    For mathematicians working on Einstein’s equations, stability proofs have been even harder to find than solutions to the equations themselves. Consider the case of flat, empty Minkowski space — the simplest of all space-time configurations. This solution to Einstein’s equations was found in 1908 in the context of Einstein’s earlier theory of special relativity. Yet it wasn’t until 1993 that mathematicians managed to prove that if you wiggle flat, empty space-time, you eventually get back flat, empty space-time. That result, by Klainerman and Demetrios Christodoulou, is a celebrated work in the field.

    One of the main difficulties with stability proofs has to do with keeping track of what is going on in four-dimensional space-time as the solution evolves. You need a coordinate system that allows you to measure distances and identify points in space-time, just as lines of latitude and longitude allow us to define locations on Earth. But it’s not easy to find a coordinate system that works at every point in space-time and then continues to work as the shape of space-time evolves.

    “We don’t know of a one-size-fits-all way to do this,” Hintz wrote in an email. “After all, the universe does not hand you a preferred coordinate system.”

    The Measurement Problem

    The first thing to recognize about coordinate systems is that they’re a human invention. The second is that not every coordinate system works to identify every point in a space.

    Take lines of latitude and longitude: They’re arbitrary. Cartographers could have anointed any number of imaginary lines to be 0 degrees longitude.

    And while latitude and longitude work to identify just about every location on Earth, they stop making sense at the North and South poles. If you knew nothing about Earth itself, and only had access to latitude and longitude readings, you might wrongly conclude there’s something topologically strange going on at those points.

    This possibility — of drawing wrong conclusions about the properties of physical space because the coordinate system used to describe it is inadequate — is at the heart of why it’s hard to prove the stability of space-time.

    “It could be the case that stability is true, but you’re using coordinates that are not stable and thus you miss the fact that stability is true,” said Mihalis Dafermos, a mathematician at the University of Cambridge and a leading figure in the study of Einstein’s equations.

    In the context of the black hole stability conjecture, whatever coordinate system you’re using has to evolve as the shape of space-time evolves — like a snugly fitting glove adjusting as the hand it encloses changes shape. The fit between the coordinate system and space-time has to be good at the start and remain good throughout. If it doesn’t, there are two things that can happen that would defeat efforts to prove stability.

    First, your coordinate system might change shape in a way that makes it break down at certain points, just as latitude and longitude fail at the poles. Such points are called “coordinate singularities” (to distinguish them from physical singularities, like an actual black hole). They are undefined points in your coordinate system that make it impossible to follow an evolving solution all the way through.

    Second, a poorly fitting coordinate system might disguise the underlying physical phenomena it’s meant to measure. To prove that solutions to Einstein’s equations settle down into a stable state after being perturbed, mathematicians must keep careful track of the ripples in space-time that are set in motion by the perturbation. To see why, it’s worth considering the pond again. A rock thrown into a pond generates waves. The long-term stability of the pond results from the fact that those waves decay over time — they grow smaller and smaller until there’s no sign they were ever there.

    The situation is similar for space-time. A perturbation will set off a cascade of gravitational waves, and proving stability requires proving that those gravitational waves decay. And proving decay requires a coordinate system — referred to as a “gauge” — that allows you to measure the size of the waves. The right gauge allows mathematicians to see the waves flatten and eventually disappear altogether.
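
    In rough outline (a sketch of the general strategy; the technical details differ from proof to proof), one linearizes Einstein's equations around the black hole in a chosen gauge and then proves decay estimates for the resulting wave-type equation:

        \Box_g\, h_{\mu\nu} + (\text{curvature terms}) = 0, \qquad \|h(t)\| \lesssim t^{-p} \ \text{ as } t \to \infty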

    “The decay has to be measured relative to something, and it’s here where the gauge issue shows up,” Klainerman said. “If I’m not in the right gauge, even though in principle I have stability, I can’t prove it because the gauge will just not allow me to see that decay. If I don’t have decay rates of waves, I can’t prove stability.”

    The trouble is, while the coordinate system is crucial, it’s not obvious which one to choose. “You have a lot of freedom about what this gauge condition can be,” Hintz said. “Most of these choices are going to be bad.”

    Partway There

    A full proof of the black hole stability conjecture requires proving that all known black hole solutions to Einstein’s equations (with the spin of the black hole below a certain threshold) are stable after being perturbed. These known solutions include the Schwarzschild solution, which describes space-time with a nonrotating black hole, and the Kerr family of solutions, which describe configurations of space-time empty of everything save a single rotating black hole (where the properties of that rotating black hole — its mass and angular momentum — vary within the family of solutions).

    Both of the new results make partial progress toward a proof of the full conjecture.

    Hintz and Vasy, in a paper posted to the scientific preprint site arxiv.org in 2016 [see above 1606.04014], proved that slowly rotating black holes are stable. But their work did not cover black holes rotating above a certain threshold.

    Their proof also makes some assumptions about the nature of space-time. The original conjecture is in Minkowski space, which is not just flat and empty but also fixed in size. Hintz and Vasy’s proof takes place in what’s called de Sitter space, where space-time is accelerating outward, just like in the actual universe. This change of setting makes the problem simpler from a technical point of view, which is easy enough to appreciate at a conceptual level: If you drop a rock into an expanding pond, the expansion is going to stretch the waves and cause them to decay faster than they would have if the pond were not expanding.

    “You’re looking at a universe undergoing an accelerated expansion,” Hintz said. “This makes the problem a little easier as it appears to dilute the gravitational waves.”

    Klainerman and Szeftel’s work has a slightly different flavor. Their proof, the first part of which was posted online last November [see above 1711.07597], takes place in Schwarzschild space-time — closer to the original, more difficult setting for the problem. They prove the stability of a nonrotating black hole, but they do not address solutions in which the black hole is spinning. Moreover, they only prove the stability of black hole solutions for a narrow class of perturbations — where the gravitational waves generated by those perturbations are symmetric in a certain way.

    Both results involve new techniques for finding the right coordinate system for the problem. Hintz and Vasy start with an approximate solution to the equations, based on an approximate coordinate system, and gradually increase the precision of their answer until they arrive at exact solutions and well-behaved coordinates. Klainerman and Szeftel take a more geometric approach to the challenge.

    The two teams are now trying to build on their respective methods to find a proof of the full conjecture. Some expert observers think the day might not be far off.

    “I really think things are now at the stage that the remaining difficulties are just technical,” Dafermos said. “Somehow one doesn’t need new ideas to solve this problem.” He emphasized that a final proof could come from any one of the large number of mathematicians currently working on the problem.

    For 100 years Einstein’s equations have served as a reliable experimental guide to the universe. Now mathematicians may be getting closer to demonstrating exactly why they work so well.

    See the full article here.

  • richardmitnick 12:30 pm on March 4, 2018 Permalink | Reply
    Tags: , , , , Quanta Magazine,   

    From Quanta Magazine: “Elusive Higgs-Like State Created in Exotic Materials” 

    Quanta Magazine
    Quanta Magazine

    February 28, 2018
    Sophia Chen

    Two teams of physicists have created the “Higgs mode” – a link between particle physics and the physics of matter. The work could help researchers understand the strange behavior of deeply quantum systems.

    1
    Camille Chew for Quanta Magazine

    CERN CMS Higgs Event


    CERN ATLAS Higgs Event

    If you want to understand the personality of a material, study its electrons. Table salt forms cubic crystals because its atoms share electrons in that configuration; silver shines because its electrons absorb visible light and reradiate it back. Electron behavior causes nearly all material properties: hardness, conductivity, melting temperature.

    Of late, physicists are intrigued by the way huge numbers of electrons can display collective quantum-mechanical behavior. In some materials, a trillion trillion electrons within a crystal can act as a unit, like fire ants clumping into a single mass to survive a flood. Physicists want to understand this collective behavior because of the potential link to exotic properties such as superconductivity, in which electricity can flow without any resistance.

    Last year, two independent research groups designed crystals, known as two-dimensional antiferromagnets, whose electrons can collectively imitate the Higgs boson. By precisely studying this behavior, the researchers think they can better understand the physical laws that govern materials — and potentially discover new states of matter. It was the first time that researchers had been able to induce such “Higgs modes” in these materials. “You’re creating a little mini universe,” said David Alan Tennant, a physicist at Oak Ridge National Laboratory who led one of the groups along with Tao Hong, his colleague there.

    Both groups induced electrons into Higgs-like activity by pelting their material with neutrons. During these tiny collisions, the electrons’ magnetic fields begin to fluctuate in a patterned way that mathematically resembles the Higgs boson.
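
    In the standard field-theory picture (sketched here in LaTeX notation; the article gives no equations), the ordered state is described by a complex field sitting in a "Mexican hat" potential, and the two kinds of collective wiggles are the two directions on the hat:

        V(\varphi) = \lambda\,\left(|\varphi|^2 - v^2\right)^2, \qquad \varphi(x) = \bigl(v + \sigma(x)\bigr)\, e^{i\theta(x)}

    Oscillations of the amplitude \sigma are the Higgs mode; oscillations of the phase \theta are the Goldstone modes that appear later in the article.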

    2
    A crystal made of copper bromide was used to construct the Oak Ridge team’s two-dimensional antiferromagnet. Genevieve Martin/Oak Ridge National Laboratory, U.S. Dept. of Energy.

    The Higgs mode is not simply a mathematical curiosity. When a crystal’s structure permits its electrons to behave this way, the material most likely has other interesting properties, said Bernhard Keimer, a physicist at the Max Planck Institute for Solid State Research who coleads the other group.

    That’s because when you get the Higgs mode to appear, the material should be on the brink of a so-called quantum phase transition. Its properties are about to change drastically, like a snowball on a sunny spring day. The Higgs can help you understand the character of the quantum phase transition, says Subir Sachdev, a physicist at Harvard University. These quantum effects often portend bizarre new material properties.

    For example, physicists think that quantum phase transitions play a role in certain materials, known as topological insulators, that conduct electricity only on their surface and not in their interior. Researchers have also observed quantum phase transitions in high-temperature superconductors, although the significance of the phase transitions is still unclear. Whereas conventional superconductors need to be cooled to near absolute zero to observe such effects, high-temperature superconductors work at the relatively balmy conditions of liquid nitrogen, which is dozens of degrees higher.

    Over the past few years, physicists have created the Higgs mode in other superconductors, but they can’t always understand exactly what’s going on. The typical materials used to study the Higgs mode have a complicated crystal structure that increases the difficulty of understanding the physics at work.

    So both Keimer’s and Tennant’s groups set out to induce the Higgs mode in simpler systems. Their antiferromagnets were so-called two-dimensional materials: While each crystal exists as a 3-D chunk, those chunks are built out of stacked two-dimensional layers of atoms that act more or less independently. Somewhat paradoxically, it’s a harder experimental challenge to induce the Higgs mode in these two-dimensional materials. Physicists were unsure if it could be done.

    Yet the successful experiments showed that it was possible to use existing theoretical tools to explain the evolution of the Higgs mode. Keimer’s group found that the Higgs mode parallels the behavior of the Higgs boson. Inside a particle accelerator like the Large Hadron Collider, a Higgs boson will quickly decay into other particles, such as photons. In Keimer’s antiferromagnet, the Higgs mode morphs into different collective-electron motion that resembles particles called Goldstone bosons. The group experimentally confirmed that the Higgs mode evolves according to their theoretical predictions.

    Tennant’s group discovered how to make their material produce a Higgs mode that doesn’t die out. That knowledge could help them determine how to turn on other quantum properties, like superconductivity, in other materials. “What we want to understand is how to keep quantum behavior in systems,” said Tennant.

    Both groups hope to go beyond the Higgs mode. Keimer aims to actually observe a quantum phase transition in his antiferromagnet, which may be accompanied by additional weird phenomena. “That happens quite a lot,” he said. “You want to study a particular quantum phase transition, and then something else pops up.”

    They also just want to explore. They expect that more weird properties of matter are associated with the Higgs mode — potentially ones not yet envisioned. “Our brains don’t have a natural intuition for quantum systems,” said Tennant. “Exploring nature is full of surprises because it’s full of things we never imagined.”

    No science papers cited in this article.

    See the full article here.
    Re-released at Wired, Sophia Chen 3.4.18 Science.

  • richardmitnick 7:24 am on March 4, 2018 Permalink | Reply
    Tags: , Barbara Engelhardt, , , , GTEx-Genotype-Tissue Expression Consortium, , , Quanta Magazine   

    From Quanta Magazine: “A Statistical Search for Genomic Truths” 

    Quanta Magazine
    Quanta Magazine

    February 27, 2018
    Jordana Cepelewicz

    1
    Barbara Engelhardt, a Princeton University computer scientist, wants to strengthen the foundation of biological knowledge in machine-learning approaches to genomic analysis. Sarah Blesener for Quanta Magazine.

    “We don’t have much ground truth in biology.” According to Barbara Engelhardt, a computer scientist at Princeton University, that’s just one of the many challenges that researchers face when trying to prime traditional machine-learning methods to analyze genomic data. Techniques in artificial intelligence and machine learning are dramatically altering the landscape of biological research, but Engelhardt doesn’t think those “black box” approaches are enough to provide the insights necessary for understanding, diagnosing and treating disease. Instead, she’s been developing new statistical tools that search for expected biological patterns to map out the genome’s real but elusive “ground truth.”

    Engelhardt likens the effort to detective work, as it involves combing through constellations of genetic variation, and even discarded data, for hidden gems. In research published last October [Nature], for example, she used one of her models to determine how mutations relate to the regulation of genes on other chromosomes (referred to as distal genes) in 44 human tissues. Among other findings, the results pointed to a potential genetic target for thyroid cancer therapies. Her work has similarly linked mutations and gene expression to specific features found in pathology images.

    The applications of Engelhardt’s research extend beyond genomic studies. She built a different kind of machine-learning model, for instance, that makes recommendations to doctors about when to remove their patients from a ventilator and allow them to breathe on their own.

    She hopes her statistical approaches will help clinicians catch certain conditions early, unpack their underlying mechanisms, and treat their causes rather than their symptoms. “We’re talking about solving diseases,” she said.

    To this end, she works as a principal investigator with the Genotype-Tissue Expression (GTEx) Consortium, an international research collaboration studying how gene regulation, expression and variation contribute to both healthy phenotypes and disease.

    Right now, she’s particularly interested in working on neuropsychiatric and neurodegenerative diseases, which are difficult to diagnose and treat.

    Quanta Magazine recently spoke with Engelhardt about the shortcomings of black-box machine learning when applied to biological data, the methods she’s developed to address those shortcomings, and the need to sift through “noise” in the data to uncover interesting information. The interview has been condensed and edited for clarity.

    What motivated you to focus your machine-learning work on questions in biology?

    I’ve always been excited about statistics and machine learning. In graduate school, my adviser, Michael Jordan [at the University of California, Berkeley], said something to the effect of: “You can’t just develop these methods in a vacuum. You need to think about some motivating applications.” I very quickly turned to biology, and ever since, most of the questions that drive my research are not statistical, but rather biological: understanding the genetics and underlying mechanisms of disease, hopefully leading to better diagnostics and therapeutics. But when I think about the field I am in — what papers I read, conferences I attend, classes I teach and students I mentor — my academic focus is on machine learning and applied statistics.

    We’ve been finding many associations between genomic markers and disease risk, but except in a few cases, those associations are not predictive and have not allowed us to understand how to diagnose, target and treat diseases. A genetic marker associated with disease risk is often not the true causal marker of the disease — one disease can have many possible genetic causes, and a complex disease might be caused by many, many genetic markers possibly interacting with the environment. These are all challenges that someone with a background in statistical genetics and machine learning, working together with wet-lab scientists and medical doctors, can begin to address and solve. Which would mean we could actually treat genetic diseases — their causes, not just their symptoms.

    You’ve spoken before about how traditional statistical approaches won’t suffice for applications in genomics and health care. Why not?

    First, because of a lack of interpretability. In machine learning, we often use “black-box” methods — [classification algorithms called] random forests, or deeper learning approaches. But those don’t really allow us to “open” the box, to understand which genes are differentially regulated in particular cell types or which mutations lead to a higher risk of a disease. I’m interested in understanding what’s going on biologically. I can’t just have something that gives an answer without explaining why.

    The goal of these methods is often prediction, but given a person’s genotype, it is not particularly useful to estimate the probability that they’ll get Type 2 diabetes. I want to know how they’re going to get Type 2 diabetes: which mutation causes the dysregulation of which gene to lead to the development of the condition. Prediction is not sufficient for the questions I’m asking.

    A second reason has to do with sample size. Most of the driving applications of statistics assume that you’re working with a large and growing number of data samples — say, the number of Netflix users or emails coming into your inbox — with a limited number of features or observations that have interesting structure. But when it comes to biomedical data, we don’t have that at all. Instead, we have a limited number of patients in the hospital, a limited number of genotypes we can sequence — but a gigantic set of features or observations for any one person, including all the mutations in their genome. Consequently, many theoretical and applied approaches from statistics can’t be used for genomic data.

    What makes the genomic data so challenging to analyze?

    The most important signals in biomedical data are often incredibly small and completely swamped by technical noise. It’s not just about how you model the real, biological signal — the questions you’re trying to ask about the data — but also how you model that in the presence of this incredibly heavy-handed noise that’s driven by things you don’t care about, like which population the individuals came from or which technician ran the samples in the lab. You have to get rid of that noise carefully. And we often have a lot of questions that we would like to answer using the data, and we need to run an incredibly large number of statistical tests — literally trillions — to figure out the answers. For example, to identify an association between a mutation in a genome and some trait of interest, where that trait might be the expression levels of a specific gene in a tissue. So how can we develop rigorous, robust testing mechanisms where the signals are really, really small and sometimes very hard to distinguish from noise? How do we correct for all this structure and noise that we know is going to exist?
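
    To make the scale concrete, here is a minimal sketch in Python of one such test and of the significance threshold that trillions of tests force (simulated data; the variable names and numbers are illustrative, not GTEx code):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 450                                  # roughly the number of GTEx donors
        genotype = rng.integers(0, 3, size=n)    # 0, 1 or 2 copies of a variant allele
        expression = 0.05 * genotype + rng.normal(size=n)  # tiny effect buried in noise

        # One association test: regress expression on genotype dosage.
        slope, intercept, r, p_value, stderr = stats.linregress(genotype, expression)

        # A Bonferroni-style correction for ~3 trillion tests is brutally strict.
        alpha = 0.05 / 3e12
        print(p_value, p_value < alpha)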

    So what approach do we need to take instead?

    My group relies heavily on what we call sparse latent factor models, which can sound quite mathematically complicated. The fundamental idea is that these models partition all the variation we observed in the samples, with respect to only a very small number of features. One of these partitions might include 10 genes, for example, or 20 mutations. And then as a scientist, I can look at those 10 genes and figure out what they have in common, determine what this given partition represents in terms of a biological signal that affects sample variance.

    So I think of it as a two-step process: First, build a model that separates all the sources of variation as carefully as possible. Then go in as a scientist to understand what all those partitions represent in terms of a biological signal. After this, we can validate those conclusions in other data sets and think about what else we know about these samples (for instance, whether everyone of the same age is included in one of these partitions).
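
    Here is a minimal sketch of that two-step idea on simulated data, using an off-the-shelf sparse method as a stand-in (Engelhardt's group builds custom Bayesian latent factor models; this is only an illustration):

        import numpy as np
        from sklearn.decomposition import SparsePCA

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 1000))            # 200 samples x 1000 "genes"
        signal = rng.normal(size=200)
        X[:, :10] += np.outer(signal, np.ones(10))  # plant one factor on 10 genes

        # Step 1: partition the observed variation into sparse components.
        model = SparsePCA(n_components=5, alpha=1.0, random_state=0)
        model.fit(X)

        # Step 2: inspect the small gene set each component loads on.
        for i, comp in enumerate(model.components_):
            print(f"component {i}: genes {np.flatnonzero(comp)[:12]}")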

    When you say “go in as a scientist,” what do you mean?

    I’m trying to find particular biological patterns, so I build these models with a lot of structure and include a lot about what kinds of signals I’m expecting. I establish a scaffold, a set of parameters that will tell me what the data say, and what patterns may or may not be there. The model itself has only a certain amount of expressivity, so I’ll only be able to find certain types of patterns. From what I’ve seen, existing general models don’t do a great job of finding signals we can interpret biologically: They often just determine the biggest influencers of variance in the data, as opposed to the most biologically impactful sources of variance. The scaffold I build instead represents a very structured, very complex family of possible patterns to describe the data. The data then fill in that scaffold to tell me which parts of that structure are represented and which are not.

    So instead of using general models, my group and I carefully look at the data, try to understand what’s going on from the biological perspective, and tailor our models based on what types of patterns we see.

    How does the latent factor model work in practice?

    We applied one of these latent factor models to pathology images [pictures of tissue slices under a microscope], which are often used to diagnose cancer. For every image, we also had data about the set of genes expressed in those tissues. We wanted to see how the images and the corresponding gene expression levels were coordinated.

    We developed a set of features describing each of the images, using a deep-learning method to identify not just pixel-level values but also patterns in the image. We pulled out over a thousand features from each image, give or take, and then applied a latent factor model and found some pretty exciting things.

    For example, we found sets of genes and features in one of these partitions that described the presence of immune cells in the brain. You don’t necessarily see these cells on the pathology images, but when we looked at our model, we saw a component there that represented only genes and features associated with immune cells, not brain cells. As far as I know, no one’s seen this kind of signal before. But it becomes incredibly clear when we look at these latent factor components.


    Video: Barbara Engelhardt, a computer scientist at Princeton University, explains why traditional machine-learning techniques have often fallen short for genomic analysis, and how researchers are overcoming that challenge. Sarah Blesener for Quanta Magazine

    You’ve worked with dozens of human tissue types to unpack how specific genetic variations help shape complex traits. What insights have your methods provided?

    We had 44 tissues, donated from 449 human cadavers, and their genotypes (sequences of their whole genomes). We wanted to understand more about the differences in how those genotypes expressed their genes in all those tissues, so we did more than 3 trillion tests, one by one, comparing every mutation in the genome with every gene expressed in each tissue. (Running that many tests on the computing clusters we’re using now takes about two weeks; when we move this iteration of GTEx to the cloud as planned, we expect it to take around two hours.) We were trying to figure out whether the [mutant] genotype was driving distal gene expression. In other words, we were looking for mutations that weren’t located on the same chromosome as the genes they were regulating. We didn’t find very much: a little over 600 of these distal associations. Their signals were very low.

    But one of the signals was strong: an exciting thyroid association, in which a mutation appeared to distally regulate two different genes. We asked ourselves: How is this mutation affecting expression levels in a completely different part of the genome? In collaboration with Alexis Battle’s lab at Johns Hopkins University, we looked near the mutation on the genome and found a gene called FOXE1, for a transcription factor that regulates the transcription of genes all over the genome. The FOXE1 gene is only expressed in thyroid tissues, which was interesting. But we saw no association between the mutant genotype and the expression levels of FOXE1. So we had to look at the components of the original signal we’d removed before — everything that had appeared to be a technical artifact — to see if we could detect the effects of the FOXE1 protein broadly on the genome.

    We found a huge impact of FOXE1 in the technical artifacts we’d removed. FOXE1, it seems, regulates a large number of genes only in the thyroid. Its variation is driven by the mutant genotype we found. And that genotype is also associated with thyroid cancer risk. We went back to the thyroid cancer samples — we had about 500 from the Cancer Genome Atlas — and replicated the distal association signal. These things tell a compelling story, but we wouldn’t have learned it unless we had tried to understand the signal that we’d removed.

    What are the implications of such an association?

    Now we have a particular mechanism for the development of thyroid cancer and the dysregulation of thyroid cells. If FOXE1 is a druggable target — if we can go back and think about designing drugs to enhance or suppress the expression of FOXE1 — then we can hope to prevent people at high thyroid cancer risk from getting it, or to treat people with thyroid cancer more effectively.

    The signal from broad-effect transcription factors like FOXE1 actually looks a lot like the effects we typically remove as part of the noise: population structure, or the batches the samples were run in, or the effects of age or sex. A lot of those technical influences are going to affect approximately similar numbers of genes — around 10 percent — in a similar way. That’s why we usually remove signals that have that pattern. In this case, though, we had to understand the domain we were working in. As scientists, we looked through all the signals we’d gotten rid of, and this allowed us to find the effects of FOXE1 showing up so strongly in there. It involved manual labor and insights from a biological background, but we’re thinking about how to develop methods to do it in a more automated way.

    So with traditional modeling techniques, we’re missing a lot of real biological effects because they look too similar to noise?

    Yes. There are a ton of cases in which the interesting pattern and the noise look similar. Take these distal effects: Pretty much all of them, if they are broad effects, are going to look like the noise signal we systematically get rid of. It’s methodologically challenging. We have to think carefully about how to characterize when a signal is biologically relevant or just noise, and how to distinguish the two. My group is working fairly aggressively on figuring that out.

    Why are those relationships so difficult to map, and why look for them?

    There are so many tests we have to do; the threshold for the statistical significance of a discovery has to be really, really high. That creates problems for finding these signals, which are often incredibly small; if our threshold is that high, we’re going to miss a lot of them. And biologically, it’s not clear that there are many of these really broad-effect distal signals. You can imagine that natural selection would eliminate the kinds of mutations that affect 10 percent of genes — that we wouldn’t want that kind of variability in the population for so many genes.

    But I think there’s no doubt that these distal associations play an enormous role in disease, and that they may be considered as druggable targets. Understanding their role broadly is incredibly important for human health.

    See the full article here.

  • richardmitnick 8:23 am on February 28, 2018 Permalink | Reply
    Tags: , , , , , , Quanta Magazine   

    From Quanta: “Deathblow Dealt to Dark Matter Disks” 

    Quanta Magazine
    Quanta Magazine

    November 17, 2017 [Just found it.]
    Natalie Wolchover

    1
    A projection showing how the positions of some 2 million stars measured by the Gaia satellite are expected to evolve in the future. Copyright: ESA/Gaia/DPAC, CC BY-SA 3.0 IGO

    Eighty years after the discovery of dark matter, physicists remain totally stumped about the nature of this nonreflective stuff that, judging by its gravitational effects, pervades the cosmos in far greater abundance than all the matter we can see. From axions to WIMPs (or “weakly interacting massive particles”), many candidates have been proposed as dark matter’s identity — and sought to no avail in dozens of experiments. One enticing speculation to have emerged in recent years imagines that dark matter isn’t a single, monolithic thing. Rather, there might exist a whole family of dark particles that interact with one another via unknown dark forces, much as ordinary matter consists of quarks, electrons and a bustling zoo of other light-sensitive quanta.

    The existence of a rich “dark sector” of particles could have consequences on galactic scales. Whereas dark matter of a single, inert type such as an axion would enshroud galaxies in spherical clouds called halos, particles in a dark sector might interact with one another in ways that release energy, enabling them to cool and settle into a lower-energy configuration. Namely, these cooling dark matter particles would collapse into rotating disks, just as stars and gas settle into pancake-shaped rotating galaxies and planets orbit their stars in a plane. In recent years, Lisa Randall, a theoretical physicist at Harvard University, has championed the idea that there might be just such a disk of dark matter coursing through the plane of the Milky Way.

    Randall and collaborators say this hypothetical dark disk could explain several observations, including a possible uptick of asteroid and comet impacts and associated mass extinctions on Earth every 35-or-so million years. As Randall discusses in her 2015 book, Dark Matter and the Dinosaurs, the subtle periodicity might happen because space objects get destabilized each time our solar system passes through the dark disk while bobbing up and down on its way around the galaxy.
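
    As a rough check on that timescale (a sketch; the uniform-slab approximation and the local density value are assumptions, not numbers from the book), the Sun's vertical bobbing through the galactic plane can be treated as simple harmonic motion with omega^2 = 4*pi*G*rho:

        import math

        G = 6.674e-11                        # m^3 kg^-1 s^-2
        M_SUN, PARSEC = 1.989e30, 3.086e16   # kg, m
        rho = 0.1 * M_SUN / PARSEC**3        # assumed local midplane density

        omega = math.sqrt(4 * math.pi * G * rho)
        period_myr = (2 * math.pi / omega) / 3.156e13   # 3.156e13 s per Myr
        print(f"vertical period ~{period_myr:.0f} Myr, "
              f"plane crossings every ~{period_myr / 2:.0f} Myr")
        # -> crossings every ~40 Myr, the right ballpark for the ~35-Myr cycle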

    However, when I reported on the dark disk hypothesis in April 2016, the disk and all it would imply about the nature of dark matter were already in trouble. Inventories of the Milky Way showed that the mass of stars and gas in the galactic plane and the bobbing motions of stars circumnavigating the galaxy match up gravitationally, leaving only a smidgen of wiggle room in which to fit an invisible dark disk. At that time, only an extremely thin disk could exist, accounting for no more than 2 percent of the total amount of dark matter in the galaxy, with the rest being of the inert, halo-forming kind.

    Still, the presence of any dark disk at all, even one made of a minority of dark matter particles, would revolutionize physics. It would prove that there are multiple kinds of interacting dark matter particles and enable physicists to learn the properties of these particles from the features of the disk. And so researchers have awaited a more precise inventory of the Milky Way to see if a thin disk is needed to exactly match the mass of stuff in the galactic plane to the motions of stars around it. Now, with the numbers in, some say the disk is dead.

    2
    Katelin Schutz, a graduate student at the University of California, Berkeley, led a new analysis that disfavors the presence of a dark matter disk in the galaxy. Chris Akers

    Katelin Schutz, a cosmology graduate student at the University of California, Berkeley, and coauthors have checked for a dark disk using the first data release from the Gaia satellite, a European spacecraft endeavoring to measure the speeds and locations of one billion Milky Way stars.

    ESA/GAIA satellite

    In a paper posted online Nov. 9 and soon to be submitted to Physical Review Letters, Schutz and collaborators analyzed a subset of the stars measured by Gaia (representing 20 times more stars than had been previously analyzed). Their findings excluded the presence of any dark disk denser than about four-thousandths the mass of the sun per cubic light-year at the midplane of the galaxy. A disk roughly twice as dense would be needed to explain the comet impact periodicity and other original justifications for the dark disk idea. “Our new limits disfavor the presence of a thin dark matter disk,” Schutz and coauthors wrote — and that’s the case, she added by email, even though “we have tried to be quite generous and conservative with our estimations of systematic uncertainty.”

    “I think it really is dead!” said David Hogg, an astrophysicist at New York University and the Flatiron Institute (which, like Quanta, is funded by the Simons Foundation), and a leading expert in astronomical data analysis. “It is sad, of course.”

    However, Randall and Eric Kramer, her student-collaborator on a 2016 paper in The Astrophysical Journal that found room for a thin dark disk, aren’t prepared to admit defeat. “It’s a good solid analysis, but I don’t think it rules out a dark disk,” Randall said. In particular, she and Kramer question the authors’ assumption that the stars they analyzed were in equilibrium rather than oscillating or ringing. According to Randall, there is some evidence that “the more straightforward equilibrium analysis might not be adequate.”

    “I think you’re never going to convince everyone, and science is always a conversation rather than a declaration,” Schutz said. Still, the Milky Way inventory will become even more precise, and any nonequilibrium effects can be teased out, as more Gaia data become available.

    Even if there’s no dark disk, it’s still possible that there might be a dark sector. It would have to consist of particles that — unlike particles in the light sector — don’t interact and combine in ways that give off significant amounts of energy. The possibilities for dark matter are virtually endless, given the stunning absence of experimental hints about its nature. That dearth of clues is why the dark disk “would have been awesome,” Hogg said.

    See the full article here.

  • richardmitnick 2:46 pm on February 19, 2018 Permalink | Reply
    Tags: Edward Witten, Quanta Magazine

    From Quanta Magazine: “A Physicist’s Physicist Ponders the Nature of Reality” Edward Witten 

    Quanta Magazine

    FOR L.Z. OF HP AND RUTGERS. I HOPE HE SEES IT.

    November 28, 2017 [Just found this. Did I miss it in November? I would not have skipped it.]
    Natalie Wolchover

    Edward Witten in his office at the Institute for Advanced Study in Princeton, New Jersey.

    Among the brilliant theorists cloistered in the quiet woodside campus of the Institute for Advanced Study in Princeton, New Jersey, Edward Witten stands out as a kind of high priest. The sole physicist ever to win the Fields Medal, mathematics’ premier prize, Witten is also known for discovering M-theory, the leading candidate for a unified physical “theory of everything.” A genius’s genius, Witten is tall and rectangular, with hazy eyes and an air of being only one-quarter tuned in to reality until someone draws him back from more abstract thoughts.

    During a visit this fall, I spotted Witten on the Institute’s central lawn and requested an interview; in his quick, alto voice, he said he couldn’t promise to be able to answer my questions but would try. Later, when I passed him on the stone paths, he often didn’t seem to see me.

    Physics luminaries since Albert Einstein, who lived out his days in the same intellectual haven, have sought to unify gravity with the other forces of nature by finding a more fundamental quantum theory to replace Einstein’s approximate picture of gravity as curves in the geometry of space-time. M-theory, which Witten proposed in 1995, could conceivably offer this deeper description, but only some aspects of the theory are known. M-theory incorporates within a single mathematical structure all five versions of string theory, which renders the elements of nature as minuscule vibrating strings. These five string theories connect to each other through “dualities,” or mathematical equivalences. Over the past 30 years, Witten and others have learned that the string theories are also mathematically dual to quantum field theories — descriptions of particles moving through electromagnetic and other fields that serve as the language of the reigning “Standard Model” of particle physics. While he’s best known as a string theorist, Witten has discovered many new quantum field theories and explored how all these different descriptions are connected. His physical insights have led time and again to deep mathematical discoveries.

    Researchers pore over his work and hope he’ll take an interest in theirs. But for all his scholarly influence, Witten, who is 66, does not often broadcast his views on the implications of modern theoretical discoveries. Even his close colleagues eagerly suggested questions they wanted me to ask him.

    When I arrived at his office at the appointed hour on a summery Thursday last month, Witten wasn’t there. His door was ajar. Papers covered his coffee table and desk — not stacks, but floods: text oriented every which way, some pages close to spilling onto the floor. (Research papers get lost in the maelstrom as he finishes with them, he later explained, and every so often he throws the heaps away.) Two girls smiled out from a framed photo on a shelf; children’s artwork decorated the walls, one celebrating Grandparents’ Day. When Witten arrived minutes later, we spoke for an hour and a half about the meaning of dualities in physics and math, the current prospects of M-theory, what he’s reading, what he’s looking for, and the nature of reality. The interview has been condensed and edited for clarity.

    Physicists are talking more than ever lately about dualities, but you’ve been studying them for decades. Why does the subject interest you?

    People keep finding new facets of dualities. Dualities are interesting because they frequently answer questions that are otherwise out of reach. For example, you might have spent years pondering a quantum theory and you understand what happens when the quantum effects are small, but textbooks don’t tell you what you do if the quantum effects are big; you’re generally in trouble if you want to know that. Frequently dualities answer such questions. They give you another description, and the questions you can answer in one description are different than the questions you can answer in a different description.

    What are some of these newfound facets of dualities?

    It’s open-ended because there are so many different kinds of dualities. There are dualities between a gauge theory [a theory, such as a quantum field theory, that respects certain symmetries] and another gauge theory, or between a string theory for weak coupling [describing strings that move almost independently from one another] and a string theory for strong coupling. Then there’s AdS/CFT duality, between a gauge theory and a gravitational description. That duality was discovered 20 years ago, and it’s amazing to what extent it’s still fruitful. And that’s largely because around 10 years ago, new ideas were introduced that rejuvenated it. People had new insights about entropy in quantum field theory — the whole story about “it from qubit.”

    That’s the idea that space-time and everything in it emerges like a hologram out of information stored in the entangled quantum states of particles.

    Yes. Then there are dualities in math, which can sometimes be interpreted physically as consequences of dualities between two quantum field theories. There are so many ways these things are interconnected that any simple statement I try to make on the fly, as soon as I’ve said it I realize it didn’t capture the whole reality. You have to imagine a web of different relationships, where the same physics has different descriptions, revealing different properties. In the simplest case, there are only two important descriptions, and that might be enough. If you ask me about a more complicated example, there might be many, many different ones.

    Given this web of relationships and the difficulty of characterizing all the dualities, do you feel that this reflects a lack of understanding of the structure, or is it that we’re seeing the structure, only it’s very complicated?

    I’m not certain what we should hope for. Traditionally, quantum field theory was constructed by starting with the classical picture [of a smooth field] and then quantizing it. Now we’ve learned that there are a lot of things that happen that that description doesn’t do justice to. And the same quantum theory can come from different classical theories. Now, Nati Seiberg [a theoretical physicist who works down the hall] would possibly tell you that he has faith that there’s a better formulation of quantum field theory that we don’t know about that would make everything clearer. I’m not sure how much you should expect that to exist. That would be a dream, but it might be too much to hope for; I really don’t know.

    There’s another curious fact that you might want to consider, which is that quantum field theory is very central to physics, and it’s actually also clearly very important for math. But it’s extremely difficult for mathematicians to study; the way physicists define it is very hard for mathematicians to follow with a rigorous theory. That’s extremely strange, that the world is based so much on a mathematical structure that’s so difficult.

    Jean Sweep for Quanta Magazine

    What do you see as the relationship between math and physics?

    I prefer not to give you a cosmic answer but to comment on where we are now. Physics in quantum field theory and string theory somehow has a lot of mathematical secrets in it, which we don’t know how to extract in a systematic way. Physicists are able to come up with things that surprise the mathematicians. Because it’s hard to describe mathematically in the known formulation, the things you learn about quantum field theory you have to learn from physics.

    I find it hard to believe there’s a new formulation that’s universal. I think it’s too much to hope for. I could point to theories where the standard approach really seems inadequate, so at least for those classes of quantum field theories, you could hope for a new formulation. But I really can’t imagine what it would be.

    You can’t imagine it at all?

    No, I can’t. Traditionally it was thought that interacting quantum field theory couldn’t exist above four dimensions, and there was the interesting fact that that’s the dimension we live in. But one of the offshoots of the string dualities of the 1990s was that it was discovered that quantum field theories actually exist in five and six dimensions. And it’s amazing how much is known about their properties.

    I’ve heard about the mysterious (2,0) theory, a quantum field theory describing particles in six dimensions, which is dual to M-theory describing strings and gravity in seven-dimensional AdS space. Does this (2,0) theory play an important role in the web of dualities?

    Yes, that’s the pinnacle. In terms of conventional quantum field theory without gravity, there is nothing quite like it above six dimensions. From the (2,0) theory’s existence and main properties, you can deduce an incredible amount about what happens in lower dimensions. An awful lot of important dualities in four and fewer dimensions follow from this six-dimensional theory and its properties. However, whereas what we know about quantum field theory is normally from quantizing a classical field theory, there’s no reasonable classical starting point of the (2,0) theory. The (2,0) theory has properties [such as combinations of symmetries] that sound impossible when you first hear about them. So you can ask why dualities exist, but you can also ask why is there a 6-D theory with such and such properties? This seems to me a more fundamental restatement.

    Dualities sometimes make it hard to maintain a sense of what’s real in the world, given that there are radically different ways you can describe a single system. How would you describe what’s real or fundamental?

    What aspect of what’s real are you interested in? What does it mean that we exist? Or how do we fit into our mathematical descriptions?

    The latter.

    Well, one thing I’ll tell you is that in general, when you have dualities, things that are easy to see in one description can be hard to see in the other description. So you and I, for example, are fairly simple to describe in the usual approach to physics as developed by Newton and his successors. But if there’s a radically different dual description of the real world, maybe some things physicists worry about would be clearer, but the dual description might be one in which everyday life would be hard to describe.

    What would you say about the prospect of an even more optimistic idea that there could be one single quantum gravity description that really does help you in every case in the real world?

    Well, unfortunately, even if it’s correct I can’t guarantee it would help. Part of what makes it difficult to help is that the description we have now, even though it’s not complete, does explain an awful lot. And so it’s a little hard to say, even if you had a truly better description or a more complete description, whether it would help in practice.

    Are you speaking of M-theory?

    M-theory is the candidate for the better description.

    You proposed M-theory 22 years ago. What are its prospects today?

    Personally, I thought it was extremely clear it existed 22 years ago, but the level of confidence has got to be much higher today because AdS/CFT has given us precise definitions, at least in AdS space-time geometries. I think our understanding of what it is, though, is still very hazy. AdS/CFT and whatever’s come from it is the main new perspective compared to 22 years ago, but I think it’s perfectly possible that AdS/CFT is only one side of a multifaceted story. There might be other equally important facets.

    Jean Sweep for Quanta Magazine

    What’s an example of something else we might need?

    Maybe a bulk description of the quantum properties of space-time itself, rather than a holographic boundary description. There hasn’t been much progress in a long time in getting a better bulk description. And I think that might be because the answer is of a different kind than anything we’re used to. That would be my guess.

    Are you willing to speculate about how it would be different?

    I really doubt I can say anything useful. I guess I suspect that there’s an extra layer of abstractness compared to what we’re used to. I tend to think that there isn’t a precise quantum description of space-time — except in the types of situations where we know that there is, such as in AdS space. I tend to think, otherwise, things are a little bit murkier than an exact quantum description. But I can’t say anything useful.

    The other night I was reading an old essay by the 20th-century Princeton physicist John Wheeler. He was a visionary, certainly. If you take what he says literally, it’s hopelessly vague. And therefore, if I had read this essay when it came out 30 years ago, which I may have done, I would have rejected it as being so vague that you couldn’t work on it, even if he was on the right track.

    You’re referring to “Information, Physics, Quantum,” Wheeler’s 1989 essay propounding the idea that the physical universe arises from information, which he dubbed “it from bit.” Why were you reading it?

    I’m trying to learn about what people are trying to say with the phrase “it from qubit.” Wheeler talked about “it from bit,” but you have to remember that this essay was written probably before the term “qubit” was coined and certainly before it was in wide currency. Reading it, I really think he was talking about qubits, not bits, so “it from qubit” is actually just a modern translation.

    Don’t expect me to be able to tell you anything useful about it — about whether he was right. When I was a beginning grad student, they had a series of lectures by faculty members to the new students about theoretical research, and one of the people who gave such a lecture was Wheeler. He drew a picture on the blackboard of the universe visualized as an eye looking at itself. I had no idea what he was talking about. It’s obvious to me in hindsight that he was explaining what it meant to talk about quantum mechanics when the observer is part of the quantum system. I imagine there is something we don’t understand about that.

    Observing a quantum system irreversibly changes it, creating a distinction between past and future. So the observer issue seems possibly related to the question of time, which we also don’t understand. With the AdS/CFT duality, we’ve learned that new spatial dimensions can pop up like a hologram from quantum information on the boundary. Do you think time is also emergent — that it arises from a timeless complete description?

    I tend to assume that space-time and everything in it are in some sense emergent. By the way, you’ll certainly find that that’s what Wheeler expected in his essay. As you’ll read, he thought the continuum was wrong in both physics and math. He did not think one’s microscopic description of space-time should use a continuum of any kind — neither a continuum of space nor a continuum of time, nor even a continuum of real numbers. On the space and time, I’m sympathetic to that. On the real numbers, I’ve got to plead ignorance or agnosticism. It is something I wonder about, but I’ve tried to imagine what it could mean to not use the continuum of real numbers, and the one logician I tried discussing it with didn’t help me.

    Do you consider Wheeler a hero?

    I wouldn’t call him a hero, necessarily, no. Really I just became curious what he meant by “it from bit,” and what he was saying. He definitely had visionary ideas, but they were too far ahead of their time. I think I was more patient in reading a vague but inspirational essay than I might have been 20 years ago. He’s also got roughly 100 interesting-sounding references in that essay. If you decided to read them all, you’d have to spend weeks doing it. I might decide to look at a few of them.

    Why do you have more patience for such things now?

    I think when I was younger I always thought the next thing I did might be the best thing in my life. But at this point in life I’m less persuaded of that. If I waste a little time reading somebody’s essay, it doesn’t seem that bad.

    Do you ever take your mind off physics and math?

    My favorite pastime is tennis. I am a very average but enthusiastic tennis player.

    In contrast to Wheeler, it seems like your working style is to come to the insights through the calculations, rather than chasing a vague vision.

    In my career I’ve only been able to take small jumps. Relatively small jumps. What Wheeler was talking about was an enormous jump. And he does say at the beginning of the essay that he has no idea if this will take 10, 100 or 1,000 years.

    And he was talking about explaining how physics arises from information.

    Yes. The way he phrases it is broader: He wants to explain the meaning of existence. That was actually why I thought you were asking if I wanted to explain the meaning of existence.

    I see. Does he have any hypotheses?

    No. He only talks about things you shouldn’t do and things you should do in trying to arrive at a more fundamental description of physics.

    Do you have any ideas about the meaning of existence?

    No. [Laughs.]

    See the full article here.


     