Tagged: Nautilus

  • richardmitnick 12:51 pm on November 12, 2017
    Tags: Caleb Scharf, Nautilus, The Zoomable Universe, This Will Help You Grasp the Sizes of Things in the Universe

    From Nautilus: “This Will Help You Grasp the Sizes of Things in the Universe” 

    Nautilus


    Nov 08, 2017
    Dan Garisto

    In The Zoomable Universe, Scharf puts the notion of scale—in biology and physics—center-stage. “The start of your journey through this book and through all known scales of reality is at that edge between known and unknown,” he writes. Illustration by Ron Miller

    Caleb Scharf wants to take you on an epic tour. His latest book, The Zoomable Universe, starts from the ends of the observable universe, exploring its biggest structures, like groups of galaxies, and goes all the way down to the Planck length—less than a billionth of a billionth of a billionth of a meter. It is a breathtaking synthesis of the large and small. Readers journeying through the book are treated to pictures, diagrams, and illustrations all accompanied by Scharf’s lucid, conversational prose. These visual aids give vital depth and perspective to the phenomena that he points out like a cosmic safari guide. Did you know, he offers, that all the Milky Way’s stars can fit inside the volume of our solar system?

    Scharf, the director of Columbia University’s Astrobiology Center, is a suitably engaging guide. He’s the author of the 2012 book Gravity’s Engines: How Bubble-Blowing Black Holes Rule Galaxies, Stars, and Life in the Universe, and last year, he speculated in Nautilus about whether alien life could be so advanced as to be indistinguishable from physics.

    In The Zoomable Universe, Scharf puts the notion of scale—in biology and physics—center-stage. “The start of your journey through this book and through all known scales of reality is at that edge between known and unknown,” he writes. Nautilus caught up with him to talk about our experience with scale and why he thinks it’s mysterious. (Scharf is a member of Nautilus’ advisory board.)

    Why is scale interesting?

    Scale is fascinating. Scientifically it’s a fundamental property of reality. We don’t even think about it. We talk about space and time—and perhaps we puzzle more over the nature of time than we do over the nature of scale or space—but it’s equally mysterious.

    What’s mysterious about scale?

    It’s something we all have direct experience of, even intuitively. We learn to evaluate the size of things. But we’re operating as humans in a very, very narrow slice of what is out there. And we’re aware of a very narrow range of scales: In some sense, we know more about the very large than we do about the very small.

    We know about atoms, kind of, but if you go smaller, it gets more uncertain—not just because of intrinsic uncertainty, but the completeness of our physics gets worse. We don’t really know what’s happening here. That leads you to a mystery at the Planck scale. On the big scale, it’s stuff we can actually see, we can actually chart.


    Not an alien planet, but the faceted eye of a louse embedded in an elephant’s skin. The Zoomable Universe.

    At certain scales, there’s not much happening. Does that hint at some underlying mystery?

    I think that is something worth contemplating. There’s quarks and then there’s 20 orders of magnitude smaller where—what do you say about it? That was the experience for the very small, but on the larger scale there’s some of that too…the emptiness of interstellar space. It is striking how empty most of everything is on the big scale and the small scale.

    We have all this rich stuff going on in the scale of the solar system and the earth and our biological scale. That’s where we’ve gained the most insight, accumulated the most knowledge. It is the scale where matter seems to condense down, where things appear solid, when in fact, it’s equally empty on the inside. But is that a human cultural bias? Or is that telling us something profound about the nature of the universe? I don’t really know the answer to that. But there’s something about the way we’re built, the way we think about the world. We’re clearly not attuned to that emptiness.

    Yet we’re drawn to it.

    We are drawn to it—like the example in the book with the stars packed together. Taking all the stars from the galaxy put together and being able to fit them inside the volume of the solar system? It is shocking. Trust me, I had to run the numbers a couple of times just to go, “Oh wow, okay, that really does work.”
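    For readers who want to redo Scharf’s sanity check, here is a rough back-of-the-envelope version of the star-packing estimate in Python. The star count, the assumption that every star is Sun-sized, and the use of Neptune’s orbit as the boundary of “the solar system” are illustrative assumptions on my part, not figures taken from the book.

    ```python
    import math

    # Rough, illustrative inputs (assumptions, not figures from the book):
    R_SUN = 6.96e8            # radius of the Sun, in meters
    R_NEPTUNE_ORBIT = 4.5e12  # ~30 AU, one common boundary for "the solar system", in meters
    N_STARS = 2e11            # ~200 billion stars in the Milky Way (estimates vary widely)

    def sphere_volume(radius):
        return 4.0 / 3.0 * math.pi * radius**3

    total_star_volume = N_STARS * sphere_volume(R_SUN)    # treat every star as Sun-sized
    solar_system_volume = sphere_volume(R_NEPTUNE_ORBIT)  # sphere out to Neptune's orbit

    print(f"All stars combined:        {total_star_volume:.2e} m^3")
    print(f"Solar system (to Neptune): {solar_system_volume:.2e} m^3")
    print(f"Ratio (stars / system):    {total_star_volume / solar_system_volume:.2f}")
    # The ratio comes out below 1, so the packed stars do fit -- and since most real
    # stars are smaller than the Sun, the fit is even more comfortable in practice.
    ```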

    As the Earth eclipses the Sun, our high wilderness of the lunar landscape is bathed in reddened light. Illustration by Ron Miller

    How did you represent things that we don’t have pictures of, like the surface of an exoplanet, or things at really small scales?

    That’s something we definitely talked a lot about in putting the book together. Ron Miller, the artist, would produce a landscape for an exoplanet. As a scientist, my inclination is to say, “We can’t do that—we can’t say what it looks like.” So we had this dialogue. We wanted an informed artistic approach. It became tricky when we got down to a small scale. I wanted to avoid the usual trope, which is an atom is a sphere, or a molecule is a sphere connected by things. You can’t have a picture of these things in the sense that we’re used to. We tried to compromise. We made something people kind of recognize, but we avoid the ball and stick models that are glued in everyone’s head.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 12:03 pm on November 9, 2017
    Tags: Cosmologists have come to realize that our universe may be only one component of the multiverse, Fred Adams, Mordehai Milgrom and MOND theory, Nautilus, The forces are not nearly as finely tuned as many scientists think, The parameters of our universe could have varied by large factors and still allowed for working stars and potentially habitable planets, The strong interaction- the weak interaction- electromagnetism- gravity

    From Nautilus: “The Not-So-Fine Tuning of the Universe” 

    Nautilus


    January 19, 2017 [Just found this referenced in another article.]
    Fred Adams
    Illustrations by Jackie Ferrentino

    Before there is life, there must be structure. Our universe synthesized atomic nuclei early in its history. Those nuclei ensnared electrons to form atoms. Those atoms agglomerated into galaxies, stars, and planets. At last, living things had places to call home. We take it for granted that the laws of physics allow for the formation of such structures, but that needn’t have been the case.

    Over the past several decades, many scientists have argued that, had the laws of physics been even slightly different, the cosmos would have been devoid of complex structures. In parallel, cosmologists have come to realize that our universe may be only one component of the multiverse, a vast collection of universes that makes up a much larger region of spacetime. The existence of other universes provides an appealing explanation for the apparent fine-tuning of the laws of physics. These laws vary from universe to universe, and we live in a universe that allows for observers because we couldn’t live anywhere else.

    Setting The Parameters: The universe would have been habitable even if the forces of electromagnetism and gravity had been stronger or weaker. The crosshatched area shows the range of values consistent with life. The asterisk shows the actual values in our universe; the axes are scaled to these values. The constraints are that stars must be able to undergo nuclear fusion (below black curve), live long enough for complex life to evolve (below red curve), be hot enough to support biospheres (left of blue curve), and not outgrow their host galaxies (right of the cyan curve). Fred C. Adams.

    Astrophysicists have discussed fine-tuning so much that many people take it as a given that our universe is preternaturally fit for complex structures. Even skeptics of the multiverse accept fine-tuning; they simply think it must have some other explanation. But in fact the fine-tuning has never been rigorously demonstrated. We do not really know what laws of physics are necessary for the development of astrophysical structures, which are in turn necessary for the development of life. Recent work on stellar evolution, nuclear astrophysics, and structure formation suggests that the case for fine-tuning is less compelling than previously thought. A wide variety of possible universes could support life. Our universe is not as special as it might seem.

    The first type of fine-tuning involves the strengths of the fundamental forces of nature in working stars. If the electromagnetic force had been too strong, the electrical repulsion of protons would shut down nuclear fusion in stellar cores, and stars would fail to shine. If electromagnetism had been too weak, nuclear reactions would run out of control, and stars would blow up in spectacular explosions. If gravity had been too strong, stars would either collapse into black holes or never ignite.

    On closer examination, though, stars are remarkably robust. The strength of the electric force could vary by a factor of nearly 100 in either direction before stellar operations would be compromised. The force of gravity would have to be 100,000 times stronger before stars would stop working. Going in the other direction, gravity could be a billion times weaker and still allow for working stars. The allowed strengths for the gravitational and electromagnetic forces depend on the nuclear reaction rate, which in turn depends on the strengths of the nuclear forces. If the reaction rate were faster, stars could function over an even wider range of strengths for gravitation and electromagnetism. Slower nuclear reactions would narrow the range.

    In addition to these minimal operational requirements, stars must meet a number of other constraints that further restrict the allowed strength of the forces. They must be hot. The surface temperature of a star must be high enough to drive the chemical reactions necessary for life. In our universe, there are ample regions around most stars where planets are warm enough, about 300 kelvins, to support biology. In universes where the electromagnetic force is stronger, stars are cooler, making them less hospitable.

    Stars must also have long lives. The evolution of complex life forms takes place over enormous spans of time. Since life is driven by a complex ensemble of chemical reactions, the basic clock for biological evolution is set by the time scales of atoms. In other universes, these atomic clocks will tick at different rates, depending on the strength of electromagnetism, and this variation must be taken into account. When the force is weaker, stars burn their nuclear fuel faster, and their lifetimes decrease.


    Finally, stars must be able to form in the first place. In order for galaxies and, later, stars to condense out of primordial gas, the gas must be able to lose energy and cool down. The cooling rate depends (yet again) on the strength of electromagnetism. If this force is too weak, gas cannot cool down fast enough and would remain diffuse instead of condensing into galaxies. Stars must also be smaller than their host galaxies—otherwise star formation would be problematic. These effects put another lower limit on the strength of electromagnetism.

    Putting it all together, the strengths of the fundamental forces can vary by several orders of magnitude and still allow planets and stars to satisfy all the constraints (as illustrated in the figure above). The forces are not nearly as finely tuned as many scientists think.

    A second example of possible fine-tuning arises in the context of carbon production. After moderately large stars have fused the hydrogen in their central cores into helium, helium itself becomes the fuel. Through a complicated set of reactions, helium is burned into carbon and oxygen. Because of their important role in nuclear physics, helium nuclei are given a special name: alpha particles. The most common nuclei are composed of one, three, four, and five alpha particles. The nucleus with two alpha particles, beryllium-8, is conspicuously absent, and for a good reason: It is unstable in our universe.

    The instability of beryllium creates a serious bottleneck for the creation of carbon. As stars fuse helium nuclei together to become beryllium, the beryllium nuclei almost immediately decay back into their constituent parts. At any given time, the stellar core maintains a small but transient population of beryllium. These rare beryllium nuclei can interact with helium to produce carbon. Because the process ultimately involves three helium nuclei, it is called the triple-alpha reaction. But the reaction is too slow to produce the amount of carbon observed in our universe.

    To resolve this discrepancy, physicist Fred Hoyle predicted in 1953 that the carbon nucleus has to have a resonant state at a specific energy, as if it were a little bell that rang with a certain tone. Because of this resonance, the reaction rates for carbon production are much larger than they would be otherwise—large enough to explain the abundance of carbon found in our universe. The resonance was later measured in the laboratory at the predicted energy level.


    The worry is that, in other universes, with alternate strengths of the forces, the energy of this resonance could be different, and stars would not produce enough carbon. Carbon production is compromised if the energy level is changed by more than about 4 percent. This issue is sometimes called the triple-alpha fine-tuning problem.

    Fortunately, this problem has a simple solution. What nuclear physics takes away, it also gives. Suppose nuclear physics did change by enough to neutralize the carbon resonance. Among the possible changes of this magnitude, about half would have the side effect of making beryllium stable, so the loss of the resonance would become irrelevant. In such alternate universes, carbon would be produced in the more logical manner of adding together alpha particles one at a time. Helium could fuse into beryllium, which could then react with additional alpha particles to make carbon. There is no fine-tuning problem after all.

    A third instance of potential fine-tuning concerns the simplest nuclei composed of two particles: deuterium nuclei, which contain one proton and one neutron; diprotons, consisting of two protons; and dineutrons, consisting of two neutrons. In our universe, only deuterium is stable. The production of helium takes place by first combining two protons into deuterium.

    If the strong nuclear force had been even stronger, diprotons could have been stable. In this case, stars could have generated energy through the simplest and fastest of nuclear reactions, where protons combine to become diprotons and eventually other helium isotopes. It is sometimes claimed that stars would then burn through their nuclear fuel at catastrophic rates, resulting in lifetimes that are too short to support biospheres. Conversely, if the strong force had been weaker, then deuterium would be unstable, and the usual stepping stone on the pathway to heavy elements would not be available. Many scientists have speculated that the absence of stable deuterium would lead to a universe with no heavy elements at all and that such a universe would be devoid of complexity and life.

    As it turns out, stars are remarkably stable entities. Their structure adjusts automatically to burn nuclear fuel at exactly the right rate required to support themselves against the crush of their own gravity. If the nuclear reaction rates are higher, stars will burn their nuclear fuel at a lower central temperature, but otherwise they will not be so different. In fact, our universe has an example of this type of behavior. Deuterium nuclei can combine with protons to form helium nuclei through the action of the strong force. The cross section for this reaction, which quantifies the probability of its occurrence, is quadrillions of times larger than for ordinary hydrogen fusion. Nonetheless, stars in our universe burn their deuterium in a relatively uneventful manner. The stellar core has an operating temperature of 1 million kelvins, compared to the 15 million kelvins required to burn hydrogen under ordinary conditions. These deuterium-burning stars have cooler centers and are somewhat larger than the sun, but are otherwise unremarkable.

    Similarly, if the strong nuclear force were weaker, stars could continue to operate in the absence of stable deuterium. A number of different processes provide paths by which stars can generate energy and synthesize heavy elements. During the first part of their lives, stars slowly contract, their central cores grow hotter and denser, and they glow with the power output of the sun. Stars in our universe eventually become hot and dense enough to ignite nuclear fusion, but in alternative universes they could continue this contraction phase and generate power by losing gravitational potential energy. The longest-lived stars could shine with a power output roughly comparable to the sun for up to 1 billion years, perhaps long enough for biological evolution to take place.

    For sufficiently massive stars, the contraction would accelerate and become a catastrophic collapse. These stellar bodies would basically go supernova. Their central temperatures and densities would increase to such large values that nuclear reactions would ignite. Many types of nuclear reactions would take place in the death throes of these stars. This process of explosive nucleosynthesis could supply the universe with heavy nuclei, in spite of the lack of deuterium.

    Once such a universe produces trace amounts of heavy elements, later generations of stars have yet another option for nuclear burning. This process, called the carbon-nitrogen-oxygen cycle, does not require deuterium as an intermediate state. Instead, carbon acts as a catalyst to instigate the production of helium. This cycle operates in the interior of the sun and provides a small fraction of its total power. In the absence of stable deuterium, the carbon-nitrogen-oxygen cycle would dominate the energy generation. And this does not exhaust the options for nuclear power generation. Stars could also produce helium through a triple-nucleon process that is roughly analogous to the triple-alpha process for carbon production. Stars thus have many channels for providing both energy and complex nuclei in alternate universes.

    A fourth example of fine-tuning concerns the formation of galaxies and other large-scale structures. They were seeded by small density fluctuations produced in the earliest moments of cosmic time. After the universe had cooled down enough, these fluctuations started to grow stronger under the force of gravity, and denser regions eventually became galaxies and galaxy clusters. The fluctuations started with a small amplitude, denoted Q, equal to 0.00001. The primeval universe was thus incredibly smooth: The density, temperature, and pressure of the densest regions and of the most rarefied regions were the same to within a few parts per 100,000. The value of Q represents another possible instance of fine-tuning in the universe.

    If Q had been lower, it would have taken longer for fluctuations to grow strong enough to become cosmic structures, and galaxies would have had lower densities. If the density of a galaxy is too low, the gas in the galaxy is unable to cool. It might not ever condense into galactic disks or coalesce into stars. Low-density galaxies are not viable habitats for life. Worse, a long enough delay might have prevented galaxies from forming at all. Beginning about 4 billion years ago, the expansion of the universe began to accelerate and pull matter apart faster than it could agglomerate—a change of pace that is usually attributed to a mysterious dark energy. If Q had been too small, it could have taken so long for galaxies to collapse that the acceleration would have started before structure formation was complete, and further growth would have been suppressed. The universe could have ended up devoid of complexity, and lifeless. In order to avoid this fate, the value of Q cannot be smaller by more than a factor of 10.

    What if Q had been larger? Galaxies would have formed earlier and ended up denser. That, too, would have posed a danger for the prospects of habitability. Stars would have been much closer to one another and interacted more often. In so doing, they could have stripped planets out of their orbits and sent them hurtling into deep space. Furthermore, because stars would be closer together, the night sky would be brighter—perhaps as bright as day. If the stellar background were too dense, the combined starlight could boil the oceans of any otherwise suitable planets.

    Galactic What-If: A galaxy that formed in a hypothetical universe with large initial density fluctuations might be even more hospitable than our Milky Way. The central region is too bright and hot for life, and planetary orbits are unstable. But the outer region is similar to the solar neighborhood. In between, the background starlight from the galaxy is comparable in brightness to the sunlight received by Earth, so all planets, no matter their orbits, are potentially habitable. Fred C. Adams.

    In this case, the fine-tuning argument is not very constraining. The central regions of galaxies could indeed produce such intense background radiation that all planets would be rendered uninhabitable. But the outskirts of galaxies would always have a low enough density for habitable planets to survive. An appreciable fraction of galactic real estate remains viable even when Q is thousands of times larger than in our universe. In some cases, a galaxy might be even more hospitable. Throughout much of the galaxy, the night sky could have the same brightness as the sunshine we see during the day on Earth. Planets would receive their life-giving energy from the entire ensemble of background stars rather than from just their own sun. They could reside in almost any orbit. In an alternate universe with larger density fluctuations than our own, even Pluto would get as much daylight as Miami. As a result, a moderately dense galaxy could have more habitable planets than the Milky Way.

    In short, the parameters of our universe could have varied by large factors and still allowed for working stars and potentially habitable planets. The force of gravity could have been 1,000 times stronger or 1 billion times weaker, and stars would still function as long-lived nuclear burning engines. The electromagnetic force could have been stronger or weaker by factors of 100. Nuclear reaction rates could have varied over many orders of magnitude. Alternative stellar physics could have produced the heavy elements that make up the basic raw material for planets and people. Clearly, the parameters that determine stellar structure and evolution are not overly fine-tuned.

    Given that our universe does not seem to be particularly fine-tuned, can we still say that our universe is the best one for life to develop? Our current understanding suggests that the answer is no. One can readily envision a universe that is friendlier to life and perhaps more logical. A universe with stronger initial density fluctuations would make denser galaxies, which could support more habitable planets than our own. A universe with stable beryllium would have straightforward channels available for carbon production and would not need the complication of the triple-alpha process. Although these issues are still being explored, we can already say that universes have many pathways for the development of complexity and biology, and some could be even more favorable for life than our own. In light of these generalizations, astrophysicists need to reexamine the possible implications of the multiverse, including the degree of fine-tuning in our universe.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
    • stewarthoughblog 1:13 am on November 10, 2017

      The proposition that long-lived stars could last 1 billion years and possibly be sufficient for life to evolve is not consistent with what science has observed about our solar system, our planet, and the origin of life. It is estimated that the first life did not appear until almost 1 billion years after formation, making this widely speculative.

      The idea that an increased density of stars in the galaxy could support increased habitability of planets is inconsistent with astrophysical understanding of the criticality of solar radiation to not destroy all life and all biochemicals required.

      It is also widely speculative to propose that any of the fundamental constants and force tolerances can be virtually arbitrarily reassigned with minimal effect without much more serious scientific analysis. In light of the fundamental fact that the understanding of the origin of life naturalistically is a chaotic mess, it is widely speculative to conjecture the fine-tuning of the universe is not critical.


    • richardmitnick 10:13 am on November 10, 2017

      Thanks for reading and commenting. I appreciate it.


  • richardmitnick 11:34 am on November 9, 2017
    Tags: , But what is matter exactly, Einstein: m = E/c2. This is the great insight (not E = mc2), Frank Wilczek, , Higgs field, John Wheeler, Nautilus, , Physics Has Demoted Mass, , Quarks are quantum wave-particles   

    From Nautilus: “Physics Has Demoted Mass” 

    Nautilus


    November 9, 2017
    Jim Baggott

    You’re sitting here, reading this article. Maybe it’s a hard copy, or an e-book on a tablet computer or e-reader. It doesn’t matter. Whatever you’re reading it on, we can be reasonably sure it’s made of some kind of stuff: paper, card, plastic, perhaps containing tiny metal electronic things on printed circuit boards. Whatever it is, we call it matter or material substance. It has a characteristic property that we call solidity. It has mass.

    But what is matter, exactly? Imagine a cube of ice, measuring a little over one inch (or 2.7 centimeters) in length. Imagine holding this cube of ice in the palm of your hand. It is cold, and a little slippery. It weighs hardly anything at all, yet we know it weighs something.

    Let’s make our question a little more focused. What is this cube of ice made of? And, an important secondary question: What is responsible for its mass?


    To understand what a cube of ice is made of, we need to draw on the learning acquired by the chemists. Building on a long tradition established by the alchemists, these scientists distinguished between different chemical elements, such as hydrogen, carbon, and oxygen. Research on the relative weights of these elements and the combining volumes of gases led John Dalton and Louis Gay-Lussac to the conclusion that different chemical elements consist of atoms with different weights which combine according to a set of rules involving whole numbers of atoms.

    The mystery of the combining volumes of hydrogen and oxygen gas to produce water was resolved when it was realized that hydrogen and oxygen are both diatomic gases, H2 and O2. Water is then a compound consisting of two hydrogen atoms and one oxygen atom, H2O.

    This partly answers our first question. Our cube of ice consists of molecules of H2O organized in a regular array. We can also make a start on our second question. Avogadro’s number tells us that a mole of a chemical substance contains about 6 × 10^23 discrete “particles.” Now, we can interpret a mole of substance simply as its molecular weight scaled up to gram quantities. Hydrogen (in the form of H2) has a relative molecular weight of 2, implying that each hydrogen atom has a relative atomic weight of 1. Oxygen (O2) has a relative molecular weight of 32, implying that each oxygen atom has a relative atomic weight of 16. Water (H2O) therefore has a relative molecular weight of 2 × 1 + 16 = 18.

    It so happens that our cube of ice weighs about 18 grams, which means that it represents a mole of water, more or less. By Avogadro’s number, it must therefore contain about 6 × 10^23 molecules of H2O. This would appear to provide a definitive answer to our second question. The mass of the cube of ice derives from the mass of the hydrogen and oxygen atoms present in 6 × 10^23 molecules of H2O.

    But, of course, we can go further. We learned from J.J. Thomson, Ernest Rutherford, and Niels Bohr and many other physicists in the early 20th century that all atoms consist of a heavy, central nucleus surrounded by light, orbiting electrons. We subsequently learned that the central nucleus consists of protons and neutrons. The number of protons in the nucleus determines the chemical identity of the element: A hydrogen atom has one proton, an oxygen atom has eight (this is called the atomic number). But the total mass or weight of the nucleus is determined by the total number of protons and neutrons in the nucleus.

    Hydrogen still has only one (its nucleus consists of a single proton—no neutrons). The most common isotope of oxygen has—guess what?—16 (eight protons and eight neutrons). It’s obviously no coincidence that these proton and neutron counts are the same as the relative atomic weights I quoted above.

    If we ignore the light electrons, then we would be tempted to claim that the mass of the cube of ice resides in all the protons and neutrons in the nuclei of its hydrogen and oxygen atoms. Each molecule of H2O contributes 10 protons and eight neutrons, so if there are 6 × 10^23 molecules in the cube and we ignore the small difference in mass between a proton and a neutron, we conclude that the cube contains in total about 18 times this figure, or 108 × 10^23 protons and neutrons.
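    The arithmetic in the last few paragraphs is easy to reproduce. The short Python sketch below simply retraces it; the cube size and the density of ice are the only inputs assumed beyond the numbers already given in the text.

    ```python
    AVOGADRO = 6.022e23       # molecules per mole
    MOLAR_MASS_WATER = 18.0   # grams per mole: 2 x 1 (hydrogen) + 16 (oxygen)

    # A cube of ice a little over an inch (2.7 cm) on a side, as in the text
    side_cm = 2.7
    ice_density = 0.917                 # g/cm^3 (assumed value for ordinary ice)
    mass_g = ice_density * side_cm**3   # ~18 g, i.e. roughly one mole of H2O

    moles = mass_g / MOLAR_MASS_WATER
    molecules = moles * AVOGADRO        # ~6 x 10^23 molecules

    # Each H2O molecule carries 10 protons (2 x 1 + 8) and 8 neutrons (oxygen-16),
    # i.e. 18 nucleons per molecule.
    nucleons = molecules * 18           # ~108 x 10^23 protons and neutrons

    print(f"mass of the cube: {mass_g:.1f} g")
    print(f"molecules of H2O: {molecules:.2e}")
    print(f"nucleons:         {nucleons:.2e}")
    ```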

    So far, so good. But we’re not quite done yet. We now know that protons and neutrons are not elementary particles. They consist of quarks. A proton contains two up quarks and a down quark, a neutron two down quarks and an up quark. And the color force binding the quarks together inside these larger particles is carried by massless gluons.

    Okay, so surely we just keep going. If once again we approximate the masses of the up and down quarks as the same we just multiply by three and turn 108 × 10^23 protons and neutrons into 324 × 10^23 up and down quarks. We conclude that this is where all the mass resides. Yes?

    No. This is where our naïve atomic preconceptions unravel. We can look up the masses of the up and down quarks on the Particle Data Group website. The up and down quarks are so light that their masses can’t be measured precisely and only ranges are quoted. The following are all reported in units of MeV/c^2. In these units the mass of the up quark is given as 2.3 with a range from 1.8 to 3.0. The down quark is a little heavier, 4.8, with a range from 4.5 to 5.3. Compare these with the mass of the electron, about 0.51 measured in the same units.

    Now comes the shock. In the same units of MeV/c^2 the proton mass is 938.3, the neutron 939.6. The combination of two up quarks and a down quark gives us only 9.4, or just 1 percent of the mass of the proton. The combination of two down quarks and an up quark gives us only 11.9, or just 1.3 percent of the mass of the neutron. About 99 percent of the masses of the proton and neutron seem to be unaccounted for. What’s gone wrong?
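    The shortfall is easy to verify with the central values quoted above; the snippet below is just that arithmetic, using the figures as quoted in the text and ignoring the uncertainty ranges.

    ```python
    # Masses in MeV/c^2, as quoted above
    m_up, m_down = 2.3, 4.8
    m_proton, m_neutron = 938.3, 939.6

    proton_quarks = 2 * m_up + m_down    # uud -> 9.4
    neutron_quarks = 2 * m_down + m_up   # udd -> 11.9

    print(f"quarks in a proton:  {proton_quarks:.1f} MeV/c^2 "
          f"({100 * proton_quarks / m_proton:.1f}% of the proton mass)")
    print(f"quarks in a neutron: {neutron_quarks:.1f} MeV/c^2 "
          f"({100 * neutron_quarks / m_neutron:.1f}% of the neutron mass)")
    # Roughly 99 percent of each nucleon's mass is not accounted for by quark masses.
    ```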

    To answer this question, we need to recognize what we’re dealing with. Quarks are not self-contained “particles” of the kind that the Greeks or the mechanical philosophers might have imagined. They are quantum wave-particles; fundamental vibrations or fluctuations of elementary quantum fields. The up and down quarks are only a few times heavier than the electron, and we’ve demonstrated the electron’s wave-particle nature in countless laboratory experiments. We need to prepare ourselves for some odd, if not downright bizarre behavior.

    And let’s not forget the massless gluons. Or special relativity, and E = mc^2. Or the difference between “bare” and “dressed” mass. And, last but not least, let’s not forget the role of the Higgs field in the “origin” of the mass of all elementary particles. To try to understand what’s going on inside a proton or neutron we need to reach for quantum chromodynamics, the quantum field theory of the color force between quarks.

    icedmocha / Shutterstock

    Quarks and gluons possess color “charge.” Just what is this, exactly? We have no way of really knowing. We do know that color is a property of quarks and gluons and there are three types, which physicists have chosen to call red, green, and blue. But, just as nobody has ever “seen” an isolated quark or gluon, so more or less by definition nobody has ever seen a naked color charge. In fact, quantum chromodynamics (QCD) suggests that if a color charge could be exposed like this it would have a near-infinite energy. Aristotle’s maxim was that “nature abhors a vacuum.” Today we might say: “nature abhors a naked color charge.”

    So, what would happen if we could somehow create an isolated quark with a naked color charge? Its energy would go up through the roof, more than enough to conjure virtual gluons out of “empty” space. Just as the electron moving through its own self-generated electromagnetic field gathers a covering of virtual photons, so the exposed quark gathers a covering of virtual gluons. Unlike photons, the gluons themselves carry color charge and they are able to reduce the energy by, in part, masking the exposed color charge. Think of it this way: The naked quark is acutely embarrassed, and it quickly dresses itself with a covering of gluons.

    This isn’t enough, however. The energy is high enough to produce not only virtual particles (like a kind of background noise or hiss), but elementary particles, too. In the scramble to cover the exposed color charge, an anti-quark is produced which pairs with the naked quark to form a meson. A quark is never—but never—seen without a chaperone.

    But this still doesn’t do it. To cover the color charge completely we would need to put the anti-quark in precisely the same place at precisely the same time as the quark. Heisenberg’s uncertainty principle won’t let nature pin down the quark and anti-quark in this way. Remember that a precise position implies an infinite uncertainty in momentum, and a precise moment in time implies an infinite uncertainty in energy. Nature has no choice but to settle for a compromise. It can’t cover the color charge completely but it can mask it with the anti-quark and the virtual gluons. The energy is at least reduced to a manageable level.

    This kind of thing also goes on inside the proton and neutron. Within the confines of their host particles, the three quarks rattle around relatively freely. But, once again, their color charges must be covered, or at least the energy of the exposed charges must be reduced. Each quark produces a blizzard of virtual gluons that pass back and forth between them, together with quark–anti-quark pairs. Physicists sometimes call the three quarks that make up a proton or a neutron “valence” quarks, as there’s enough energy inside these particles for a further sea of quark–anti-quark pairs to form. The valence quarks are not the only quarks inside these particles.

    What this means is that the mass of the proton and neutron can be traced largely to the energy of the gluons and the sea of quark–anti-quark pairs that are conjured from the color field.

    How do we know? Well, it must be admitted that it is actually really rather difficult to perform calculations using QCD. The color force is extremely strong, and the corresponding energies of color-force interactions are therefore very high. Remember that the gluons also carry color charge, so everything interacts with everything else. Virtually anything can happen, and keeping track of all the possible virtual and elementary-particle permutations is very demanding.

    This means that although the equations of QCD can be written down in a relatively straightforward manner, they cannot be solved analytically, on paper. Also, the mathematical sleight-of-hand used so successfully in QED no longer applies—because the energies of the interactions are so high we can’t apply the techniques of renormalization. Physicists have had no choice but to solve the equations on a computer instead.

    Considerable progress was made with a version of QCD called “QCD-lite.” This version considered only massless gluons and up and down quarks, and further assumed that the quarks themselves are also massless (so, literally, “lite”). Calculations based on these approximations yielded a proton mass that was found to be just 10 percent lighter than the measured value.

    Let’s stop to think about that for a bit. A simplified version of QCD in which we assume that no particles have mass to start with nevertheless predicts a mass for the proton that is 90 percent right. The conclusion is quite startling. Most of the mass of the proton comes from the energy of the interactions of its constituent quarks and gluons.

    John Wheeler used the phrase “mass without mass” to describe the effects of superpositions of gravitational waves which could concentrate and localize energy such that a black hole is created. If this were to happen, it would mean that a black hole—the ultimate manifestation of super-high-density matter—had been created not from the matter in a collapsing star but from fluctuations in spacetime. What Wheeler really meant was that this would be a case of creating a black hole (mass) from gravitational energy.

    But Wheeler’s phrase is more than appropriate here. Frank Wilczek, one of the architects of QCD, used it in connection with his discussion of the results of the QCD-lite calculations. If much of the mass of a proton and neutron comes from the energy of interactions taking place inside these particles, then this is indeed “mass without mass,” meaning that we get the behavior we tend to ascribe to mass without the need for mass as a property.

    Does this sound familiar? Recall that in Einstein’s seminal addendum to his 1905 paper on special relativity the equation he derived is actually m = E/c^2. This is the great insight (not E = mc^2). And Einstein was surely prescient when he wrote: “the mass of a body is a measure of its energy content.”[1] Indeed, it is. In his book The Lightness of Being, Wilczek wrote:[2]

    “If the body is a human body, whose mass overwhelmingly arises from the protons and neutrons it contains, the answer is now clear and decisive. The inertia of that body, with 95 percent accuracy, is its energy content.”

    In the fission of a U-235 nucleus, some of the energy of the color fields inside its protons and neutrons is released, with potentially explosive consequences. In the proton–proton chain involving the fusion of four protons, the conversion of two up quarks into two down quarks, forming two neutrons in the process, results in the release of a little excess energy from its color fields. Mass does not convert to energy. Energy is instead passed from one kind of quantum field to another.

    Where does this leave us? We’ve certainly come a long way since the ancient Greek atomists speculated about the nature of material substance, 2,500 years ago. But for much of this time we’ve held to the conviction that matter is a fundamental part of our physical universe. We’ve been convinced that it is matter that has energy. And, although matter may be reducible to microscopic constituents, for a long time we believed that these would still be recognizable as matter—they would still possess the primary quality of mass.

    Modern physics teaches us something rather different, and deeply counter-intuitive. As we worked our way ever inward—matter into atoms, atoms into sub-atomic particles, sub-atomic particles into quantum fields and forces—we lost sight of matter completely. Matter lost its tangibility. It lost its primacy as mass became a secondary quality, the result of interactions between intangible quantum fields. What we recognize as mass is a behavior of these quantum fields; it is not a property that belongs or is necessarily intrinsic to them.

    Despite the fact that our physical world is filled with hard and heavy things, it is instead the energy of quantum fields that reigns supreme. Mass becomes simply a physical manifestation of that energy, rather than the other way around.

    This is conceptually quite shocking, but at the same time extraordinarily appealing. The great unifying feature of the universe is the energy of quantum fields, not hard, impenetrable atoms. Perhaps this is not quite the dream that philosophers might have held fast to, but a dream nevertheless.

    References

    1. Einstein, A. Does the inertia of a body depend upon its energy-content? Annalen der Physik 18 (1905).

    2. Wilczek, F. The Lightness of Being. Basic Books, New York, NY (2008).

    Photocollage credits: Physicsworld.com; Thatree Thitivongvaroon / Getty Images

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 8:13 am on September 3, 2017
    Tags: Nautilus, Neutron star mergers are the largest hadron colliders ever conceived, What the Rumored Neutron Star Merger Might Teach Us

    From Nautilus: “What the Rumored Neutron Star Merger Might Teach Us” 

    Nautilus


    Aug 29, 2017
    Dan Garisto

    In a sense, neutron star mergers are the largest hadron colliders ever conceived. Image by NASA Goddard Space Flight Center / Flickr

    This month, before LIGO, the Laser Interferometer Gravitational Wave Observatory, and its European counterpart Virgo were to close down for a year of upgrades, they jointly surveyed the skies.


    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project


    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    It was a small observational window—the 1st to the 25th—but that may have been enough: A rumor that LIGO has detected another gravitational wave—the fourth in two years—is making the rounds. But this time, there’s a twist: The signal might have been caused by the merger of two neutron stars instead of black holes.

    If the rumor holds true, it would be an astonishingly lucky detection. To get a sense of the moment, Nautilus spoke to David Radice, a postdoctoral researcher at Princeton who simulates neutron star mergers, “one of LIGO’s main targets,” he says.

    This potential binary neutron star merger sighting reminds me of when biologists think they’ve discovered a new species. How would you describe it?

    I do agree that this is the first time something like this has been seen.

    For me, a nice analogy is one of particle colliders. In a sense, neutron star mergers are the largest hadron colliders ever conceived. Instead of smashing a few nucleons, it’s like smashing 10^60 of them. So by looking at the aftermath, we can learn a lot about fundamental physics. There is a lot that can happen when these stars collide and I don’t think we have a full knowledge of all the possibilities. I think we’ll learn a lot and see new things.

    What would it mean if they were detecting a neutron star binary merger?

    I expected this neutron star merger to be detected further in the future—the possibility that this merger has been detected earlier suggests that the rate of these events is higher than we thought. There is maybe also a counterpart—an electromagnetic wave. There are many things that you can only really do with an electromagnetic counterpart. For example, even when we have, in the far future, five detectors worldwide, we will not be able to pinpoint the exact location of the source with the precision to say: “OK, this is the host galaxy.”

    Well, if you have an electromagnetic counterpart, especially in the optical region, you can really pinpoint a galaxy and say, “This merger happened in this galaxy that has these properties.”

    What makes a neutron star binary merger different from a black hole binary merger?

    One of the main things is that in a black hole binary merger, you’re just looking at the space-time effects. In this case we are looking at this extremely dense matter. There are a lot of things that you can hope to learn about neutron star mergers. We’re looking at them for a source of gamma ray bursts, or as the origin of heavy elements, or as a way to learn about physics of very high density matter.

    One idea that has been around now for a few years is that many of the heavy elements—elements, for example, like platinum or gold—may actually be produced in neutron star mergers. Material is ejected, and because of nuclear processes, it will produce these heavy elements that are otherwise difficult to produce in normal stars.

    You’ve created visual simulations of neutron star mergers, like the one below. How much power is required to run them?

    It’s publicly available—anyone can download the code and do simulations similar to those…but you need to run them on a supercomputer. It typically takes weeks on thousands of processors, but it can tell you a lot about these mergers. Now the two detectors, LIGO and Virgo, are expected to shut down and go through a series of upgrades. When they come back online, their sensitivity will be significantly boosted so we can see much farther out and learn more about each event.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 11:30 am on August 6, 2017
    Tags: and Why Does It Seem to Flow?, China to launch world’s first ‘cold’ atomic clock in space ... and it’ll stay accurate for a billion years., Nautilus, Where Did Time Come From

    From Nautilus: “Where Did Time Come From, and Why Does It Seem to Flow?” 

    Nautilus


    Jul 18, 2017
    John Steele

    We say a river flows because it moves through space with respect to time. But time can’t move with respect to time—time is time. Image by violscraper / Flickr.


    NASA Deep Space Atomic Clock

    NIST-F2 atomic clock operated by America’s National Institute of Standards and Technology in Boulder, Colorado.

    China to launch world’s first ‘cold’ atomic clock in space … and it’ll stay accurate for a billion years.

    Paul Davies has a lot on his mind—or perhaps more accurate to say in his mind. A physicist at Arizona State University, he does research on a wide range of topics, from the abstract fields of theoretical physics and cosmology to the more concrete realm of astrobiology, the study of life in places beyond Earth. Nautilus sat down for a chat with Davies, and the discussion naturally drifted to the subject of time, a long-standing research interest of his. Here is a partial transcript of the interview, edited lightly for length and clarity.

    Is the flow of time real or an illusion?

    The flow of time is an illusion, and I don’t know very many scientists and philosophers who would disagree with that, to be perfectly honest. The reason that it is an illusion is when you stop to think, what does it even mean that time is flowing? When we say something flows like a river, what you mean is that an element of the river at one moment is in a different place than it was at an earlier moment. In other words, it moves with respect to time. But time can’t move with respect to time—time is time. A lot of people make the mistake of thinking that the claim that time does not flow means that there is no time, that time does not exist. That’s nonsense. Time of course exists. We measure it with clocks. Clocks don’t measure the flow of time, they measure intervals of time. Of course there are intervals of time between different events; that’s what clocks measure.

    So where does this impression of flow come from?

    Well, I like to give an analogy. Suppose I stand up, twirl around a few times, and stop. Then I have the overwhelming impression that the entire universe is rotating. I feel it to be rotating—of course I know it’s not. In the same way, I feel time is flowing, but of course I know it’s not. And presumably the explanation for this illusion has to do with something up here [in your head] and is connected with memory I guess—laying down of memories and so on. So it’s a feeling we have, but it’s not a property of time itself.

    And the other thing people contemplate: They think denying the flow of time is denying time asymmetry of the world. Of course events in the world follow a directional sequence. Drop an egg on the floor and it breaks. You don’t see eggs assembling themselves. Buildings fall down after earthquakes; they don’t rise up from heaps of rubble. [There are] many, many examples in daily life of the asymmetry of the world in time; that’s a property of the world. It’s not a property of time itself, and the explanation for that is to be sought in the very early universe and its initial conditions. It’s a whole different and perfectly respectable subject.

    Is time fundamental to the Universe?

    Time and space are the framework in which we formulate all of our current theories of the universe, but there is some question as to whether these might be emergent or secondary qualities of the universe. It could be that fundamentally the laws of the universe are formulated in terms of some sort of pre-space and time, and that space-time comes out of something more fundamental.

    Now obviously in daily life we experience a three-dimensional world and one dimension of time. But back in the Big Bang—we don’t really understand exactly how the universe was born in the Big Bang, but we think that quantum physics had something to do with it—it may be that this notion of what we would call a classical space-time, where everything seems to be sort of well-defined, maybe that was all closed out. And so maybe not just the world of matter and energy, but even space-time itself is a product of the special early stage of the universe. We don’t know that. That’s work under investigation.

    So time could be emergent?

    This dichotomy between space-time being emergent, a secondary quality—that something comes out of something more primitive, or something that is at the rock bottom of our description of nature—has been floating around since before my career. John Wheeler believed in and wrote about this in the 1950s—that there might be some pre-geometry, that would give rise to geometry just like atoms give rise to the continuum of elastic bodies—and people play around with that.

    The problem is that we don’t have any sort of experimental hands on that. You can dream up mathematical models that do this for you, but testing them looks to be pretty hopeless. I think the reason for that is that most people feel that if there is anything funny sort of underpinning space and time, any departure from our notion of a continuous space and time, that probably it would manifest itself only at the so-called Planck scale, which is [20 orders of magnitude] smaller than an atomic nucleus, and our best instruments at the moment are probing scales which are many orders of magnitude above that. It’s very hard to see how we could get at anything at the Planck scale in a controllable way.
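    The bracketed “20 orders of magnitude” is easy to confirm with two round numbers. The sketch below uses the standard Planck length and a one-femtometer nuclear scale; both inputs are my assumptions rather than values given in the interview.

    ```python
    import math

    PLANCK_LENGTH = 1.616e-35  # meters
    NUCLEUS_SIZE = 1e-15       # meters; roughly one femtometer (assumed typical scale)

    orders = math.log10(NUCLEUS_SIZE / PLANCK_LENGTH)
    print(f"The Planck scale sits ~{orders:.0f} orders of magnitude below nuclear sizes")
    # Prints ~20, matching the bracketed figure in the interview.
    ```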

    If multiple universes exist, do they have a common clock?

    The inter-comparison of time between different observers and different places is a delicate business even within one universe. When you talk about what is the rate of a clock, say, near the surface of a black hole, it’s going to be quite different from the rate of a clock here on Earth. So there isn’t even a common time in the entire universe.

    But now if we have a multiverse with other universes, whether each one in a sense comes with its own time—you can only do an inter-comparison between the two if there was some way of sending signals from one to the other. It depends on your multiverse model. There are many on offer, but on the one that cosmologists often talk about—where you have bubbles appearing in a sort of an inflating superstructure—then there’s no direct way of comparing a clock rate in one bubble from clock rates in another bubble.

    What do you think are the most exciting recent advances in understanding time?

    I’m particularly drawn to the work that is done in the lab on perception of time, because I think that has the ability to make rapid advances in the coming years. For example, there are famous experiments in which people apparently make free decisions at certain moments and yet it’s found that the decision was actually made a little bit earlier, but their own perception of time and their actions within time have been sort of edited after the event. When we observe the world, what we see is an apparently consistent and smooth narrative, but actually the brain is just being bombarded with sense data from different senses and puts all this together. It integrates it and then presents a consistent narrative, as it were, to the conscious self. And so we have this impression that we’re in charge and everything is all smoothly put together. But as a matter of fact, most of this is a narrative that’s recreated after the event.

    Where it’s particularly striking, of course, is when people respond appropriately much faster than the speed of thought. You need only think of a piano player or a tennis player to see that the impression that they are making a conscious decision—“that ball is coming in this direction; I’d better move over here and hit it”—couldn’t possibly be right. The time it takes for the signals to reach the brain and then travel back out through the motor system to produce the response is simply too long. And yet they still have this overwhelming impression that they’re observing the world in real time and are in control. I think all of this is pretty fascinating stuff.

    In terms of fundamental physics, is there anything especially new about time? I think the answer is not really. There are new ideas out there. I think there are still fundamental problems; we’ve talked about one of them: Is time an emergent property or a fundamental property? And the ultimate origin of the arrow of time, which is the asymmetry of the world in time, is still a bit contentious. We know we have to trace it back to the Big Bang, but there are still different issues swirling around there that we haven’t completely resolved. But these are somewhat airy-fairy philosophical and theoretical issues; they don’t yet bear on the practical measurement of time, nor do they expose anything new about its nature.

    Then of course we’re always looking to our experimental colleagues to improve time measurements. At some stage these will become so good that we’ll no doubt see some peculiar effects showing up. There is still an outstanding fundamental issue: although the laws of physics are symmetric in time for the most part, there is one set of processes, having to do with the weak interaction, in which this time-reversal symmetry apparently breaks down by a small amount. It seems to play a crucial role, yet exactly how it fits into the broader picture of the universe is still something to be played out. So there are still experiments to be done in particle physics that might disclose this time-reversal asymmetry in the weak interaction, and how it fits in with the arrow of time.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 7:46 am on June 15, 2017 Permalink | Reply
    Tags: , , , Nautilus, When Neurology Becomes Theology, Wilder Penfield   

    From Nautilus: “When Neurology Becomes Theology” 

    Nautilus

    Nautilus

    June 15, 2017
    Robert A. Burton

    A neurologist’s perspective on research into consciousness.

    Early in my neurology residency, a 50-year-old woman insisted on being hospitalized for protection from the FBI spying on her via the TV set in her bedroom. The woman’s physical examination, lab tests, EEGs, scans, and formal neuropsychological testing revealed nothing unusual. Other than being visibly terrified of the TV monitor in the ward solarium, she had no psychiatric symptoms or past psychiatric history. Neither did anyone else in her family, though she had no recollection of her mother, who had died when the patient was only 2.

    The psychiatry consultant favored the early childhood loss of her mother as a potential cause of a mid-life major depressive reaction. The attending neurologist was suspicious of an as yet undetectable degenerative brain disease, though he couldn’t be more specific. We residents were equally divided between the two possibilities.

    Fortunately an intern, a super-sleuth more interested in data than speculation, was able to locate her parents’ death certificates. The patient’s mother had died in a state hospital of Huntington’s disease—a genetic degenerative brain disease. (At that time such illnesses were often kept secret from the rest of the family.) Case solved. The patient was a textbook example of psychotic behavior preceding the cognitive decline and movement disorders characteristic of Huntington’s disease.

    1
    WHERE’S THE MIND?: Wilder Penfield spent decades studying how brains produce the experience of consciousness, but concluded “There is no good evidence, in spite of new methods, that the brain alone can carry out the work that the mind does.” Montreal Neurological Institute

    As a fledgling neurologist, I’d already seen a wide variety of strange mental states arising out of physical diseases. But on this particular day, I couldn’t wrap my mind around a gene mutation generating an isolated feeling of being spied on by the FBI. How could a repeated stretch of DNA, coding for an excess of a single amino acid in a protein, be transformed into paranoia?

    Though I didn’t know it at the time, I had run headlong into the “hard problem of consciousness,” the enigma of how physical brain mechanisms create purely subjective mental states. In the subsequent 50 years, what was once fodder for neurologists’ late night speculations has mushroomed into the pre-eminent question in the philosophy of mind. As an intellectual challenge, there is no equal to wondering how subatomic particles, mindless cells, synapses, and neurotransmitters create the experience of red, the beauty of a sunset, the euphoria of lust, the transcendence of music, or in this case, intractable paranoia.

    Neuroscientists have long known which general areas of the brain, and which of their connections, are necessary for the state of consciousness. Having observed the effects of both localized and generalized brain insults such as anoxia and anesthesia, none of us seriously doubts that consciousness arises from discrete brain mechanisms. Because these mechanisms are consistent with general biological principles, it’s likely that, with further technical advances, we will uncover how the brain generates consciousness.

    However, such knowledge doesn’t translate into an explanation for the what of consciousness—that state of awareness of one’s surroundings and self, the experience of one’s feelings and thoughts. Imagine a hypothetical where you could mix nine parts oxytocin, 17 parts serotonin, and 11 parts dopamine into a solution that would make 100 percent of people feel a sense of infatuation 100 percent of the time. Knowing the precise chemical trigger for the sensation of infatuation (the how) tells you little about the nature of the resulting feeling (the what).

    Over my career, I’ve gathered a neurologist’s working knowledge of the physiology of sensations. I realize neuroscientists have identified neural correlates for emotional responses. Yet I remain ignorant of what sensations and responses are at the level of experience. I know the brain creates a sense of self, but that tells me little about the nature of the sensation of “I-ness.” If the self is a brain-generated construct, I’m still left wondering who or what is experiencing the illusion of being me. Similarly, if the feeling of agency is an illusion, as some philosophers of mind insist, that doesn’t help me understand the essence of my experience of willfully typing this sentence.

    Slowly, and with much resistance, it’s dawned on me that the pursuit of the nature of consciousness, no matter how cleverly couched in scientific language, is more like metaphysics and theology. It is driven by the same urges that made us dream up gods and demons, souls and afterlife. The human urge to understand ourselves is eternal, and how we frame our musings always depends upon prevailing cultural mythology. In a scientific era, we should expect philosophical and theological ruminations to be couched in the language of physical processes. We argue by inference and analogy, dragging explanations from other areas of science such as quantum physics, complexity, information theory, and math into a subjective domain. Theories of consciousness are how we wish to see ourselves in the world, and how we wish the world might be.

    My first hint of the interaction between religious feelings and theories of consciousness came from Montreal Neurological Institute neurosurgeon Wilder Penfield’s 1975 book, Mystery of the Mind: A Critical Study of Consciousness and the Human Brain. One of the great men of modern neuroscience, Penfield spent several decades stimulating the brains of conscious, non-anesthetized patients and noting their descriptions of the resulting mental states, including long-lost bits of memory, dreamy states, déjà vu, feelings of strangeness, and otherworldliness. What was most startling about Penfield’s work was his demonstration that sensations that normally qualify how we feel about our thoughts can occur in the absence of any conscious thought. For example, he could elicit feelings of familiarity and strangeness without the patient thinking of anything to which the feeling might apply. His ability to spontaneously evoke pure mental states was proof positive that these states arise from basic brain mechanisms.

    And yet, here’s Penfield’s conclusion to his end-of-career magnum opus on the nature of the mind: “There is no good evidence, in spite of new methods, that the brain alone can carry out the work that the mind does.” How is this possible? How could a man who had single-handedly elicited so much of the fabric of subjective states of mind decide that there was something to the mind beyond what the brain did?

    In the last paragraph of his book, Penfield explains, “In ordinary conversation, the ‘mind’ and ‘the spirit of man’ are taken to be the same. I was brought up in a Christian family and I have always believed, since I first considered the matter … that there is a grand design in which all conscious individuals play a role … Since a final conclusion … is not likely to come before the youngest reader of this book dies, it behooves each one of us to adopt for himself a personal assumption (belief, religion), and a way of life without waiting for a final word from science on the nature of man’s mind.”

    Front and center is Penfield’s observation that, in ordinary conversation, the mind is synonymous with the spirit of man. Further, he admits that, in the absence of scientific evidence, all opinions about the mind are in the realm of belief and religion. If Penfield is even partially correct, we shouldn’t be surprised that any theory of the “what” of consciousness would be either intentionally or subliminally infused with one’s metaphysics and religious beliefs.

    To see how this might work, take a page from Penfield’s brain stimulation studies where he demonstrates that the mental sensations of consciousness can occur independently from any thought that they seem to qualify. For instance, conceptualize thought as a mental calculation and a visceral sense of the calculation. If you add 3 + 3, you compute 6, and simultaneously have the feeling that 6 is the correct answer. Thoughts feel right, wrong, strange, beautiful, wondrous, reasonable, far-fetched, brilliant, or stupid. Collectively these widely disparate mental sensations constitute much of the contents of consciousness. But we have no control over the mental sensations that color our thoughts. No one can will a sense of understanding or the joy of an a-ha! moment. We don’t tell ourselves to make an idea feel appealing; it just is. Yet these sensations determine the direction of our thoughts. If a thought feels irrelevant, we ignore it. If it feels promising, we pursue it. Our lines of reasoning are predicated upon how thoughts feel.


    Shortly after reading Penfield’s book, I had the good fortune to spend a weekend with theoretical physicist David Bohm. Bohm took a great deal of time arguing for a deeper and interconnected hidden reality (his theory of implicate order). Though I had difficulty following his quantum theory-based explanations, I vividly remember him advising me that the present-day scientific approach of studying parts rather than the whole could never lead to any final answers about the nature of consciousness. According to him, all is inseparable and no part can be examined in isolation.

    In an interview in which he was asked to justify his unorthodox view of scientific method, Bohm responded, “My own interest in science is not entirely separate from what is behind an interest in religion or in philosophy—that is to understand the whole of the universe, the whole of matter, and how we originate.” If we were reading Bohm’s argument as a literary text, we would factor in his Jewish upbringing, his tragic mistreatment during the McCarthy era, the lack of general acceptance of his idiosyncratic take on quantum physics, his bouts of depression, and the close relationship between his scientific and religious interests.

    Many of today’s myriad explanations for how consciousness arises are compelling. But once we enter the arena of the nature of consciousness, there are no outright winners.

    Christof Koch, the chief scientific officer of the Allen Institute for Brain Science in Seattle, explains that a “system is conscious if there’s a certain type of complexity. And we live in a universe where certain systems have consciousness. It’s inherent in the design of the universe.”

    According to Daniel Dennett, professor of philosophy at Tufts University and author of Consciousness Explained and many other books on science and philosophy, consciousness is nothing more than a “user-illusion” arising out of underlying brain mechanisms. He argues that believing consciousness plays a major role in our thoughts and actions is the biological equivalent of being duped into believing that the icons of a smartphone app are doing the work of the underlying computer programs represented by the icons. He feels no need to postulate any additional physical component to explain the intrinsic qualities of our subjective experience.

    Meanwhile, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology, tells us consciousness “is how information feels when it is being processed in certain very complex ways.” He writes that “external reality is completely described by mathematics. If everything is mathematical, then, in principle, everything is understandable.” Rudolph E. Tanzi, a professor of neurology at Harvard University, admits, “To me the primal basis of existence is awareness and everything including ourselves and our brains are products of awareness.” He adds, “As a responsible scientist, one hypothesis which should be tested is that memory is stored outside the brain in a sea of consciousness.”

    Each argument, taken in isolation, seems logical, internally consistent, yet is at odds with the others. For me, the thread that connects these disparate viewpoints isn’t logic and evidence, but their overall intent. Belief without evidence is Richard Dawkins’ idea of faith. “Faith is belief in spite of, even perhaps because of, the lack of evidence.” These arguments are best read as differing expressions of personal faith.

    For his part, Dennett is an outspoken atheist and fervent critic of the excesses of religion. “I have absolutely no doubt that secular and scientific vision is right and deserves to be endorsed by everybody, and as we have seen over the last few thousand years, superstitious and religious doctrines will just have to give way.” As the basic premise of atheism is to deny that for which there is no objective evidence, he is forced to avoid directly considering the nature of purely subjective phenomena. Instead he settles on describing the contents of consciousness as illusions, resulting in the circularity of using the definition of mental states (illusions) to describe the general nature of these states.

    The problem compounds itself. Dennett is fond of pointing out (correctly) that there is no physical manifestation of “I,” no ghost in the machine or little homunculus that witnesses and experiences the goings on in the brain. If so, we’re still faced with asking what/who, if anything, is experiencing consciousness? All roads lead back to the hard problem of consciousness.

    Though tacitly agreeing with those who contend that we don’t yet understand the nature of consciousness, Dennett argues that we are making progress. “We haven’t yet succeeded in fully conceiving how meaning could exist in a material world … or how consciousness works, but we’ve made progress: The questions we’re posing and addressing now are better than the questions of yesteryear. We’re hot on the trail of the answers.”

    By contrast, Koch is upfront in correlating his religious upbringing with his life-long pursuit of the nature of consciousness. Raised as a Catholic, he describes being torn between two contradictory views of the world—the Sunday view reflected by his family and church, and the weekday view as reflected in his work as a scientist (the sacred and the profane).

    In an interview with Nautilus, Koch said, “For reasons I don’t understand and don’t comprehend, I find myself in a universe that had to become conscious, reflecting upon itself.” He added, “The God I now believe in is closer to the God of Spinoza than it is to Michelangelo’s paintings or the God of the Old Testament, a god that resides in this mystical notion of all-nothingness.” Koch admitted, “I’m not a mystic. I’m a scientist, but this is a feeling I have.” In short, Koch exemplifies a truth seldom admitted—that mental states such as a mystical feeling shape how one thinks about and goes about studying the universe, including mental states such as consciousness.

    Both Dennett and Koch have spent a lifetime considering the problem of consciousness; though contradictory, each point of view has a separate appeal. And I appreciate much of Dennett and Koch’s explorations in the same way that I can mull over Aquinas and Spinoza without necessarily agreeing with them. One can enjoy the pursuit without believing in or expecting answers. After all these years without any personal progress, I remain moved by the essential nature of the quest, even if it translates into Sisyphus endlessly pushing his rock up the hill.

    The spectacular advances of modern science have generated a mindset that makes potential limits to scientific inquiry intuitively difficult to grasp. Again and again we are given examples of seemingly insurmountable problems that yield to previously unimaginable answers. Just as some physicists believe we will one day have a Theory of Everything, many cognitive scientists believe that consciousness, like any physical property, can be unraveled. Overlooked in this optimism is the ultimate barrier: The nature of consciousness is in the mind of the beholder, not in the eye of the observer.

    It is likely that science will tell us how consciousness occurs. But that’s it. Although the what of consciousness is beyond direct inquiry, the urge to explain will persist. It is who we are and what we do.

    See the full article here.


     
  • richardmitnick 9:30 am on June 8, 2017 Permalink | Reply
    Tags: , , , , , Ludwig Boltzmann, Microstates, Nautilus, , , The Crisis of the Multiverse   

    From Nautilus: “The Crisis of the Multiverse” 

    Nautilus

    Nautilus

    June 8, 2017
    Ben Freivogel

    Physicists have always hoped that once we understood the fundamental laws of physics, they would make unambiguous predictions for physical quantities. We imagined that the underlying physical laws would explain why the mass of the Higgs particle must be 125 gigaelectron-volts, as was recently discovered, and not any other value, and also make predictions for new particles that are yet to be discovered.

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    For example, we would like to predict what kind of particles make up the dark matter.

    These hopes now appear to have been hopelessly naïve. Our most promising fundamental theory, string theory, does not make unique predictions. It seems to contain a vast landscape of solutions, or “vacua,” each with its own values of the observable physical constants. The vacua are all physically realized within an enormous eternally inflating multiverse.

    Has the theory lost its mooring to observation? If the multiverse is large and diverse enough to contain some regions where dark matter is made out of light particles and other regions where dark matter is made out of heavy particles, how could we possibly predict which one we should see in our own region? And indeed many people have criticized the multiverse concept on just these grounds. If a theory makes no predictions, it ceases to be physics.

    But an important issue tends to go unnoticed in debates over the multiverse. Cosmology has always faced a problem of making predictions. The reason is that all our theories in physics are dynamical: The fundamental physical laws describe what will happen, given what already is. So, whenever we make a prediction in physics, we need to specify what the initial conditions are. How do we do that for the entire universe? What sets the initial initial conditions? This is science’s version of the old philosophical question of First Cause.

    The multiverse offers an answer. It is not the enemy of prediction, but its friend.

    The main idea is to make probabilistic predictions. By calculating what happens frequently and what happens rarely in the multiverse, we can make statistical predictions for what we will observe. This is not a new situation in physics. We understand an ordinary box of gas in the same way. Although we cannot possibly keep track of the motion of all the individual molecules, we can make extremely precise predictions for how the gas as a whole will behave. Our job is to develop a similar statistical understanding of events in the multiverse.

    This understanding could take one of three forms. First, the multiverse, though very large, might be able to explore only a finite number of different states, just like an ordinary box of gas. In this case we know how to make predictions, because after a while the multiverse forgets about the unknown initial conditions. Second, perhaps the multiverse is able to explore an infinite number of different states, in which case it never forgets its initial conditions, and we cannot make predictions unless we know what those conditions are. Finally, the multiverse might explore an infinite number of different states, but the exponential expansion of space effectively erases the initial conditions.

    1
    NEVER ENOUGH TIME: Synchronizing clocks is impossible to do in an infinite universe, which in turn undercuts the ability of physics to make predictions. Matteo Ianeselli / Wikimedia Commons

    In many ways, the first option is the most agreeable to physicists, because it extends our well-established statistical techniques. Unfortunately, the predictions we arrive at disagree violently with observations. The second option is very troubling, because our existing laws are incapable of providing the requisite initial conditions. It is the third possibility that holds the most promise for yielding sensible predictions.

    But this program has encountered severe conceptual obstacles. At root, our problems arise because the multiverse is an infinite expanse of space and time. These infinities lead to paradoxes and puzzles wherever we turn. We will need a revolution in our understanding of physics in order to make sense of the multiverse.

    The first option for making statistical predictions in cosmology goes back to a paper by the Austrian physicist Ludwig Boltzmann in 1895. Although it turns out to be wrong, in its failure we find the roots of our current predicament.

    Boltzmann’s proposal was a bold extrapolation from his work on understanding gases. To specify completely the state of a gas would require specifying the exact position of every molecule. That is impossible. Instead, what we can measure—and would like to make predictions for—is the coarse-grained properties of the box of gas, such as the temperature and the pressure.

    A key simplification allows us to do this. As the molecules bounce around, they will arrange and rearrange themselves in every possible way they can, thus exploring all their possible configurations, or “microstates.” This process will erase the memory of how the gas started out, allowing us to ignore the problem of initial conditions. Since we can’t keep track of where all the molecules are, and anyway their positions change with time, we assume that any microstate is equally likely.

    This gives us a way to calculate how likely it is to find the box in a given coarse-grained state, or “macrostate”: We simply count the fraction of microstates consistent with what we know about the macrostate. So, for example, it is more likely that the gas is spread uniformly throughout the box rather than clumped in one corner, because only very special microstates have all of the gas molecules in one region of the box.
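    Here is a minimal sketch of that counting argument (my own illustration, not from the article), using a toy gas of 50 molecules that can each sit in the left or right half of a box. Nearly even splits account for almost all of the microstates, while “all the molecules in one half” is fantastically rare:

        from math import comb

        N = 50                       # number of molecules in the toy gas
        total = 2 ** N               # each molecule is in the left or right half

        all_in_one_half = 2 * comb(N, N)                          # all left, or all right
        near_even_split = sum(comb(N, k) for k in range(20, 31))  # 20 to 30 on the left

        print("P(all 50 in one half)    =", all_in_one_half / total)   # ~1.8e-15
        print("P(20 to 30 in left half) =", near_even_split / total)   # ~0.88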

    For this procedure to work, the total number of microstates, while very large, must be finite. Otherwise the system will never be able to explore all its states. In a box of gas, this finitude is guaranteed by the uncertainty principle of quantum mechanics. Because the position of each molecule cannot be specified exactly, the gas has only a finite number of distinct configurations.

    Gases that start off clumpy for some reason will spread out, for a simple reason: It is statistically far more likely for their molecules to be uniformly distributed rather than clustered. If the molecules begin in a fairly improbable configuration, they will naturally evolve to a more probable one as they bounce around randomly.

    Yet our intuition about gases must be altered when we consider huge spans of time. If we leave the gas in the box for long enough, it will explore some unusual microstates. Eventually all of the particles will accidentally cluster in one corner of the box.

    With this insight, Boltzmann launched into his cosmological speculations. Our universe is intricately structured, so it is analogous to a gas that clusters in one corner of a box—a state that is far from equilibrium. Cosmologists generally assume it must have begun that way, but Boltzmann pointed out that, over the vastness of the eons, even a chaotic universe will randomly fluctuate into a highly ordered state. Attributing the idea to his assistant, known to history only as “Dr. Schuetz,” Boltzmann wrote:

    “It may be said that the world is so far from thermal equilibrium that we cannot imagine the improbability of such a state. But can we imagine, on the other side, how small a part of the whole universe this world is? Assuming the universe is great enough, the probability that such a small part of it as our world should be in its present state, is no longer small.”

    “If this assumption were correct, our world would return more and more to thermal equilibrium; but because the whole universe is so great, it might be probable that at some future time some other world might deviate as far from thermal equilibrium as our world does at present.”

    It is a compelling idea. What a shame that it is wrong.

    The trouble was first pointed out by the astronomer and physicist Sir Arthur Eddington in 1931, if not earlier. It has to do with what are now called “Boltzmann brains.” Suppose the universe is like a box of gas and, most of the time, is in thermal equilibrium—just a uniform, undifferentiated gruel. Complex structures, including life, arise only when there are weird fluctuations. At these moments, gas assembles into stars, our solar system, and all the rest. There is no step-by-step process that sculpts it. It is like a swirling cloud that, all of a sudden, just so happens to take the shape of a person.

    The problem is a quantitative one. A small fluctuation that makes an ordered structure in a small part of space is far, far more likely than a large fluctuation that forms ordered structures over a huge region of space. In Boltzmann and Schuetz’s theory, it would be far, far more likely to produce our solar system without bothering to make all of the other stars in the universe. Therefore, the theory conflicts with observation: It predicts that typical observers should see a completely blank sky, without stars, when they look up at night.
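    The quantitative point can be stated with the standard Boltzmann-Einstein fluctuation formula (a textbook result, not anything specific to this article): the probability of a spontaneous fluctuation that lowers the entropy of an equilibrium system by ΔS scales as

    \[
    P \propto e^{-\Delta S / k_{B}},
    \]

    so a fluctuation that assembles only a solar system, with its comparatively modest entropy cost, beats one that assembles a whole sky full of stars by a factor that is exponential in the enormous difference between the two entropy costs.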

    Taking this argument to an extreme, the most common type of observer in this theory is one that requires the minimal fluctuation away from equilibrium. We imagine this as an isolated brain that survives just long enough to notice it is about to die: the so-called Boltzmann brain.

    If you take this type of theory seriously, it predicts that we are just some very special Boltzmann brains who have been deluded into thinking that we are observing a vast, homogeneous universe. At the next instant our delusions are extremely likely to be shattered, and we will discover that there are no other stars in the universe. If our state of delusion lasts long enough for this article to appear, you can safely discard the theory.

    What are we to conclude? Evidently, the whole universe is not like a box of gas after all. A crucial assumption in Boltzmann’s argument is that there are only a finite (if very large) number of molecular configurations. This assumption must be incorrect. Otherwise, we would be Boltzmann brains.

    2
    DON’T WAKE ME UP: Hibernation thought-experiments reveal a deep paradox with probability in an infinite multiverse. Twentieth Century Fox-Film Corporation / Photofest

    So, we must seek a new approach to making predictions in cosmology. The second option on our list is that the universe has an infinite number of states available to it. Then the tools that Boltzmann developed are no longer useful in calculating the probability of different things happening.

    But then we’re back to the problem of initial conditions. Unlike a finite box of gas, which forgets about its initial conditions as the molecules scramble themselves, a system with an infinite number of available states cannot forget its initial conditions, because it takes an infinite time to explore all of its available states. To make predictions, we would need a theory of initial conditions. Right now, we don’t have one. Whereas our present theories take the prior state of the universe as an input, a theory of initial conditions would have to give this state as an output. It would thus require a profound shift in the way physicists think.

    The multiverse offers a third way—that is part of its appeal. It allows us to make cosmological predictions in a statistical way within the current theoretical framework of physics. In the multiverse, the volume of space grows indefinitely, all the while producing expanding bubbles with a variety of states inside. Crucially, the predictions do not depend on the initial conditions. The expansion approaches a steady-state behavior, with the high-energy state continually expanding and budding off lower-energy regions. The overall volume of space is growing, and the number of bubbles of every type is growing, but the ratios (and so the probabilities) remain fixed.

    The basic idea of how to make predictions in such a theory is simple. We count how many observers in the multiverse measure a physical quantity to have a given value. The probability of our observing a given outcome equals the proportion of observers in the multiverse who observe that outcome.

    For instance, if 10 percent of observers live in regions of the multiverse where dark matter is made out of light particles (such as axions), while 90 percent of observers live in regions where dark matter is made out of heavy particles (which, counterintuitively, are called WIMPs), then we have a 10 percent chance of discovering that dark matter is made of light particles.

    The very best reason to believe this type of argument is that Steven Weinberg of the University of Texas at Austin used it to successfully predict the value of the cosmological constant a decade before it was observed. The combination of a theoretically convincing motivation with Weinberg’s remarkable success made the multiverse idea attractive enough that a number of researchers, including me, have spent years trying to work it out in detail.

    The major problem we faced is that, since the volume of space grows without bound, the number of observers observing any given thing is infinite, making it difficult to characterize which events are more or less likely to occur. This amounts to an ambiguity in how to characterize the steady-state behavior, known as the measure problem.

    Roughly, the procedure to make predictions goes as follows. We imagine that the universe evolves for a large but finite amount of time and count all of the observations. Then we calculate what happens when the time becomes arbitrarily large. That should tell us the steady-state behavior. The trouble is that there is no unique way to do this, because there is no universal way to define a moment in time. Observers in distant parts of spacetime are too far apart and accelerating away from each other too fast to be able to send signals to each other, so they cannot synchronize their clocks. Mathematically, we can choose many different conceivable ways to synchronize clocks across these large regions of space, and these different choices lead to different predictions for what types of observations are likely or unlikely.

    One prescription for synchronizing clocks tells us that most of the volume will be taken up by the state that expands the fastest. Another tells us that most of the volume will be taken up by the state that decays the slowest. Worse, many of these prescriptions predict that the vast majority of observers are Boltzmann brains. A problem we thought we had eliminated came rushing back in.
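    A toy calculation (my own illustration, not the authors’ analysis) shows how the answer can hinge on the choice of time variable. Take two non-decaying vacua whose volumes grow as exp(3Ht) with different expansion rates H. Counted at equal proper time, the faster-expanding vacuum swallows essentially all of the volume; counted at an equal number of e-folds of local expansion (a “scale-factor” cutoff), the two stay even:

        import math

        H_A, H_B = 2.0, 1.0    # toy expansion rates for two vacua (arbitrary units)
        V0 = 1.0               # both regions start with the same volume

        def volume(H, t):
            """Volume of a region inflating at Hubble rate H for proper time t."""
            return V0 * math.exp(3 * H * t)

        # Cutoff 1: count volume at equal proper time t in both vacua.
        t = 20.0
        frac_proper = volume(H_A, t) / (volume(H_A, t) + volume(H_B, t))

        # Cutoff 2: count volume at an equal number of e-folds N, so t = N / H.
        N = 20.0
        frac_efold = volume(H_A, N / H_A) / (volume(H_A, N / H_A) + volume(H_B, N / H_B))

        print(f"Fraction in vacuum A, proper-time cutoff:  {frac_proper:.6f}")  # -> 1.000000
        print(f"Fraction in vacuum A, scale-factor cutoff: {frac_efold:.6f}")   # -> 0.500000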

    When Don Page at the University of Alberta pointed out the potential problems with Boltzmann brains in a paper in 2006, Raphael Bousso at U.C. Berkeley and I were thrilled to realize that we could turn the problem on its head. We found we could use Boltzmann brains as a tool—a way to decide among differing prescriptions for how to synchronize clocks. Any proposal that predicts that we are Boltzmann brains must perforce be wrong. We were so excited (and worried that someone else would have the same idea) that we wrote our paper in just two days after Page’s paper appeared. Over the course of several years, persistent work by a relatively small group of researchers succeeded in using these types of tests to eliminate many proposals and to form something of a consensus in the field on a nearly unique solution to the measure problem. We felt that we had learned how to tame the frightening infinities of the theory.

    Just when things were looking good, we encountered a conceptual problem that I see no escape from within our current understanding: the end-of-time problem. Put simply, the theory predicts that the universe is on the verge of self-destruction.

    The issue came into focus via a thought experiment suggested by Alan Guth of the Massachusetts Institute of Technology and Vitaly Vanchurin of the University of Minnesota Duluth. This experiment is unusual even by the standards of theoretical physics. Suppose that you flip a coin and do not see the result. Then you are put into a cryogenic freezer. If the coin came up heads, the experimenters wake you up after one year. If the coin came up tails, the experimenters instruct their descendants to wake you up after 50 billion years. Now suppose you have just woken up and have a chance to bet whether you have been asleep for 1 year or 50 billion years. Common sense tells us that the odds for such a bet should be 50/50 if the coin is fair.

    But when we apply our rules for how to do calculations in an eternally expanding universe, we find that you should bet that you only slept for one year. This strange effect occurs because the volume of space is exponentially expanding and never stops. So the number of sleeper experiments beginning at any given time is always increasing. A lot more experiments started a year ago than 50 billion years ago, so most of the people waking up today were asleep for a short time.

    The scenario may sound extreme, even silly. But that’s just because the conditions we are dealing with in cosmology are extreme, involving spans of times and volumes of space that are outside human experience. You can understand the problem by thinking about a simpler scenario that is mathematically identical. Suppose that the population of Earth doubles every 30 years—forever. From time to time, people perform these sleeper experiments, except now the subjects sleep either for 1 year or for 100 years. Suppose that every day 1 percent of the population takes part.

    Now suppose you are just waking up in your cryogenic freezer and are asked to bet how long you were asleep. On the one hand, you might argue that obviously the odds are 50/50. On the other, on any given day, far more people wake up from short naps than from long naps. For example, in the year 2016, sleepers who went to sleep for a short time in 2015 will wake up, as will sleepers who began a long nap in 1916. But since far more people started the experiment in 2015 than in 1916 (always 1 percent of the population), the vast majority of people who wake up in 2016 slept for a short time. So it might be natural to guess that you are waking from a short nap.
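    The cohort counting behind that second intuition is easy to make explicit. A minimal sketch using the article’s numbers (population doubling every 30 years, naps of 1 year or 100 years, with each sleeper’s nap length set by a fair coin):

        # The number of people starting the experiment in a given year grows by a
        # factor of 2**(1/30) annually (the population doubles every 30 years).
        growth = 2 ** (1 / 30)

        # Of the people waking up this year, short-nappers started 1 year ago and
        # long-nappers started 100 years ago; the fair coin cancels out of the ratio.
        short_nappers = growth ** -1      # relative size of last year's cohort
        long_nappers = growth ** -100     # relative size of the 100-years-ago cohort

        frac_short = short_nappers / (short_nappers + long_nappers)
        print(f"Fraction of today's wakers who took the short nap: {frac_short:.3f}")
        # -> about 0.91, even though each individual's nap was decided by a fair coin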

    The fact that two logical lines of argument yield contradictory answers tells us that the problem is not well-defined. It just isn’t a sensible problem to calculate probabilities under the assumption that the human population grows exponentially forever, and indeed it is impossible for the population to grow forever. What is needed in this case is some additional information about how the exponential growth stops.

    Consider two options. In the first, one day no more babies are born, but every sleeper experiment that has begun eventually finishes. In the second, a huge meteor suddenly destroys the planet, terminating all sleeper experiments. You will find that in option one, half of all observers who ever wake up do so from short naps, while in option two, most observers who ever wake up do so from short naps. It’s dangerous to take a long nap in the second option, because you might be killed by a meteor while sleeping. Therefore, when you wake up, it’s reasonable to bet that you most likely took a short nap. Once the theory becomes well-defined by making the total number of people finite, probability questions have unique, sensible answers.

    In eternal expansion, more sleepers wake up from short naps. Bousso, Stefan Leichenauer at Berkeley, Vladimir Rosenhaus at the Kavli Institute for Theoretical Physics, and I pointed out that these strange results have a simple physical interpretation: The reason that more sleepers wake up from short naps is that living in an eternally expanding universe is dangerous, because one can run into the end of time. Once we realized this, it became clear that this end-of-time effect was an inherent characteristic of the recipe we were using to calculate probabilities, and it is there whether or not anyone actually decides to undertake these strange sleeper experiments. In fact, given the parameters that define our universe, we calculated that there is about a 50 percent probability of encountering the end of time in the next 5 billion years.

    To be clear about the conclusion: No one thinks that time suddenly ends in spacetimes like ours, let alone that we should be conducting peculiar hibernation experiments. Instead, the point is that our recipe for calculating probabilities accidentally injected a novel type of catastrophe into the theory. This problem indicates that we are missing major pieces in our understanding of physics over large distances and long times.

    To put it all together: Theoretical and observational evidence suggests that we are living in an enormous, eternally expanding multiverse where the constants of nature vary from place to place. In this context, we can only make statistical predictions.

    If the universe, like a box of gas, can exist in only a finite number of available states, theory predicts that we are Boltzmann brains, which conflicts with observations, not to mention common sense. If, on the contrary, the universe has an infinite number of available states, then our usual statistical techniques are not predictive, and we are stuck. The multiverse appears to offer a middle way. The universe has an infinite number of states available, avoiding the Boltzmann brain problem, yet approaches a steady-state behavior, allowing for a straightforward statistical analysis. But then we still find ourselves making absurd predictions. In order to make any of these three options work, I think we will need a revolutionary advance in our understanding of physics.

    See the full article here.


     
  • richardmitnick 9:08 am on June 8, 2017 Permalink | Reply
    Tags: , , Craig Kaplan, Johnson solids, Nautilus, Polygons, The Impossible Mathematics of the Real World   

    From Nautilus: “The Impossible Mathematics of the Real World” 

    Nautilus

    Nautilus

    June 8, 2017
    Evelyn Lamb

    Using stiff paper and transparent tape, Craig Kaplan assembles a beautiful roundish shape that looks like a Buckminster Fuller creation or a fancy new kind of soccer ball. It consists of four regular dodecagons (12-sided polygons with all angles and sides the same) and 12 decagons (10-sided), with 28 little gaps in the shape of equilateral triangles. There’s just one problem. This figure should be impossible. That set of polygons won’t meet at the vertices. The shape can’t close up.

    Kaplan’s model works only because of the wiggle room you get when you assemble it with paper. The sides can warp a little bit, almost imperceptibly. “The fudge factor that arises just from working in the real world with paper means that things that ought to be impossible actually aren’t,” says Kaplan, a computer scientist at the University of Waterloo in Canada.

    1
    Impossibly real: This shape, which mathematician Craig Kaplan built using paper polygons, is only able to close because of subtle warping of the paper. Craig Kaplan

    It is a new example of an unexpected class of mathematical objects that the American mathematician Norman Johnson stumbled upon in the 1960s. Johnson was working to complete a project started over 2,000 years earlier by Plato: to catalog geometric perfection. Among the infinite variety of three-dimensional shapes, just five can be constructed out of identical regular polygons: the tetrahedron, cube, octahedron, dodecahedron, and icosahedron. If you mix and match polygons, you can form another 13 shapes from regular polygons that meet the same way at every vertex—the Archimedean solids—as well as prisms (two identical polygons connected by squares) and “anti-prisms” (two identical polygons connected by equilateral triangles).

    In 1966 Johnson, then at Michigan State University, found another 92 solids composed only of regular polygons, now called the Johnson solids. And with that, he exhausted all the possibilities, as the Russian mathematician Viktor Zalgaller, then at Leningrad State University, proved a few years later. It is impossible to form any other closed shapes out of regular polygons.

    Yet in completing the inventory of polyhedra, Johnson noticed something odd. He discovered his shapes by building models from cardboard and rubber bands. Because there are relatively few possible polyhedra, he expected that any new ones would quickly reveal themselves. Once he started to put the sides into place, the shape should click together as a matter of necessity. But that didn’t happen. “It wasn’t always obvious, when you assembled a bunch of polygons, that what was assembled was a legitimate figure,” Johnson recalls.

    A model could appear to fit together, but “if you did some calculations, you could see that it didn’t quite stand up,” he says. On closer inspection, what had seemed like a square wasn’t quite a square, or one of the faces didn’t quite lie flat. If you trimmed the faces, they would fit together exactly, but then they’d no longer be exactly regular.

    Intent on enumerating the perfect solids, Johnson didn’t give these near misses much attention. “I sort of set them aside and concentrated on the ones that were valid,” he says. But not only does this niggling near-perfection draw the interest of Kaplan and other math enthusiasts today, it is part of a large class of near-miss mathematics.

    There’s no precise definition of a near miss. There can’t be. A hard and fast rule doesn’t make sense in the wobbly real world. For now, Kaplan relies on a rule of thumb when looking for new near-miss Johnson solids: “the real, mathematical error inherent in the solid is comparable to the practical error that comes from working with real-world materials and your imperfect hands.” In other words, if you succeed in building an impossible polyhedron—if it’s so close to being possible that you can fudge it—then that polyhedron is a near miss. In other parts of mathematics, a near miss is something that is close enough to surprise or fool you, a mathematical joke or prank.

    Some mathematical near misses are, like near-miss Johnson solids, little more than curiosities, while others have deeper significance for mathematics and physics.

    The ancient problems of squaring the circle and doubling the cube both fall under the umbrella of near misses. They look tantalizingly open to solution, but ultimately prove impossible, like a geometric figure that seems as though it must close, but can’t. Some of the compass-and-straight-edge constructions by Leonardo da Vinci and Albrecht Dürer fudged the angles, producing nearly regular pentagons rather than the real thing.

    2
    Shell game: When the top shape is cut up into four pieces and rearranged, a gap appears, due to warping. Wikipedia

    Then there’s the missing-square puzzle. In this one (above), a right triangle is cut up into four pieces. When the pieces are rearranged, a gap appears. Where’d it come from? It’s a near miss. Neither “triangle” is really a triangle. The hypotenuse is not a straight line, but has a little bend where the slope changes from 0.4 in the blue triangle to 0.375 in the red triangle. The defect is almost imperceptible, which is why the illusion is so striking.
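    For the standard 13-by-5 version of the puzzle (my assumption about which variant the figure shows), the mismatch is easy to check: the slopes are 2/5 and 3/8, and the thin sliver enclosed between the bent “hypotenuses” of the two arrangements has exactly the area of the missing square.

        from fractions import Fraction

        blue = (5, 2)    # blue triangle: 5 units across, 2 up  -> slope 0.4
        red = (8, 3)     # red triangle: 8 units across, 3 up   -> slope 0.375

        print(Fraction(blue[1], blue[0]), "vs", Fraction(red[1], red[0]))   # 2/5 vs 3/8

        # Area of the parallelogram spanned by the two hypotenuse vectors
        # (a cross product); this is the sliver that appears or disappears.
        area = abs(blue[0] * red[1] - blue[1] * red[0])
        print("Hidden area:", area)    # -> 1, exactly one unit square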

    A numerical coincidence is perhaps the most useful near miss in daily life: 2^(7/12) is almost equal to 3/2. This near miss is the reason pianos have 12 keys in an octave and the basis for the equal-temperament system in Western music. It strikes a compromise between the two most important musical intervals: an octave (a frequency ratio of 2:1) and a fifth (a ratio of 3:2). It is numerically impossible to subdivide an octave in a way that ensures all the fifths will be perfect. But you can get very close by dividing the octave into 12 equal half-steps, seven of which give you a frequency ratio of 1.498. That’s good enough for most people.
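    A quick check of that coincidence, and of how far each equal-tempered fifth falls from a pure 3:2 fifth:

        fifth_equal = 2 ** (7 / 12)    # seven equal-tempered half-steps
        fifth_just = 3 / 2             # a just (pure) perfect fifth

        print(f"2^(7/12) = {fifth_equal:.6f}")                   # 1.498307
        print(f"error    = {fifth_equal / fifth_just - 1:.6%}")  # about -0.11%, roughly 2 cents flat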

    Sometimes near misses arise within the realm of mathematics, almost as if mathematics is playing a trick on itself. In the episode “Treehouse of Horror VI” of The Simpsons, mathematically inclined viewers may have noticed something surprising: the equation 1782^12 + 1841^12 = 1922^12. It seemed for a moment that the screenwriters had disproved Fermat’s Last Theorem, which states that an equation of the form x^n + y^n = z^n has no integer solution when n is larger than 2. If you punch those numbers into a pocket calculator, the equation seems valid. But if you do the calculation with more precision than most hand calculators can manage, you will find that the twelfth root of the left side of the equation is 1921.999999955867 …, not 1922, and Fermat can rest in peace. It is a striking near miss—off by less than a 10-millionth.
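    Exact integer arithmetic makes the near miss plain; the short check below (my own, not from the article) also reproduces the twelfth root quoted above:

        from decimal import Decimal, getcontext

        lhs = 1782 ** 12 + 1841 ** 12        # exact integers, no rounding
        rhs = 1922 ** 12

        print(rhs - lhs != 0)                # True: a huge nonzero gap, so Fermat survives

        getcontext().prec = 30
        root = (Decimal(lhs).ln() / 12).exp()    # twelfth root of the left-hand side
        print(root)                          # 1921.999999955867..., not 1922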

    But near misses are more than just jokes. “The ones that are the most compelling to me are the ones where they’re potentially a clue that there’s a big story,” says University of California, Riverside mathematician John Baez. That’s the case for a number sometimes called the Ramanujan constant. This number is e^(π√163), which equals approximately 262,537,412,640,768,743.99999999999925—amazingly close to a whole number. A priori, there’s no reason to expect that these three irrational numbers—e, π, and √163—should combine to give anything close to a rational number, let alone a whole number. But there’s a reason they get so close. “It’s not some coincidence we have no understanding of,” Baez says. “It’s a clue to a deep piece of mathematics.” The precise explanation is complicated, but it hinges on the fact that 163 is what is called a Heegner number. Exponentials related to these numbers are nearly integers.
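    Ordinary double-precision floats cannot even see this near-integer, so the check below (a sketch using only Python’s standard library, with π hard-coded to 39 decimal places) uses the decimal module:

        from decimal import Decimal, getcontext

        getcontext().prec = 40
        PI = Decimal("3.141592653589793238462643383279502884197")

        ramanujan = (PI * Decimal(163).sqrt()).exp()    # e**(pi * sqrt(163))
        print(ramanujan)
        # 262537412640768743.99999999999925... : within about 7.5e-13 of an integer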

    Or take the mathematical relationship fancifully known as “Monstrous Moonshine.” The story goes that in 1978 mathematician John McKay made an observation both completely trivial and oddly specific: 196,884 = 196,883 + 1. The first number, 196,884, had come up as a coefficient in an important polynomial called the j-invariant, and 196,883 came up in relation to an enormous mathematical object called the Monster group. Many people probably would have shrugged and moved along, but the observations intrigued some mathematicians, who decided to take a closer look. They uncovered connections between two seemingly unrelated subjects: number theory and the symmetries of the Monster group. These linkages may even have broader, as yet ungrasped, significance for other subjects. The physicist Edward Witten has argued that the Monster group may be related to quantum gravity and the deep structure of spacetime.

    Mathematical near misses show the power and playfulness of the human touch in mathematics. Johnson, Kaplan, and others made their discoveries by trial and error—by exploring, like biologists trudging through the rainforest to look for new species. But with mathematics it can be easier to search systematically. For instance, Jim McNeill, a mathematical hobbyist who collects near misses on his website, and Robert Webb, a computer programmer, have developed software for creating and studying polyhedra.

    Near misses live in the murky boundary between idealistic, unyielding mathematics and our indulgent, practical senses. They invert the logic of approximation. Normally the real world is an imperfect shadow of the Platonic realm. The perfection of the underlying mathematics is lost under realizable conditions. But with near misses, the real world is the perfect shadow of an imperfect realm. An approximation is “a not-right estimate of a right answer,” Kaplan says, whereas “a near-miss is an exact representation of an almost-right answer.”

    In this way, near misses transform the mathematician’s and mathematical physicist’s relationship with the natural world. “I am grateful for the imperfections of the real world because it allows me to achieve a kind of quasi-perfection with objects that I know are intrinsically not perfect,” Kaplan says. “It allows me to overcome the limitations of mathematics because of the beautiful brokenness of reality.”

    See the full article here.


     
  • richardmitnick 9:21 am on May 25, 2017 Permalink | Reply
    Tags: "Unleashing the Power of Synthetic Proteins, , Nautilus, , ,   

    From Nautilus: “Unleashing the Power of Synthetic Proteins” 

    Nautilus

    Nautilus

    March 2017
    David Baker, Baker Lab, U Washington, BOINC Rosetta@home project



    Dr. David Baker


    Rosetta@home project



    The opportunities for the design of synthetic proteins are endless.

    Proteins are the workhorses of all living creatures, fulfilling the instructions of DNA. They occur in a wide variety of complex structures and carry out all the important functions in our body and in all living organisms—digesting food, building tissue, transporting oxygen through the bloodstream, dividing cells, firing neurons, and powering muscles. Remarkably, this versatility comes from different combinations, or sequences, of just 20 amino acid molecules. How these linear sequences fold up into complex structures is just now beginning to be well understood (see box).

    Even more remarkably, nature seems to have made use of only a tiny fraction of the potential protein structures available—and there are many. Therein lies an amazing set of opportunities to design novel proteins with unique structures: synthetic proteins that do not occur in nature, but are made from the same set of naturally occurring amino acids. These synthetic proteins can be “manufactured” by harnessing the genetic machinery of living things, such as bacteria given DNA that specifies the desired amino acid sequence. The ability to create and explore such synthetic proteins with atomic-level accuracy—which we have demonstrated—has the potential to unlock new areas of basic research and to create practical applications in a wide range of fields.

    The design process starts by envisioning a novel structure to solve a particular problem or accomplish a specific function, and then works backwards to identify possible amino acid sequences that can fold up to this structure. The Rosetta protein modelling and design software identifies the most likely candidates—those that fold to the lowest energy state for the desired structure. Those sequences then move from the computer to the lab, where the synthetic protein is created and tested—preferably in partnership with other research teams that bring domain expertise for the type of protein being created.
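    To make that workflow concrete, here is a deliberately simplified sketch (my own illustration; the function names and the scoring stand-in are hypothetical and are not Rosetta’s actual interface). It mirrors the loop described above: envision a target structure, search over amino acid sequences, and keep the candidates with the lowest predicted folding energy before sending them to the lab.

        import random

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 naturally occurring amino acids

        def predicted_energy(sequence, target_structure):
            """Hypothetical stand-in for a physics-based score like the one Rosetta
            computes: lower energy means the sequence is more likely to fold into
            the target structure. Here it is just a deterministic placeholder."""
            rng = random.Random(hash((sequence, target_structure)))
            return rng.uniform(-100.0, 0.0)

        def design(target_structure, length=80, rounds=10000):
            """Toy Monte Carlo search: propose point mutations and keep the ones
            that lower the predicted energy for the envisioned structure."""
            seq = "".join(random.choice(AMINO_ACIDS) for _ in range(length))
            best = predicted_energy(seq, target_structure)
            for _ in range(rounds):
                pos = random.randrange(length)
                trial = seq[:pos] + random.choice(AMINO_ACIDS) + seq[pos + 1:]
                energy = predicted_energy(trial, target_structure)
                if energy < best:               # greedy acceptance, for simplicity
                    seq, best = trial, energy
            return seq, best                    # best candidates go on to the lab

        candidate, energy = design("idealized TIM-barrel")
        print(round(energy, 2), candidate[:20] + "...")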

    At present no other advanced technology can beat the remarkable precision with which proteins carry out their unique and beautiful functions. The methods of protein design expand the reach of protein technology, because the possibilities to create new synthetic proteins are essentially unlimited. We illustrate that claim with some of the new proteins we have already developed using this design process, and with examples of the fundamental research challenges and areas of practical application that they exemplify:

    2
    This image shows a designed synthetic protein of a type known as a TIM-barrel. Naturally occurring TIM-barrel proteins are found in a majority of enzymes, the catalysts that facilitate biochemical reactions in our bodies, in part because the circular cup-like or barrel shape at their core provides an appropriate space for the reaction to occur. The synthetic protein shown here has an idealized TIM-barrel template or blueprint that can be customized with pockets and binding sites and catalytic agents specific to particular reactants; the eight helical arms of the protein enhance the reaction space. This process can be used to design whole new classes of enzymes that do not occur in nature. Illustration and protein design prepared by Possu Huang in David Baker’s laboratory, University of Washington.

    Catalysts for clean energy and medicine. Protein enzymes are the most efficient catalysts known, far more so than any synthesized by inorganic chemists. Part of that efficiency comes from their ability to accurately position key parts of the enzyme in relation to reacting molecules, providing an environment that accelerates a reaction or lowers the energy needed for it to occur. Exactly how this occurs remains a fundamental problem which more experience with synthetic proteins may help to resolve.

Already we have produced synthetic enzymes that catalyze potentially useful new metabolic pathways. These include reactions that take carbon dioxide from the atmosphere and convert it into organic molecules, such as fuels, more efficiently than any inorganic catalyst, potentially enabling a carbon-neutral source of fuels; and reactions that address unsolved medical problems, including a potential oral therapeutic for patients with celiac disease that breaks down gluten in the stomach, and other synthetic proteins that neutralize the toxic amyloids found in Alzheimer’s disease.

    We have also begun to understand how to design, de novo, scaffolds that are the basis for entire superfamilies of known enzymes (Fig. 1) and other proteins known to bind the smaller molecules involved in basic biochemistry. This has opened the door for potential methods to degrade pollutants or toxins that threaten food safety.

New super-strong materials. A potentially very useful new class of materials is that formed by hybrids of organic and inorganic matter. One naturally occurring example is abalone shell, which is made up of calcium carbonate bonded with proteins, a combination that results in a uniquely tough material. Apparently, other proteins involved in the process of forming the shell change the way in which the inorganic material precipitates onto the binding protein and also help organize the overall structure of the material. Synthetic proteins could potentially duplicate this process and expand this class of materials. Another class of materials is analogous to spider silk—organic materials that are both very strong and biodegradable—for which synthetic proteins might be uniquely suited, although how such fibers form is not yet understood. We have also made synthetic proteins that create an interlocking pattern to form a surface only one molecule thick, which suggests possibilities for new anti-corrosion films or novel organic solar cells.

    Targeted therapeutic delivery. Self-assembling protein materials make a wide variety of containers or external barriers for living things, from protein shells for viruses to the exterior wall of virtually all living cells. We have developed a way to design and build similar containers: very small cage-like structures—protein nanoparticles—that self-assemble from one or two synthetic protein building blocks (Fig. 2). We do this extremely precisely, with control at the atomic level. Current work focuses on building these protein nanoparticles to carry a desired cargo—a drug or other therapeutic—inside the cage, while also incorporating other proteins of interest on their surface. The surface protein is chosen to bind to a similar protein on target cells.

    These self-assembling particles are a completely new way of delivering drugs to cells in a targeted fashion, avoiding harmful effects elsewhere in the body. Other nanoparticles might be designed to penetrate the blood-brain barrier, in order to deliver drugs or other therapies for brain diseases. We have also generated methods to design proteins that disrupt protein-protein interactions and proteins that bind to small molecules for use in biosensing applications, such as identifying pathogens. More fundamentally, synthetic proteins may well provide the tools that enable improved targeting of drugs and other therapies, as well as an improved ability to bond therapeutic packages tightly to a target cell wall.

    5
    A tiny 20-sided protein nanoparticle that can deliver drugs or other therapies to specific cells in the body with minimal side effects. The nanoparticle self-assembles from two types of synthetic proteins. Illustration and protein design prepared by Jacob Bale in David Baker’s laboratory, University of Washington.

    Novel vaccines for viral diseases. In addition to drug delivery, self-assembling protein nanoparticles are a promising foundation for the design of vaccines. By displaying stabilized versions of viral proteins on the surfaces of designed nanoparticles, we hope to elicit strong and specific immune responses in cells to neutralize viruses like HIV and influenza. We are currently investigating the potential of these nanoparticles as vaccines against a number of viruses. The thermal stability of these designer vaccines should help eliminate the need for complicated cold chain storage systems, broadening global access to life saving vaccines and supporting goals for eradication of viral diseases. The ability to shape these designed vaccines with atomic level accuracy also enables a systematic study of how immune systems recognize and defend against pathogens. In turn, the findings will support development of tolerizing vaccines, which could train the immune system to stop attacking host tissues in autoimmune disease or over-reacting to allergens in asthma.

New peptide medicines. Most approved drugs are either bulky proteins or small molecules. Naturally occurring peptides (short chains of amino acids) that are constrained or stabilized so that they precisely complement their biological target are intermediate in size, and are among the most potent pharmacological compounds known. In effect, they have the advantages of both proteins and small-molecule drugs. The immunosuppressant cyclosporine is a familiar example. Unfortunately, such peptides are few in number.

    We have recently demonstrated a new computational design method that can generate two broad classes of peptides that have exceptional stability against heat or chemical degradation. These include peptides that can be genetically encoded (and can be produced by bacteria) as well as some that include amino acids that do not occur in nature. Such peptides are, in effect, scaffolds or design templates for creating whole new classes of peptide medicines.

    In addition, we have developed general methods for designing small and stable proteins that bind strongly to pathogenic proteins. One such designed protein binds the viral glycoprotein hemagglutinin, which is responsible for influenza entry into cells. These designed proteins protect infected mice in both a prophylactic and therapeutic manner and therefore are potentially very powerful anti-flu medicines. Similar methods are being applied to design therapeutic proteins against the Ebola virus and other targets that are relevant in cancer or autoimmune diseases. More fundamentally, synthetic proteins may be useful as test probes in working out the detailed molecular chemistry of the immune system.

    Protein logic systems. The brain is a very energy-efficient logic system based entirely on proteins. Might it be possible to build a logic system—a computer—from synthetic proteins that would self-assemble and be both cheaper and more efficient than silicon logic systems? Naturally occurring protein switches are well studied, but building synthetic switches remains an unsolved challenge. Quite apart from bio-technology applications, understanding protein logic systems may have more fundamental results, such as clarifying how our brains make decisions or initiate processes.

    The opportunities for the design of synthetic proteins are endless, with new research frontiers and a huge variety of practical applications to be explored. In effect, we have an emerging ability to design new molecules to solve specific problems—just as modern technology does outside the realm of biology. This could not be a more exciting time for protein design.

    Predicting Protein Structure

    If we were unable to predict the structure that results from a given sequence of amino acids, synthetic protein design would be an almost impossible task. There are 20 naturally-occurring amino acids, which can be linked in any order and can fold into an astronomical number of potential structures. Fortunately the structure prediction problem is now well on the way toward being solved by the Rosetta protein modeling software.

    The Rosetta tool evaluates possible structures, calculates their energy states, and identifies the lowest energy structure—usually, the one that occurs in a living organism. For smaller proteins, Rosetta predictions are already reasonably accurate. The power and accuracy of the Rosetta algorithms are steadily improving thanks to the work of a cooperative global network of several hundred protein scientists. New discoveries—such as identifying amino acid pairs that co-evolve in living systems and thus are likely to be co-located in protein structures—are also helping to improve prediction accuracy.
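The co-evolution signal mentioned above can be illustrated with a few lines of code. The sketch below, using a tiny made-up alignment (real pipelines use thousands of homologous sequences and more careful statistics, including corrections for phylogeny and indirect couplings), scores pairs of alignment columns by mutual information; strongly co-varying pairs are candidate residue-residue contacts that help constrain the predicted fold.

```python
# Illustrative only: score co-variation between columns of a (toy) multiple sequence alignment.
# Strongly co-varying column pairs are candidate residue-residue contacts.
from collections import Counter
from itertools import combinations
from math import log2

toy_alignment = [   # hypothetical aligned homologs; real alignments are far larger
    "ACDKEG",
    "ACEKDG",
    "TCDKEG",
    "ACEKDG",
    "ACDREG",
    "TCERDG",
]

def column(msa, i):
    return [seq[i] for seq in msa]

def mutual_information(msa, i, j):
    """Mutual information between alignment columns i and j, in bits."""
    n = len(msa)
    pi = Counter(column(msa, i))
    pj = Counter(column(msa, j))
    pij = Counter(zip(column(msa, i), column(msa, j)))
    mi = 0.0
    for (a, b), count in pij.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

if __name__ == "__main__":
    length = len(toy_alignment[0])
    scores = sorted(
        ((mutual_information(toy_alignment, i, j), i, j) for i, j in combinations(range(length), 2)),
        reverse=True,
    )
    for mi, i, j in scores[:3]:
        print(f"columns {i} and {j}: MI = {mi:.2f} bits")
```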

    Our research team has already revealed the structures for more than a thousand protein families, and we expect to be able to predict the structure for nearly any protein within a few years. This is an important achievement with direct significance for basic biology and biomedical science, since understanding structure leads to understanding the function of the myriad proteins found in the human body and in all living things. Moreover, predicting protein structure is also the critical enabling tool for designing novel, “synthetic” proteins that do not occur in nature.

    How to Create Synthetic Proteins that Solve Important Problems

    6
    A graduate student in the Baker lab and a researcher at the Institute for Protein Design discuss a bacterial culture (in the Petri dish) that is producing synthetic proteins. Source: Laboratory of David Baker, University of Washington.

    Now that it is possible to design a variety of new proteins from scratch, it is imperative to identify the most pressing problems that need to be solved, and focus on designing the types of proteins that are needed to address these problems. Protein design researchers need to collaborate with experts in a wide variety of fields to take our work from initial protein design to the next stages of development. As the examples above suggest, those partners should include experts in industrial scale catalysis, fundamental materials science and materials processing, biomedical therapeutics and diagnostics, immunology and vaccine design, and both neural systems and computer logic. The partnerships should be sustained over multiple years in order to prioritize the most important problems and test successive potential solutions.

    A funding level of $100M over five years would propel protein design to the forefront of biomedical research, supporting multiple and parallel collaborations with experts worldwide to arrive at breakthroughs in medicine, energy, and technology, while also furthering a basic understanding of biological processes. Current funding is unable to meet the demands of this rapidly growing field and does not allow for the design and production of new proteins at an appropriate scale for testing and ultimately production, distribution, and implementation. Private philanthropy could overcome this deficit and allow us to jump ahead to the next generation of proteins—and thus to use the full capacity of the amino acid legacy that evolution has provided us.


See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 8:58 am on May 25, 2017 Permalink | Reply
Tags: Nautilus

    From Nautilus: “Opening a New Window into the Universe” 

    Nautilus

    Nautilus

    April 2017
    Andrea Ghez, UCLA, UCO

    7
    Andrea Ghez. PBS NOVA

    The UCO Lick C. Donald Shane telescope is a 120-inch (3.0-meter) reflecting telescope located at the Lick Observatory, Mt Hamilton, in San Jose, California

    Keck Observatory, Mauna Kea, Hawaii, USA

    New technology could bring new insights into the nature of black holes, dark matter, and extrasolar planets.

Earthbound telescopes see stars and other astronomical objects through a haze. The light waves they gather have traveled unimpeded through space for billions of years, only to be distorted in the last millisecond by the Earth’s turbulent atmosphere. That distortion is now more important than ever, because scientists are preparing to build the three largest telescopes on Earth, each with a light-gathering surface 20 to 40 meters across.

    The new giant telescopes:

ESO/E-ELT, to be built on top of Cerro Armazones in the Atacama Desert of northern Chile


    TMT-Thirty Meter Telescope, proposed for Mauna Kea, Hawaii, USA


Giant Magellan Telescope, to be built at Las Campanas Observatory, some 115 km (71 mi) north-northeast of La Serena, Chile

    In principle, the larger the telescope, the higher the resolution of astronomical images. In practice, the distorting veil of the atmosphere has always limited what can be achieved. Now, a rapidly evolving technology known as adaptive optics can strip away the veil and enable astronomers to take full advantage of current and future large telescopes. Indeed, adaptive optics is already making possible important discoveries and observations, including: the discovery of the supermassive black hole at the center of our galaxy, proving that such exotic objects exist; the first images and spectra of planetary systems around other stars; and high-resolution observations of galaxies forming in the early universe.
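To get a sense of what is at stake, a back-of-the-envelope calculation (the numbers below are chosen only for illustration) compares the theoretical diffraction limit of different apertures, using the standard formula θ ≈ 1.22 λ/D, with the roughly 0.5-1 arcsecond blur that uncorrected atmospheric seeing imposes:

```python
# Back-of-the-envelope: diffraction-limited resolution vs. typical atmospheric seeing.
# theta ≈ 1.22 * wavelength / diameter (in radians); values below are round numbers for illustration.
RAD_TO_ARCSEC = 206265.0

def diffraction_limit_arcsec(wavelength_m, diameter_m):
    return 1.22 * wavelength_m / diameter_m * RAD_TO_ARCSEC

wavelength = 2.2e-6  # 2.2 micrometers, a near-infrared band commonly used with adaptive optics
for diameter in (10.0, 30.0, 39.0):  # Keck-class, TMT-class, and E-ELT-class apertures
    print(f"{diameter:>4.0f} m telescope: {diffraction_limit_arcsec(wavelength, diameter):.3f} arcsec")

print("typical uncorrected seeing: ~0.5-1.0 arcsec")
```

A 30-40 meter telescope could in principle resolve details tens of times finer than the atmosphere alone allows, which is exactly the gap adaptive optics is meant to close.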

    But adaptive optics has still not delivered its full scientific potential.

    ESO 4LGSF Adaptive Optics Facility (AOF)

    Existing technology can only partially correct the atmospheric blurring and cannot provide any correction for large portions of the sky or for the majority of the objects astronomers want to study.

The project we propose here, which would take adaptive optics technology to the next level and fully exploit its potential, would boost research on a number of critical astrophysical questions, including:

What are supermassive black holes and how do they work? Adaptive optics has opened a new approach to studying supermassive black holes—through stellar orbits—but only the brightest stars, the tip of the iceberg, have been measured. With next generation adaptive optics we will be able to take the next leap forward in our studies of these poorly understood objects that are believed to play a central role in our universe. The space near the massive black hole at the center of our galaxy, for example, is a place where gravitational forces reach extreme levels. Does Einstein’s general theory of relativity still apply, or do exotic new physical phenomena emerge? How do these massive black holes shape their host galaxies? Early adaptive optics observations at the galactic center have revealed a completely unexpected environment, challenging our notions of the relationship between black holes and galaxies, a relationship that is a fundamental ingredient of cosmological models. One way to answer both of these questions is to find and measure the orbits of faint stars that are closer to the black hole than any known so far—which advanced adaptive optics would make possible.
The first direct images of an extrasolar planet—obtained with adaptive optics—have raised fundamental questions about star and planet formation. How exactly do new stars form and then spawn planets from the gaseous disks around them? New, higher-resolution images of this process—with undistorted data from larger telescopes—can help answer this question, and may also reveal how our solar system was formed. In addition, although only a handful of new-born planets has been found to date, advanced adaptive optics will enable astronomers to find many more and help determine their composition and life-bearing potential.
    Dark matter and dark energy are still completely mysterious, even though they constitute most of the universe.


    Dark Energy Camera [DECam], built at FNAL


NOAO/CTIO Victor M Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile

    But detailed observations using adaptive optics of how light from distant galaxies is refracted around a closer galaxy to form multiple images—so-called gravitational lensing—can help scientists understand how dark matter and dark energy change space itself.

    In addition, it is clear that telescopes endowed with advanced adaptive optics technology will inspire a whole generation of astronomers to design and carry out a multitude of innovative research projects that were previously not possible.

    4
The laser systems used to create the artificial guide stars with which the blurring effects of the Earth’s atmosphere are measured, in operation on both Keck I and Keck II during adaptive optics observations of the center of our Galaxy. Next Generation Adaptive Optics would have multiple laser beams for each telescope. Ethan Tweedie

Sgr A*, the supermassive black hole at the center of the Milky Way. NASA Chandra X-ray Observatory, 23 July 2014

    The technology of adaptive optics is quite simple, in principle. First, astronomers measure the instantaneous turbulence in the atmosphere by looking at the light from a bright, known object—a “guide star”—or by using a laser tuned to make sodium atoms in a thin layer of the upper atmosphere fluoresce and glow as an artificial guide star.

    6
    ESO VLT Adaptive Optics new Guide Star laser light

    The turbulence measurements are used to compute (also instantaneously) the distortions that turbulence creates in the incoming light waves. Those distortions are then counteracted by rapidly morphing the surface of a deformable mirror in the telescope. Measurements and corrections are done hundreds of times per second—which is only possible with powerful computing capability, sophisticated opto-mechanical linkages, and a real-time control system. We know how to build these tools.
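A minimal sketch of that measure-and-correct cycle, assuming a toy model in which the turbulence, the wavefront sensor, and the deformable mirror are each reduced to a short array of numbers, is shown below; a real system reconstructs the wavefront from sensor measurements and drives hundreds or thousands of actuators every millisecond.

```python
# Illustrative only: a leaky-integrator control loop of the kind used in adaptive optics.
# The atmosphere, the sensor, and the mirror are all toy models here.
import random

N_ACTUATORS = 64   # toy deformable mirror with 64 actuators
LOOP_GAIN = 0.4    # fraction of the measured residual corrected each cycle (invented value)
LEAK = 0.99        # slow decay that keeps the mirror commands from wandering (invented value)
N_CYCLES = 500     # at ~1 kHz this would correspond to half a second of operation

def atmospheric_phase():
    """Toy turbulence: a random phase value above each actuator."""
    return [random.gauss(0.0, 1.0) for _ in range(N_ACTUATORS)]

mirror = [0.0] * N_ACTUATORS
turbulence = atmospheric_phase()

for cycle in range(N_CYCLES):
    # the atmosphere evolves a little each cycle
    turbulence = [0.95 * t + 0.05 * random.gauss(0.0, 1.0) for t in turbulence]
    # the wavefront sensor sees the residual: turbulence minus the mirror's current correction
    residual = [t - m for t, m in zip(turbulence, mirror)]
    # the real-time controller updates the mirror to cancel part of that residual
    mirror = [LEAK * m + LOOP_GAIN * r for m, r in zip(mirror, residual)]

rms = (sum(r * r for r in residual) / N_ACTUATORS) ** 0.5
print(f"closed-loop residual wavefront error (arbitrary units): {rms:.3f}")
```

The leaky integrator is only one of many possible control laws, but it shows why powerful real-time computing matters: the whole sense-compute-correct cycle has to finish before the atmosphere changes again.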

    Of course, telescopes that operate above the atmosphere, such as the Hubble Space Telescope, don’t need adaptive optics.

    NASA/ESA Hubble Telescope

    But both the Hubble and the coming next generation of space telescopes are small compared to the enormous earth-based telescopes now being planned.


    LSST Camera, built at SLAC



LSST telescope, currently under construction on Cerro Pachón, a 2,682-meter-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    And for the kinds of research that require very high resolution, such as the topics mentioned above and many others, there is really no substitute for the light-gathering power of telescopes too huge to be put into space.

    The next generation of adaptive optics could effectively take even the largest earth-bound telescopes “above the atmosphere” and make them truly amazing new windows on the universe. We know how to create this capability—the technology is in hand and the teams are assembled. It is time to put advanced adaptive optics to work.

    Creating Next Generation Adaptive Optics

    Adaptive optics (AO) imaging technology is used to improve the performance of optical systems by correcting distortions on light waves that have traveled through a turbulent medium. The technology has revolutionized fields from ophthalmology and vision science to laser communications. In astronomy, AO uses sophisticated, deformable mirrors controlled by fast computers to correct, in real-time, the distortion caused by the turbulence of the Earth’s atmosphere. Telescopes equipped with AO are already producing sharper, clearer views of distant astronomical objects than had ever before been possible, even from space. But current AO systems only partially correct for the effects of atmospheric blurring, and only when telescopes are pointed in certain directions. The aim of Next Generation Adaptive Optics is to overcome these limitations and provide precise correction for atmospheric blurring anywhere in the sky.

    One current limitation is the laser guide star that energizes sodium atoms in the upper atmosphere and causes them to glow as an artificial star used to measure the atmospheric distortions. This guide “star” is relatively close, only about 90 kilometers above the Earth’s surface, so the technique only probes a conical volume of the atmosphere above the telescope, and not the full cylinder of air through which genuine star light must pass to reach the telescope. Consequently, much of the distorting atmospheric structure is not measured. The next generation AO we propose will employ seven laser guide stars, providing full coverage of the entire cylindrical path travelled by light from the astronomical object being studied.
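The geometry behind that limitation is simple to quantify. In the simplified picture sketched below (an illustration I am adding, not a figure from the proposal), a beacon at altitude H illuminates a cone whose cross-section at turbulence altitude z shrinks by a factor of (1 - z/H) relative to the full pupil, so a single guide star samples only (1 - z/H) squared of the column area at that height.

```python
# Geometry of the "cone effect": a laser guide star at finite altitude H samples only a cone
# of the atmosphere, while starlight from an astronomical object fills the full cylinder.
H_LGS_KM = 90.0  # approximate altitude of the sodium-layer guide star

def sampled_fraction(turbulence_altitude_km, lgs_altitude_km=H_LGS_KM):
    """Fraction of the full pupil-diameter column sampled by one guide star at this altitude."""
    scale = 1.0 - turbulence_altitude_km / lgs_altitude_km
    return scale * scale  # area scales with the square of the cone's diameter

for z in (1, 5, 10, 16):  # representative turbulence altitudes in kilometers
    print(f"turbulence at {z:>2} km: single guide star samples {sampled_fraction(z):.0%} of the column")
```

With several laser beacons spread around the field, the overlapping cones cover the whole cylinder, which is what makes the tomographic reconstruction described below possible.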

    6
    The next generation of adaptive optics will have several laser-created artificial guide stars, better optics, higher performance computers, and more advanced science instruments. Such a system will deliver the highest-definition images and spectra over nearly the entire sky and will enable unique new means of measuring the properties of stars, planets, galaxies, and black holes.
    J.Lu (U of Hawaii) & T. Do (UCLA)

    This technique can map the 3-D structure of the atmosphere, similar to how MRI medical imaging maps the human body. Simulations demonstrate that the resulting corrections will be excellent and stable, yielding revolutionary improvements in imaging. For example, the light from a star will be concentrated into a tiny area of the focal plane camera, and be far less spread out than it is with current systems, giving sharp, crisp images that show the finest detail possible.

    This will be particularly important for existing large telescopes such as the W. M. Keck Observatory (WMKO) [above]—currently the world’s leading AO platform in astronomy. Both our team—the UCLA Galactic Center Group (GCG)—and the WMKO staff have been deeply involved in the development of next generation AO systems.

    The quantum leap in the quality of both imaging and spectroscopy that next generation AO can bring to the Keck telescopes will likely pave the way for advanced AO systems on telescopes around the globe. For the next generation of extremely large telescopes, however, these AO advances will be critical. This is because the cylindrical volume of atmosphere through which light must pass to reach the mirrors in such large telescopes is so broad that present AO techniques will not be able to provide satisfactory corrections. For that reason, next generation AO techniques are critical to the future of infrared astronomy, and eventually of optical astronomy as well.

    The total proposed budget is $80 million over five years. The three major components necessary to take the leap in science capability include the laser guide star system, the adaptive optics system, and a powerful new science instrument, consisting of an infrared imager and an infrared spectrograph, that provides the observing capability to take advantage of the new adaptive optics system. This investment in adaptive optics will also help develop a strong workforce for other critical science and technology industries, as many students are actively recruited into industry positions in laser communications, bio-medical optics, big-data analytics for finance and business, image sensing and optics for government and defense applications, and the space industry. This investment will also help keep the U.S. in the scientific and technological lead. Well-funded European groups have recognized the power of AO and are developing competitive systems, though the next generation AO project described here will set an altogether new standard.

    Federal funding agencies find the science case for this work compelling, but they have made clear that it is beyond present budgetary means. Therefore, this is an extraordinary opportunity for private philanthropy—for visionaries outside the government to help bring this ambitious breakthrough project to reality and open a new window into the universe.

Andrea Ghez is the Lauren B. Leichtman & Arthur E. Levine Chair in Astrophysics and Director of the UCLA Galactic Center Group.

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     