Tagged: Nautilus

  • richardmitnick 11:53 am on April 17, 2022
    Tags: "As Creation Stories Go the Big Bang Is a Good One", , , “Why is there something rather than nothing?”, , , Nautilus, , We have no physical theory of the initial moments of the Big Bang., Without Creation-without cosmology-we are lost.   

    From Nautilus: “As Creation Stories Go the Big Bang Is a Good One” 

    From Nautilus

    April 15, 2022
    Paul M. Sutter

    How science is like mythology when it pushes the boundaries of the known.

    Credit: Andrea Danti / Shutterstock.

    Deep in the depths of time, there was the Ocean of Milk. The gods and demons both desired amrita, the nectar of immortal life, which could only be obtained from the great ocean. The supreme god Vishnu told them to use Mount Mandara as a churning stick, and to rotate the mountain with the giant serpent, Vasuki, as a rope. For a thousand years, they churned until amrita emerged. The gods and demons fought and quarreled over amrita until the gods prevailed. The churning produced other wonders: the physician of the gods Dhanvantari, the goddess of riches Lakshmi, the goddess of misfortune Jyestha, the white elephant, the seven-headed horse Uchchaisrava, and a wish-granting tree. And finally came the moon, Chandra.

    More recently, modern scientists have been churning the universe for another treasure. They are searching for ripples in spacetime, known as gravitational waves, left over from the primordial Big Bang.

    Scientists believe that when our universe was less than a second old, it underwent a radical phase transition, dramatically inflating in size.

    _____________________________________________________________________________________
    Inflation

    Alan Guth, from M.I.T., who first proposed cosmic inflation.

    Lambda Cold Dark Matter accelerated expansion of the universe (scinotions.com, the-cosmic-inflation-suggests-the-existence-of-parallel-universes). Credit: Alex Mittelmann.

    Alan Guth’s original notes on inflation.
    _____________________________________________________________________________________

    That event shaped the future evolution of the cosmos, planting the seeds that would one day grow to become galaxies and clusters. That cataclysmic event, perhaps the most powerful episode the universe has ever experienced, left nothing else behind but the most subtle churning of gravitational waves. Scientists hope to find these gravitational waves because the earliest moments of the Big Bang are shrouded in mystery, and perhaps the only relics of that era are those faint whispers of gravity.

    “Why is there something rather than nothing?”

    The first story comes to us from the Hindu mythological tradition, and the second from modern cosmology. Both are creation stories, the story that defines how everything—literally, everything—came into being. Creation stories are perhaps the most important stories of all. As the great German philosopher Martin Heidegger pointedly asked, “Why is there something rather than nothing?” The creation story explains why there is this rather than not-this. It separates us from the unknown, from the dark. Without Creation, without cosmology, we are lost.

    You might think the two stories don’t merit comparison. One is a legend handed down through time, and one is based on the observational study of our cosmos. But the stories have more in common than we may want to let on. In particular, both propose some entity or act or form that already exists, and through some process the world as we know it emerges. In other words, all creation stories make some assumption about the (primordial) cosmos, and the story goes from there. This is as true for the Hindu tradition as it is for Big Bang cosmology, and all the stories struggle to move to a point before the beginning.

    Viewed through the lens of this commonality—this struggle to explain the most primordial of primordials—the ideas in physical cosmology, which are technical and mathematical, take on a new character: They can be viewed as rehashes of the old mythological stories. Scientists are human, and they are all drawing from the same well of inspiration as everybody else. Mythological creation stories and the scientific Big Bang theory aren’t in competition; in their shared attempts to explain the before-the-beginning, they are intertwined at a fundamental, human level, and it’s here where science can gain its greatest inspiration.

    Historians and anthropologists have attempted to categorize the world’s many creation stories, with some success. One category describes the universe as having come into its present form from some sort of primordial void or chaos, often by the will or actions of a divine being. The classic example is the story of Genesis in the Bible. In the beginning, there were two entities: the all-powerful God who initiated the act of Creation, and the formless, void-like “nothing” from which He could work. In these stories, there was a point in time at which our universe (as we know it) did not exist, and another at which it did. From then on, the usual machinations of nature led to the present day.

    Another category sees the creation of our universe as merely the latest act in an infinitely long chain stretching back to eternity. As espoused in many Hindu mythologies, the destruction of the universe follows from its creation, and the cycle starts again. There may be variations—the present iteration of the universe isn’t always like the last—and various divine agents may be involved in the act, but the cycle itself simply exists, a fact of reality, that enables the creation/destruction process to unfold.

    Still other myths, such as many Native American stories, feature a diving creature swimming into a vast and featureless primordial ocean that draws up bits of land and flesh, or a divine being that divides itself into the components of the universe, or a primitive seed that bears the universe as its fruit.

    Ideas in cosmology can be viewed as sequels to mythology.

    The same themes played out when a new creation story emerged in the early 1900s, one born from modern science. Scientists were not the first ones to attempt to put cosmology—the study of the universe—on empirical grounds (science does not hold a monopoly on empiricism), but they were the first to apply the machinery of modern astronomy to the study of the whole cosmos.

    Most scientists in early modern Europe followed the prevailing religious views on creation: Genesis, Adam and Eve, a cunning serpent. But in the late 1800s evidence began to paint a different picture. The evolution of species, the formation of sedimentary layers, the existence of fossils, and even the first attempts to estimate the age of the sun all pointed to a universe far, far older than anyone had imagined.

    At the turn of the 20th century, most scientists believed that the universe was simply old, possibly eternally so, and had not significantly changed in all that time. Sure, stars could move around and maybe even explode, species could appear or disappear, and the forces of wind and water could reshape the surface of our planet. But on the big—cosmological—scales, everything that is, has always been.

    This view was so dominant that when Albert Einstein took his newly minted equations of General Relativity and applied them to the whole entire universe (because why not), he found himself in a bit of a pickle. His equations naturally predicted a dynamic, evolving universe—one that changed with time—but this ran counter to his intuitions that the universe was static. He added a little fixer, an additional term to the equations known as the “cosmological constant”, to balance everything out and keep a static cosmos.
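
    For readers who want to see the term itself, the standard modern form of Einstein’s field equations with the cosmological constant included is (the usual textbook convention, not reproduced from this article):

    G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}

    Here G_{\mu\nu} encodes the curvature of spacetime, T_{\mu\nu} its matter and energy content, and \Lambda is the cosmological constant. With \Lambda set to zero the equations favor a cosmos that expands or contracts; Einstein chose a small nonzero value precisely so that a static universe could balance on a knife’s edge.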

    WHAT CAME BEFORE? The Big Bang theory, for all its observational successes over the past century, can’t escape the question of what came before. That question has inspired theories that represent science at its boldest. Photo by NASA.

    A few years later, in the late 1920s, the Belgian scientist and Catholic priest Georges Lemaître looked at that same set of equations and proposed the earliest version of what we now call the Big Bang theory: That our universe started as a small “primaeval atom” which expanded and cooled into its present form. Most scientists at the time rejected Lemaître’s idea—it smelled a little too biblical for their tastes.

    The debate simmered for a few more years until Edwin Hubble clearly demonstrated the expansion of the universe.

    Through careful observations he found that at the very largest scales, all galaxies, on average, are moving away from all other galaxies.
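
    In modern notation, that observation is summarized by Hubble’s law (the standard textbook relation, not quoted from the article):

    v = H_0 d

    where v is a galaxy’s recession velocity, d its distance from us, and H_0 the Hubble constant. Double the distance and the recession speed doubles, which is exactly what a uniformly expanding space predicts.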

    Lemaître’s math came in handy to explain these observations. This was no trick of light. No alternative theory could account for all the data. Even Einstein dropped the cosmological constant from his equations (calling it his “greatest blunder”). The verdict was in: Our universe is getting bigger with time. And if it’s getting bigger with time, that means that in the past it was smaller.

    The Big Bang was off to a momentous start. Here we have the most modern creation story of all, the one told by physical cosmology, and it contains some wonderful and powerful statements. Statements like:

    • We live in a dynamic, evolving cosmos, ruled by a set of physical laws that we can understand. The universe changes with time at the very largest scales. Nothing is fixed. The only constant is change.

    • Approximately 13.8 billion years ago, our entire observable universe— every galaxy, every star, the entire contents of the cosmos—was crammed into a volume no bigger than a peach with a temperature of over a quadrillion degrees.

    • When our universe was only 380,000 years old, the first atoms formed. The process released an invisible form of radiation that permeates the cosmos to this day.

    • In its earliest moments, microscopic fluctuations—random wiggles in the quantum fields that suffuse all reality—imprinted themselves in spacetime, forming the seeds that would eventually grow to become the largest structures in the universe.

    As creation stories go, it’s a good one. And like all other creation stories, the Big Bang struggles with the beginning. As Lemaître put it, the primeval atom of his theory started with the existence of “a day without yesterday,” which was the aspect of his theory that most troubled his fellow scientists, because it implied an act of creation from nothing, which was decidedly not a very scientific-sounding idea.

    And yet the Big Bang theory, for all its observational successes over the past century, cannot escape that conclusion. This is a feature of the mathematics of General Relativity used to describe the very early universe. We now have a well-motivated and well-tested physical understanding of the universe when it was merely a few minutes old. At that age, the universe was hot enough and dense enough to fuse the first elements (mostly hydrogen and helium) with abundances that match observations.

    Pushing earlier into the Big Bang, however, brings us deeper into the mists of unknown physics. Whether through observations of the cosmos, the collisions of our most powerful particle colliders, or the most arcane mathematics of the chalkboard, we have few useful tools for understanding the earliest moments of the history of our universe.

    Perhaps the Big Bang never ended … and never started.

    At the heart of it all lies the singularity. In General Relativity, at one specific moment in our past, everything was crammed into an infinitely tiny point. We know that the singularity did not actually exist; it’s an artifact of the math of General Relativity, informing us that the theory is breaking down. To tell us what actually happened requires a theory of Quantum Gravity (a workable theory of strong gravity at very small scales), which we currently lack.

    Put another way, we have no physical theory of the initial moments of the Big Bang. Indeed, since our understanding of the passage of time and the breadth of space is rooted in those very same theories, we have no way yet of knowing if our conceptions of spacetime even make sense at such extreme scales. It could be that our naive ideas like “before” or “beginning” simply don’t apply.

    It’s here where speculation gets really wild. Perhaps there is some fundamental unit of spacetime—a chunk that represents the smallest possible four-dimensional volume—and that at one time our universe contracted and “bounced” at the scale of that chunk, repeating a never-ending cycle of Big Bangs.

    Perhaps our cosmos is embedded in a higher-dimensional structure, with esoteric objects, known as branes, occasionally colliding.

    When they collide, their intersection point triggers a new Big Bang in that region of spacetime.

    Perhaps the Big Bang never ended … and never started. Maybe the universe is far larger than we thought—perhaps infinitely large. And maybe the universe at those grand scales has never stopped expanding, but pieces of that “multiverse” can pinch off, isolating themselves as island bubbles adrift in an eternal ocean of the cosmos.

    Modern cosmologists are dreamers and storytellers.

    Lemaître, though he espoused a view that religion and science shouldn’t mix, was certainly inspired by the creation story he was most familiar with. Repeated cycles of Big Bangs, stretching forward and backward to eternity, look a lot like many Hindu versions of cosmology. Extra-dimensional entities that interact, and through their interactions build a universe, would find a welcome reception in cultures around the world.

    This isn’t a bad thing. Scientists are the latest in a long line of thinkers, mystics, philosophers, poets, and more who have interrogated the very nature of existence. The parallels and connections are manifest because they all spring from the same font of human creativity and ingenuity. And while scientists have learned a lot, they run into the same headaches as everyone else; namely, trying to explain what came before what-started-it-all.

    In the millennia of recorded human history, we’ve asked a lot of questions and managed to come up with many answers. But some questions—like the ultimate origins of the universe—always seem to be just out of grasp. Perhaps this is the best we’ll ever get: concrete, testable ideas going back to some finite point in our past, speculation beyond that, and unanswerable questions behind everything.

    Or perhaps not. Modern cosmologists are currently trying to tackle some of the most perplexing aspects of the Big Bang theory, attempting to push past the point of the “primaeval atom” and into the deepest origins of the universe. They are trying to determine if time itself has an origin or is simply a manifestation of some other process. They are trying to find experimental clues to pre-Big Bang processes whose artifacts might remain in our contemporary cosmos, such as those effervescent gravitational waves that cosmologists eagerly hunt for. They are trying to discover a more fundamental set of universal laws that naturally give rise to the physics that we know and love.

    Modern cosmologists are dreamers and storytellers. They are grounding their story of the Big Bang in evidence and reason, but they are seeking the same answers as all the dreamers and storytellers who came before them. They are trying to explain why there is something rather than nothing, and whether they know it or not, they are drawing from the stories and myths that surround them in the world.

    Here is where science can be its most beautiful, when it pulls hungrily from any source for a spark of inspiration to inform our knowledge of the universe, giving us a new tale to delight in. And here too is where science can be its most bold, when it finds the utmost boundaries of the known, a line once marked as impossible, and pushes unafraid into the dark.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus (US). We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 12:21 pm on June 20, 2021
    Tags: "Do You Want AI to Be Conscious?", A function of consciousness is to broaden our temporal window on the world—to give the present moment an extended duration., Already there have been cases of discrimination by algorithms., Any pocket calculator programmed with statistical formulas can provide an estimate of confidence but no machine yet has our full range of metacognitive ability., , At the moment given the complexity of contemporary neural networks we have trouble discerning how AIs produce decisions much less translating the process into a language humans can make sense of., Beginning next year the European Union will give its residents a legal “right to explanation.”, Brains, Consciousness is an important function for us. Why not for our machines?, mechanisms of consciousness—the reasons we have a vivid and direct experience of the world and of the self—are an unsolved mystery in neuroscience., Nautilus, , our sensation of the present moment is a construct of the conscious mind., We can endow AI's with metacognition—an introspective ability to report their internal mental states.   

    From Nautilus (US): “Do You Want AI to Be Conscious?” 

    From Nautilus (US)

    June 9, 2021
    Ryota Kanai

    This story, originally titled We Need Conscious Robots, first appeared in our Consciousness issue in April 2017.

    PHOTOCREO Michal Bednarek / Shutterstock.

    Consciousness is an important function for us. Why not for our machines?

    People often ask me whether human-level artificial intelligence will eventually become conscious. My response is: “Do you want it to be conscious? I think it is largely up to us whether our machines will wake up.”

    That may sound presumptuous. The mechanisms of consciousness—the reasons we have a vivid and direct experience of the world and of the self—are an unsolved mystery in neuroscience, and some people think they always will be; it seems impossible to explain subjective experience using the objective methods of science. But in the 25 or so years that we’ve taken consciousness seriously as a target of scientific scrutiny, we have made significant progress. We have discovered neural activity that correlates with consciousness, and we have a better idea of what behavioral tasks require conscious awareness. Our brains perform many high-level cognitive tasks subconsciously.

    Consciousness, we can tentatively conclude, is not a necessary byproduct of our cognition. The same is presumably true of AIs. In many science-fiction stories, machines develop an inner mental life automatically, simply by virtue of their sophistication, but it is likelier that consciousness will have to be expressly designed into them.

    And we have solid scientific and engineering reasons to try to do that. Our very ignorance about consciousness is one. The engineers of the 18th and 19th centuries did not wait until physicists had sorted out the laws of thermodynamics before they built steam engines. It worked the other way round: Inventions drove theory. So it is today. Debates on consciousness are often too philosophical and spin around in circles without producing tangible results. The small community of us who work on artificial consciousness aims to learn by doing.

    Furthermore, consciousness must have some important function for us, or else evolution wouldn’t have endowed us with it. The same function would be of use to AIs. Here, too, science fiction might have misled us. For the AIs in books and TV shows, consciousness is a curse. They exhibit unpredictable, intentional behaviors, and things don’t turn out well for the humans. But in the real world, dystopian scenarios seem unlikely. Whatever risks AIs may pose do not depend on their being conscious. To the contrary, conscious machines could help us manage the impact of AI technology. I would much rather share the world with them than with thoughtless automatons.

    When AlphaGo was playing against the human Go champion, Lee Sedol, many experts wondered why AlphaGo played the way it did. They wanted some explanation, some understanding of AlphaGo’s motives and rationales. Such situations are common for modern AIs, because their decisions are not preprogrammed by humans, but are emergent properties of the learning algorithms and the data set they are trained on. Their inscrutability has created concerns about unfair and arbitrary decisions. Already there have been cases of discrimination by algorithms; for instance, a ProPublica investigation last year found that an algorithm used by judges and parole officers in Florida flagged black defendants as more prone to recidivism than they actually were, and white defendants as less prone than they actually were.

    Beginning next year, the European Union will give its residents a legal “right to explanation.” People will be able to demand an accounting of why an AI system made the decision it did. This new requirement is technologically demanding. At the moment, given the complexity of contemporary neural networks, we have trouble discerning how AIs produce decisions, much less translating the process into a language humans can make sense of.

    If we can’t figure out why AIs do what they do, why don’t we ask them? We can endow them with metacognition—an introspective ability to report their internal mental states. Such an ability is one of the main functions of consciousness. It is what neuroscientists look for when they test whether humans or animals have conscious awareness. For instance, a basic form of metacognition, confidence, scales with the clarity of conscious experience. When our brain processes information without our noticing, we feel uncertain about that information, whereas when we are conscious of a stimulus, the experience is accompanied by high confidence: “I definitely saw red!”

    Any pocket calculator programmed with statistical formulas can provide an estimate of confidence, but no machine yet has our full range of metacognitive ability. Some philosophers and neuroscientists have sought to develop the idea that metacognition is the essence of consciousness. So-called higher-order theories of consciousness posit that conscious experience depends on secondary representations of the direct representation of sensory states. When we know something, we know that we know it. Conversely, when we lack this self-awareness, we are effectively unconscious; we are on autopilot, taking in sensory input and acting on it, but not registering it.

    These theories have the virtue of giving us some direction for building conscious AI. My colleagues and I are trying to implement metacognition in neural networks so that they can communicate their internal states. We call this project “machine phenomenology” by analogy with phenomenology in philosophy, which studies the structures of consciousness through systematic reflection on conscious experience. To avoid the additional difficulty of teaching AIs to express themselves in a human language, our project currently focuses on training them to develop their own language to share their introspective analyses with one another. These analyses consist of instructions for how an AI has performed a task; it is a step beyond what machines normally communicate—namely, the outcomes of tasks. We do not specify precisely how the machine encodes these instructions; the neural network itself develops a strategy through a training process that rewards success in conveying the instructions to another machine. We hope to extend our approach to establish human-AI communications, so that we can eventually demand explanations from AIs.

    Besides giving us some (imperfect) degree of self-understanding, consciousness helps us achieve what neuroscientist Endel Tulving (University of Toronto (CA)) has called “mental time travel.” We are conscious when predicting the consequences of our actions or planning for the future. I can imagine what it would feel like if I waved my hand in front of my face even without actually performing the movement. I can also think about going to the kitchen to make coffee without actually standing up from the couch in the living room.

    In fact, even our sensation of the present moment is a construct of the conscious mind. We see evidence for this in various experiments and case studies. Patients with agnosia who have damage to object-recognition parts of the visual cortex can’t name an object they see, but can grab it. If given an envelope, they know to orient their hand to insert it through a mail slot. But patients cannot perform the reaching task if experimenters introduce a time delay between showing the object and cueing the test subject to reach for it. Evidently, consciousness is related not to sophisticated information processing per se; as long as a stimulus immediately triggers an action, we don’t need consciousness. It comes into play when we need to maintain sensory information over a few seconds.

    The importance of consciousness in bridging a temporal gap is also indicated by a special kind of psychological conditioning experiment. In classical conditioning, made famous by Ivan Pavlov and his dogs, the experimenter pairs a stimulus, such as an air puff to the eyelid or an electric shock to a finger, with an unrelated stimulus, such as a pure tone. Test subjects learn the paired association automatically, without conscious effort. On hearing the tone, they involuntarily recoil in anticipation of the puff or shock, and when asked by the experimenter why they did that, they can offer no explanation. But this subconscious learning works only as long as the two stimuli overlap with each other in time. When the experimenter delays the second stimulus, participants learn the association only when they are consciously aware of the relationship—that is, when they are able to tell the experimenter that a tone means a puff coming. Awareness seems to be necessary for participants to retain the memory of the stimulus even after it stopped.

    These examples suggest that a function of consciousness is to broaden our temporal window on the world—to give the present moment an extended duration. Our field of conscious awareness maintains sensory information in a flexible, usable form over a period of time after the stimulus is no longer present. The brain keeps generating the sensory representation when there is no longer direct sensory input. The temporal element of consciousness can be tested empirically. Francis Crick and Christof Koch proposed that our brain uses only a fraction of our visual input for planning future actions. Only this input should be correlated with consciousness if planning is its key function.

    A common thread across these examples is counterfactual information generation. It’s the ability to generate sensory representations that are not directly in front of us. We call it “counterfactual” because it involves memory of the past or predictions for unexecuted future actions, as opposed to what is happening in the external world. And we call it “generation” because it is not merely the processing of information, but an active process of hypothesis creation and testing. In the brain, sensory input is compressed to more abstract representations step by step as it flows from low-level brain regions to high-level ones—a one-way or “feedforward” process. But neurophysiological research suggests this feedforward sweep, however sophisticated, is not correlated with conscious experience. For that, you need feedback from the high-level to the low-level regions.

    Counterfactual information generation allows a conscious agent to detach itself from the environment and perform non-reflexive behavior, such as waiting for three seconds before acting. To generate counterfactual information, we need to have an internal model that has learned the statistical regularities of the world. Such models can be used for many purposes, such as reasoning, motor control, and mental simulation.

    Our AIs already have sophisticated training models, but they rely on our giving them data to learn from. With counterfactual information generation, AIs would be able to generate their own data—to imagine possible futures they come up with on their own. That would enable them to adapt flexibly to new situations they haven’t encountered before. It would also furnish AIs with curiosity. When they are not sure what would happen in a future they imagine, they would try to figure it out.

    My team has been working to implement this capability. Already, though, there have been moments when we felt that AI agents we created showed unexpected behaviors. In one experiment, we simulated agents that were capable of driving a truck through a landscape. If we wanted these agents to climb a hill, we normally had to set that as a goal, and the agents would find the best path to take. But agents endowed with curiosity identified the hill as a problem and figured out how to climb it even without being instructed to do so. We still need to do some more work to convince ourselves that something novel is going on.

    If we consider introspection and imagination as two of the ingredients of consciousness, perhaps even the main ones, it is inevitable that we eventually conjure up a conscious AI, because those functions are so clearly useful to any machine. We want our machines to explain how and why they do what they do. Building those machines will exercise our own imagination. It will be the ultimate test of the counterfactual power of consciousness.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus (US). We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 9:51 am on April 25, 2021
    Tags: How do we do that for the entire universe?, In an infinite multiverse physics loses its ability to make predictions., Nautilus, The main idea is to make probabilistic predictions. By calculating what happens frequently and what happens rarely in the multiverse we can make statistical predictions for what we will observe., This is science’s version of the old philosophical question of First Cause., What sets the initial initial conditions?, Whenever we make a prediction in physics we need to specify what the initial conditions are.

    From Nautilus: “The Crisis of the Multiverse” 

    From Nautilus

    April 25, 2021
    Ben Freivogel, University of Amsterdam [Universiteit van Amsterdam] (NL)

    NASA/ESA Hubble Space Telescope (US) image of the Egg Nebula. Credit: National Aeronautics Space Agency (US), W. Sparks (NASA Space Telescope Science Institute (US)) and R. Sahai (NASA JPL-Caltech (US)).

    In an infinite multiverse physics loses its ability to make predictions.

    Physicists have always hoped that once we understood the fundamental laws of physics, they would make unambiguous predictions for physical quantities. We imagined that the underlying physical laws would explain why the mass of the Higgs particle must be 125 gigaelectron-volts, as was recently discovered, and not any other value, and also make predictions for new particles that are yet to be discovered. For example, we would like to predict what kind of particles make up Dark Matter.

    These hopes now appear to have been hopelessly naïve. Our most promising fundamental theory, string theory, does not make unique predictions. It seems to contain a vast landscape of solutions, or “vacua,” each with its own values of the observable physical constants. The vacua are all physically realized within an enormous eternally inflating multiverse.

    Has the theory lost its mooring to observation? If the multiverse is large and diverse enough to contain some regions where dark matter is made out of light particles and other regions where dark matter is made out of heavy particles, how could we possibly predict which one we should see in our own region? And indeed many people have criticized the multiverse concept on just these grounds. If a theory makes no predictions, it ceases to be physics.

    But an important issue tends to go unnoticed in debates over the multiverse.

    Multiverse. Image credit: public domain, retrieved from https://pixabay.com/

    Cosmology has always faced a problem of making predictions. The reason is that all our theories in physics are dynamical: The fundamental physical laws describe what will happen, given what already is. So, whenever we make a prediction in physics we need to specify what the initial conditions are. How do we do that for the entire universe? What sets the initial initial conditions? This is science’s version of the old philosophical question of First Cause.

    The multiverse offers an answer. It is not the enemy of prediction, but its friend.

    The main idea is to make probabilistic predictions. By calculating what happens frequently and what happens rarely in the multiverse we can make statistical predictions for what we will observe. This is not a new situation in physics. We understand an ordinary box of gas in the same way. Although we cannot possibly keep track of the motion of all the individual molecules, we can make extremely precise predictions for how the gas as a whole will behave. Our job is to develop a similar statistical understanding of events in the multiverse.

    This understanding could take one of three forms. First, the multiverse, though very large, might be able to explore only a finite number of different states, just like an ordinary box of gas. In this case we know how to make predictions, because after a while the multiverse forgets about the unknown initial conditions. Second, perhaps the multiverse is able to explore an infinite number of different states, in which case it never forgets its initial conditions, and we cannot make predictions unless we know what those conditions are. Finally, the multiverse might explore an infinite number of different states, but the exponential expansion of space effectively erases the initial conditions.

    In many ways, the first option is the most agreeable to physicists, because it extends our well-established statistical techniques. Unfortunately, the predictions we arrive at disagree violently with observations. The second option is very troubling, because our existing laws are incapable of providing the requisite initial conditions. It is the third possibility that holds the most promise for yielding sensible predictions.

    But this program has encountered severe conceptual obstacles. At root, our problems arise because the multiverse is an infinite expanse of space and time. These infinities lead to paradoxes and puzzles wherever we turn. We will need a revolution in our understanding of physics in order to make sense of the multiverse.

    The first option for making statistical predictions in cosmology goes back to a paper by the Austrian physicist Ludwig Boltzmann in 1895. Although it turns out to be wrong, in its failure we find the roots of our current predicament.

    Boltzmann’s proposal was a bold extrapolation from his work on understanding gases. To specify completely the state of a gas would require specifying the exact position of every molecule. That is impossible. Instead, what we can measure—and would like to make predictions for—is the coarse-grained properties of the box of gas, such as the temperature and the pressure.

    Now You’re Cooking With Gas. Even though physicists cannot track individual gas molecules, they can predict the overall behavior of a gas with near-certainty. Alas, when they try to apply the same principle to an infinite universe, they fail abjectly. Credit: bestanimations.com

    A key simplification allows us to do this. As the molecules bounce around, they will arrange and rearrange themselves in every possible way they can, thus exploring all their possible configurations, or “microstates.” This process will erase the memory of how the gas started out, allowing us to ignore the problem of initial conditions. Since we can’t keep track of where all the molecules are, and anyway their positions change with time, we assume that any microstate is equally likely.

    This gives us a way to calculate how likely it is to find the box in a given coarse-grained state, or “macrostate”: We simply count the fraction of microstates consistent with what we know about the macrostate. So, for example, it is more likely that the gas is spread uniformly throughout the box rather than clumped in one corner, because only very special microstates have all of the gas molecules in one region of the box.
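
    To make the counting concrete, here is a minimal sketch in Python (my own toy example, not from the article). It treats each molecule as sitting in either the left or right half of the box, counts microstates with the binomial coefficient, and shows why an even split is enormously more probable than a clump:

    from math import comb

    def prob_k_in_left_half(n_molecules, k):
        """Probability that exactly k of n molecules sit in the left half of the box,
        assuming all 2**n microstates are equally likely."""
        return comb(n_molecules, k) / 2 ** n_molecules

    n = 100  # even a "gas" of just 100 molecules
    print(prob_k_in_left_half(n, n))       # all 100 clumped on the left: about 8e-31
    print(prob_k_in_left_half(n, n // 2))  # an even 50/50 split: about 0.08

    For a real gas with something like 10^23 molecules, the clumped macrostate is suppressed by a factor of roughly 2 raised to the power 10^23, which is why we never see it happen on human timescales.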

    For this procedure to work, the total number of microstates, while very large, must be finite. Otherwise the system will never be able to explore all its states. In a box of gas, this finitude is guaranteed by the uncertainty principle of quantum mechanics. Because the position of each molecule cannot be specified exactly, the gas has only a finite number of distinct configurations.

    Gases that start off clumpy for some reason will spread out, for a simple reason: It is statistically far more likely for their molecules to be uniformly distributed rather than clustered. If the molecules begin in a fairly improbable configuration, they will naturally evolve to a more probable one as they bounce around randomly.

    Yet our intuition about gases must be altered when we consider huge spans of time. If we leave the gas in the box for long enough, it will explore some unusual microstates. Eventually all of the particles will accidentally cluster in one corner of the box.

    With this insight, Boltzmann launched into his cosmological speculations. Our universe is intricately structured, so it is analogous to a gas that clusters in one corner of a box—a state that is far from equilibrium. Cosmologists generally assume it must have begun that way, but Boltzmann pointed out that, over the vastness of the eons, even a chaotic universe will randomly fluctuate into a highly ordered state. Attributing the idea to his assistant, known to history only as “Dr. Schuetz,” Boltzmann wrote:

    “It may be said that the world is so far from thermal equilibrium that we cannot imagine the improbability of such a state. But can we imagine, on the other side, how small a part of the whole universe this world is? Assuming the universe is great enough, the probability that such a small part of it as our world should be in its present state, is no longer small.

    “If this assumption were correct, our world would return more and more to thermal equilibrium; but because the whole universe is so great, it might be probable that at some future time some other world might deviate as far from thermal equilibrium as our world does at present.”

    It is a compelling idea. What a shame that it is wrong.

    The trouble was first pointed out by the astronomer and physicist Sir Arthur Eddington in 1931, if not earlier. It has to do with what are now called “Boltzmann brains.” Suppose the universe is like a box of gas and, most of the time, is in thermal equilibrium—just a uniform, undifferentiated gruel. Complex structures, including life, arise only when there are weird fluctuations. At these moments, gas assembles into stars, our solar system, and all the rest. There is no step-by-step process that sculpts it. It is like a swirling cloud that, all of a sudden, just so happens to take the shape of a person.

    The problem is a quantitative one. A small fluctuation that makes an ordered structure in a small part of space is far, far more likely than a large fluctuation that forms ordered structures over a huge region of space. In Boltzmann and Schuetz’s theory, it would be far, far more likely to produce our solar system without bothering to make all of the other stars in the universe. Therefore, the theory conflicts with observation: It predicts that typical observers should see a completely blank sky, without stars, when they look up at night.

    Taking this argument to an extreme, the most common type of observer in this theory is one that requires the minimal fluctuation away from equilibrium. We imagine this as an isolated brain that survives just long enough to notice it is about to die: the so-called Boltzmann brain.

    If you take this type of theory seriously, it predicts that we are just some very special Boltzmann brains who have been deluded into thinking that we are observing a vast, homogeneous universe. At the next instant our delusions are extremely likely to be shattered, and we will discover that there are no other stars in the universe. If our state of delusion lasts long enough for this article to appear, you can safely discard the theory.

    What are we to conclude? Evidently, the whole universe is not like a box of gas after all. A crucial assumption in Boltzmann’s argument is that there are only a finite (if very large) number of molecular configurations. This assumption must be incorrect. Otherwise, we would be Boltzmann brains.

    So, we must seek a new approach to making predictions in cosmology. The second option on our list is that the universe has an infinite number of states available to it. Then the tools that Boltzmann developed are no longer useful in calculating the probability of different things happening.

    But then we’re back to the problem of initial conditions. Unlike a finite box of gas, which forgets about its initial conditions as the molecules scramble themselves, a system with an infinite number of available states cannot forget its initial conditions, because it takes an infinite time to explore all of its available states. To make predictions, we would need a theory of initial conditions. Right now, we don’t have one. Whereas our present theories take the prior state of the universe as an input, a theory of initial conditions would have to give this state as an output. It would thus require a profound shift in the way physicists think.

    The multiverse offers a third way—that is part of its appeal. It allows us to make cosmological predictions in a statistical way within the current theoretical framework of physics. In the multiverse, the volume of space grows indefinitely, all the while producing expanding bubbles with a variety of states inside. Crucially, the predictions do not depend on the initial conditions. The expansion approaches a steady-state behavior, with the high-energy state continually expanding and budding off lower-energy regions. The overall volume of space is growing, and the number of bubbles of every type is growing, but the ratios (and the probabilities) remain fixed.

    The basic idea of how to make predictions in such a theory is simple. We count how many observers in the multiverse measure a physical quantity to have a given value. The probability of our observing a given outcome equals the proportion of observers in the multiverse who observe that outcome.

    For instance, if 10 percent of observers live in regions of the multiverse where dark matter is made out of light particles (such as axions), while 90 percent of observers live in regions where dark matter is made out of heavy particles (which, counterintuitively, are called WIMPs), then we have a 10 percent chance of discovering that dark matter is made of light particles.

    The very best reason to believe this type of argument is that Steven Weinberg of the University of Texas at Austin used it to successfully predict the value of the cosmological constant a decade before it was observed. The combination of a theoretically convincing motivation with Weinberg’s remarkable success made the multiverse idea attractive enough that a number of researchers, including me, have spent years trying to work it out in detail.

    The major problem we faced is that, since the volume of space grows without bound, the number of observers observing any given thing is infinite, making it difficult to characterize which events are more or less likely to occur. This amounts to an ambiguity in how to characterize the steady-state behavior, known as the measure problem.

    Roughly, the procedure to make predictions goes as follows. We imagine that the universe evolves for a large but finite amount of time and count all of the observations. Then we calculate what happens when the time becomes arbitrarily large. That should tell us the steady-state behavior. The trouble is that there is no unique way to do this, because there is no universal way to define a moment in time. Observers in distant parts of spacetime are too far apart and accelerating away from each other too fast to be able to send signals to each other, so they cannot synchronize their clocks. Mathematically, we can choose many different conceivable ways to synchronize clocks across these large regions of space, and these different choices lead to different predictions for what types of observations are likely or unlikely.

    Never Enough Time. Synchronizing clocks is impossible to do in an infinite universe, which in turn undercuts the ability of physics to make predictions. Credit: Matteo Ianeselli / Wikimedia Commons.

    One prescription for synchronizing clocks tells us that most of the volume will be taken up by the state that expands the fastest. Another tells us that most of the volume will be taken up by the state that decays the slowest. Worse, many of these prescriptions predict that the vast majority of observers are Boltzmann brains. A problem we thought we had eliminated came rushing back in.

    When Don Page at the University of Alberta pointed out the potential problems with Boltzmann brains in a paper in 2006, Raphael Bousso at U.C. Berkeley and I were thrilled to realize that we could turn the problem on its head. We found we could use Boltzmann brains as a tool—a way to decide among differing prescriptions for how to synchronize clocks. Any proposal that predicts that we are Boltzmann brains must perforce be wrong. We were so excited (and worried that someone else would have the same idea) that we wrote our paper in just two days after Page’s paper appeared. Over the course of several years, persistent work by a relatively small group of researchers succeeded in using these types of tests to eliminate many proposals and to form something of a consensus in the field on a nearly unique solution to the measure problem. We felt that we had learned how to tame the frightening infinities of the theory.

    Just when things were looking good, we encountered a conceptual problem that I see no escape from within our current understanding: the end-of-time problem. Put simply, the theory predicts that the universe is on the verge of self-destruction.

    The issue came into focus via a thought experiment suggested by Alan Guth of the Massachusetts Institute of Technology (US) and Vitaly Vanchurin at the University of Minnesota Duluth (US). This experiment is unusual even by the standards of theoretical physics. Suppose that you flip a coin and do not see the result. Then you are put into a cryogenic freezer. If the coin came up heads, the experimenters wake you up after one year. If the coin came up tails, the experimenters instruct their descendants to wake you up after 50 billion years. Now suppose you have just woken up and have a chance to bet whether you have been asleep for 1 year or 50 billion years. Common sense tells us that the odds for such a bet should be 50/50 if the coin is fair.

    Don’t Wake Me Up. Hibernation thought-experiments reveal a deep paradox with probability in an infinite multiverse. Credit: Twentieth Century Fox-Film Corporation / Photofest.

    But when we apply our rules for how to do calculations in an eternally expanding universe, we find that you should bet that you only slept for one year. This strange effect occurs because the volume of space is exponentially expanding and never stops. So the number of sleeper experiments beginning at any given time is always increasing. A lot more experiments started a year ago than 50 billion years ago, so most of the people waking up today were asleep for a short time.

    The scenario may sound extreme, even silly. But that’s just because the conditions we are dealing with in cosmology are extreme, involving spans of times and volumes of space that are outside human experience. You can understand the problem by thinking about a simpler scenario that is mathematically identical. Suppose that the population of Earth doubles every 30 years—forever. From time to time, people perform these sleeper experiments, except now the subjects sleep either for 1 year or for 100 years. Suppose that every day 1 percent of the population takes part.

    Now suppose you are just waking up in your cryogenic freezer and are asked to bet how long you were asleep. On the one hand, you might argue that obviously the odds are 50/50. On the other, on any given day, far more people wake up from short naps than from long naps. For example, in the year 2016, sleepers who went to sleep for a short time in 2015 will wake up, as will sleepers who began a long nap in 1916. But since far more people started the experiment in 2015 than in 1916 (always 1 percent of the population), the vast majority of people who wake up in 2016 slept for a short time. So it might be natural to guess that you are waking from a short nap.
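
    A short back-of-the-envelope sketch in Python (my own, with simplified assumptions: yearly rather than daily cohorts, and a fair coin assigning the 1-year or 100-year nap) shows how lopsided the counting becomes:

    def population(year, base_year=1900, base_pop=1.0):
        """Relative population size, doubling every 30 years."""
        return base_pop * 2 ** ((year - base_year) / 30)

    def wakers(year, nap_length_years):
        """Relative number of people waking this year from naps of the given length:
        1 percent of the population alive when the nap began, half of whom drew this length."""
        return 0.5 * 0.01 * population(year - nap_length_years)

    year = 2016
    short_nappers = wakers(year, 1)     # went to sleep in 2015
    long_nappers = wakers(year, 100)    # went to sleep in 1916
    print(short_nappers / long_nappers) # about 9.8: most of this year's wakers took the short nap

    The coin is fair, yet counting the people who wake up in any given year says you should bet on the short nap. That tension is exactly what the thought experiment exposes.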

    The fact that two logical lines of argument yield contradictory answers tells us that the problem is not well-defined. It just isn’t a sensible problem to calculate probabilities under the assumption that the human population grows exponentially forever, and indeed it is impossible for the population to grow forever. What is needed in this case is some additional information about how the exponential growth stops.

    Consider two options. In the first, one day no more babies are born, but every sleeper experiment that has begun eventually finishes. In the second, a huge meteor suddenly destroys the planet, terminating all sleeper experiments. You will find that in option one, half of all observers who ever wake up do so from short naps, while in option two, most observers who ever wake up do so from short naps. It’s dangerous to take a long nap in the second option, because you might be killed by a meteor while sleeping. Therefore, when you wake up, it’s reasonable to bet that you most likely took a short nap. Once the theory becomes well-defined by making the total number of people finite, probability questions have unique, sensible answers.

    In eternal expansion, more sleepers wake up from short naps. Bousso, Stefan Leichenauer at the University of California, Berkeley (US), Vladimir Rosenhaus at the Kavli Institute for Theoretical Physics at UC Santa Barbara (US), and I pointed out that these strange results have a simple physical interpretation: The reason that more sleepers wake up from short naps is that living in an eternally expanding universe is dangerous, because one can run into the end of time. Once we realized this, it became clear that this end-of-time effect was an inherent characteristic of the recipe we were using to calculate probabilities, and it is there whether or not anyone actually decides to undertake these strange sleeper experiments. In fact, given the parameters that define our universe, we calculated that there is about a 50 percent probability of encountering the end of time in the next 5 billion years.

    To be clear about the conclusion: No one thinks that time suddenly ends in spacetimes like ours, let alone that we should be conducting peculiar hibernation experiments. Instead, the point is that our recipe for calculating probabilities accidentally injected a novel type of catastrophe into the theory. This problem indicates that we are missing major pieces in our understanding of physics over large distances and long times.

    To put it all together: Theoretical and observational evidence suggests that we are living in an enormous, eternally expanding multiverse where the constants of nature vary from place to place. In this context, we can only make statistical predictions.

    If the universe, like a box of gas, can exist in only a finite number of available states, theory predicts that we are Boltzmann brains, which conflicts with observations, not to mention common sense. If, on the contrary, the universe has an infinite number of available states, then our usual statistical techniques are not predictive, and we are stuck. The multiverse appears to offer a middle way. The universe has an infinite number of states available, avoiding the Boltzmann brain problem, yet approaches a steady-state behavior, allowing for a straightforward statistical analysis. But then we still find ourselves making absurd predictions. In order to make any of these three options work, I think we will need a revolutionary advance in our understanding of physics.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 10:28 am on February 25, 2021
    Tags: "The Joy of Condensed Matter", A “magnon” is a quasiparticle of magnetization., A “phonon” is a quasiparticle of sound formed from vibrations moving through a crystal., An exciton is a bound state of an electron and an electron hole which are attracted to each other by the electrostatic Coulomb force., , As of now research in fundamental physics has given us the Standard Model and General Relativity (which describes gravity)., , Big questions remain unanswered—like the nature of Dark Matter or whatever is fooling us into thinking there’s dark matter., , Everyone seems to be talking about the problems with physics., Hard times in fundamental physics got you down? Let’s talk excitons., Nautilus, Physicists have a name for things that act like particles even though they’re not: “quasiparticles.”, , The idea of excitons goes back all the way to 1931., Traditionally the job of condensed matter physics was to predict the properties of solids and liquids found in nature.   

    From Nautilus: “The Joy of Condensed Matter” 

    From Nautilus

    February 24, 2021
    John C. Baez

    Hard times in fundamental physics got you down? Let’s talk excitons.

    Everyone seems to be talking about the problems with physics: Peter Woit’s book Not Even Wrong, Lee Smolin’s The Trouble With Physics, and Sabine Hossenfelder’s Lost in Math leap to mind, and they have started a wider conversation. But is all of physics really in trouble, or just some of it? If you actually read these books, you’ll see they’re about so-called “fundamental” physics. Some other parts of physics are doing just fine, and I want to tell you about one. It’s called “condensed matter physics,” and it’s the study of solids and liquids. We are living in the golden age of condensed matter physics.

    But first, what is “fundamental” physics? It’s a tricky term. You might think any truly revolutionary development in physics counts as fundamental. But in fact physicists use this term in a more precise, narrowly delimited way. One of the goals of physics is to figure out some laws that, at least in principle, we could use to predict everything that can be predicted about the physical universe. The search for these laws is fundamental physics.

    The fine print is crucial. First: “in principle.” In principle we can use the fundamental physics we know to calculate the boiling point of water to immense accuracy—but nobody has done it yet, because the calculation is hard. Second: “everything that can be predicted.” As far as we can tell, quantum mechanics says there’s inherent randomness in things, which makes some predictions impossible, not just impractical, to carry out with certainty. And this inherent quantum randomness sometimes gets amplified over time, by a phenomenon called chaos. For this reason, even if we knew everything about the universe now, we couldn’t predict the weather precisely a year from now. So, even if fundamental physics succeeded perfectly, it would be far from giving the answer to all our questions about the physical world. But it’s important nonetheless, because it gives us the basic framework in which we can try to answer these questions.

    As of now research in fundamental physics has given us the Standard Model (which seeks to describe matter and all the forces except gravity) and General Relativity (which describes gravity).

    Standard Model of Particle Physics. Credit: Latham Boyle and Mardus, Wikimedia Commons.

    These theories are tremendously successful, but we know they are not the last word. Big questions remain unanswered—like the nature of Dark Matter or whatever is fooling us into thinking there’s dark matter. Unfortunately, progress on these questions has been very slow since the 1990s. Luckily fundamental physics is not all of physics, and today it is no longer the most exciting part of physics. There is still plenty of mind-blowing new physics being done. And a lot of it—though by no means all—is condensed matter physics.

    Traditionally the job of condensed matter physics was to predict the properties of solids and liquids found in nature. Sometimes this can be very hard: for example, computing the boiling point of water. But now we know enough fundamental physics to design strange new materials—and then actually make these materials, and probe their properties with experiments, testing our theories of how they should work. Even better, these experiments can often be done on a table top. There’s no need for enormous particle accelerators here.

    Let’s look at an example. We’ll start with the humble “hole.” A crystal is a regular array of atoms, each with some electrons orbiting it. When one of these electrons gets knocked off somehow, we get a “hole”: an atom with a missing electron. And this hole can actually move around like a particle! When an electron from some neighboring atom moves to fill the hole, the hole moves to the neighboring atom. Imagine a line of people all wearing hats except for one whose head is bare: If their neighbor lends them their hat, the bare head moves to the neighbor. If this keeps happening, the bare head will move down the line of people. The absence of a thing can act like a thing!
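    To make that hat-passing picture concrete, here is a minimal toy sketch, not from the article: a one-dimensional chain of occupied sites with a single vacancy, in which each electron hop to the left moves the vacancy one site to the right.

        # Toy model of a "hole" moving through a one-dimensional crystal.
        # Each site holds one electron (1) except for a single vacancy (0).
        # At every step the electron to the right of the vacancy hops left,
        # so the vacancy -- the "hole" -- drifts one site to the right.

        def step(sites):
            """Advance the chain by one hop; return the new configuration."""
            sites = list(sites)
            h = sites.index(0)                 # locate the hole
            if h + 1 < len(sites):             # there is an electron to its right
                sites[h], sites[h + 1] = sites[h + 1], sites[h]
            return sites

        chain = [1, 1, 0, 1, 1, 1]             # one missing electron at site 2
        for _ in range(3):
            print(chain, "hole at site", chain.index(0))
            chain = step(chain)

    Tracking the single empty site is far simpler than tracking every electron, which is exactly why the hole is such a useful fiction.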

    The famous physicist Paul Dirac came up with the idea of holes in 1930. He correctly predicted that since electrons have negative electric charge, holes should have positive charge. Dirac was working on fundamental physics: He hoped the proton could be explained as a hole. That turned out not to be true. Later, physicists found another particle that could be explained that way: the “positron.” It’s just like an electron with the opposite charge. And thus antimatter—the evil twin of ordinary matter, with the same mass but the opposite charge—was born. (But that’s another story.)

    In 1931, Werner Heisenberg applied the idea of holes to condensed matter physics. He realized that, just as electrons create an electrical current as they move along, so do holes—but because they’re positively charged, their electrical current goes in the other direction! It became clear that holes carry electrical current in some of the materials called “semiconductors”: for example, silicon with a bit of aluminum added to it. After many further developments, in 1948 the physicist William Shockley patented transistors that use both holes and electrons to form a kind of switch. He later shared the Nobel Prize for this work, and transistors are now widely used in computer chips.

    Holes in semiconductors are not really particles in the sense of fundamental physics. They are just a convenient way of thinking about the motion of electrons. But any sufficiently convenient abstraction takes on a life of its own. The equations that describe the behavior of holes are just like the equations that describe the behavior of particles. So, we can treat holes as if they are particles. We’ve already seen that a hole is positively charged. But because it takes energy to get a hole moving, a hole also acts like it has a mass. And so on: The properties we normally attribute to particles also make sense for holes.

    Physicists have a name for things that act like particles even though they’re not: “quasiparticles.” There are many kinds; holes are just one of the simplest. The beauty of quasiparticles is that we can practically make them to order, with a vast variety of properties. As the quantum physicist Michael Nielsen put it, we now live in the era of “designer matter.”

    For example, consider the “exciton.” Since an electron is negatively charged and a hole is positively charged, they attract each other. And if the hole is much heavier than the electron—remember, a hole has a mass—an electron can orbit a hole much as an electron orbits a proton in a hydrogen atom. Thus, they form a kind of artificial atom called an exciton. It’s a ghostly dance of presence and absence!

    1
    OPPOSITES ATTRACT: This is how an exciton, the binding together of a positively charged “hole” and an electron, moves inside a crystal lattice. Credit: Wikipedia.

    The idea of excitons goes back all the way to 1931. By now we can make excitons in large quantities in certain semiconductors and other materials. They don’t last for long: The electron quickly falls back into the hole. It often takes less than a billionth of a second for this to happen. But that’s enough time to do some interesting things. Just as two atoms of hydrogen can stick together and form a molecule, two excitons can stick together and form a “biexciton.” An exciton can stick to another hole and form a “trion.” An exciton can even stick to a photon—a particle of light—and form something called a “polariton.” It’s a blend of matter and light!
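    For a rough sense of scale, here is a back-of-the-envelope sketch, not taken from the article, of the standard hydrogen-like estimate for an exciton: rescale the hydrogen atom by the electron-hole reduced mass and by the crystal’s dielectric screening. The effective masses and dielectric constant below are illustrative, GaAs-like values rather than quoted measurements.

        # Hydrogen-like estimate of an exciton's binding energy and size.
        # Scales the hydrogen atom by the electron-hole reduced mass and by the
        # dielectric screening of the host crystal.  The parameters are rough,
        # GaAs-like illustrative numbers, not values from the article.

        RYDBERG_EV = 13.606       # hydrogen binding energy (eV)
        BOHR_NM = 0.0529          # hydrogen Bohr radius (nm)

        m_electron = 0.067        # effective electron mass, in units of the free-electron mass
        m_hole = 0.45             # effective (heavy-)hole mass, same units
        eps_r = 12.9              # relative dielectric constant of the crystal

        mu = m_electron * m_hole / (m_electron + m_hole)   # reduced mass
        binding_ev = RYDBERG_EV * mu / eps_r**2            # a few meV
        radius_nm = BOHR_NM * eps_r / mu                   # roughly 10 nm, many lattice sites across

        print(f"exciton binding energy ~ {binding_ev*1000:.1f} meV")
        print(f"exciton radius        ~ {radius_nm:.0f} nm")

    A binding energy of a few milli-electronvolts, far below room-temperature thermal energies, is part of why these artificial atoms are so fragile and short-lived.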

    Can you make a gas of artificial atoms? Yes! At low densities and high temperatures, excitons zip around very much like atoms in a gas. Can you make a liquid? Again, yes: At higher densities, and colder temperatures, excitons bump into each other enough to act like a liquid. At even colder temperatures, excitons can even form a “superfluid,” with almost zero viscosity: if you could somehow get it swirling around, it would go on practically forever.

    This is just a small taste of what researchers in condensed matter physics are doing these days. Besides excitons, they are studying a host of other quasiparticles. A “phonon” is a quasiparticle of sound formed from vibrations moving through a crystal. A “magnon” is a quasiparticle of magnetization: a pulse of electrons in a crystal whose spins have flipped. The list goes on, and becomes ever more esoteric.

    But there is also much more to the field than quasiparticles. Physicists can now create materials in which the speed of light is much slower than usual, say 40 miles an hour. They can even create materials in which light moves as if there were two space dimensions and two time dimensions, instead of the usual three dimensions of space and one of time! Normally we think that time can go forward in just one direction, but in these substances light has a choice between many different directions it can go “forward in time.” On the other hand, its motion in space is confined to a plane.

    In short, the possibilities of condensed matter are limited only by our imagination and the fundamental laws of physics.

    At this point, usually some skeptic comes along and questions whether these things are useful. Indeed, some of these new materials are likely to be useful. In fact a lot of condensed matter physics, while less glamorous than what I have just described, is carried out precisely to develop new improved computer chips—and also technologies like “photonics,” which uses light instead of electrons. The fruits of photonics are ubiquitous—it saturates modern technology, like flat-screen TVs—but physicists are now aiming for more radical applications, like computers that process information using light.

    Then typically some other kind of skeptic comes along and asks if condensed matter physics is “just engineering.” Of course the very premise of this question is insulting: There is nothing wrong with engineering! Trying to build useful things is not only important in itself, it’s a great way to raise deep new questions about physics. For example, the whole field of thermodynamics, and the idea of entropy, arose in part from trying to build better steam engines. But condensed matter physics is not just engineering. Large portions of it are blue-sky research into the possibilities of matter, like I’ve been talking about here.

    These days, the field of condensed matter physics is just as full of rewarding new insights as the study of elementary particles or black holes. And unlike fundamental physics, progress in condensed matter physics is rapid—in part because experiments are comparatively cheap and easy, and in part because there is more new territory to explore.

    So, when you see someone bemoaning the woes of fundamental physics, take them seriously—but don’t let it get you down. Just find a good article on condensed matter physics and read that. You’ll cheer up immediately.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 9:51 am on February 25, 2021 Permalink | Reply
    Tags: "If Aliens Exist Here’s How We’ll Find Them", , , , , But suppose we are not alone. What evidence would we expect to find?, , Don’t expect mass emigration from Earth. It’s a dangerous delusion to think that space offers an escape from Earth’s problems., Even if messages were being transmitted we may not recognize them as artificial because we may not know how to decode them., Fermi Paradox—the surprise expressed by physicist Enrico Fermi over the absence of any signs for the existence of other intelligent civilizations in the Milky Way., Few doubt machines will gradually surpass or enhance more and more of our distinctively human capabilities., If those hypothetical aliens continued to keep watch what would they witness in the next century?, Let’s think specifically about the future of space exploration. Successful missions such as Viking; Cassini; New Horizons; Juno; and Rosetta were all done with last-century technology., Machine learning is advancing fast as is sensor technology. In contrast the cost gap between crewed and autonomous missions remains huge., Nautilus, One of the most plausible long-term energy sources available to an advanced technology is starlight., Powerful alien civilizations might build a mega-structure known as a “Dyson Sphere” to harvest stellar energy from one star or many stars or even from an entire galaxy., , SETI so far has focused on the radio part of the spectrum. But we should explore all wavebands including the optical and X-ray band., Suppose aliens existed., Technological evolution of intelligent beings is only just beginning., , The habit of referring to “alien civilizations” may in itself be too restrictive., The most crucial impediment to space flight stems from the intrinsic inefficiency of chemical fuel and the requirement to carry a weight of fuel far exceeding that of the payload., We believe the future of crewed spaceflight lies with privately funded adventurers like SpaceX and Blue Origin., We can realistically expect that during this century the entire solar system—planets; moons; and asteroids—will be explored by flotillas of robotic spacecraft., We would argue that inspirationally led private companies should front all missions involving humans as cut-price high-risk ventures., Will an armada of spacecraft launched from Earth spawn new oases of life elsewhere?   

    From Nautilus: “If Aliens Exist Here’s How We’ll Find Them” 

    From Nautilus

    February 24, 2021
    Martin Rees & Mario Livio

    Suppose aliens existed, and imagine that some of them had been watching our planet for its entire four and a half billion years. What would they have seen? Over most of that vast timespan, Earth’s appearance altered slowly and gradually. Continents drifted; ice cover waxed and waned; successive species emerged, evolved, with many of them becoming extinct.

    But in just a tiny sliver of Earth’s history—the last hundred centuries—the patterns of vegetation altered much faster than before. This signaled the start of agriculture—and later urbanization. The changes accelerated as the human population increased.

    Then came even faster changes. Within just a century, the amount of carbon dioxide in the atmosphere began to rise dangerously fast. Radio emissions that couldn’t be explained by natural processes appeared and something else unprecedented happened: Rockets launched from the planet’s surface escaped the biosphere completely. Some spacecraft were propelled into orbits around the Earth; others journeyed to the moon, Mars, Jupiter, and even Pluto.

    If those hypothetical aliens continued to keep watch, what would they witness in the next century? Will a final spasm of activity be followed by silence due to climate change? Or will the planet’s ecology stabilize? Will there be massive terraforming? Will an armada of spacecraft launched from Earth spawn new oases of life elsewhere?

    1
    LASER POWER: A crucial impediment to space flight is the inefficiency of chemical fuel. One day a laser power station, located on Earth, might generate a beam to “push” a craft through space. Credit: NASA / Pat Rawlings (SAIC).

    Let’s think specifically about the future of space exploration. Successful missions such as Viking, Cassini, New Horizons, Juno, and Rosetta were all done with last-century technology.

    NASA/Viking 1 Lander


    NASA Viking 2 Lander

    NASA/ESA/ASI Cassini-Huygens Spacecraft.

    NASA/New Horizons spacecraft.

    NASA/Juno at Jupiter.

    ESA/Rosetta spacecraft, European Space Agency’s legendary comet explorer Rosetta annotated.

    We can realistically expect that during this century, the entire solar system—planets, moons, and asteroids—will be explored by flotillas of robotic craft.

    Will there still be a role for humans in crewed spacecraft?

    There’s no denying that NASA’s new Perseverance rover speeding across the Jezero crater on Mars may miss some startling discoveries that no human geologist could reasonably overlook.

    Perseverance

    NASA Perseverance Mars Rover annotated.

    But machine learning is advancing fast, as is sensor technology. In contrast, the cost gap between crewed and autonomous missions remains huge.

    We believe the future of crewed spaceflight lies with privately funded adventurers like SpaceX and Blue Origin, prepared to participate in a cut-price program far riskier than western nations could impose on publicly supported projects. These ventures—bringing a Silicon-Valley-type culture into a domain long dominated by NASA and a few aerospace conglomerates—have innovated and improved rocketry far faster than NASA or the European Space Agency have done. The future role of the national agencies will be attenuated—becoming more akin to an airport than to an airline.

    We would argue that inspirationally led private companies should front all missions involving humans as cut-price, high-risk ventures. There would still be many volunteers—a few perhaps even accepting one-way tickets—driven by the same motives as early explorers and mountaineers. The phrase “space tourism” should be avoided. It lulls people into believing such ventures are routine and low-risk. If that’s the perception, the inevitable accidents will be as traumatic as the space shuttle disasters were. These exploits must be sold as dangerous extreme sports, or as intrepid exploration.

    The most crucial impediment to space flight stems from the intrinsic inefficiency of chemical fuel and the requirement to carry a weight of fuel far exceeding that of the payload. So long as we are dependent on chemical fuels, interplanetary travel will remain a challenge. Nuclear power could be transformative: by allowing much higher in-course speeds, it would drastically cut transit times within the solar system, reducing not only astronauts’ boredom but also their exposure to damaging radiation. It is more efficient still if the energy source can remain on the ground; for instance, propelling spacecraft into orbit via a “space elevator” and then using a “star-shot”-type laser beam generated on Earth to push on a “sail” attached to the spacecraft.
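    The “weight of fuel far exceeding that of the payload” follows from the Tsiolkovsky rocket equation, mass ratio = exp(delta-v / exhaust velocity). A minimal sketch with illustrative numbers of our own choosing, not mission figures:

        # Why chemical rockets must be mostly propellant: the Tsiolkovsky rocket equation.
        # mass_ratio = exp(delta_v / v_exhaust).  All numbers below are illustrative.
        import math

        v_exhaust_chemical = 4.4   # km/s, roughly the best hydrogen/oxygen engines
        v_exhaust_nuclear = 9.0    # km/s, an assumed figure for a nuclear-thermal design

        delta_v = 12.0             # km/s, a rough total budget for an interplanetary trip

        for label, v_e in [("chemical", v_exhaust_chemical), ("nuclear", v_exhaust_nuclear)]:
            ratio = math.exp(delta_v / v_e)      # launch mass / final (dry) mass
            fuel_fraction = 1 - 1 / ratio
            print(f"{label:8s}: mass ratio {ratio:5.1f}, i.e. {fuel_fraction:.0%} of launch mass is propellant")

    The exponential is the whole story: doubling the exhaust velocity does far more than doubling the payload you can carry.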

    By 2100, thrill seekers in the mold of Felix Baumgartner (the Austrian skydiver who in 2012 broke the sound barrier in free fall from a high-altitude balloon) may have established bases on Mars, or maybe even on asteroids. Elon Musk has said he wants to die on Mars—“but not on impact.” It’s a realistic goal, and an alluring one to some.

    But don’t expect mass emigration from Earth. It’s a dangerous delusion to think that space offers an escape from Earth’s problems. We’ve got to solve those here. Coping with climate change or the COVID-19 pandemic may seem daunting, but it’s a piece of cake compared to terraforming Mars. There’s no place in our solar system that offers an environment even as clement as the Antarctic or the top of Mount Everest. Simply put, there’s no Planet B for ordinary risk-averse people.

    Still, we (and our progeny here on Earth) should cheer on the brave space adventurers. They have a pivotal role to play in spearheading the post-human future and determining what happens in the 22nd century and beyond.

    Pioneer explorers will be ill-adapted to their new habitat, so they will have a compelling incentive to re-design themselves. They’ll harness the super-powerful genetic and cyborg technologies that will be developed in coming decades. This might be the first step toward divergence into a new species.

    Organic creatures need a planetary surface environment on which life could emerge and evolve. But if post-humans make the transition to fully inorganic intelligence, they won’t need an atmosphere. They may even prefer zero-gravity, especially for constructing massive artifacts. It’s in deep space that non-biological brains may develop powers that humans can’t even imagine.

    There are chemical and metabolic limits to the size and processing power of organic brains. Maybe we are close to these limits already. But no such limits constrain electronic computers (still less, perhaps, quantum computers). So, by any definition of “thinking,” the amount and intensity that can be achieved by organic human-type brains will be swamped by the cerebrations of AI.

    We are perhaps near the end of Darwinian evolution, but technological evolution of intelligent beings is only just beginning. It may happen fastest away from Earth—we wouldn’t expect (and certainly wouldn’t wish for) such rapid changes in humanity here on the Earth, though our survival may depend on ensuring the AI on Earth remains “benevolent.”

    Few doubt machines will gradually surpass or enhance more and more of our distinctively human capabilities. Disagreements are only about the timescale on which this will happen. Inventor and futurist Ray Kurzweil says it will be just a matter of a few decades. More cautious scientists envisage centuries. Either way, the timescales for technological advances are an instant compared to the timescales of the Darwinian evolution that led to humanity’s emergence—and more relevantly, less than a millionth of the vast expanses of cosmic time ahead. The products of future technological evolution could surpass humans by as much as we have surpassed slime mold.

    But, you may wonder, what about consciousness?

    Philosophers and computer scientists debate whether consciousness is something that characterizes only the type of wet, organic brains possessed by humans, apes, and dogs. Would electronic intelligences, even if their intellects would seem superhuman, lack self-awareness? The ability to imagine things that do not exist? An inner life? Or is consciousness an emergent property that any sufficiently complex network will eventually possess? Some say it’s irrelevant and semantic, like asking whether submarines can swim.

    We don’t think it is. If the machines are what computer scientists refer to as “zombies,” we would not accord their experiences the same value as ours, and the post-human future would seem rather bleak. On the other hand, if they are conscious, we should welcome the prospect of their future hegemony.

    What will their guiding motivation be if they become fully autonomous entities? We have to admit we have absolutely no idea. Think of the variety of bizarre motives (ideological, financial, political, egotistical, and religious) that have driven human endeavors in the past. Here’s one simple example of how different they could be from our naive expectations: They could be contemplative. Even less obtrusively, they may realize it’s easier to think at low temperatures, and therefore retreat far from any star, or even hibernate for billions of years until the cosmic microwave background has cooled far below its current 3 degrees Kelvin. At the other end of the spectrum, they could be expansionist, which seems to be the expectation of most who’ve thought about the future trajectory of civilizations.

    Even if life had originated only on Earth, it need not remain a marginal, trivial feature of the cosmos. Humans could jump-start a diaspora whereby ever-more complex intelligence spreads through the galaxy, transcending our limitations. The “sphere of influence” (or some would envisage a “frontier of conquest”) could encompass the entire galaxy, spreading via self-reproducing machines, transmitting DNA or instructions for 3-D printers. The leap to neighboring stars is just an early step in this process. Interstellar voyages—or even intergalactic voyages—would hold no terrors for near-immortals.

    Moreover, even if the only propellants used were the currently known ones, this galactic colonization would take less time, measured from today, than the more than 500 million years elapsed since the Cambrian explosion. And even less than the 55 million years since the advent of primates, if it proceeds relativistically.
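    A rough order-of-magnitude check of that claim, using toy numbers of our own rather than the authors’: even at one percent of light speed, and with long pauses to build the next generation of ships, a settlement wave could cross the galactic disk in a few tens of millions of years.

        # Rough timescale check: crossing the Milky Way versus the ~500 million years
        # since the Cambrian explosion.  All numbers below are illustrative assumptions.

        galaxy_diameter_ly = 100_000          # light-years, approximate
        cruise_speed_c = 0.01                 # an assumed 1% of light speed
        hop_ly = 10                           # assumed distance between colonized systems
        pause_per_hop_yr = 1_000              # assumed time to build the next ship at each stop

        travel_yr = galaxy_diameter_ly / cruise_speed_c                 # time spent in transit
        pauses_yr = (galaxy_diameter_ly / hop_ly) * pause_per_hop_yr    # time spent at stopovers

        total_myr = (travel_yr + pauses_yr) / 1e6
        print(f"rough front-crossing time: ~{total_myr:.0f} million years (vs. ~500 Myr since the Cambrian)")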

    The expansionist scenarios would have the consequence that our descendants would become so conspicuous that any alien civilization would become aware of them.

    The crucial question remains: Are there other expansionists whose domain may impinge on ours?

    We don’t know. The emergence of intelligence may require such a rare chain of events and happenstance contingencies—like winning a lottery—that it has not occurred anywhere else. That will disappoint SETI searchers and explain the so-called Fermi Paradox—the surprise expressed by physicist Enrico Fermi over the absence of any signs for the existence of other intelligent civilizations in the Milky Way. But suppose we are not alone. What evidence would we expect to find?

    2
    The Allen Telescope Array, located at the Hat Creek Observatory in the Cascade Mountains, about 300 miles north of San Francisco, makes astronomical observations and stays attuned to signs of extraterrestrial life. Credit: Seth Shostak / SETI Institute.


    SETI@home, a BOINC [Berkeley Open Infrastructure for Network Computing] project that originated in the Space Science Lab at UC Berkeley.

    Suppose that there are indeed many other planets where life emerged, and that on some of them Darwinian evolution followed a similar track to the one on Earth. Even then, it’s highly unlikely that the key stages would be synchronized. If the emergence of intelligence and technology on a planet lags significantly behind what has happened on Earth (because, for example, the planet is younger, or because some bottlenecks in evolution have taken longer to negotiate) then that planet would reveal no evidence of ET. Earth itself would probably not have been detected as a life-bearing planet during the first 2 billion years of its existence.

    But around a star older than the sun, life could have had a head start of a billion years or more. Note that the current age of the solar system is about half the age of our galaxy and also half of the sun’s predicted total lifetime. We expect that a significant fraction of the stars in our galaxy are older than the sun.

    The history of human technological civilization is measured in mere millennia. It may be only a few more centuries before humans are overtaken or transcended by inorganic intelligence, which will then persist, continuing to evolve on a faster-than-Darwinian timescale for billions of years. Organic human-level intelligence may be, generically, just a brief interlude before the machines take over, so if alien intelligence had evolved similarly, we’d be most unlikely to catch it in the brief sliver of time when it was still embodied in that form. Were we to detect ET, it would far more likely be an electronic intelligence whose dominant creatures aren’t flesh and blood—and perhaps aren’t even tied to a planetary surface.

    Astronomical observations have now demystified many of the probability factors in the so-called Drake Equation—the probabilistic attempt traditionally used to estimate the number of advanced civilizations in the Milky Way.

    Frank Drake with his Drake Equation. Credit: Frank Drake.

    Drake Equation, Frank Drake, SETI Institute.

    The number of potentially habitable planets has changed from being completely unknown only a couple of decades ago to being directly determined from the observations. At the same time, we must reinterpret one of the key factors in the Drake equation. The lifetime of an organic civilization may be millennia at most. But its electronic diaspora could continue for billions of years.
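    The point about the lifetime factor falls straight out of the structure of the equation itself, N = R* · fp · ne · fl · fi · fc · L: the expected number of detectable civilizations scales linearly with L. A sketch with placeholder values, every one of them illustrative rather than a measurement:

        # The Drake equation: N = R* * fp * ne * fl * fi * fc * L.
        # All factor values below are placeholders; the point is only that N
        # scales linearly with the lifetime L of a detectable civilization.

        def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
            """Expected number of detectable civilizations in the galaxy."""
            return R_star * f_p * n_e * f_l * f_i * f_c * L

        common = dict(R_star=1.0,          # star-formation rate (stars per year), illustrative
                      f_p=0.5, n_e=0.2, f_l=0.1, f_i=0.01, f_c=0.1)

        print("organic civilization, L = 10,000 yr:    N ~", f"{drake(**common, L=1e4):.2g}")
        print("electronic diaspora,  L = 1 billion yr: N ~", f"{drake(**common, L=1e9):.2g}")

    Stretching L from ten thousand years to a billion years multiplies N by a factor of a hundred thousand, which is the reinterpretation the authors are pointing to.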

    If SETI succeeded, it would then be unlikely that the signal would be a decodable message. It would more likely reveal a byproduct (or maybe even a malfunction) of some super-complex machine beyond our comprehension.

    The habit of referring to “alien civilizations” may in itself be too restrictive. A civilization connotes a society of individuals. In contrast, ET might be a single integrated intelligence. Even if messages were being transmitted, we may not recognize them as artificial because we may not know how to decode them, in the same way that a veteran radio engineer familiar only with amplitude-modulation (AM) transmission might have a hard time decoding modern wireless communications. Indeed, compression techniques aim to make the signal as close to noise as possible; insofar as a signal is predictable, there’s scope for more compression.

    SETI so far has focused on the radio part of the spectrum. But we should explore all wavebands, including the optical and X-ray band. We should also be alert for other evidence of non-natural phenomena or activity. What might then be a relatively generic signature? Energy consumption, one of the potential hallmarks of an advanced civilization, appears to be hard to conceal.

    One of the most plausible long-term energy sources available to an advanced technology is starlight. Powerful alien civilizations might build a mega-structure known as a “Dyson Sphere” to harvest stellar energy from one star or many stars or even from an entire galaxy.

    The other potential long-term energy source is controlled fusion of hydrogen into heavier nuclei. In both cases, waste heat and a detectable mid-infrared signature would be an inevitable outcome. Or, one might seek evidence for massive artifacts such as the Dyson Sphere itself. Intriguingly, it’s worth looking for artifacts within our own solar system: Maybe we can rule out visits by human-scale aliens, but if an extraterrestrial civilization had mastered nanotechnology and transferred its intelligence to machines, the “invasion” might consist of a swarm of microscopic probes that could have evaded notice. Still, it would be easier to send a radio or laser signal than to traverse the mind-boggling distances of interstellar space.
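    The “detectable mid-infrared signature” follows from Wien’s displacement law: a shell intercepting a star’s light must re-radiate that energy as waste heat at whatever temperature the shell settles to. A minimal sketch, assuming shell temperatures of a few hundred kelvin (our assumption, not the authors’):

        # Why a Dyson sphere would glow in the infrared: Wien's displacement law,
        # lambda_peak = b / T.  The shell temperatures are assumed, illustrative values.

        WIEN_B = 2.898e-3          # Wien's displacement constant, metre-kelvin

        for T in (300, 150):       # assumed waste-heat temperatures in kelvin
            peak_microns = WIEN_B / T * 1e6
            print(f"a shell at {T} K radiates most strongly near {peak_microns:.0f} micrometers")

    A shell near room temperature peaks around 10 micrometers, squarely in the mid-infrared, which is why such surveys look there.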

    Finally, let’s fast forward not for just a few millennia, but for an astronomical timescale, millions of times longer. As the interstellar gas is consumed, the ecology of stellar births and deaths in our galaxy will proceed more gradually, until jolted by the environmental shock of a collision with the Andromeda galaxy, about 4.5 billion years hence. The debris of our galaxy, Andromeda, and their smaller companions (known as the Local Group) will aggregate into one amorphous (or perhaps elliptical) galaxy. Due to the accelerating cosmic expansion, distant galaxies will move farther away, receding faster and faster until they disappear—rather like objects falling into a black hole—encountering a horizon beyond which they are lost from view and causal contact. But the remnants of our Local Group could continue for a far longer time. Long enough, perhaps, for what has been dubbed a “Kardashev Type III” phenomenon, in which a civilization uses the energy from one or more galaxies, and perhaps even that released from supermassive black holes, to emerge as the culmination of the long-term trend for living systems to gain complexity and negative entropy (a higher degree of order).

    The only limitations set by fundamental physics would be the number of accessible protons (since those can in principle be transmuted into any elements), and the total amount of accessible energy (E = mc², where m is mass and c is the speed of light), again transformable from one form to another.

    Essentially all the atoms that were once in stars and gas could be transformed into structures as intricate as a living organism or silicon chips but on a cosmic scale. A few science-fiction authors envisage stellar-scale engineering to create black holes and wormholes—concepts far beyond any technological capability that we can imagine, but not in violation of basic physical laws.

    If we want to go to further extremes, the total mass-energy content in the Local Group isn’t the limit of the available resources. It would still be consistent with physical laws for an incredibly advanced civilization to lasso the galaxies that are receding because of the cosmic expansion of space before they accelerate and disappear over the horizon. Such a hyper-intelligent species could pull them in to construct a segment resembling Einstein’s original idea of a static universe in equilibrium, with a mean density such that the cosmic repulsion caused by dark energy is precisely balanced by gravity.

    Everything we’ve said is consistent with the laws of physics and the cosmological model as we understand them. Our speculations assume that the repulsive force causing cosmic acceleration persists (and is described by dark energy or Einstein’s cosmological constant). But we should be open-minded about the possibility that there is much we don’t understand.

    Human brains have changed relatively little since our ancestors roamed the African savannah and coped with the challenges that life then presented. It is surely remarkable that these brains have allowed us to make sense of the quantum subatomic world and the cosmos at large—far removed from the common sense, everyday world in which we have evolved.

    Scientific frontiers are now advancing fast. But we may at some point hit the buffers. There may be phenomena, some of them crucial to our long-term destiny, of which we are no more aware than a gorilla is of the nature of stars and galaxies. Physical reality could encompass complexities that neither our intellect nor our senses can grasp. Electronic brains may have a rather different perception of reality. Consequently, we cannot predict or perhaps even understand the motives of such brains. We cannot assess whether the Fermi paradox signifies their absence or simply their preference.

    Conjectures about advanced or intelligent life are shakier than those about simple life. Yet there are three features that may characterize the entities that SETI searches could reveal.

    • Intelligent life is likely not to be organic or biological.

    • It will not remain on the surface of the planet where its biological precursor emerged and evolved.

    • We will not be able to fathom the intentions of such life forms.

    Two familiar maxims should pertain to all SETI searches. On one hand, “absence of evidence isn’t evidence of absence,” but on the other, “extraordinary claims require extraordinary proof.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 2:44 pm on January 17, 2021 Permalink | Reply
    Tags: "A Breakthrough in Measuring the Building Blocks of Nature", Alexey Grinin-Max Planck Institute for Quantum Optics [MPG Institut für Quantenoptik] (DE), , Future experiments-as always-will be the ultimate arbiter in this matter., Grinin’s team used a technique known as frequency comb spectroscopy., It’s not meaningless to ask what the “size” of the proton is. The study by Grinin’s team highlights the fact that defining this notion remains a rather tricky affair., , Nautilus, Not only did Grinin’s team find a value for the charge radius of the proton consistent with the value obtained in muonic hydrogen they also inferred a more precise value for the Rydberg constant., , The smaller value has been adopted as the official value on the National Institute of Standards and Technology CODATA list of recommended physical constants., The technique reduced to only about one part in ten trillion the observational uncertainties in the frequency of light these transitions emitted—a staggering degree of accuracy by any standard.   

    From Nautilus: “A Breakthrough in Measuring the Building Blocks of Nature” 

    From Nautilus

    Jan 08, 2021
    Subodh Patil

    1
    An artistic rendering of the quarks and gluons that make up a proton. Credit: D. Dominguez/CERN.

    In a recent experiment done at the Max Planck Institute for Quantum Optics [MPG Institut für Quantenoptik] (DE), physicist Alexey Grinin and his colleagues came a step closer to resolving one of the more significant puzzles to have arisen in particle physics over the past decade. The puzzle is this: Ordinarily, when you set about measuring the size of something, you’d expect to get the same answer no matter what you use to measure it—a soda can has the diameter it does whether you measure it with a tape measure or callipers (provided these are properly calibrated, of course). Something must be amiss if your attempts to measure the can return different answers depending on the equipment, yet this is precisely what’s happened over multiple attempts to measure the spatial extent of a proton. What’s potentially at stake is our understanding of the building blocks of reality: the differing measurements could be heralding the existence of new forces or particles.

    What does it mean for a subatomic particle to have a measurable “size”? Mathematically, fundamental particles are idealized as point particles, which is to say that, as far as we can tell, they have no meaningfully discernible spatial extent, or substructure, at all. True, all fundamental particles are associated with a quantum mechanical wave packet, which does have a spatial extent that depends on the energy of the particle. Yet these basic bits of Lego are entities whose wave packets you can, in principle, pack into as small a region as you’d like before the very notion of continuum geometry starts, at the Planck scale, to lose meaning. Fundamental particles organize into something analogous to a mini periodic table—consisting of the various force carrying particles, such as photons and gluons (the carrier particles of the strong nuclear force), along with three generations of quarks and leptons and the mass-generating Higgs boson—and can stack together in different combinations to form a zoo of so-called composite particles.

    Standard Model of Particle Physics via http://www.plus.maths.org .

    CERN CMS Higgs Event May 27, 2012.


    CERN ATLAS Higgs Event
    June 12, 2012.

    Perhaps the most familiar and ubiquitous of these is the proton. With at least one in every kind of element, it’s made up of two up quarks and a down quark that dance around each other in a tightly bound orbit maintained by exchanging gluons.

    The quark structure of the proton. Credit: Arpad Horvath, 16 March 2006.

    This exchange process is so energetic that most of the mass of the proton (or for that matter, most of the material that makes us up) derives from the energy contained in these gluons—a consequence, as Einstein informed us, of E being equal to mc².

    3
    Fundamental particles organize into something analogous to a mini periodic table. Credit: CERN.

    So it’s not meaningless to ask what the “size” of the proton is. The study by Grinin’s team highlights the fact that defining this notion remains a rather tricky affair. And, as we’ll see, their results serve to sharpen the mystery as to why other measurement methods researchers have used previously disagree.

    A physicist can reasonably infer a proton’s size from the “charge radius”—roughly the averaged spatial extent of quark orbits inside. This quantity is probed in slightly different ways by electrons and muons (another sort of fundamental particle), through the orbital configurations they adopt as they form “bound states” with the proton—atomic hydrogen in the case of electrons, muonic hydrogen in the case of muons. Because muons are about 200 times heavier than electrons, their lowest energy orbital configurations are much more tightly bound around the proton than are electrons in atomic hydrogen. Consequently, the differences in the energies of various orbitals in muonic hydrogen are much more sensitive to the proton’s size, as well as being more “high pitched,” than those of regular atomic hydrogen.

    In other words, just as a guitar string at a given tension produces a much higher note when fretted at 1/200th of its open length than when plucked open, the typical frequencies of the radiation emitted by transitions in muonic hydrogen are about 200 times higher than those in atomic hydrogen. These frequencies relate to something called the Rydberg constant—the tension of the guitar string in the analogy—which appears to be one of the more significant sources of uncertainty in the proton-size measurements. Orbital energy levels depend on both this constant and the charge radius of the proton.
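    The factor of “about 200” can be checked in a couple of lines. To leading order, a hydrogen-like atom’s energy levels scale with the reduced mass of the orbiting particle and the nucleus, and its Bohr radius scales inversely with it; the sketch below, offered only as an illustration, uses the standard particle masses in units of the electron mass.

        # How much more tightly a muon orbits a proton than an electron does.
        # Hydrogen-like energies scale with the reduced mass; the Bohr radius
        # scales inversely with it.  Masses are in units of the electron mass.

        m_e, m_mu, m_p = 1.0, 206.77, 1836.15

        def reduced_mass(m_orbiting, m_nucleus):
            return m_orbiting * m_nucleus / (m_orbiting + m_nucleus)

        scale = reduced_mass(m_mu, m_p) / reduced_mass(m_e, m_p)
        print(f"muonic hydrogen: orbits ~{scale:.0f}x smaller, transition frequencies ~{scale:.0f}x higher")
        # ~186x, the "about 200 times" quoted in the text, which is why muonic
        # hydrogen is far more sensitive to the proton's finite size.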

    Proton-size measurements didn’t conflict for decades. Different methods—like measuring the radius by observing electrons orbit within hydrogen atoms, or by scattering energetic electrons off of unbound protons—had converged on a value of 0.875 (give or take 0.006) femtometers. That’s a little less than a trillionth of a millimeter. That convergence was disrupted in 2010, when a paper came out titled, The size of the proton [Nature]. As the researchers reported, measurements involving orbital configurations in muonic hydrogen returned a value of 0.842, give or take 0.001 femtometers. This may not seem like much of a difference, but it’s the accompanying error bars that matter. The measurements are, individually, so precise that their disagreement is over seven standard deviations—there is less than one in about a trillion chance that the discrepancy could be a statistical fluke.
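    For the curious, the quoted odds correspond to the Gaussian tail probability at seven standard deviations, which can be computed directly; a quick sketch of our own, assuming purely statistical, Gaussian errors:

        # Probability of a 7-standard-deviation fluke under Gaussian statistics.
        from math import erfc, sqrt

        sigma = 7.0
        p = 0.5 * erfc(sigma / sqrt(2))     # one-sided Gaussian tail probability
        print(f"p ~ {p:.1e}, i.e. roughly 1 chance in {1/p:.1e}")

    That works out to about one chance in eight hundred billion, in line with the “about a trillion” figure quoted above.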

    There are only two possibilities for the anomalous result if the equipment used in the experiments and their calibrations all check out after careful scrutiny. Either some combination of physical constants, which researchers assume in order to experimentally infer the proton charge radius, isn’t known as accurately as we thought, or there is something different about the way muons interact with protons, compared to electrons, that renders particle physics incomplete.

    The latter possibility, if substantiated, would, of course, cause a flurry of excitement among theoretical physicists to say the least, as it could imply the existence of new forces and particles. Not only would it reshape our understanding of the universe, it would represent a throwback to the days when physicists discovered particles (such as the muon itself) using equipment that could fit on a proverbial tabletop.

    Over the past few years, various teams have been attempting to get to the bottom of the matter by looking at different orbital transitions in atomic hydrogen that are sensitive to different combinations of the Rydberg constant and the charge radius. A 2019 measurement [Science] by a group of researchers at York University in Canada looked at a particular orbital transition that was independent of the value of this constant, finding a value of 0.833 ± 0.010 femtometers, consistent with the smaller value obtained in muonic hydrogen.

    Grinin’s team went a step further. They used a technique known as frequency comb spectroscopy. It involves pulses of laser light that are a superposition of equally spaced frequencies—a ruler in frequency space if you will—that allowed them to look at two different orbital transitions in atomic hydrogen sensitive to two different combinations of the proton size and the Rydberg constant. This permitted them to determine both with unprecedented accuracy. The technique reduced, to only about one part in ten trillion, the observational uncertainties in the frequency of light these transitions emitted—a staggering degree of accuracy by any standard.
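    The logic of using two transitions is, at heart, two linear equations in two unknowns: to a good approximation, each measured frequency is a theory-supplied multiple of the Rydberg constant plus a small finite-size correction proportional to the squared charge radius. The coefficients and “measured” numbers in the sketch below are made-up placeholders chosen only to show the algebra, not real spectroscopic values.

        # Two transitions, two unknowns: the Rydberg-like constant R and the squared
        # charge radius r2.  Model: nu_i = A_i * R + B_i * r2, with all numbers
        # below purely illustrative placeholders.

        A1, B1, nu1 = 0.75, 1.2e-5, 0.7500084
        A2, B2, nu2 = 0.8888889, 3.0e-6, 0.8888910

        det = A1 * B2 - A2 * B1
        R  = (nu1 * B2 - nu2 * B1) / det
        r2 = (A1 * nu2 - A2 * nu1) / det
        print(f"inferred Rydberg-like constant: {R:.7f}")
        print(f"inferred squared charge radius: {r2:.2e}")

    Because the finite-size terms are tiny compared with the Rydberg terms, both measured frequencies must be known to extraordinary precision for the subtraction to say anything about the proton, which is where the frequency comb earns its keep.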

    Not only did Grinin’s team find a value for the charge radius of the proton consistent with the value obtained in muonic hydrogen, they inferred a much more precise value for the Rydberg constant. This accounted for some part of the discrepancy seen in other measurements in atomic hydrogen (which presumed a less accurate value).

    It thus appears that the experimental value of the proton charge radius Grinin’s team obtained in atomic hydrogen is converging on the smaller values for the proton charge radius other researchers initially obtained in muonic hydrogen. The smaller value has by now even been adopted as the official value on the National Institute of Standards and Technology CODATA list of recommended physical constants—the official almanac for nuclear and atomic chemists and physicists.

    Although this convergence, based on the continued refinement of experimental techniques, did not deliver the new physics some may have been hoping for, even the most despondent theoretical physicist can acknowledge the experimental artistry that seems to be bringing the matter closer to conclusion. What remains unresolved is the reason why measurements, relying on different spectroscopic methods in atomic hydrogen, return different values for the charge radius of the proton. The mystery, and along with it, the diminishing hope of particle physicists, endures for the time being.

    This was enough motivation for a team of theoretical physicists, led by Cliff Burgess at the Perimeter Institute, in Canada, to systematically catalogue all possible sources of theoretical uncertainty in atomic spectroscopy over a series of papers [iNSPIRE!HEP]. By isolating the ways in which new forces and particles might leave a tell-tale signature, they’ve thrown the gauntlet firmly back to the experimentalists. Future experiments, as always, will be the ultimate arbiter in this matter.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 1:54 pm on January 17, 2021 Permalink | Reply
    Tags: "Even Physicists Find the Multiverse Faintly Disturbing", 'How do you feel about the multiverse?' The question was not out of place in our impromptu dinner-table lecture yet it caught me completely off-guard., , , If we find out that all we know and all we can ever know is just one pocket in the multiverse the entire foundation upon which we have laid our coordinate grid shifts., It’s not the immensity or even the inscrutability but that it reduces physical law to happenstance., Nautilus, The multiverse has been hotly debated and continues to be a source of polarization among some of the most prominent scientists of the day.   

    From Nautilus: “Even Physicists Find the Multiverse Faintly Disturbing” 

    From Nautilus

    January 12, 2017 [Brought forward today 1.17.21]
    Tasneem Zehra Husain

    It’s not the immensity or even the inscrutability, but that it reduces physical law to happenstance.

    Multiverse. Image credit: public domain, retrieved from https://pixabay.com/

    1
    A Tate Modern employee views The Passing Winter 2005 by Japanese artist Yayoi Kusama. Credit: Daniel Leal-Olivas/AFP/Getty Images.

    ‘How do you feel about the multiverse?’ The question was not out of place in our impromptu dinner-table lecture, yet it caught me completely off-guard. It’s not that I’ve never been asked about the multiverse before, but explaining a theoretical construct is quite different to saying how you feel about it. I can put forth all the standard arguments and list the intellectual knots a multiverse would untangle; I can sail through the facts and technicalities, but I stumble over the implications.

    In physics we’re not supposed to talk about how we feel. We are a hard-nosed, quantitative, and empirical science. But even the best of our dispassionate analysis begins only after we have decided which avenue to pursue. When a field is nascent, there tend to be a range of options to consider, all of which have some merit, and often we are just instinctively drawn to one. This choice is guided by an emotional reasoning that transcends logic. Which position you choose to align yourself with is, as Stanford University physicist Leonard Susskind says, ‘about more than scientific facts and philosophical principles. It is about what constitutes good taste in science. And like all arguments about taste, it involves people’s aesthetic sensibilities.’

    My own research is in string theory, and one of its features is that there exist many logically consistent versions of the universe other than our own. The same process that created our universe can also bring those other possibilities to life, creating an infinity of other universes where everything that can occur, does. The chain of arguments starts from a place I’m familiar with, and I can follow the flourishes that the equations make as they dance down the page toward this particular conclusion, but, while I understand the multiverse as a mathematical construction, I cannot bring myself to believe it will leap out of the realm of theory and find a manifestation in physical reality. How do I pretend I have no problem accepting the fact that infinite copies of me might be parading around in parallel worlds making choices both identical to, and different from, mine?

    I am not alone in my ambivalence. The multiverse has been hotly debated and continues to be a source of polarization among some of the most prominent scientists of the day. The debate over the multiverse is not a conversation about the particulars of a theory. It is a fight about identity and consequence, about what constitutes an explanation, what proof consists of, how we define science, and whether there is a point to it all.

    2
    An Infinity of Galaxies
    Galaxies like the Sombrero Galaxy fill space for as far as we can see—and presumably farther.
    Credit: NASA/ESA and The Hubble Heritage Team (STScI/AURA).

    Whenever I talk about the multiverse, one of the questions that inevitably comes up is one I actually have an answer to. Whether we live in a universe or multiverse, these classifications relate to scales so large they defy imagination. No matter the outcome, life around us isn’t going to change one way or another. Why does it matter?

    It matters because where we are influences who we are. Different places call forth different reactions, give rise to different possibilities; the same object can look dramatically different against different backgrounds. In more ways than we are perhaps conscious of, we are molded by the spaces we inhabit. The universe is the ultimate expanse. It contains every arena, every context in which we can realize existence. It represents the sum total of possibilities, the complete set of all we can be.

    A measurement makes sense only within a reference frame. Numbers are clearly abstract until paired with units, but even vague assessments such as ‘too far,’ ‘too small,’ and ‘too strange’ presume a coordinate system. Too far invokes an origin; too small refers to a scale; too strange implies a context. Unlike units, which are always stated, the reference frame of assumptions is seldom specified, and yet the values we assign to things—objects, phenomena, experiences—are calibrated against these invisible axes.

    If we find out that all we know, and all we can ever know, is just one pocket in the multiverse, the entire foundation upon which we have laid our coordinate grid shifts. Observations don’t change, but implications do. The presence of those other bubble universes out there might not impact the numbers we measure here on our instruments, but could radically impact the way we interpret them.

    The first thing that strikes you about the multiverse is its immensity. It is larger than anything humankind has ever dealt with before—the aggrandizement is implicit in the name. It would be understandable if the passionate responses provoked by the multiverse came from feeling diminished. Yet the size of the multiverse is perhaps its least controversial feature.

    Gian Giudice, head of CERN’s theory group, speaks for most physicists when he says that one look at the sky sets us straight. We already know our scale. If the multiverse turns out to be real, he says, ‘the problem of me versus the vastness of the universe won’t change.’ In fact, many find comfort in the cosmic perspective. Framed against the universe, all our troubles, all the drama of daily life, diminish so dramatically that ‘anything that happens here is irrelevant,’ says physicist and author Lawrence Krauss. ‘I find great solace in that.’

    From the stunning photographs the Hubble Space Telescope has beamed back, to Octavio Paz’s poems of ‘the enormous night,’ to Monty Python’s ‘Galaxy Song’ to be sung ‘whenever life gets you down,’ there is a Romanticism associated with our Lilliputian magnitude. At some point in our history, we appear to have made peace with the fact that we are infinitesimal.

    If it isn’t because we are terrified of the scale, are we resistant to the notion of the multiverse because it involves worlds that are out of sight and seem doomed to remain so? This is indeed a common complaint I hear from my colleagues. South African physicist George Ellis (who is strongly opposed to the multiverse) and British cosmologist Bernard Carr (an equally strong advocate) have discussed such issues in a series of fascinating conversations. Carr suggests their fundamental point of divergence concerns ‘which features of science are to be regarded as sacrosanct.’ Experimentation is the traditional benchmark. Comparative observations are an acceptable substitute: Astronomers cannot manipulate galaxies, but do observe them by the millions, in various forms and stages. Neither approach fits the multiverse. Does it therefore lie outside the domain of science?

    Susskind, one of the fathers of string theory, sounds a reassuring note. There is a third approach to empirical science: to infer unseen objects and phenomena from those things we do see. We don’t have to go as far as causally disconnected regions of spacetime to find examples. Subatomic particles will do. Quarks are permanently bound together into protons, neutrons, and other composite particles. ‘They are, so to speak, hidden behind a … veil,’ Susskind says, ‘but by now, although no single quark has ever been seen in isolation, there is no one who seriously questions the correctness of the quark theory. It is part of the bedrock foundation of modern physics.’

    Because the universe is now expanding at an accelerating rate, galaxies that currently lie on the horizon of our field of vision will soon be pushed over the edge. We don’t expect them to tumble into oblivion any more than we expect a ship to disintegrate when it sails over the horizon. If galaxies we know of can exist in some distant region beyond sight, who’s to say other things can’t be there, too? Things we’ve never seen and never will? Once we admit the possibility that there are regions beyond our purview, the implications grow exponentially. The British Astronomer Royal, Martin Rees, compares this line of reasoning to aversion therapy. When you admit to there being galaxies beyond our present horizon, you ‘start out with a little spider a long distance away, but, before you know it, you unleash the possibility of a multiverse—populated with infinite worlds, perhaps quite different to your own—and find a tarantula crawling all over you.’

    The lack of ability to directly manipulate objects has never really figured in my personal criteria for a good physical theory, anyway. Whatever bothers me about the multiverse, I’m sure it isn’t this.

    The multiverse challenges yet another of our most cherished beliefs—that of uniqueness. Could this be the root of our trouble with it? As Tufts cosmologist Alexander Vilenkin explains, no matter how large our observable region is, as long as it is finite, it can only be in a finite number of quantum states; specifying these states uniquely determines the contents of the region. If there are infinitely many such regions, the same configuration will necessarily be replicated elsewhere. Our exact world here—down to the last detail—will be replicated. Since the process continues into infinity, there will eventually be not one, but infinite copies of us.

    ‘I did find the presence of all these copies depressing,’ Vilenkin says. ‘Our civilization may have many drawbacks, but at least we could claim it is unique—like a piece of art. And now we can no longer say that.’ I know what he means. That bothers me, too, but I’m not sure it quite gets to the root of my discontent. As Vilenkin says, somewhat wistfully: ‘I am not presumptuous enough to tell reality what it should be.’

    The crux of the debate, at least for me, lies in a strange irony. Although the multiverse enlarges our concept of physical reality to an almost unimaginable extent, it feels claustrophobic in that it demarcates an outer limit to our knowledge and our capacity to acquire knowledge. We theorists dream of a world without arbitrariness, whose equations are entirely self-contained. Our goal is to find a theory so logically complete, so tightly constrained by self-consistency, that it can only take that one unique form. Then, at least, even if we don’t know where the theory came from or why, the structure will not seem arbitrary. All the fundamental constants of nature would emerge ‘out of math and π and 2’s,’ as Berkeley physicist Raphael Bousso puts it.

    This is the lure of Einstein’s general theory of relativity—the reason physicists all over the world exclaim at its extraordinary, enduring beauty. Considerations of symmetry dictate the equations so clearly that the theory seems inevitable. That is what we have wanted to replicate in other domains of physics. And so far we have failed.

    For decades, scientists have looked for a physical reason why the fundamental constants should take on the values they do, but none has thus far been found. In fact, when we use our current theories to guess at the probable value of some of these parameters, the answers are so far from what is measured that it is laughable. But then how do we explain these parameters? If there is just this one unique universe, the parameters governing its design are invested with a special significance. Either the process governing them is completely random or there must be some logic, perhaps even some design, behind the selection.

    Neither option seems particularly appealing. As scientists, we spend our lives looking for laws because we believe there are reasons why things happen, even when we don’t understand them; we look for patterns because we think there is some order to the universe even if we don’t see it. Pure, random chance is not something that fits in with that worldview.

    But to invoke design isn’t very popular either, because it entails an agency that supersedes natural law. That agency must exercise choice and judgment, which—in the absence of a rigid, perfectly balanced, and tightly constrained structure, like that of general relativity—is necessarily arbitrary. There is something distinctly unsatisfying about the idea of there being several logically possible universes, of which only one is realized. If that were the case, as cosmologist Dennis Sciama said, you would have to think ‘there’s [someone] who looks at this list and says well we’re not going to have that one, and we won’t have that one. We’ll have that one, only that one.’

    Personally speaking, that scenario, with all its connotations of what could have been, makes me sad. Floating in my mind is a faint collage of images: forlorn children in an orphanage in some forgotten movie when one from the group is adopted; the faces of people who feverishly chased a dream, but didn’t make it; thoughts of first-trimester miscarriages. All these things that almost came to life, but didn’t, rankle. Unless there’s a theoretical constraint ruling out all possibilities but one, the choice seems harsh and unfair.

    In such a carefully calibrated creation, how are we to explain needless suffering? Since such philosophical, ethical, and moral concerns are not the province of physics, most scientists avoid commenting on them, but Nobel laureate Steven Weinberg spelled it out: ‘Whether our lives show evidence for a benevolent designer … is a question you will all have to answer for yourselves. My life has been remarkably happy … but even so, I have seen a mother painfully die of cancer, a father’s personality destroyed by Alzheimer’s disease and scores of second and third cousins murdered in the Holocaust. Signs of a benevolent designer are pretty well hidden.’

    In the face of pain, an element of randomness is far easier to accept than either the callous negligence or the deliberate malevolence of an otherwise meticulously planned universe.

    The multiverse promised to extricate us from these awful thoughts, to provide a third option that overcame the dilemma of explanation.

    To be sure, physicists didn’t invent it for that purpose. The multiverse emerged out of other lines of thought. The theory of cosmic inflation was intended to explain the broad-scale smoothness and flatness of the universe we see. ‘We were looking for a simple explanation of why the universe looks like a big balloon,’ says Stanford physicist Andrei Linde. ‘We didn’t know we had bought something else.’ This something else was the realization that our Big Bang was not unique, and that there should in fact be an infinite number of them, each creating a disconnected domain of spacetime.

    Then string theory came along. String theory is currently the best contender we have for a unified theory of everything. It not only achieves the impossible—reconciling gravity and quantum mechanics—but insists upon it. But for a scheme which reduces the enormous variety of our universe to a minimalist set of building blocks, string theory suffers from a singularly embarrassing problem: We don’t know how to determine the precise values of the fundamental constants of nature. Current estimates say there are about 10^500 potential options—a number so unfathomably large we don’t even have a name for it. String theory lists all the possible forms physical laws can take, and inflation creates a way for them to be realized. With the birth of each new universe, an imaginary deck of cards is shuffled. The hand that is dealt determines the laws that govern that universe.
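    To get a feel for where a number like 10^500 can come from, here is a toy counting exercise in Python. The figures in it (a few hundred independent discrete parameters, each with roughly ten allowed values) are illustrative assumptions on my part, not numbers taken from the article; the point is only that modest choices compound into an astronomically large count of possible vacua.

        # Toy illustration with assumed numbers: a few hundred independent discrete
        # choices, each with about ten allowed values, compound multiplicatively.
        n_parameters = 500      # assumed number of independent choices (hypothetical)
        values_each = 10        # assumed number of allowed values per choice (hypothetical)

        n_vacua = values_each ** n_parameters
        print(len(str(n_vacua)) - 1)   # 500 -> n_vacua is a 1 followed by 500 zeros, i.e. 10^500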

    The multiverse explains how the constants in our equations acquire the values they do, without invoking either randomness or conscious design. If there are vast numbers of universes, embodying all possible laws of physics, we measure the values we do because that’s where our universe lies on the landscape. There’s no deeper explanation. That’s it. That’s the answer.

    But as much as the multiverse frees us from the old dichotomy, it leaves a profound unease. The questions we have spent so long pondering might have no deeper answer than just this: that it is the way it is. That might be the best we can do, but it’s not the kind of answer we’re used to. It doesn’t pull back the covers and explain how something works. What’s more, it dashes the theorists’ dream, with the claim that no unique solution will ever be found because no unique solution exists.

    There are some who don’t like that answer, others who don’t think it even qualifies to be called an answer, and some who accept it.

    To Nobel laureate David Gross, the multiverse ‘smells of angels.’ Accepting the multiverse, he says, is tantamount to throwing up your hands and accepting that you’ll never really understand anything, because whatever you see can be chalked up to a ‘historical accident.’ His fellow Nobelist Gerard ’t Hooft complains he cannot accept a scenario where you are supposed to ‘try all of these solutions until you find a universe that looks like the world we live in.’ He says: ‘This is not the way physics has worked for us in the past, and it is not too late to hope that we will be able to find better arguments in the future.’

    Princeton cosmologist Paul Steinhardt refers to the multiverse as the ‘Theory of Anything,’ because it allows everything but explains nothing. ‘A scientific theory ought to be selective,’ he says. ‘Its power is determined by the number of possibilities it excludes. If it includes every possibility, then it excludes nothing, and it has zero power.’ Steinhardt was one of the early champions of inflation until he realized that it generically gave rise to the multiverse, carving out a space of possibilities rather than making specific predictions. He has since become one of inflation’s most vocal critics. On a recent episode of Star Talk, he introduced himself as a proponent of alternatives to the multiverse. ‘What did the multiverse ever do to you?’ the host joked. ‘It destroyed one of my favorite ideas,’ Steinhardt replied.

    Physics was supposed to be the province of truth, of absolutes, of predictions. Things either are, or aren’t. Theories aren’t meant to be elastic or inclusive, but instead restrictive, rigid, dismissive. Given a situation, you want to be able to predict the likely—ideally, the unique and inevitable—outcome. The multiverse gives us none of that.

    4
    Beyond Galaxies: This unassuming splotch is galaxy cluster Abell 2029, seen in both visible light and X-rays. Such clusters are the largest bound structures in our universe. Credit: NASA/Chandra X-ray Center/IoA.

    The debate over the multiverse sometimes gets vociferous, with skeptics accusing proponents of betraying science. But it’s important to realize that nobody chose this. We all wanted a universe that flowed organically from some beautiful deep principles. But from what we can tell so far, that’s not the universe we got. It is what it is.

    Must the argument for the multiverse be negative? Must it be a distant second-best option? Many of my colleagues are trying to put the multiverse in a more hopeful light. Logically speaking, an infinity of universes is simpler than a single universe would be—there is less to explain. As Sciama said, the multiverse ‘in a sense satisfies Occam’s razor, because you want to minimize the arbitrary constraints you place on the universe.’ Weinberg says that a theory that is free of arbitrary assumptions and hasn’t been ‘carefully tinkered with to make it match observations’ is beautiful in its own way. It might turn out, he says, that the beauty we find here is similar to that of thermodynamics, a statistical kind of beauty, which explains the state of the macroscopic system, but not of its every individual constituent. ‘You search for beauty, but you can’t be too sure in advance where you’ll find it, or what kind of beauty you’ll have,’ Weinberg says.

    Several times, while contemplating these weighty intellectual issues, my thoughts circled back to the simple, beautiful wisdom of Antoine de Saint-Exupéry’s Little Prince who, having considered his beloved rose unique in all the worlds, finds himself in a rose garden. Bewildered by this betrayal and saddened by the loss of consequence—his rose’s and his own—he breaks down in tears. Eventually he comes to realize that his rose is ‘more important than all the hundreds of others’ because she is his.

    There may well be nothing special about our entire universe, except for the fact that it is ours. But isn’t that enough? Even if our entire lives, the sum of all we can ever know, turn out to be cosmically insignificant, they are still ours. There is something distinguished about here, now, mine. Meaning is something we confer.

    Several times over these past few months, I found myself replaying my conversation with Gian Giudice. I found it reassuring how unperturbed he was by the vast range of possible universes and the seemingly arbitrary choices made by our own. Perhaps the multiverse is just telling us that we’re focusing on the wrong questions, he says. Maybe, as Kepler did with the orbits of the planets, we’re trying to read a deeper meaning into these numbers than is there.

    Since the solar system was all Kepler knew, he thought the shapes of the planetary orbits and the specific values of their various distances from the sun must carry important information, but that turned out not to be the case. These quantities were not fundamental; they were merely environmental parameters. That may have seemed lamentable at the time, but looking back now from the vantage point of general relativity, we no longer feel any sense of loss. We have a beautiful description of gravity; it just happens to be one in which these values of the planetary orbits are not fundamental constants.

    Perhaps, says Giudice, the multiverse implies something similar. Perhaps we need to let go of something we’re holding onto too tightly. Maybe we need to think bigger, refocus, regroup, reframe our questions to nature. The multiverse, he says, could open up ‘extremely satisfying, gratifying, and mind-opening possibilities.’

    Of all the pro-multiverse arguments I heard, this is the one that appeals to me the most. In every scenario, for every physical system, we can pose infinitely many questions. We try to strip a problem back to the essentials and ask the most basic questions, but our intuition is built upon what came before, and it is entirely possible that we are drawing upon paradigms that are no longer relevant for the new realms we are trying to probe.

    The multiverse is less like a closed door and more like a key. To me, the word is now tinged with promise and fraught with possibility. It seems no more wasteful than a bower full of roses.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 10:57 am on December 31, 2020 Permalink | Reply
    Tags: "An Existential Crisis in Neuroscience", , , DNNs are mathematical models that string together chains of simple functions that approximate real neurons., , , It’s clear now that while science deals with facts a crucial part of this noble endeavor is making sense of the facts., , Nautilus,   

    From Nautilus: “An Existential Crisis in Neuroscience” 

    From Nautilus

    December 30, 2020 [Re-issued “Maps” issue January 23, 2020.]
    Grigori Guitchounts

    1
    A rendering of dendrites (red)—a neuron’s branching processes—and protruding spines that receive synaptic information, along with a saturated reconstruction (multicolored cylinder) from a mouse cortex. Credit: Lichtman Lab at Harvard University.

    We’re mapping the brain in amazing detail—but our brain can’t understand the picture.

    On a chilly evening last fall, I stared into nothingness out of the floor-to-ceiling windows in my office on the outskirts of Harvard’s campus. As a purplish-red sun set, I sat brooding over my dataset on rat brains. I thought of the cold windowless rooms in downtown Boston, home to Harvard’s high-performance computing center, where computer servers were holding on to a precious 48 terabytes of my data. I had recorded the 13 trillion numbers in this dataset as part of my Ph.D. experiments, asking how the visual parts of the rat brain respond to movement.

    Printed on paper, the dataset would fill 116 billion pages, double-spaced. When I recently finished writing the story of my data, the magnum opus fit on fewer than two dozen printed pages. Performing the experiments turned out to be the easy part. I had spent the last year agonizing over the data, observing and asking questions. The answers left out large chunks that did not pertain to the questions, like a map leaves out irrelevant details of a territory.

    But, as massive as my dataset sounds, it represents just a tiny chunk of a dataset taken from the whole brain. And the questions it asks—Do neurons in the visual cortex do anything when an animal can’t see? What happens when inputs to the visual cortex from other brain regions are shut off?—are small compared to the ultimate question in neuroscience: How does the brain work?

    2
    LIVING COLOR: This electron microscopy image of a slice of mouse cortex, which shows different neurons labeled by color, is just the beginning. “We’re working on a cortical slab of a human brain, where every synapse and every connection of every nerve cell is identifiable,” says Harvard’s Jeff Lichtman. “It’s amazing.” Credit: Lichtman Lab at Harvard University.

    The nature of the scientific process is such that researchers have to pick small, pointed questions. Scientists are like diners at a restaurant: We’d love to try everything on the menu, but choices have to be made. And so we pick our field, and subfield, read up on the hundreds of previous experiments done on the subject, design and perform our own experiments, and hope the answers advance our understanding. But if we have to ask small questions, then how do we begin to understand the whole?

    Neuroscientists have made considerable progress toward understanding brain architecture and aspects of brain function. We can identify brain regions that respond to the environment, activate our senses, generate movements and emotions. But we don’t know how different parts of the brain interact with and depend on each other. We don’t understand how their interactions contribute to behavior, perception, or memory. Technology has made it easy for us to gather behemoth datasets, but I’m not sure understanding the brain has kept pace with the size of the datasets.

    Some serious efforts, however, are now underway to map brains in full. One approach, called connectomics, strives to chart the entirety of the connections among neurons in a brain. In principle, a complete connectome would contain all the information necessary to provide a solid base on which to build a holistic understanding of the brain. We could see what each brain part is, how it supports the whole, and how it ought to interact with the other parts and the environment. We’d be able to place our brain in any hypothetical situation and have a good sense of how it would react.

    The question of how we might begin to grasp the entirety of the organ that generates our minds has been pressing me for a while. Like most neuroscientists, I’ve had to cultivate two clashing ideas: striving to understand the brain and knowing that’s likely an impossible task. I was curious how others tolerate this doublethink, so I sought out Jeff Lichtman, a leader in the field of connectomics and a professor of molecular and cellular biology at Harvard.

    Lichtman’s lab happens to be down the hall from mine, so on a recent afternoon, I meandered over to his office to ask him about the nascent field of connectomics and whether he thinks we’ll ever have a holistic understanding of the brain. His answer—“No”—was not reassuring, but our conversation was a revelation, and shed light on the questions that had been haunting me. How do I make sense of gargantuan volumes of data? Where does science end and personal interpretation begin? Were humans even capable of weaving today’s reams of information into a holistic picture? I was now on a dark path, questioning the limits of human understanding, unsettled by a future filled with big data and small comprehension.

    Lichtman likes to shoot first, ask questions later. The 68-year-old neuroscientist’s weapon of choice is a 61-beam electron microscope, which Lichtman’s team uses to visualize the tiniest of details in brain tissue. The way neurons are packed in a brain would make canned sardines look like they have a highly evolved sense of personal space. To make any sense of these images, and in turn, what the brain is doing, the parts of neurons have to be annotated in three dimensions, the result of which is a wiring diagram. Done at the scale of an entire brain, the effort constitutes a complete wiring diagram, or the connectome.

    To capture that diagram, Lichtman employs a machine that can only be described as a fancy deli slicer. The machine cuts pieces of brain tissue into 30-nanometer-thick sections, which it then pastes onto a tape conveyor belt. The tape goes on silicon wafers, and into Lichtman’s electron microscope, where billions of electrons blast the brain slices, generating images that reveal nanometer-scale features of neurons, their axons, dendrites, and the synapses through which they exchange information. The Technicolor images are a beautiful sight that evokes a fantastic thought: The mysteries of how brains create memories, thoughts, perceptions, feelings—consciousness itself—must be hidden in this labyrinth of neural connections.

    2
    THE MAPMAKER: Jeff Lichtman, a leader in brain mapping, says the word “understanding” has to undergo a revolution in reference to the human brain. “There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’” Credit: Lichtman Lab at Harvard University.

    A complete human connectome will be a monumental technical achievement. A complete wiring diagram for a mouse brain alone would take up two exabytes. That’s 2 billion gigabytes; by comparison, estimates of the data footprint of all books ever written come out to less than 100 terabytes, or 0.005 percent of a mouse brain. But Lichtman is not daunted. He is determined to map whole brains, exorbitant exabyte-scale storage be damned.
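    As a quick sanity check on those figures, the arithmetic takes only a couple of lines of Python (decimal units assumed here: 1 exabyte = 10^18 bytes, 1 terabyte = 10^12 bytes, 1 gigabyte = 10^9 bytes):

        # Quick check of the storage figures quoted above, using decimal (SI) units.
        mouse_connectome_bytes = 2e18      # two exabytes
        all_books_bytes = 100e12           # ~100 terabytes

        print(mouse_connectome_bytes / 1e9)                      # 2.0e9 -> "2 billion gigabytes"
        print(100 * all_books_bytes / mouse_connectome_bytes)    # 0.005 -> "0.005 percent of a mouse brain"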

    Lichtman’s office is a spacious place with floor-to-ceiling windows overlooking a tree-lined walkway and an old circular building that, in the days before neuroscience even existed as a field, used to house a cyclotron. He was wearing a deeply black sweater, which contrasted with his silver hair and olive skin. When I asked if a completed connectome would give us a full understanding of the brain, he didn’t pause in his answer. I got the feeling he had thought a great deal about this question on his own.

    “I think the word ‘understanding’ has to undergo an evolution,” Lichtman said, as we sat around his desk. “Most of us know what we mean when we say ‘I understand something.’ It makes sense to us. We can hold the idea in our heads. We can explain it with language. But if I asked, ‘Do you understand New York City?’ you would probably respond, ‘What do you mean?’ There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’ ”

    “But we understand specific aspects of the brain,” I said. “Couldn’t we put those aspects together and get a more holistic understanding?”

    “I guess I would retreat to another beachhead, which is, ‘Can we describe the brain?’ ” Lichtman said. “There are all sorts of fundamental questions about the physical nature of the brain we don’t know. But we can learn to describe them. A lot of people think ‘description’ is a pejorative in science. But that’s what the Hubble telescope does. That’s what genomics does. They describe what’s actually there. Then from that you can generate your hypotheses.”

    “Why is description an unsexy concept for neuroscientists?”

    “Biologists are often seduced by ideas that resonate with them,” Lichtman said. That is, they try to bend the world to their idea rather than the other way around. “It’s much better—easier, actually—to start with what the world is, and then make your idea conform to it,” he said. Instead of a hypothesis-testing approach, we might be better served by following a descriptive, or hypothesis-generating methodology. Otherwise we end up chasing our own tails. “In this age, the wealth of information is an enemy to the simple idea of understanding,” Lichtman said.

    “How so?” I asked.

    “Let me put it this way,” Lichtman said. “Language itself is a fundamentally linear process, where one idea leads to the next. But if the thing you’re trying to describe has a million things happening simultaneously, language is not the right tool. It’s like understanding the stock market. The best way to make money on the stock market is probably not by understanding the fundamental concepts of economy. It’s by understanding how to utilize this data to know what to buy and when to buy it. That may have nothing to do with economics but with data and how data is used.”

    “Maybe human brains aren’t equipped to understand themselves,” I offered.

    “And maybe there’s something fundamental about that idea: that no machine can have an output more sophisticated than itself,” Lichtman said. “What a car does is trivial compared to its engineering. What a human brain does is trivial compared to its engineering. Which is the great irony here. We have this false belief there’s nothing in the universe that humans can’t understand because we have infinite intelligence. But if I asked you if your dog can understand something you’d say, ‘Well, my dog’s brain is small.’ Well, your brain is only a little bigger,” he continued, chuckling. “Why, suddenly, are you able to understand everything?”

    Was Lichtman daunted by what a connectome might achieve? Did he see his efforts as Sisyphean?

    “It’s just the opposite,” he said. “I thought at this point we would be less far along. Right now, we’re working on a cortical slab of a human brain, where every synapse is identified automatically, every connection of every nerve cell is identifiable. It’s amazing. To say I understand it would be ridiculous. But it’s an extraordinary piece of data. And it’s beautiful. From a technical standpoint, you really can see how the cells are connected together. I didn’t think that was possible.”

    Lichtman stressed his work was about more than a comprehensive picture of the brain. “If you want to know the relationship between neurons and behavior, you gotta have the wiring diagram,” he said. “The same is true for pathology. There are many incurable diseases, such as schizophrenia, that don’t have a biomarker related to the brain. They’re probably related to brain wiring but we don’t know what’s wrong. We don’t have a medical model of them. We have no pathology. So in addition to fundamental questions about how the brain works and consciousness, we can answer questions like, Where did mental disorders come from? What’s wrong with these people? Why are their brains working so differently? Those are perhaps the most important questions to human beings.”

    Late one night, after a long day of trying to make sense of my data, I came across a short story by Jorge Luis Borges that seemed to capture the essence of the brain mapping problem. In the story, On Exactitude in Science, a man named Suarez Miranda wrote of an ancient empire that, through the use of science, had perfected the art of map-making. While early maps were nothing but crude caricatures of the territories they aimed to represent, new maps grew larger and larger, filling in ever more details with each edition. Over time, Borges wrote, “the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province.” Still, the people craved more detail. “In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it.”

    The Borges story reminded me of Lichtman’s view that the brain may be too complex to be understood by humans in the colloquial sense, and that describing it may be a better goal. Still, the idea made me uncomfortable. Much like storytelling, or even information processing in the brain, descriptions must leave some details out. For a description to convey relevant information, the describer has to know which details are important and which are not. Knowing which details are irrelevant requires having some understanding about the thing you’re describing. Will my brain, as intricate as it may be, ever be able to make sense of the two exabytes in a mouse brain?

    Humans have a critical weapon in this fight. Machine learning has been a boon to brain mapping, and the self-reinforcing relationship promises to transform the whole endeavor. Deep learning algorithms (also known as deep neural networks, or DNNs) have in the past decade allowed machines to perform cognitive tasks once thought impossible for computers—not only object recognition, but text transcription and translation, or playing games like Go or chess. DNNs are mathematical models that string together chains of simple functions that approximate real neurons. These algorithms were inspired directly by the physiology and anatomy of the mammalian cortex, but are crude approximations of real brains, based on data gathered in the 1960s. Yet they have surpassed expectations of what machines can do.

    The secret to Lichtman’s progress with mapping the human brain is machine intelligence. Lichtman’s team, in collaboration with Google, is using deep networks to annotate the millions of images from brain slices their microscopes collect. Each scan from an electron microscope is just a set of pixels. Human eyes easily recognize the boundaries of each blob in the image (a neuron’s soma, axon, or dendrite, in addition to everything else in the brain), and with some effort can tell where a particular bit from one slice appears on the next slice. This kind of labeling and reconstruction is necessary to make sense of the vast datasets in connectomics, and has traditionally required armies of undergraduate students or citizen scientists to manually annotate all chunks. DNNs trained on image recognition are now doing the heavy lifting automatically, turning a job that took months or years into one that’s complete in a matter of hours or days. Recently, Google identified each neuron, axon, dendrite, and dendritic spine—and every synapse—in slices of the human cerebral cortex. “It’s unbelievable,” Lichtman said.

    Scientists still need to understand the relationship between those minute anatomical features and dynamical activity profiles of neurons—the patterns of electrical activity they generate—something the connectome data lacks. This is a point on which connectomics has received considerable criticism, mainly by way of example from the worm: Neuroscientists have had the complete wiring diagram of the worm C. elegans for a few decades now, but arguably do not understand the 300-neuron creature in its entirety; how its brain connections relate to its behaviors is still an active area of research.

    Still, structure and function go hand-in-hand in biology, so it’s reasonable to expect one day neuroscientists will know how specific neuronal morphologies contribute to activity profiles. It wouldn’t be a stretch to imagine a mapped brain could be kickstarted into action on a massive server somewhere, creating a simulation of something resembling a human mind. The next leap constitutes the dystopias in which we achieve immortality by preserving our minds digitally, or machines use our brain wiring to make super-intelligent machines that wipe humanity out. Lichtman didn’t entertain the far-out ideas in science fiction, but acknowledged that a network that would have the same wiring diagram as a human brain would be scary. “We wouldn’t understand how it was working any more than we understand how deep learning works,” he said. “Now, suddenly, we have machines that don’t need us anymore.”

    Yet a masterly deep neural network still doesn’t grant us a holistic understanding of the human brain. That point was driven home to me last year at a Computational and Systems Neuroscience conference, a meeting of the who’s-who in neuroscience, which took place outside Lisbon, Portugal. In a hotel ballroom, I listened to a talk by Arash Afraz, a 40-something neuroscientist at the National Institute of Mental Health in Bethesda, Maryland. The model neurons in DNNs are to real neurons what stick figures are to people, and the way they’re connected is equally sketchy, he suggested.

    Afraz is short, with a dark horseshoe mustache and balding dome covered partially by a thin ponytail, reminiscent of Matthew McConaughey in True Detective. As sturdy Atlantic waves crashed into the docks below, Afraz asked the audience if we remembered René Magritte’s Ceci n’est pas une pipe painting, which depicts a pipe with the title written out below it. Afraz pointed out that the model neurons in DNNs are not real neurons, and the connections among them are not real either. He displayed a classic diagram of interconnections among brain areas found through experimental work in monkeys—a jumble of boxes with names like V1, V2, LIP, MT, HC, each a different color, and black lines connecting the boxes seemingly at random and in more combinations than seems possible. In contrast to the dizzying heap of connections in real brains, DNNs typically connect different brain areas in a simple chain, from one “layer” to the next. Try explaining that to a rigorous anatomist, Afraz said, as he flashed a meme of a shocked baby orangutan cum anatomist. “I’ve tried, believe me,” he said.

    I, too, have been curious why DNNs are so simple compared to real brains. Couldn’t we improve their performance simply by making them more faithful to the architecture of a real brain? To get a better sense for this, I called Andrew Saxe, a computational neuroscientist at Oxford University. Saxe agreed that it might be informative to make our models truer to reality. “This is always the challenge in the brain sciences: We just don’t know what the important level of detail is,” he told me over Skype.

    How do we make these decisions? “These judgments are often based on intuition, and our intuitions can vary wildly,” Saxe said. “A strong intuition among many neuroscientists is that individual neurons are exquisitely complicated: They have all of these back-propagating action potentials, they have dendritic compartments that are independent, they have all these different channels there. And so a single neuron might even itself be a network. To caricature that as a rectified linear unit”—the simple mathematical model of a neuron in DNNs—“is clearly missing out on so much.”
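    To make the caricature concrete, here is a minimal sketch in Python (entirely my own illustration, with random placeholder weights) of the “rectified linear unit” model neuron and of how a deep network simply chains layers of such units together, which is exactly the simplicity Afraz and Saxe are contrasting with real neurons and real wiring:

        import numpy as np

        def relu(z):
            return np.maximum(0.0, z)              # the rectified linear unit

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(32, 10)), np.zeros(32)   # layer 1: 10 inputs -> 32 model neurons
        W2, b2 = rng.normal(size=(3, 32)), np.zeros(3)     # layer 2: 32 units -> 3 outputs

        def tiny_dnn(x):
            h = relu(W1 @ x + b1)      # each model neuron: a weighted sum passed through ReLU
            return W2 @ h + b2         # layers connected in a simple chain, one after the next

        print(tiny_dnn(rng.normal(size=10)))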

    As 2020 has arrived, I have thought a lot about what I have learned from Lichtman, Afraz, and Saxe and the holy grail of neuroscience: understanding the brain. I have found myself revisiting my undergrad days, when I held science up as the only method of knowing that was truly objective (I also used to think scientists would be hyper-rational, fair beings paramountly interested in the truth—so perhaps this just shows how naive I was).

    It’s clear to me now that while science deals with facts, a crucial part of this noble endeavor is making sense of the facts. The truth is screened through an interpretive lens even before experiments start. Humans, with all our quirks and biases, choose what experiment to conduct in the first place, and how to do it. And the interpretation continues after data are collected, when scientists have to figure out what the data mean. So, yes, science gathers facts about the world, but it is humans who describe it and try to understand it. All these processes require filtering the raw data through a personal sieve, sculpted by the language and culture of our times.

    It seems likely that Lichtman’s two exabytes of brain slices, and even my 48 terabytes of rat brain data, will not fit through any individual human mind. Or at least no human mind is going to orchestrate all this data into a panoramic picture of how the human brain works. As I sat at my office desk, watching the setting sun tint the cloudless sky a light crimson, my mind reached a chromatic, if mechanical, future. The machines we have built—the ones architected after cortical anatomy—fall short of capturing the nature of the human brain. But they have no trouble finding patterns in large datasets. Maybe one day, as they grow stronger building on more cortical anatomy, they will be able to explain those patterns back to us, solving the puzzle of the brain’s interconnections, creating a picture we understand. Out my window, the sparrows were chirping excitedly, not ready to call it a day.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 1:37 pm on December 3, 2020 Permalink | Reply
    Tags: "The Arrow of time", "Time Flows Toward Order", , , Nautilus,

    From Nautilus: “Time Flows Toward Order” 

    From Nautilus

    December 2, 2020
    Julian Barbour

    1
    Credit: turgaygundogdu / Shutterstock.

    Revisiting the gospel of the second law of thermodynamics.

    The one law of physics that virtually all scientists believe will never be found to be wrong is the second law of thermodynamics. Despite this exalted status, it has long been associated with a great mystery and a bleak implication. The mystery is that all the known laws of nature except one do not distinguish a temporal direction. The second law, however, asserts the existence of an all-powerful unidirectionality in the way all events throughout the universe unfold. According to standard accounts, the second law says that entropy, described as a measure of disorder, will always (with at most small fluctuations) increase. That’s the rub: Time has an arrow that points to heat death.

    Surprisingly, evidence that a more nuanced account is needed is hiding in plain sight: the universe itself. Very soon after the Big Bang, the universe was in an extremely uniform state, which since then has become ever more varied and structured.

    CMB per ESA/Planck

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey.

    Gaia EDR3 StarTrails 600.

    Even if uniformity equates to order, that initial state was surely bland and dull. And who can see disorder in the fabulously structured galaxies or the colors and shapes of the trees in the fall? In fact, the sequence in which two of the greatest discoveries in science were made resolves the paradox: The second law was discovered eight decades before the expansion of the universe.

    3
    BOTH SIDES NOW: Two people walking down opposite sides of Mount Fuji would see the terrain change in much the same way. To author Julian Barbour, the hikers’ perceptions offer an apt analogy for how beings on either side of what he calls a “Janus Point” in the universe would each experience an orderly flow of time. Credit: Martina Badini / Shutterstock.

    The time lag is critical for one simple reason. The laws of thermodynamics, discovered in 1850 by William Thomson (later ennobled to Lord Kelvin) and Rudolf Clausius, emerged from a brilliant study that Sadi Carnot (son of Napoleon’s greatest general) published in 1824. In a slim booklet that laid out all but one of the foundational principles of thermodynamics, he sought to establish the maximum efficiency steam engines could achieve. Steam engines can only function if their working medium is confined in a cylinder. This led all early work on thermodynamics to be based on systems in a conceptual box. Clausius’s discovery and definition of entropy—one of the wonders of science—relied totally on infinitesimal changes from one equilibrium state of a confined system to another. The pioneers of statistical mechanics, the theoretical framework created above all by Clausius, James Clerk Maxwell, and Ludwig Boltzmann to provide a microscopic atomistic explanation of phenomenological thermodynamics, invariably considered models of gas molecules trapped in a box and forced to bounce off its walls and each other.

    A rich conceptual framework, completely valid and immensely fruitful for confined systems, developed out of this simple model, and reached its definitive form in the work of J. Willard Gibbs. The model proved the existence of atoms and molecules, established their sizes, determined the incredible number of them in a grain of sand, and struck the death knell of Newtonian classical physics. That was when Planck discovered the first quantum effect in 1900. What’s more, both the first and second law appeared to be founded on a rock-solid principle: the impossibility of creating perpetual motion machines.

    It’s therefore not surprising that few, if any, scientists have disagreed with the great astrophysicist Arthur Eddington’s warning, “If your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.” Einstein, surely a greater scientist than Eddington, was more cautious. A few years before his death, Einstein said of thermodynamics, “It is the only physical theory of universal content which I am convinced that, within the framework of applicability of its basic concepts, will never be overthrown.” The caveat is all important: Do conditions in an expanding universe remain within the framework of applicability?

    That is what I question. I don’t suggest we can ever alter the facts that Thomson and Clausius first brought to light. Neither you nor I are going to get younger or see a shattered cup miraculously reassemble itself and jump back onto the table. There is a pervasive unidirectionality, an arrow of time, about the way things happen in the universe. Kelvin, the first to recognize its significance, called it “a universal tendency in nature to the dissipation of mechanical energy.” I don’t deny the existence of the arrow, but I do suggest that the “box mentality” has led us to misunderstand what is happening in the universe and even blinded us to the beauty that it is creating. A one-way street need not lead to a scrap yard; it might bring us to a finely landscaped park.

    Compare two situations. First, the molecules in their box. If, every now and then, you open it to look at them, you can be sure to find them filling the box uniformly and going through their habitual routine—bumping into each other with random outcomes. Nothing of interest develops. This, nevertheless, was the model used to interpret mundane measurements of pressure and temperature. It led to all those marvelous discoveries and much of the technology on which today we so depend. No wonder it inspired confidence.

    But now picture the box in space with its walls suddenly removed. What will the molecules do? The answer’s in Siegfried Sassoon’s poem “Everyone Sang”:

    As prisoned birds must find in freedom,
    Winging wildly across the white
    Orchards and dark-green fields; on – on – and out of sight.

    In mathematical rather than poetic terms, the molecules soon cease to interact and fly apart, maintaining forever their release velocities and getting ever further from each other. In fact, a simple calculation may surprise you: The speed with which the molecules move apart approximates ever better the law of galactic recession that Hubble announced in 1929. This simple Big Bang model does not look like disorder on the increase.
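    That claim is easy to check numerically. Below is a minimal sketch in Python (my own toy model, not the author’s): molecules are released from a small box with random velocities and then drift freely, so after a long time each one’s distance from the starting region is approximately its velocity times the elapsed time, which is exactly a recession speed proportional to distance, the form of Hubble’s law v = H d with H = 1/t.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10000
        x0 = rng.uniform(-0.5, 0.5, size=(n, 3))   # initial positions inside a small box
        v = rng.normal(0.0, 1.0, size=(n, 3))      # release velocities, kept forever (no interactions)

        t = 1000.0                                  # a long time after the walls are removed
        x = x0 + v * t                              # free streaming

        distance = np.linalg.norm(x, axis=1)
        speed = np.linalg.norm(v, axis=1)
        H_fit = np.sum(speed * distance) / np.sum(distance**2)   # best-fit slope of speed vs. distance
        print(H_fit, 1.0 / t)                       # nearly identical: recession speed ~ distance / t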

    There is a greater mismatch between entropic disorder and reality in the very heart of Newton’s theory of universal gravitation. He achieved fame by explaining not only Kepler’s laws of planetary motion but also the fall of an apple. However, the problem of three bodies—he had in mind the earth, sun, and moon moving in their mutual gravitational fields—gave him headaches. Although a famously difficult problem, in 1772 the great mathematician Joseph-Louis Lagrange made some progress, including a significant discovery about the behavior of a “three-body universe” that was later shown to be true for any number of bodies. It concerns what is now called the center-of-mass moment of inertia, I. This measures the extent of the system—for bees it would be about the diameter of a swarm—and behaves in a characteristic universal way if a single condition is satisfied: The total energy of the system is not negative.

    To understand what the behavior is, assume with Newton that time flows forever forward from past to future. Then what Lagrange found is that I decreases from infinity in the distant past, passes through a unique minimum, and grows to infinity in the distant future. I call this unique minimum a Janus Point. The Roman divinity can be invoked because he looks in two opposite directions of time at once. What he sees is striking. In the region around the threshold on which he traditionally stands, the distribution of the particles (especially when there are many) is more uniform than anywhere else on the timeline of the universe. Then, in both directions, the particles cluster, taking on a shape that is more ordered and forming “galaxies.” From his vantage point, Janus can see this, but if you, being a mere mortal, were in such a universe you would necessarily be on one or the other side of the Janus point and could not “see through it” to the other side. You would find that the laws of nature around you do not distinguish a direction of time but that your universe gets ever more clumpy in one direction.
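    Lagrange’s result is easy to see in a toy computation. The sketch below (my own illustration, not the author’s model) integrates three equal Newtonian point masses with arbitrarily chosen initial positions and velocities, set up so that the total energy is positive and the system starts out contracting; the center-of-mass moment of inertia I then falls, passes through a single minimum (the Janus point), and grows without bound.

        import numpy as np

        G = 1.0
        m = np.array([1.0, 1.0, 1.0])                               # three equal masses (toy values)
        r = np.array([[2.0, 0.0], [-1.0, 1.732], [-1.0, -1.732]])   # initial positions (a triangle)
        v = np.array([[-1.2, 0.3], [0.34, -1.189], [0.86, 0.889]])  # inward velocities, total energy > 0

        def accelerations(r):
            a = np.zeros_like(r)
            for i in range(3):
                for j in range(3):
                    if i != j:
                        d = r[j] - r[i]
                        a[i] += G * m[j] * d / np.linalg.norm(d)**3
            return a

        def moment_of_inertia(r):
            rcm = (m[:, None] * r).sum(axis=0) / m.sum()
            return float((m * ((r - rcm)**2).sum(axis=1)).sum())

        dt, steps = 0.001, 20000
        a = accelerations(r)
        I_history = []
        for _ in range(steps):                     # crude leapfrog integration, forward in time
            v += 0.5 * dt * a
            r += dt * v
            a = accelerations(r)
            v += 0.5 * dt * a
            I_history.append(moment_of_inertia(r))

        k = int(np.argmin(I_history))
        print(f"I starts at {I_history[0]:.2f}, reaches its minimum {min(I_history):.2f} "
              f"at t = {k * dt:.2f}, and then grows to {I_history[-1]:.2f}")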

    There is a precise, mathematically significant quantity that may be called complexity and increases (with small fluctuations) in both directions from Janus. The big difference from what entropy does is that growth of complexity reflects an increase of order, not disorder. The effects in confined and unconfined systems are the exact opposites of each other. Moreover, the increase of complexity in unconfined systems follows directly from the governing dynamical law whereas entropy increases in confined systems for statistical reasons.

    Traditional arguments assume that somehow, for an as yet unfathomable reason, the universe gets in a special state of low entropy and correspondingly high order that is then remorselessly destroyed. A model often given is molecules confined to a little box in the corner of a big box. That’s the special initial condition. Now lift the lid of the little box; the laws of dynamics allow two quite different outcomes. It’s conceivable, but barely so, that the molecules will collect in the corner of the little box and then be in an even more special state. But it is statistically more likely that the molecules will spread out into the large box and eventually fill it uniformly. This is a statistical explanation of the entropic arrow. Applied to the whole universe, a special initial condition of this kind has been dubbed the “Past Hypothesis” by the philosopher of science David Albert. The difficulty is that nothing in the known laws of nature explains the special initial condition.
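    The statistical heart of that argument can be put in numbers. In the toy calculation below (my own illustration, with assumed volumes), the little box is one-eighth the volume of the big box; if the molecules wander independently, the probability of later finding all N of them back in the corner is (1/8)^N, which collapses to absurdly small values as N grows, which is why the spreading direction is overwhelmingly more likely.

        import math

        volume_ratio = 1 / 8            # little box / big box (assumed for illustration)
        for N in (10, 100, 1000):       # number of independently wandering molecules
            log10_p = N * math.log10(volume_ratio)
            print(f"N = {N:5d}: probability of all molecules back in the corner ~ 10^{log10_p:.0f}")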

    Let’s now think about the Janus point. It’s unique and a special point. It isn’t there for some inexplicable reason. Newton’s laws say it must be there. Even if the weak condition on the energy is relaxed, something very like it in the form of a “Janus region” will almost always be present. The complexity will increase in both directions from it. The increase in order has a dynamical explanation and is what puts the direction into time. Moreover, beings on either side of the Janus point would think they are going forward in time and would have typically the same kind of experiences, just as two people walking down opposite sides of Mount Fuji in Japan would find the terrain and vegetation change in much the same way.

    Doubts about the repeatedly claimed growth of disorder in the universe get greater when we consider what happens as the particles emerge from the melee at the Janus point—they are there typically going in all possible directions—and some of them come together as “Kepler pairs.” Except for encounters with other particles, which become exceptionally rare as the model universe expands and the typical distances between the particles become greater and greater, such pairs bond forever, settling down into ever more perfect elliptical motion about their common center of mass. They form exquisitely accurate rods, compasses, and clocks all in one. Their major axis defines an astronomical unit of length and a fixed compass direction (true north), while the period of their motion defines a unit of time. Does that look like disorder?

    William Blake hated Newton’s clockwork universe, as it came to be known. The poet thought it was reductive, but that was based on the appearance of the solar system, which does resemble clockwork. In fact, Newton was so amazed by its beauty and order that he could not believe his laws could explain its existence—God must have set it up that way and must keep a careful watch on it to see that the order does not get disturbed. In fact, Kepler pairs are miniature solar systems, and Newton’s laws create them out of chaos. His universe first makes clocks and then allows them to tell the time.

    In the 19th century nobody had the remotest idea that the universe could be expanding, though the possibility might have been recognized when Lagrange made his discovery.

    Lamda Cold Dark Matter Accerated Expansion of The universe http scinotions.com the-cosmic-inflation-suggests-the-existence-of-parallel-universes
    Alex Mittelmann, Coldcreation.

    The significance of the expanding universe is that it creates room for things to happen. William Thomson may have inadvertently set people thinking in the wrong way when he spoke about universal dissipation of mechanical energy. Among multiple talents, he was a brilliant engineer and, like Carnot with his study of steam engines, was always looking for ways to improve the human lot. A negative connotation attaches to “dissipation”; had Thomson used the neutral word “spreading,” positive as well as negative possibilities might have been seen.

    There is a beautiful effect I often go to watch on afternoon walks near my home. A tree hangs above a brook where the water flows smoothly over a ford. If it has rained, drops of water fall from the tree onto the water, creating circular waves that spread out over the flowing water. You can watch the effect for far too long if you need to get on with work. The waves created by drops that hit the water at different points meet and pass through each other, each emerging intact. If the brook had no banks and the water no viscosity, that would create the condition that radiation finds in the vast voids of our expanding universe, and the patterns would remain as beautiful forever. That’s the difference between an open and a closed system.

    Before the banks have their effect in the brook, mechanical energy is first entirely within each falling drop but is then spread out in the circular waves. Except at or near equilibrium, Clausius’s entropy is a quantity difficult to define and measure; Thomson’s dissipation is a qualitative effect that is both universal and easy to recognize. He mentioned the heat created by friction as an example of dissipation; the illustration has been endlessly repeated. But Thomson loved the river Kelvin in Glasgow and took his baronial name from it. He must have often seen water drops falling onto the river. If in the title of his 1852 paper, which was so influential, he had changed “dissipation” into “spreading”—it would have been a better characterization—who can say how that might have changed the interpretation of the second law, especially after Hubble’s monumental discovery?

    The circle is the most perfect geometrical figure and pi, which relates its circumference to its diameter, bids fair to be the most perfect number. “Ah,” you say, “a thing of beauty may have been born, but it decays as the waves get shallower and shallower.” To which I answer that you forget the lessons of Gulliver’s Travels and the relativity of size. It is only ratios that have physical meaning. The beauty is in the ratios, and they persist forever even in the expanding universe.

    I will end by saying that the notion of the Janus point, which resolves the mismatch between a law that does not distinguish a direction of time and solutions that do, might turn out to be less significant than how expansion of the universe can change the way we interpret the second law and look at the universe. I say this because the scenario I have described assumes that the size of the universe remains finite at the Janus point. But it can actually become zero in Newtonian theory and does so typically in Einstein’s general theory of relativity. That opens up extremely interesting possibilities that just might lead to a truly new theory of time.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     
  • richardmitnick 10:10 am on November 26, 2020 Permalink | Reply
    Tags: "A Supermassive Lens on the Constants of Nature", 2020 Nobel Prize winners in Physics Roger Penrose; Reinhard Genzel; and Andrea Ghez, , , , , Nautilus, , Star S0-2 Andrea Ghez's favorite star

    From Nautilus: “A Supermassive Lens on the Constants of Nature” 

    From Nautilus

    November 25, 2020
    Sidney Perkowitz

    1
    What this year’s Nobel-winning discovery of the black hole at our galaxy’s center reveals.

    The 2020 Nobel Prize in Physics went to three researchers who confirmed that Einstein’s general relativity predicts black holes, and established that the center of our own galaxy houses a supermassive black hole with the equivalent of 4 million suns packed into a relatively small space.

    Sgr A* from ESO VLT.

    SGR A*, the supermassive black hole at the center of the Milky Way. NASA’s Chandra X-Ray Observatory.

    Star S0-2 Andrea Ghez Keck/UCLA Galactic Center Group at SGR A*, the supermassive black hole at the center of the Milky Way.

    Besides expanding our understanding of black holes, the strong gravitational field around the supermassive black hole is a lab to study nature under extreme conditions. Researchers, including one of the new Nobel Laureates, Andrea Ghez at UCLA, have measured how the intense gravity changes the fine structure constant, one of the constants of nature that defines the physical universe, and in this case, life within it. This research extends other ongoing efforts to understand the constants and whether they vary in space and time. The hope is to find clues to resolve issues in the Standard Model of elementary particles and in current cosmology.

    2
    Roger Penrose, Reinhard Genzel and Andrea Ghez. Credit: CNN.com.

    Besides Ghez, the other Nobel Laureates honored in 2020 are Roger Penrose of the University of Oxford (UK), who deepened our theoretical understanding of black holes; and Reinhard Genzel, of the MPG Institut für extraterrestrische Physik in Garching, Germany. Ghez and Genzel carried out parallel but separate observations and analysis that led each to deduce the presence of our galactic supermassive black hole. Because the galactic center lies 27,000 light-years away, obtaining good data required huge telescopes. Ghez worked with the Keck Observatory on Mauna Kea in Hawaii, and Genzel used the Very Large Telescope in Chile.

    W.M. Keck Observatory, operated by Caltech and the University of California, Maunakea Hawaii USA, altitude 4,207 m (13,802 ft). Credit: Caltech.

    ESO VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft), seen from above. Its four unit telescopes are ANTU (UT1, The Sun), KUEYEN (UT2, The Moon), MELIPAL (UT3, The Southern Cross), and YEPUN (UT4, Venus as evening star). Credit: J.L. Dauvergne & G. Hüdepohl / atacama photo.

    Each researcher found that the motion of the stars they observed arose from an enormous mass at the center of the galaxy. They obtained the same value, 4 million times the mass of our sun, in a region only as big as our solar system—definitive evidence of a supermassive black hole.

    Ghez’s research at Keck made her a co-author of a paper published this year, in which Aurélien Hees of the Paris Observatory and 13 international colleagues presented results for the fine structure constant near our galactic supermassive black hole. Remarkably, Ghez’s Nobel Prize-winning results supporting this research combined today’s theories and astronomical techniques with ideas dating back to Johannes Kepler and Isaac Newton to examine the motion of stars near the supermassive black hole. This is another example of Newton’s insight about how science advances, expressed when he wrote in 1675, “If I have seen further it is by standing on the shoulders of giants.”

    The German astronomer Kepler is one such giant. He changed science when he presented his first two laws of planetary motion in 1609, showing that the planets do not orbit the sun in divinely inspired perfect circles, as had been assumed. The orbits are ellipses with the sun at a focus, one of the two points symmetrically offset from the center that define how to construct an ellipse. Kepler also found a mathematical relation between the size of a planetary orbit and how long the planet takes to complete a circuit.

    In 1687 Newton gave Kepler’s laws a deeper, more coherent physical basis. Newton’s law of gravitation, based on mutual attraction between bodies, showed that a celestial object in a closed orbit around a mass follows an elliptical path that depends on that mass. This result, which today is taught in introductory astronomy, is the heart of how Ghez found the mass of the supermassive black hole. Her years of careful observations precisely defined the elliptical paths of stars orbiting the galactic center; then she used Newton’s theory to calculate the mass at the center (general relativity, which replaces Newton’s law, predicts black holes but Newton’s approach is sufficiently accurate for the stellar orbits around the supermassive black hole). Knowledge of these orbits would be crucial for measuring the fine structure constant in the strong gravity near the supermassive black hole. How that constant depends on gravity could be a clue to modifying the Standard Model or general relativity to deal with dark matter and dark energy, the two great puzzles of contemporary physics.
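
    To make the logic concrete, here is a minimal Python sketch of how Kepler’s third law in its Newtonian form, M = 4π²a³/(G T²), turns an observed stellar orbit into a central mass. The orbital values below are rounded, approximate figures for the star S0-2, used here only for illustration; they are not the numbers from Ghez’s analysis.

        import math

        # Physical constants (SI units)
        G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
        AU = 1.496e11      # astronomical unit, m
        YEAR = 3.156e7     # one year, s
        M_SUN = 1.989e30   # solar mass, kg

        # Rounded, approximate orbital elements for the star S0-2 (illustrative only)
        a = 1000 * AU      # semi-major axis of the orbit
        T = 16 * YEAR      # orbital period

        # Kepler's third law in Newtonian form: T^2 = 4 pi^2 a^3 / (G M)
        M = 4 * math.pi**2 * a**3 / (G * T**2)

        print(f"Central mass ~ {M / M_SUN:.1e} solar masses")
        # With these rounded inputs the result is roughly 4 x 10^6 solar masses,
        # the order of magnitude Ghez and Genzel reported for Sgr A*.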

    This particular examination fits into a bigger, long-term study of the fundamental constants of nature, each of which tells us something about the scope or scale of our deepest theories. Along with other constants, the fine structure constant (denoted by the Greek letter α) appears in the Standard Model, the quantum field theory of elementary particles. The numerical value of α defines how strongly photons and electrically charged particles interact through the electromagnetic force, which controls the universe along with gravity and the strong and weak nuclear forces. Among its effects, electromagnetism determines the degree of repulsion between protons and how electrons behave in an atom. If the value of α were much different from the one we know, that would affect whether nuclear fusion within stars produces the element carbon or whether atoms can form stable complex molecules. Both are necessary for life, another reason α is significant.

    Other constants represent other major physical theories: c, the speed of light in vacuum, is crucial in relativity; h, the constant derived by Max Planck (now taken as “h-bar,” or ħ = h/2π), sets the tiny size of quantum effects; and G, the gravitational constant in Newton’s theory and general relativity, determines how astronomical bodies interact. In 1899 Planck used just these three to define a universal measurement system based on natural properties and not on any human artifacts. This system, he wrote, would be the same “for all times and all civilizations, extraterrestrial and non-human ones.”

    Planck derived natural units of length, time, and mass from c, ħ, and G: LP = 1.6 x 10^-35 meters, TP = 5.4 x 10^-44 seconds, and MP = 2.2 x 10^-8 kilograms. Too small to be practical, they have conceptual weight. In today’s universe the gravitational interaction between elementary particles is too weak to affect their quantum behavior. But place two such particles a tiny Planck length LP apart, less than the diameter of an elementary particle, and their gravitational interaction becomes strong enough to rival quantum effects. This defines the “Planck era” 10^-44 seconds after the Big Bang, when gravitational and quantum effects were of similar strength and would require a combined theory of quantum gravity instead of the two separate theories we have today.
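
    Those numbers follow directly from the definitions L_P = √(ħG/c³), T_P = √(ħG/c⁵), and M_P = √(ħc/G). A minimal Python sketch, using standard SI values for c, ħ, and G, reproduces them:

        import math

        c = 2.998e8        # speed of light, m/s
        hbar = 1.055e-34   # reduced Planck constant, J s
        G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2

        L_P = math.sqrt(hbar * G / c**3)   # Planck length
        T_P = math.sqrt(hbar * G / c**5)   # Planck time
        M_P = math.sqrt(hbar * c / G)      # Planck mass

        print(f"Planck length ~ {L_P:.1e} m")    # about 1.6 x 10^-35 m
        print(f"Planck time   ~ {T_P:.1e} s")    # about 5.4 x 10^-44 s
        print(f"Planck mass   ~ {M_P:.1e} kg")   # about 2.2 x 10^-8 kg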

    Nevertheless, to some physicists, c, ħ, and G are not truly fundamental because they depend on units of measurement. Consider for instance that c is 299,792 km/sec in metric units but 186,282 miles/sec in English units. This shows that physical units are cultural constructs rather than inherent in nature (in 1999, NASA’s Mars Climate Orbiter was fatally lost because one engineering team supplied data in English units while the other assumed metric ones). Constants that are pure numbers, however, would translate perfectly between cultures and even between us and aliens with unimaginably different units of measurement.

    The fine structure constant α stands out as carrying this favored purity. In 1916 it appeared in calculations for the wavelengths of light emitted or absorbed as the single electron in hydrogen atoms jumps between quantum levels. Niels Bohr’s early quantum theory predicted the main wavelengths but spectra showed additional features. To explain these, the German theorist Arnold Sommerfeld added relativity to the quantum theory of the hydrogen atom. His calculations depended on a quantity he called the fine structure constant. It includes ħ, c, and the charge on the electron e, another constant of nature; and the permittivity ε0 that represents the electrical properties of vacuum. Remarkably, the physical units in this odd collection cancel out, leaving only the pure number 0.0072973525693.
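
    The cancellation is easy to verify. A minimal Python sketch, using standard SI values for e, ε0, ħ, and c, evaluates Sommerfeld’s combination α = e²/(4πε0ħc) and returns a dimensionless number:

        import math

        e = 1.602176634e-19      # elementary charge, C
        eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
        hbar = 1.054571817e-34   # reduced Planck constant, J s
        c = 2.99792458e8         # speed of light, m/s

        # Sommerfeld's fine structure constant; all the SI units cancel
        alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

        print(f"alpha   ~ {alpha:.10f}")      # ~ 0.0072973526
        print(f"1/alpha ~ {1 / alpha:.3f}")   # ~ 137.036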

    Sommerfeld used α just as a parameter, but it gained fame in the late 1920s when it reappeared in advanced work on relativistic quantum mechanics by the English physicist Paul Dirac, and then in what the English astronomer Arthur Eddington hoped would be a Theory of Everything. He planned to merge quantum theory and relativity to derive the properties of the universe, such as the number of elementary particles in it and its constants, α among them.

    One twist in Eddington’s approach was that he considered the quantity 1/α rather than α, because his analysis showed that it must be an integer as well as a pure number. This was consistent with a contemporary measurement that yielded 1/α = 137.1, tantalizingly near 137 exactly. Eddington’s calculations gave instead 136, close enough to raise interest. Further measurements however confirmed that 1/α = 137.036. Eddington’s attempts to justify his different result were unconvincing and for this and other reasons his theory has not survived.

    But α and “137” remain linked, which is why Richard Feynman called 137 a “magic number.” What he meant has nothing to do with numerology. Rather, it is that we know how to measure the value of α but not how to derive it from any theory we know. This is true also for the other fundamental constants, including pure numbers such as the ratio of the proton and electron masses, and it is a gap in the Standard Model. Nevertheless, the value of α is critical in quantum electrodynamics, the quantum theory of electromagnetism. Feynman fully understood this, since he earned the 1965 Nobel Prize with two other theorists for developing quantum electrodynamics.

    So α is accepted as one of the important constants of nature. Now, with the values of these quantities accurately known, physicists ask: are they truly constant? In 1937, considerations about the forces in the universe led Dirac to speculate that α and G change with time as the universe ages. Another suggestive and even older idea is to wonder whether the constants vary across space. In 1543, when the Polish astronomer Nicolaus Copernicus put the sun and not the Earth at the center of the universe, he moved humanity from its special cosmic location. Taken to its logical conclusion, that Copernican view implies the universe is the same everywhere; but this is only an assumption.

    Varying “constants” would alter both the Standard Model and the cosmology built on it and on general relativity, which among other issues fail to explain dark matter and dark energy. Add to this the role of α in the notion that the universe is “fine-tuned” to support life, and the related idea that, among the many universes of a multiverse, the one where we exist is one with a winning value of α. All this spurs research on the constants of nature, much of it focused on α.

    Measurements on Earth confirm that α is fixed to within parts in tens of billions. A more challenging project is measuring it over astronomical distances. This also determines α at early cosmic times, since light from billions of light-years away took that many years to reach us from a younger universe. Since 1999, John Webb at the University of New South Wales, Australia, and his colleagues have been making such measurements by gathering light from quasars, the distant galactic cores where black holes pull in glowing gas and dust. This light traverses interstellar gas clouds and is absorbed at wavelengths characteristic of the atoms in the clouds. Analyzing those wavelengths gives α at the distant location, just as hydrogen wavelengths first defined α on Earth.
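
    The arithmetic behind such measurements can be sketched in a toy form. Certain atomic transition frequencies shift with α roughly as ω ≈ ω0 + q·x, where x = (α/α0)² − 1 ≈ 2Δα/α and q is a calculated sensitivity coefficient, so comparing an observed frequency with its laboratory value yields Δα/α. The Python sketch below uses made-up numbers, assumes the cosmological redshift has already been removed, and is only an illustration of that relation, not the actual many-multiplet analysis pipeline used by Webb’s group or by Hees and colleagues:

        # Toy illustration: extract a fractional change in alpha from a frequency
        # shift via omega = omega0 + q * x, with x = (alpha/alpha0)**2 - 1 ~ 2*dalpha/alpha.
        # All numbers below are hypothetical, chosen only to show the arithmetic.

        omega_lab = 35000.0   # laboratory frequency of a transition, cm^-1 (hypothetical)
        omega_obs = 35000.7   # same transition seen in a distant absorber, cm^-1 (hypothetical)
        q = 1500.0            # sensitivity coefficient for this transition, cm^-1 (hypothetical)

        x = (omega_obs - omega_lab) / q   # x = (alpha/alpha0)^2 - 1
        dalpha_over_alpha = x / 2.0       # small-change approximation

        print(f"delta alpha / alpha ~ {dalpha_over_alpha:.1e}")
        # With these made-up inputs the result is about 2e-4; real measurements
        # constrain any change to parts per million or better.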

    Webb’s early results showed that α has increased by 0.0006 percent over the last 6 billion years or more, and that the change depended on distance from the Earth. Results published in 2020 show a smaller change in α between now and 13 billion years ago, when the universe was only 0.8 billion years old, which the authors interpret as “consistent with no temporal change.” The cumulative results also suggest that α varies along different directions in space. Overall, the experimental errors are too large to inspire confidence that any single measured change in α is exactly correct, but the changes are certainly extremely small.

    Now α has also been measured within a strong gravitational field, where it can theoretically change. The strongest gravity we know comes from a black hole, from which a spacecraft would have to reach the unattainable speed of light to escape. But strong gravity also accompanies a white dwarf, a star that has expelled its outer layers to leave a massive but only planet-sized core. In 2013, J.C. Berengut of the University of New South Wales, with Webb and others, analyzed spectral data from a white dwarf and obtained a change in α of 0.004 percent relative to the Earth.

    No one, however, had measured α near a supermassive black hole until this year’s work by Hees and co-authors including Ghez. Her results from Keck helped in choosing five stars whose orbits bring them near the supermassive black hole, maximizing its gravitational effects, and whose spectra display strong absorption features from their stellar atmospheres. This made it easier to derive α from the absorption wavelengths of each star. The final composite result again shows only a small change in α, of 0.001 percent or less compared to Earth.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

     