Tagged: Nautilus Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 1:08 pm on September 30, 2018 Permalink | Reply
    Tags: Nautilus, Planck time, Time, Time can now be sliced into slivers as thin as one ten-trillionth of a second

    From Nautilus: “Is It Time to Get Rid of Time?” 


    From Nautilus

    September 20, 2018
    Marcia Bartusiak

    No image credit.

    The crisis inside the physics of time.

    Poets often think of time as a river, a free-flowing stream that carries us from the radiant morning of birth to the golden twilight of old age. It is the span that separates the delicate bud of spring from the lush flower of summer.

    Physicists think of time in somewhat more practical terms. For them, time is a means of measuring change—an endless series of instants that, strung together like beads, turn an uncertain future into the present and the present into a definite past.

    The very concept of time allows researchers to calculate when a comet will round the sun or how a signal traverses a silicon chip. Each step in time provides a peek at the evolution of nature’s myriad phenomena.

    In other words, time is a tool. In fact, it was the first scientific tool. Time can now be sliced into slivers as thin as one ten-trillionth of a second.

    Planck Time. Universe Today


    But what is being sliced? Unlike mass and distance, time cannot be perceived by our physical senses. We don’t see, hear, smell, touch, or taste time. And yet we somehow measure it. As a cadre of theorists attempt to extend and refine the general theory of relativity, Einstein’s momentous law of gravitation, they have a problem with time. A big problem.

    Slicing it thin: A hydrogen maser clock keeps time by exploiting the so-called hyperfine transition. Wikimedia Commons

    “It’s a crisis,” says mathematician John Baez, of the University of California at Riverside, “and the solution may take physics in a new direction.” Not the physics of our everyday world. Stopwatches, pendulums, and hydrogen maser clocks will continue to keep track of nature quite nicely here in our low-energy earthly environs. The crisis arises when physicists attempt to merge the macrocosm—the universe on its grandest scale—with the microcosm of subatomic particles.

    Under Newton, time was special. Every moment was tallied by a universal clock that stood separate and apart from the phenomenon under study. In general relativity, this is no longer true. Einstein declared that time is not absolute—no particular clock is special—and his equations describing how the gravitational force works take this into account. His law of gravity looks the same no matter what timepiece you happen to be using as your gauge. “In general relativity time is completely arbitrary,” explains theoretical physicist Christopher Isham of Imperial College in London. “The actual physical predictions that come out of general relativity don’t depend on your choice of a clock.” The predictions will be the same whether you are using a clock traveling near the speed of light or one sitting quietly at home on a shelf.

    The choice of clock is still crucial, however, in other areas of physics, particularly quantum mechanics. It plays a central role in Erwin Schrödinger’s celebrated wave equation of 1926. The equation shows how a subatomic particle, whether traveling alone or circling an atom, can be thought of as a collection of waves, a wave packet that moves from point to point in space and from moment to moment in time.

    According to the vision of quantum mechanics, energy and matter are cut up into discrete bits, called quanta, whose motions are jumpy and blurry. They fluctuate madly. The behavior of these particles cannot be worked out exactly, the way a rocket’s trajectory can. Using Schrödinger’s wave equation, you can only calculate the probability that a particle—a wave packet—will attain a certain position or velocity. This is a picture so different from the world of classical physics that even Einstein railed against its indeterminacy. He declared that he could never believe that God would play dice with the world.

    You might say that quantum mechanics introduced a fuzziness into physics: You can pinpoint the precise position of a particle, but at a trade-off; its velocity cannot then be measured very well. Conversely, if you know how fast a particle is going, you won’t be able to know exactly where it is. Werner Heisenberg best summarized this strange and exotic situation with his famous uncertainty principle. But all this action, uncertain as it is, occurs on a fixed stage of space and time, a steadfast arena. A reliable clock is always around—is always needed, really—to keep track of the goings-on and thus enable physicists to describe how the system is changing. At least, that’s the way the equations of quantum mechanics are now set up.
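    Heisenberg’s trade-off is quantitative. For position and momentum, the product of the two uncertainties can never fall below a fixed quantum of action:

$$
\Delta x \,\Delta p \ \ge \ \frac{\hbar}{2}
$$

    Halving the uncertainty in position at least doubles the uncertainty in momentum; a Gaussian wave packet saturates the bound exactly.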

    And that is the crux of the problem. How are physicists expected to merge one law of physics—namely gravity—that requires no special clock to arrive at its predictions, with the subatomic rules of quantum mechanics, which continue to work within a universal, Newtonian time frame? In a way, each theory is marching to the beat of a different drummer (or the ticking of a different clock).

    That’s why things begin to go a little crazy when you attempt to blend these two areas of physics. Although the scale on which quantum gravity comes into play is so small that current technology cannot possibly measure these effects directly, physicists can imagine them. Place quantum particles on the springy, pliable mat of spacetime, and it will bend and fold like so much rubber. And that flexibility will greatly affect the operation of any clock keeping track of the particles. A timepiece caught in that tiny submicroscopic realm would probably resemble a pendulum clock laboring amid the quivers and shudders of an earthquake. “Here the very arena is being subjected to quantum effects, and one is left with nothing to stand on,” explains Isham. “You can end up in a situation where you have no notion of time whatsoever.” But quantum calculations depend on an assured sense of time.

    For Karel Kucha, a general relativist and professor emeritus at the University of Utah, the key to measuring quantum time is to devise, using clever math, an appropriate clock—something he has been attempting, off and on, for several decades. Conservative by nature, Kucha believes it is best to stick with what you know before moving on to more radical solutions. So he has been seeking what might be called the submicroscopic version of a Newtonian clock, a quantum timekeeper that can be used to describe the physics going on in the extraordinary realm ruled by quantum gravity, such as the innards of a black hole or the first instant of creation.

    Unlike the clocks used in everyday physics, Kucha’s hypothetical clock would not stand off in a corner, unaffected by what is going on around it. It would be set within the tiny, dense system where quantum gravity rules and would be part and parcel of it. This insider status has its pitfalls: The clock would change as the system changed—so to keep track of time, you would have to figure out how to monitor those variations. In a way, it would be like having to pry open your wristwatch and check its workings every time you wanted to refer to it.

    The most common candidates for this special type of clock are simply “matter clocks.” “This, of course, is the type of clock we’ve been used to since time immemorial. All the clocks we have around us are made up of matter,” Kucha points out. Conventional timekeeping, after all, means choosing some material medium, such as a set of particles or a fluid, and marking its changes. But with pen and paper, Kucha mathematically takes matter clocks into the domain of quantum gravity, where the gravitational field is extremely strong and those probabilistic quantum-mechanical effects begin to arise. He takes time where no clock has gone before.

    But as you venture into this domain, says Kucha, “matter becomes denser and denser.” And that’s the Achilles heel for any form of matter chosen to be a clock under these extreme conditions; it eventually gets squashed. That may seem obvious from the start, but Kucha needs to examine precisely how the clock breaks down so he can better understand the process and devise new mathematical strategies for constructing his ideal clock.

    More promising as a quantum clock is the geometry of space itself: monitoring spacetime’s changing curvature as the infant universe expands or a black hole forms. Kucha surmises that such a property might still be measurable in the extreme conditions of quantum gravity. The expanding cosmos offers the simplest example of this scheme. Imagine the tiny infant universe as an inflating balloon. Initially, its surface bends sharply around. But as the balloon blows up, the curvature of its surface grows shallower and shallower. “The changing geometry,” explains Kucha, “allows you to see that you are at one instant of time rather than another.” In other words, it can function as a clock.

    Unfortunately, each type of clock that Kucha has investigated so far leads to a different quantum description, different predictions of the system’s behavior. “You can formulate your quantum mechanics with respect to one clock that you place in spacetime and get one answer,” explains Kucha.

    “But if you choose another type of clock, perhaps one based on an electric field, you get a completely different result. It is difficult to say which of these descriptions, if any, is correct.”

    More than that, the clock that is chosen must not eventually crumble. Quantum theory suggests there is a limit to how fine you can cut up space. The smallest quantum grain of space imaginable is 10^-33 centimeter wide, the Planck length, named after Max Planck, inventor of the quantum. On that infinitesimal scale, the spacetime canvas turns choppy and jumbled, like the whitecaps on an angry sea. Space and time become unglued and start to wink in and out of existence in a probabilistic froth. Time and space, as we know them, are no longer easily defined. This is the point at which the physics becomes unknown and theorists start walking on shaky ground. As physicist Paul Davies points out in his book About Time, “You must imagine all possible geometries—all possible spacetimes, space warps and time warps—mixed together in a sort of cocktail, or ‘foam.’ ”
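    For concreteness, the Planck scale follows directly from three measured constants via the standard textbook definitions, ℓ_P = √(ħG/c³) and t_P = ℓ_P/c. A minimal sketch in Python, using CODATA values:

```python
import math

# Fundamental constants (SI units, CODATA values)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)  # smallest meaningful length
planck_time = planck_length / c             # time for light to cross it

print(f"Planck length: {planck_length:.3e} m ({planck_length * 100:.3e} cm)")
print(f"Planck time:   {planck_time:.3e} s")
```

    The result, roughly 1.6 × 10^-33 centimeters and 5.4 × 10^-44 seconds, lies some thirty orders of magnitude below the ten-trillionth-of-a-second slivers that today's clocks can resolve.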

    Only a fully developed theory of quantum gravity will show what’s really happening at this unimaginably small level of spacetime. Kucha conjectures that some property of general relativity (as yet unknown) will not undergo quantum fluctuations at this point. Something might hold on and not come unglued. If that’s true, such a property could serve as the reliable clock that Kucha has been seeking for so long. And with that hope, Kucha continues to explore, one by one, the varied possibilities.

    Kucha has been trying to mold general relativity into the style of quantum mechanics, to find a special clock for it. But some other physicists trying to understand quantum gravity believe that the revision should happen the other way around—that quantum gravity should be made over in the likeness of general relativity, where time is pushed into the background. Carlo Rovelli is a champion of this view.

    “Forget time,” Rovelli declares emphatically. “Time is simply an experimental fact.” Rovelli, a physicist at the Center of Theoretical Physics in France, has been working on an approach to quantum gravity that is essentially timeless. To simplify the calculations, he and his collaborators, physicists Abhay Ashtekar and Lee Smolin, set up a theoretical space without a clock. In this way, they were able to rewrite Einstein’s general theory of relativity, using a new set of variables so that it could more easily be interpreted and adapted for use on the quantum level.

    Their formulation has allowed physicists to explore how gravity behaves on the subatomic scale in a new way. But is that really possible without any reference to time at all? “First with special relativity and then with general relativity, our classical notion of time has only gotten weaker and weaker,” answers Rovelli. “We think in terms of time. We need it. But the fact that we need time to carry out our thinking does not mean it is reality.”

    Rovelli believes if physicists ever find a unified law that links all the forces of nature under one banner, it will be written without any reference to time. “Then, in certain situations,” says Rovelli, “as when the gravitational field is not dramatically strong, reality organizes itself so that we perceive a flow that we call time.”

    Getting rid of time in the most fundamental physical laws, says Rovelli, will probably require a grand conceptual leap, the same kind of adjustment that 16th-century scientists had to make when Copernicus placed the sun, and not the Earth, at the center of the universe. In so doing, the Polish cleric effectively kicked the Earth into motion, even though back then it was difficult to imagine how the Earth could zoom along in orbit about the sun without its occupants being flung off the surface. “In the 1500s, people thought a moving earth was impossible,” notes Rovelli.

    But maybe the true rules are timeless, including those applied to the subatomic world. Indeed, a movement has been under way to rewrite the laws of quantum mechanics, a renovation that was spurred partly by the problem of time, among other quantum conundrums. As part of that program, theorists have been rephrasing quantum mechanics’ most basic equations to remove any direct reference to time.

    The roots of this approach can be traced to a procedure introduced by the physicist Richard Feynman in the 1940s, a method that has been extended and broadened by others, including James Hartle of the University of California at Santa Barbara and physics Nobel laureate Murray Gell-Mann.

    Basically, it’s a new way to look at Schrödinger’s equation. As originally set up, this equation allows physicists to compute the probability of a particle moving directly from point A to point B over specified slices of time. The alternate approach introduced by Feynman instead considers the infinite number of paths the particle could conceivably take to get from A to B, no matter how slim the chance. Time is removed as a factor; only the potential pathways are significant. Summing up these potentials (some paths are more likely than others, depending on the initial conditions), a specific path emerges in the end.

    The process is sometimes compared to interference between waves. When two waves in the ocean combine, they may reinforce one another (leading to a new and bigger wave) or cancel each other out entirely. Likewise, you might think of these many potential paths as interacting with one another—some getting enhanced, others destroyed—to produce the final path. More important, the variable of time no longer enters into the calculations.
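    The interference picture above can be made concrete with a toy numerical sum over paths. Everything here is invented for illustration (the geometry, the wavenumber, the sampling): each path from A to B through one intermediate point contributes a unit-magnitude complex amplitude whose phase grows with the path length, and paths near the straight line add coherently while long detours wash out.

```python
import cmath
import math

K = 50.0  # wavenumber, arbitrary units; endpoints A = (0, 0), B = (10, 0)

def amplitude(x, y):
    """Complex amplitude for the path A -> (x, y) -> B."""
    leg1 = math.hypot(x, y)         # A to the intermediate point
    leg2 = math.hypot(10.0 - x, y)  # intermediate point to B
    return cmath.exp(1j * K * (leg1 + leg2))

# Sum 101 paths through points near the straight line (y around 0)
# and 101 paths through a large detour (y around 5).
near = sum(amplitude(5.0, -0.5 + i * 0.01) for i in range(101))
far = sum(amplitude(5.0, 4.5 + i * 0.01) for i in range(101))

print(abs(near), abs(far))  # the near-straight bundle dominates
```

    The phase is nearly stationary across the near-straight paths, so their amplitudes point the same way and reinforce; along the detour the phase spins rapidly from one path to the next and the contributions cancel. That cancellation is the sense in which "a specific path emerges in the end."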

    Hartle has been adapting this technique to his pursuits in quantum cosmology, an endeavor in which the laws of quantum mechanics are applied to the young universe to discern its evolution. Instead of dealing with individual particles, though, he works with all the configurations that could possibly describe an evolving cosmos, an infinite array of potential universes. When he sums up these varied configurations—some enhancing one another, others canceling each other out—a particular spacetime ultimately emerges. In this way, Hartle hopes to obtain clues to the universe’s behavior during the era of quantum gravity. Conveniently, he doesn’t have to choose a special clock to carry out the physics: Time disappears as an essential variable.

    Of course, as Isham points out, “having gotten rid of time, we’re then obliged to explain how we get back to the ordinary world, where time surrounds us.” Quantum gravity theorists have their hunches. Like Rovelli, many are coming to suspect that time is not fundamental at all. This theme resounds again and again in the various approaches aimed at solving the problem of time. Time, they say, may more resemble a physical property such as temperature or pressure. Pressure has no meaning when you talk about one particle or one atom; the concept of pressure arises only when we consider trillions of atoms. The notion of time could very well share this statistical feature. If so, reality would then resemble a pointillist painting. On the smallest of scales—the Planck length—time would have no meaning, just as a pointillist painting, built up from dabs of paint, cannot be fathomed close up.

    Quantum gravity theorists like to compare themselves to archeologists. Each investigator is digging away at a different site, finding a separate artifact of some vast subterranean city. The full extent of the find is not yet realized. What theorists desperately need are data, experimental evidence that could help them decide between the different approaches.

    It seems an impossible task, one that would appear to require recreating the hellish conditions of the Big Bang. But not necessarily. For instance, future generations of “gravity-wave telescopes,” instruments that detect ripples in the rubberlike mat of spacetime, might someday sense the Big Bang’s reverberating thunder, relics from the instant of creation when the force of gravity first emerged. Such waves could provide vital clues to the nature of space and time.

    “We wouldn’t have believed just [decades] ago that it would be possible to say what happened in the first 10 minutes of the Big Bang,” points out Kucha. “But we can now do that by looking at the abundances of the elements. Perhaps if we understand physics on the Planck scale well enough, we’ll be able to search for certain consequences—remnants—that are observable today.” If found, such evidence would bring us the closest ever to our origins and possibly allow us to perceive at last how space and time came to well up out of nothingness some 14 billion years ago.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 6:37 am on August 30, 2018 Permalink | Reply
    Tags: Black Hole Firewalls Could Be Too Tepid to Burn, Nautilus

    From Nautilus: “Black Hole Firewalls Could Be Too Tepid to Burn” 


    From Nautilus

    Aug 29, 2018
    Charlie Wood

    Artist’s conception of two merging black holes similar to those detected by LIGO. Credit: LIGO-Caltech/MIT/Sonoma State/Aurore Simonnet

    String theorists elide a paradox about black holes by extinguishing the walls of fire feared to surround them. NASA

    Despite its ability to bend both minds and space, an Einsteinian black hole looks so simple a child could draw it. There’s a point in the center, a perfectly spherical boundary a bit farther out, and that’s it.

    The point is the singularity, an infinitely dense, unimaginably small dot contorting space so radically that anything nearby falls straight in, leaving behind a vacuum. The spherical boundary marks the event horizon, the point of no return between the vacuum and the rest of the universe. But according to Einstein’s theory of gravity, the event horizon isn’t anything that an unlucky astronaut would immediately notice if she were to cross it. “It’s like the horizon outside your window,” said Samir Mathur, a physicist at Ohio State University. “If you actually walked over there, there’s nothing.”

    In 2012, however, this placid picture went up in flames. A team of four physicists took a puzzle first put forward by Stephen Hawking about what happens to all the information that falls into the black hole, and turned it on its head. Rather than insisting that an astronaut (often named Alice) pass smoothly over the event horizon, they prioritized a key postulate of quantum mechanics: Information, like matter and energy, must never be destroyed. That change ended up promoting the event horizon from mathematical boundary to physical object, one they colorfully named the wall of fire.

    “It can’t be empty, and it turns out it has to be full of a lot of stuff, a lot of hot stuff,” said Donald Marolf, a physicist at the University of California, Santa Barbara, and one of the four co-authors [no cited paper]. The argument caused an uproar in the theoretical physics community, much as if cartographers suggested that instead of an imaginary line on their maps, Earth’s equator was actually a wall of bright red bricks.

    The news of a structure at the boundary didn’t shock Mathur, however. For more than a decade he had been arguing that black holes are really balls of strings (from string theory) with hot, fuzzy surfaces. “As you come closer and closer it gets hotter and hotter, and that’s what causes the burning,” he explained.

    In recent years, Mathur has been refining his “fuzzball” description, and his most recent calculations bring marginally good news for Alice. While she wouldn’t live a long and healthy life, the horizon’s heat might not be what does her in.

    Fuzzballs are what you get when you apply string theory, a description of nature that replaces particles with strings, to extremely dense objects. Energize a particle and it can only speed up, but strings stretch and swell as well. That ability to expand, combined with additional flexibility from postulated extra dimensions, makes strings fluff up when enough of them are packed into a small space. They form a fuzzy ball that looks from afar like an ordinary black hole—it has the same size (for a given mass) and emits the same kind of “Hawking radiation” that all black holes emit. As a bonus, the slightly bumpy surface changes the way it emits particles and declaws Hawking’s information puzzle, according to Mathur. “It’s more like a planet,” he said, “and it radiates from that surface just like anything else.”

    Olena Shmahalo / Quanta Magazine

    His new work extends arguments from 2014, which asked what would happen to Alice if she were to fall onto a supermassive fuzzball akin to the one at the heart of our galaxy—one with the mass of millions of suns. In such situations, the force of gravity dominates all others. Assuming this constraint, Mathur and his collaborator found that an incoming Alice particle had almost no chance of smashing into an outgoing particle of Hawking radiation. The surface might be hot, he said, but the way the fuzzball expands to swallow new material prevents anything from getting close enough to burn, so Alice should make it to the surface.

    In response, Marolf suggested that a medium-size fuzzball might still be able to barbecue Alice in other ways. It wouldn’t drag her in as fast, and in a collision at lower energies, forces other than gravity could singe her, too.

    Mathur’s team recently took a more detailed look at Alice’s experience with new calculations published in the Journal of High Energy Physics. They concluded that for a modest fuzzball—one as massive as our sun—the overall chance of an Alice particle hitting a radiation particle was slightly higher than they had found before, but still very close to zero. Their work suggested that you’d have to shrink a fuzzball down to a thousand times smaller than the nanoscale before burning would become likely.

    By allowing Alice to reach the surface more or less intact (she would still undergo an uncontroversial and likely fatal stretching), the theory might even end up restoring the Einsteinian picture of smooth passage across the boundary, albeit in a twisted form. There might be a scenario in which Alice went splat on the surface while simultaneously feeling as if she were falling through open space, whatever that might mean.

    “If you jump onto [fuzzballs] in one description, you break up into little strings. That’s the splat picture,” Mathur said. We typically assume that once her particles start breaking up, Alice ceases to be Alice. A bizarre duality in string theory, however, allows her strings to spread out across the fuzzball in an orderly way that preserves their connections, and, perhaps, her sense of self. “If you look carefully at what [the strings] are doing,” Mathur continued, “they’re actually spreading in a very coherent ball.”

    The details of Mathur’s picture remain rough. And the model rests entirely on the machinery of string theory, a mathematical framework with no experimental evidence. What’s more, not even string theory can handle the messiness of realistic fuzzballs. Instead, physicists focus on contrived examples such as highly organized, extra-frigid bodies with extreme features, said Marika Taylor, a string theorist at the University of Southampton in the U.K.

    Mathur’s calculations are exploratory, she said, approximate generalizations from the common features of the simple models. The next step is a theory that can describe the fuzzball’s surface at the quantum level, from the point of view of the string. Nevertheless, she agreed that the hot firewall idea has always smelled fishy from a string-theory perspective. “You suddenly transition from ‘I’m falling perfectly happily’ to ‘Oh my God, I’m completely destroyed’? That’s unsatisfactory,” she said.

    Marolf refrained from commenting on the latest results until he finished discussing them with Mathur, but said that he was interested in learning more about how the other forces had been accounted for and how the fuzzball surface would react to Alice’s visit. He also pointed out that Mathur’s black hole model was just one of many tactics for resolving Hawking’s puzzle, and there was no guarantee that anyone had hit on the right one. “Maybe the real world is crazier than even the things we’ve thought of yet,” he said, “and we’re just not being clever enough.”

    See the full article here.



  • richardmitnick 11:47 am on August 2, 2018 Permalink | Reply
    Tags: Nautilus

    From Quanta Magazine via Nautilus: “How Artificial Intelligence Can Supercharge the Search for New Particles” 



    Quanta Magazine
    From Quanta Magazine

    Jul 25, 2018
    Charlie Wood

    In the hunt for new fundamental particles, physicists have always had to make assumptions about how the particles will behave. New machine learning algorithms don’t.
    Image by ATLAS Experiment © 2018 CERN

    The Large Hadron Collider (LHC) smashes a billion pairs of protons together each second.


    CERN map

    CERN LHC Tunnel

    CERN LHC particles

    Occasionally the machine may rattle reality enough to have a few of those collisions generate something that’s never been seen before. But because these events are by their nature a surprise, physicists don’t know exactly what to look for. They worry that in the process of winnowing their data from those billions of collisions to a more manageable number, they may be inadvertently deleting evidence for new physics. “We’re always afraid we’re throwing the baby away with the bathwater,” said Kyle Cranmer, a particle physicist at New York University who works with the ATLAS experiment at CERN.


    Faced with the challenge of intelligent data reduction, some physicists are trying to use a machine learning technique called a “deep neural network” to dredge the sea of familiar events for new physics phenomena.

    In the prototypical use case, a deep neural network learns to tell cats from dogs by studying a stack of photos labeled “cat” and a stack labeled “dog.” But that approach won’t work when hunting for new particles, since physicists can’t feed the machine pictures of something they’ve never seen. So they turn to “weakly supervised learning,” where machines start with known particles and then look for rare events using less granular information, such as how often they might take place overall.

    In a paper posted on the scientific preprint site arxiv.org in May, three researchers proposed applying a related strategy to extend “bump hunting,” the classic particle-hunting technique that found the Higgs boson. The general idea, according to one of the authors, Ben Nachman, a researcher at the Lawrence Berkeley National Laboratory, is to train the machine to seek out rare variations in a data set.

    Consider, as a toy example in the spirit of cats and dogs, a problem of trying to discover a new species of animal in a data set filled with observations of forests across North America. Assuming that any new animals might tend to cluster in certain geographical areas (a notion that corresponds with a new particle that clusters around a certain mass), the algorithm should be able to pick them out by systematically comparing neighboring regions. If British Columbia happens to contain 113 caribou to Washington state’s 19 (even against a background of millions of squirrels), the program will learn to sort caribou from squirrels, all without ever studying caribou directly. “It’s not magic but it feels like magic,” said Tim Cohen, a theoretical particle physicist at the University of Oregon who also studies weak supervision.
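    The caribou-and-squirrel toy can be sketched in code. This is an illustration of the weak-supervision idea only, not the CWoLa algorithm from the paper: all distributions and numbers here are invented. A one-feature logistic classifier is trained only on which region each event came from, yet it ends up ranking signal-like events above background.

```python
import math
import random

random.seed(0)

def mixed_sample(n, signal_frac):
    """Events with one feature: 'caribou' ~ N(3, 1), 'squirrels' ~ N(0, 1)."""
    n_sig = int(n * signal_frac)
    return ([random.gauss(3.0, 1.0) for _ in range(n_sig)] +
            [random.gauss(0.0, 1.0) for _ in range(n - n_sig)])

region1 = mixed_sample(1000, 0.20)  # region richer in signal
region2 = mixed_sample(1000, 0.02)  # region poorer in signal
data = [(x, 1.0) for x in region1] + [(x, 0.0) for x in region2]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression fit by gradient descent on REGION labels only --
# no event is ever labeled signal or background.
w, b = 0.0, 0.0
for _ in range(300):
    gw = sum((sigmoid(w * x + b) - y) * x for x, y in data) / len(data)
    gb = sum((sigmoid(w * x + b) - y) for x, y in data) / len(data)
    w, b = w - 0.5 * gw, b - 0.5 * gb

# A signal-like event (feature near 3) now scores above a background-like
# one (feature near 0), even though those labels were never shown.
print(sigmoid(w * 3.0 + b), sigmoid(w * 0.0 + b))
```

    The only assumption doing the work is the one stated in the text: the two regions differ in their signal fractions, so the cheapest way for the classifier to tell them apart is to learn the signal itself.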

    By contrast, traditional searches in particle physics usually require researchers to make an assumption about what the new phenomena will look like. They create a model of how the new particles will behave—for example, a new particle might tend to decay into particular constellations of known particles. Only after they define what they’re looking for can they engineer a custom search strategy. It’s a task that generally takes a Ph.D. student at least a year, and one that Nachman thinks could be done much faster, and more thoroughly.

    The proposed CWoLa algorithm, which stands for Classification Without Labels, can search existing data for any unknown particle that decays into either two lighter unknown particles of the same type, or two known particles of the same or different type. Using ordinary search methods, it would take the LHC collaborations at least 20 years to scour the possibilities for the latter, and no searches currently exist for the former. Nachman, who works on the ATLAS project, says CWoLa could do them all in one go.

    Other experimental particle physicists agree it could be a worthwhile project. “We’ve looked in a lot of the predictable pockets, so starting to fill in the corners we haven’t looked in is an important direction for us to go in next,” said Kate Pachal, a physicist who searches for new particle bumps with the ATLAS project. She batted around the idea of trying to design flexible software that could deal with a range of particle masses last year with some colleagues, but no one knew enough about machine learning. “Now I think it might be the time to try this,” she said.

    The hope is that neural networks could pick up on subtle correlations in the data that resist current modeling efforts. Other machine learning techniques have successfully boosted the efficiency of certain tasks at the LHC, such as identifying “jets” made by bottom-quark particles. The work has left no doubt that some signals are escaping physicists’ notice. “They’re leaving information on the table, and when you spend $10 billion on a machine, you don’t want to leave information on the table,” said Daniel Whiteson, a particle physicist at the University of California, Irvine.

    Yet machine learning is rife with cautionary tales of programs that confused arms with dumbbells (or worse). At the LHC, some worry that the shortcuts will end up reflecting gremlins in the machine itself, which experimental physicists take great pains to intentionally overlook. “Once you find an anomaly, is it new physics or is it something funny that went on with the detector?” asked Till Eifert, a physicist on ATLAS.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 12:38 pm on April 8, 2018 Permalink | Reply
    Tags: , , , , How ‘Oumuamua Got Shredded, Nautilus,   

    From Nautilus: “How ‘Oumuamua Got Shredded” 



    Apr 01, 2018
    Sean Raymond

‘Oumuamua may be a piece of a torn-apart comet, gravitationally launched into interstellar space, that roamed the galaxy before dropping on our doorstep. ESO / M. Kornmesser / Wikimedia Commons.

Our solar system’s first houseguest—at least, the first one we have seen in our midst—is a strange one. Scientists have taken to calling it ‘Oumuamua (pronounced “Oh-MOO-ah-MOO-ah”) after it was spotted last October as a faint streak against a backdrop of stars by the Pan-STARRS (Panoramic Survey Telescope and Rapid Response System) telescope in Hawaii.

Pan-STARRS telescope, University of Hawaii, located at Haleakala Observatory, Hawaii, USA, altitude 3,052 m (10,013 ft)

    In Hawaiian, ‘Oumuamua means “a messenger from afar arriving first.”

    How do we know it’s “from afar”? ‘Oumuamua is fast. Minus the sun’s gravitational tug, it’s clocking 16 miles per second. A massive planet like Jupiter can gravitationally kick an object hard enough to reach that speed, but get this: ‘Oumuamua entered the solar system from above the plane of the planets! There is nothing in the solar system (including Planet Nine if it exists) that can explain its speed. That is why scientists are confident that it came from beyond.
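The “from afar” argument is a short energy-conservation check. The sketch below assumes standard values for the sun’s gravitational parameter and uses the vis-viva relation for an unbound orbit: the speed at any distance is the local escape speed combined in quadrature with the leftover speed v_inf, the 16 miles per second quoted above.

```python
# Energy check: a body bound to the sun can never exceed the local escape
# speed, but 'Oumuamua does, at every distance. Constants are standard values.
import math

GM_SUN = 1.327e20        # m^3/s^2, sun's gravitational parameter
AU = 1.496e11            # m, one astronomical unit
V_INF = 16 * 1609.34     # 16 miles per second in m/s (~25.7 km/s)

def escape_speed(r):
    return math.sqrt(2 * GM_SUN / r)

def hyperbolic_speed(r):
    # vis-viva for an unbound orbit: v^2 = v_esc^2 + v_inf^2
    return math.sqrt(escape_speed(r) ** 2 + V_INF ** 2)

print(round(escape_speed(AU) / 1000, 1))      # 42.1 km/s: fastest bound speed at Earth's distance
print(round(hyperbolic_speed(AU) / 1000, 1))  # 49.4 km/s: 'Oumuamua's speed there
```

Anything moving faster than the local escape speed is on a one-way trip, which is exactly what the orbit fits showed.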

‘Oumuamua’s trajectory through the solar system. It was only found after its closest passage to Earth. Brooks Bays / SOEST Publication Services / UH Institute for Astronomy.

    ‘Oumuamua also looks like nothing we’ve seen before. Its brightness in the sky oscillates by about a factor of 10 every seven hours or so, and models show that this may be due to the spinning of a cigar-shaped body (as in Fig. 2). A pair of potato-shaped bodies with different reflectivities could also account for the oscillation. The brightness pattern does not perfectly repeat, indicating that ‘Oumuamua is spinning chaotically—“tumbling” might be the right way to put it.

A simulation of ‘Oumuamua’s rotation (left) and the variations in observed brightness that this produces. nagualdesign / Wikimedia Commons.

    Another peculiar thing: ‘Oumuamua looks like a water-rich object, but it has no surface water. Measurements of its colors at different wavelengths show an object similar to volatile-rich bodies in the solar system—think comets and water-rich asteroids. But ‘Oumuamua passed closer than Mercury’s orbit, and showed no signs of activity: No gases escaping to form a coma, no jets, no tails. So, even though it looks like a comet, it does not behave like a comet, at least not like the flamboyant, bright comets that we know and love.

So what is it? The simplest explanation is that ‘Oumuamua is a planetary leftover called a planetesimal, born in a planet-forming disk around another star but left out of the finished product. Instead, it was kicked out into interstellar space by a giant planet similar to Neptune, or maybe Jupiter. ‘Oumuamua wandered for millions to billions of years, then happened to pass close to the sun.

    That can explain how ‘Oumuamua came to roam outside of a planetary system—but it falls short: It doesn’t explain ‘Oumuamua’s weird shape and spin, nor why it looks like a comet with no surface water.

This is where computer simulations come in. Almost a decade ago, I ran thousands of simulations of how giant planets interact with disks of planetesimals. My goal was to study how planets behave, but I found that these same simulations can be used to understand ‘Oumuamua’s origins. Before they are kicked out into interstellar space, some planetesimals pass super close to a giant planet, so close that they should be shredded to pieces by gravity: The pull on the planet-facing side of the planetesimal is much larger than on the opposite side. The strong stretching force might play a role in explaining ‘Oumuamua’s unusual shape and tumbling spin, although it has not been carefully modeled yet. This is still speculation.

    This sort of shredding event isn’t just based on calculations, though. We have actually seen it in action. In 1992, comet Shoemaker-Levy 9 passed too close to Jupiter and was torn into a string of fragments. They fell back onto Jupiter in 1994 (some on my 17th birthday).

Fragments of comet Shoemaker-Levy 9 observed with the Hubble Space Telescope in 1994. NASA / ESA / H. Weaver and E. Smith (STScI).

    My simulations find that about 1 percent of planetesimals are torn apart by coming too close to a giant planet, like comet Shoemaker–Levy 9. Instead of bashing into the planet, most of the pieces are eventually thrown out of their planetary systems. If ‘Oumuamua is a planetesimal fragment, it must have gotten shredded very violently.

    This still doesn’t explain ‘Oumuamua’s comet-like appearance. In this way, ‘Oumuamua is like another class of objects in the solar system called the Damocloids: They have comet-like orbits and surfaces but don’t give off any gases when they are heated. We think they are extinct comets that, after a certain number of trips too close to the sun, burned off all of their surface ices. Over the past few decades, researchers have figured out how quickly extinction must happen.

    Back to my simulations. Remember: about 1 percent of planetesimals should have been torn apart before being kicked out into interstellar space. And guess what? About two-thirds of them passed close to their stars a bunch of times before getting kicked out—enough times that they should have become extinct.

    So, ‘Oumuamua may be a piece of a torn-apart comet, as my colleagues and I argue in two recent papers (here [The Astrophysical Journal] and here [MNRAS]): After the disruption, ‘Oumuamua passed close enough to its star enough times to lose its surface volatiles, becoming extinct. Then it was gravitationally launched into interstellar space and roamed the galaxy before dropping on our doorstep.

‘Oumuamua’s origin story? Sean Raymond / planetplanet.net.

We could test this by cracking ‘Oumuamua open. Is there ice buried deep, too deep for the sun to vaporize it? It’s zooming away so fast that tracking it down—the goal of Project Lyra—is no small feat. If that fails, we can bank on the Large Synoptic Survey Telescope, which is coming online around 2021, to help us find objects similar to what ‘Oumuamua seems to be. Our story predicts that extinct comets should outnumber “normal” ones about two-to-one, and almost all of these objects should be fragments of larger bodies, meaning they might bear some trace of their violent pasts, either in their shapes and spins or something else.


    LSST Camera, built at SLAC

LSST telescope, currently under construction on the El Peñón peak of Cerro Pachón, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    If we end up finding a lot of extinct comet fragments, we can be confident ‘Oumuamua is one, too.

Sean Raymond is an astronomer studying the formation and evolution of planetary systems. He also blogs at http://www.planetplanet.net.

See the full article here.


    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 12:51 pm on November 12, 2017 Permalink | Reply
    Tags: , , Caleb Scharf, , Nautilus, The Zoomable Universe, This Will Help You Grasp the Sizes of Things in the Universe   

    From Nautilus: “This Will Help You Grasp the Sizes of Things in the Universe” 



    Nov 08, 2017
    Dan Garisto

    In The Zoomable Universe, Scharf puts the notion of scale—in biology and physics—center-stage. “The start of your journey through this book and through all known scales of reality is at that edge between known and unknown,” he writes. Illustration by Ron Miller

    Caleb Scharf wants to take you on an epic tour. His latest book, The Zoomable Universe, starts from the ends of the observable universe, exploring its biggest structures, like groups of galaxies, and goes all the way down to the Planck length—less than a billionth of a billionth of a billionth of a meter. It is a breathtaking synthesis of the large and small. Readers journeying through the book are treated to pictures, diagrams, and illustrations all accompanied by Scharf’s lucid, conversational prose. These visual aids give vital depth and perspective to the phenomena that he points out like a cosmic safari guide. Did you know, he offers, that all the Milky Way’s stars can fit inside the volume of our solar system?

    Scharf, the director of Columbia University’s Astrobiology Center, is a suitably engaging guide. He’s the author of the 2012 book Gravity’s Engines: How Bubble-Blowing Black Holes Rule Galaxies, Stars, and Life in the Universe, and last year, he speculated in Nautilus about whether alien life could be so advanced as to be indistinguishable from physics.

    In The Zoomable Universe, Scharf puts the notion of scale—in biology and physics—center-stage. “The start of your journey through this book and through all known scales of reality is at that edge between known and unknown,” he writes. Nautilus caught up with him to talk about our experience with scale and why he thinks it’s mysterious. (Scharf is a member of Nautilus’ advisory board.)

    Why is scale interesting?

    Scale is fascinating. Scientifically it’s a fundamental property of reality. We don’t even think about it. We talk about space and time—and perhaps we puzzle more over the nature of time than we do over the nature of scale or space—but it’s equally mysterious.

    What’s mysterious about scale?

    It’s something we all have direct experience of, even intuitively. We learn to evaluate the size of things. But we’re operating as humans in a very, very narrow slice of what is out there. And we’re aware of a very narrow range of scales: In some sense, we know more about the very large than we do about the very small.

    We know about atoms, kind of, but if you go smaller, it gets more uncertain—not just because of intrinsic uncertainty, but the completeness of our physics gets worse. We don’t really know what’s happening here. That leads you to a mystery at the Planck scale. On the big scale, it’s stuff we can actually see, we can actually chart.

    Not an alien planet, but the faceted eye of a louse embedded in an elephant’s skin. The Zoomable Universe.

    At certain scales, there’s not much happening. Does that hint at some underlying mystery?

    I think that is something worth contemplating. There’s quarks and then there’s 20 orders of magnitude smaller where—what do you say about it? That was the experience for the very small, but on the larger scale there’s some of that too…the emptiness of interstellar space. It is striking how empty most of everything is on the big scale and the small scale.

    We have all this rich stuff going on in the scale of the solar system and the earth and our biological scale. That’s where we’ve gained the most insight, accumulated the most knowledge. It is the scale where matter seems to condense down, where things appear solid, when in fact, it’s equally empty on the inside. But is that a human cultural bias? Or is that telling us something profound about the nature of the universe? I don’t really know the answer to that. But there’s something about the way we’re built, the way we think about the world. We’re clearly not attuned to that emptiness.

    Yet we’re drawn to it.

    We are drawn to it—like the example in the book with the stars packed together. Taking all the stars from the galaxy put together and being able to fit them inside the volume of the solar system? It is shocking. Trust me, I had to run the numbers a couple of times just to go, “Oh wow, okay, that really does work.”
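The numbers do check out with rough inputs. The sketch below generously gives each of roughly 200 billion Milky Way stars the sun’s full volume (most stars are smaller red dwarfs) and compares the total against a sphere reaching Pluto’s average orbit; both figures are round-number assumptions rather than values taken from the book.

```python
# Back-of-envelope check: total stellar volume vs. solar system volume.
import math

R_SUN = 6.96e8      # m, solar radius
AU = 1.496e11       # m, one astronomical unit
N_STARS = 2e11      # rough Milky Way star count (assumption)

def sphere_volume(r):
    return (4 / 3) * math.pi * r ** 3

stars = N_STARS * sphere_volume(R_SUN)   # every star treated as sun-sized
solar_system = sphere_volume(40 * AU)    # out to Pluto's average orbit

print(stars < solar_system)              # True
print(round(solar_system / stars, 1))    # ~3.2: room to spare
```

Even with every star counted at the sun’s size, the stellar material fills only about a third of the sphere, which is why the claim survives a second pass at the numbers.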

    As the Earth eclipses the Sun, our high wilderness of the lunar landscape is bathed in reddened light. Illustration by Ron Miller

    How did you represent things that we don’t have pictures of, like the surface of an exoplanet, or things at really small scales?

    That’s something we definitely talked a lot about in putting the book together. Ron Miller, the artist, would produce a landscape for an exoplanet. As a scientist, my inclination is to say, “We can’t do that—we can’t say what it looks like.” So we had this dialogue. We wanted an informed artistic approach. It became tricky when we got down to a small scale. I wanted to avoid the usual trope, which is an atom is a sphere, or a molecule is a sphere connected by things. You can’t have a picture of these things in the sense that we’re used to. We tried to compromise. We made something people kind of recognize, but we avoid the ball and stick models that are glued in everyone’s head.

See the full article here.


  • richardmitnick 12:03 pm on November 9, 2017 Permalink | Reply
    Tags: , , Cosmologists have come to realize that our universe may be only one component of the multiverse, , Fred Adams, Mordehai Milgrom and MOND theory, Nautilus, , , The forces are not nearly as finely tuned as many scientists think, The parameters of our universe could have varied by large factors and still allowed for working stars and potentially habitable planets, The strong interaction- the weak interaction- electromagnetism- gravity   

    From Nautilus: “The Not-So-Fine Tuning of the Universe” 



    January 19, 2017 [Just found this referenced in another article.]
    Fred Adams
    Illustrations by Jackie Ferrentino

    Before there is life, there must be structure. Our universe synthesized atomic nuclei early in its history. Those nuclei ensnared electrons to form atoms. Those atoms agglomerated into galaxies, stars, and planets. At last, living things had places to call home. We take it for granted that the laws of physics allow for the formation of such structures, but that needn’t have been the case.

    Over the past several decades, many scientists have argued that, had the laws of physics been even slightly different, the cosmos would have been devoid of complex structures. In parallel, cosmologists have come to realize that our universe may be only one component of the multiverse, a vast collection of universes that makes up a much larger region of spacetime. The existence of other universes provides an appealing explanation for the apparent fine-tuning of the laws of physics. These laws vary from universe to universe, and we live in a universe that allows for observers because we couldn’t live anywhere else.

    Setting The Parameters: The universe would have been habitable even if the forces of electromagnetism and gravity had been stronger or weaker. The crosshatched area shows the range of values consistent with life. The asterisk shows the actual values in our universe; the axes are scaled to these values. The constraints are that stars must be able to undergo nuclear fusion (below black curve), live long enough for complex life to evolve (below red curve), be hot enough to support biospheres (left of blue curve), and not outgrow their host galaxies (right of the cyan curve). Fred C. Adams.

Astrophysicists have discussed fine-tuning so much that many people take it as a given that our universe is preternaturally fit for complex structures. Even skeptics of the multiverse accept fine-tuning; they simply think it must have some other explanation. But in fact the fine-tuning has never been rigorously demonstrated. We do not really know what laws of physics are necessary for the development of astrophysical structures, which are in turn necessary for the development of life. Recent work on stellar evolution, nuclear astrophysics, and structure formation suggests that the case for fine-tuning is less compelling than previously thought. A wide variety of possible universes could support life. Our universe is not as special as it might seem.

    The first type of fine-tuning involves the strengths of the fundamental forces of nature in working stars. If the electromagnetic force had been too strong, the electrical repulsion of protons would shut down nuclear fusion in stellar cores, and stars would fail to shine. If electromagnetism had been too weak, nuclear reactions would run out of control, and stars would blow up in spectacular explosions. If gravity had been too strong, stars would either collapse into black holes or never ignite.

On closer examination, though, stars are remarkably robust. The strength of the electric force could vary by a factor of nearly 100 in either direction before stellar operations would be compromised. The force of gravity would have to be 100,000 times stronger to shut stars down. Going in the other direction, gravity could be a billion times weaker and still allow for working stars. The allowed strengths for the gravitational and electromagnetic forces depend on the nuclear reaction rate, which in turn depends on the strengths of the nuclear forces. If the reaction rate were faster, stars could function over an even wider range of strengths for gravitation and electromagnetism. Slower nuclear reactions would narrow the range.

    In addition to these minimal operational requirements, stars must meet a number of other constraints that further restrict the allowed strength of the forces. They must be hot. The surface temperature of a star must be high enough to drive the chemical reactions necessary for life. In our universe, there are ample regions around most stars where planets are warm enough, about 300 kelvins, to support biology. In universes where the electromagnetic force is stronger, stars are cooler, making them less hospitable.

    Stars must also have long lives. The evolution of complex life forms takes place over enormous spans of time. Since life is driven by a complex ensemble of chemical reactions, the basic clock for biological evolution is set by the time scales of atoms. In other universes, these atomic clocks will tick at different rates, depending on the strength of electromagnetism, and this variation must be taken into account. When the force is weaker, stars burn their nuclear fuel faster, and their lifetimes decrease.

    Finally, stars must be able to form in the first place. In order for galaxies and, later, stars to condense out of primordial gas, the gas must be able to lose energy and cool down. The cooling rate depends (yet again) on the strength of electromagnetism. If this force is too weak, gas cannot cool down fast enough and would remain diffuse instead of condensing into galaxies. Stars must also be smaller than their host galaxies—otherwise star formation would be problematic. These effects put another lower limit on the strength of electromagnetism.

Putting it all together, the strengths of the fundamental forces can vary by several orders of magnitude and still allow planets and stars to satisfy all the constraints (as illustrated in the figure above). The forces are not nearly as finely tuned as many scientists think.

A second example of possible fine-tuning arises in the context of carbon production. After moderately large stars have fused the hydrogen in their central cores into helium, helium itself becomes the fuel. Through a complicated set of reactions, helium is burned into carbon and oxygen. Because of their important role in nuclear physics, helium nuclei are given a special name: alpha particles. The most common nuclei are composed of one, three, four, and five alpha particles: helium-4, carbon-12, oxygen-16, and neon-20. The nucleus with two alpha particles, beryllium-8, is conspicuously absent, and for a good reason: It is unstable in our universe.

    The instability of beryllium creates a serious bottleneck for the creation of carbon. As stars fuse helium nuclei together to become beryllium, the beryllium nuclei almost immediately decay back into their constituent parts. At any given time, the stellar core maintains a small but transient population of beryllium. These rare beryllium nuclei can interact with helium to produce carbon. Because the process ultimately involves three helium nuclei, it is called the triple-alpha reaction. But the reaction is too slow to produce the amount of carbon observed in our universe.

    To resolve this discrepancy, physicist Fred Hoyle predicted in 1953 that the carbon nucleus has to have a resonant state at a specific energy, as if it were a little bell that rang with a certain tone. Because of this resonance, the reaction rates for carbon production are much larger than they would be otherwise—large enough to explain the abundance of carbon found in our universe. The resonance was later measured in the laboratory at the predicted energy level.


    The worry is that, in other universes, with alternate strengths of the forces, the energy of this resonance could be different, and stars would not produce enough carbon. Carbon production is compromised if the energy level is changed by more than about 4 percent. This issue is sometimes called the triple-alpha fine-tuning problem.

    Fortunately, this problem has a simple solution. What nuclear physics takes away, it also gives. Suppose nuclear physics did change by enough to neutralize the carbon resonance. Among the possible changes of this magnitude, about half would have the side effect of making beryllium stable, so the loss of the resonance would become irrelevant. In such alternate universes, carbon would be produced in the more logical manner of adding together alpha particles one at a time. Helium could fuse into beryllium, which could then react with additional alpha particles to make carbon. There is no fine-tuning problem after all.

    A third instance of potential fine-tuning concerns the simplest nuclei composed of two particles: deuterium nuclei, which contain one proton and one neutron; diprotons, consisting of two protons; and dineutrons, consisting of two neutrons. In our universe, only deuterium is stable. The production of helium takes place by first combining two protons into deuterium.

    If the strong nuclear force had been even stronger, diprotons could have been stable. In this case, stars could have generated energy through the simplest and fastest of nuclear reactions, where protons combine to become diprotons and eventually other helium isotopes. It is sometimes claimed that stars would then burn through their nuclear fuel at catastrophic rates, resulting in lifetimes that are too short to support biospheres. Conversely, if the strong force had been weaker, then deuterium would be unstable, and the usual stepping stone on the pathway to heavy elements would not be available. Many scientists have speculated that the absence of stable deuterium would lead to a universe with no heavy elements at all and that such a universe would be devoid of complexity and life.

    As it turns out, stars are remarkably stable entities. Their structure adjusts automatically to burn nuclear fuel at exactly the right rate required to support themselves against the crush of their own gravity. If the nuclear reaction rates are higher, stars will burn their nuclear fuel at a lower central temperature, but otherwise they will not be so different. In fact, our universe has an example of this type of behavior. Deuterium nuclei can combine with protons to form helium nuclei through the action of the strong force. The cross section for this reaction, which quantifies the probability of its occurrence, is quadrillions of times larger than for ordinary hydrogen fusion. Nonetheless, stars in our universe burn their deuterium in a relatively uneventful manner. The stellar core has an operating temperature of 1 million kelvins, compared to the 15 million kelvins required to burn hydrogen under ordinary conditions. These deuterium-burning stars have cooler centers and are somewhat larger than the sun, but are otherwise unremarkable.

    Similarly, if the strong nuclear force were lower, stars could continue to operate in the absence of stable deuterium. A number of different processes provide paths by which stars can generate energy and synthesize heavy elements. During the first part of their lives, stars slowly contract, their central cores grow hotter and denser, and they glow with the power output of the sun. Stars in our universe eventually become hot and dense enough to ignite nuclear fusion, but in alternative universes they could continue this contraction phase and generate power by losing gravitational potential energy. The longest-lived stars could shine with a power output roughly comparable to the sun for up to 1 billion years, perhaps long enough for biological evolution to take place.

    For sufficiently massive stars, the contraction would accelerate and become a catastrophic collapse. These stellar bodies would basically go supernova. Their central temperatures and densities would increase to such large values that nuclear reactions would ignite. Many types of nuclear reactions would take place in the death throes of these stars. This process of explosive nucleosynthesis could supply the universe with heavy nuclei, in spite of the lack of deuterium.

    Once such a universe produces trace amounts of heavy elements, later generations of stars have yet another option for nuclear burning. This process, called the carbon-nitrogen-oxygen cycle, does not require deuterium as an intermediate state. Instead, carbon acts as a catalyst to instigate the production of helium. This cycle operates in the interior of the sun and provides a small fraction of its total power. In the absence of stable deuterium, the carbon-nitrogen-oxygen cycle would dominate the energy generation. And this does not exhaust the options for nuclear power generation. Stars could also produce helium through a triple-nucleon process that is roughly analogous to the triple-alpha process for carbon production. Stars thus have many channels for providing both energy and complex nuclei in alternate universes.

A fourth example of fine-tuning concerns the formation of galaxies and other large-scale structures. They were seeded by small density fluctuations produced in the earliest moments of cosmic time. After the universe had cooled down enough, these fluctuations started to grow stronger under the force of gravity, and denser regions eventually became galaxies and galaxy clusters. The fluctuations started with a small amplitude, denoted Q, equal to 0.00001. The primeval universe was thus incredibly smooth: The density, temperature, and pressure of the densest regions and of the most rarefied regions were the same to within a few parts per 100,000. The value of Q represents another possible instance of fine-tuning in the universe.

    If Q had been lower, it would have taken longer for fluctuations to grow strong enough to become cosmic structures, and galaxies would have had lower densities. If the density of a galaxy is too low, the gas in the galaxy is unable to cool. It might not ever condense into galactic disks or coalesce into stars. Low-density galaxies are not viable habitats for life. Worse, a long enough delay might have prevented galaxies from forming at all. Beginning about 4 billion years ago, the expansion of the universe began to accelerate and pull matter apart faster than it could agglomerate—a change of pace that is usually attributed to a mysterious dark energy. If Q had been too small, it could have taken so long for galaxies to collapse that the acceleration would have started before structure formation was complete, and further growth would have been suppressed. The universe could have ended up devoid of complexity, and lifeless. In order to avoid this fate, the value of Q cannot be smaller by more than a factor of 10.
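The race described above, between gravitational growth and dark-energy acceleration, can be caricatured in a few lines. In the matter era a fluctuation’s amplitude grows roughly in proportion to the cosmic scale factor, so a region with initial amplitude Q collapses after growing by about a factor of 1/Q; if dark energy halts growth first, no structure forms. The cutoff value below is illustrative, chosen only to reproduce the article’s stated factor-of-10 limit, not computed from real cosmology.

```python
# Toy model of the Q constraint: linear growth (amplitude ~ Q * a, with a
# the scale factor in units where growth starts at a = 1) racing against
# dark energy, which halts growth beyond a ~ A_DE. A_DE is an illustrative
# number tuned to the article's factor-of-10 limit, not a measured value.
Q_OURS = 1e-5
A_DE = 1.1e6   # assumed scale factor at which acceleration stops growth

def forms_structure(q, a_de=A_DE):
    a_collapse = 1.0 / q   # amplitude reaches ~1 after growing by 1/q
    return a_collapse < a_de

print(forms_structure(Q_OURS))         # True: collapse wins the race
print(forms_structure(Q_OURS / 10))    # True: smaller Q, but just in time
print(forms_structure(Q_OURS / 100))   # False: dark energy halts growth first
```

The point of the caricature is only the scaling: each factor-of-10 drop in Q delays collapse by a factor of 10 in scale factor, so a hard stop to growth imposes a floor on Q.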

    What if Q had been larger? Galaxies would have formed earlier and ended up denser. That, too, would have posed a danger for the prospects of habitability. Stars would have been much closer to one another and interacted more often. In so doing, they could have stripped planets out of their orbits and sent them hurtling into deep space. Furthermore, because stars would be closer together, the night sky would be brighter—perhaps as bright as day. If the stellar background were too dense, the combined starlight could boil the oceans of any otherwise suitable planets.

    Galactic What-If: A galaxy that formed in a hypothetical universe with large initial density fluctuations might be even more hospitable than our Milky Way. The central region is too bright and hot for life, and planetary orbits are unstable. But the outer region is similar to the solar neighborhood. In between, the background starlight from the galaxy is comparable in brightness to the sunlight received by Earth, so all planets, no matter their orbits, are potentially habitable. Fred C. Adams.

    In this case, the fine-tuning argument is not very constraining. The central regions of galaxies could indeed produce such intense background radiation that all planets would be rendered uninhabitable. But the outskirts of galaxies would always have a low enough density for habitable planets to survive. An appreciable fraction of galactic real estate remains viable even when Q is thousands of times larger than in our universe. In some cases, a galaxy might be even more hospitable. Throughout much of the galaxy, the night sky could have the same brightness as the sunshine we see during the day on Earth. Planets would receive their life-giving energy from the entire ensemble of background stars rather than from just their own sun. They could reside in almost any orbit. In an alternate universe with larger density fluctuations than our own, even Pluto would get as much daylight as Miami. As a result, a moderately dense galaxy could have more habitable planets than the Milky Way.
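The background-starlight comparison in this passage rests on inverse-square flux. Below is a minimal sketch of that reasoning; the assumed distance to neighboring stars is an invented illustrative number, not a figure from the article.

```python
import math

# Hedged sketch of the inverse-square reasoning behind "as bright as
# day": how many sun-like stars at a given distance would it take to
# match the sunlight Earth receives at 1 AU? The chosen stellar
# distance is an illustrative assumption.
AU = 1.496e11            # metres
L_sun = 3.828e26         # watts

def flux(L, d):
    """Radiative flux (W/m^2) at distance d from a source of luminosity L."""
    return L / (4 * math.pi * d**2)

sunlight = flux(L_sun, AU)           # ~1361 W/m^2, the solar constant

d_star = 1e4 * AU                    # assumed distance to background stars
stars_needed = sunlight / flux(L_sun, d_star)
print(f"{stars_needed:.0e}")         # 1e+08 sun-like stars at that distance
```

Because flux falls off as the square of distance, the required number of stars scales as (d/AU)²: in a sufficiently dense galactic region, the ensemble of background stars really can rival a planet's own sun.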

    In short, the parameters of our universe could have varied by large factors and still allowed for working stars and potentially habitable planets. The force of gravity could have been 1,000 times stronger or 1 billion times weaker, and stars would still function as long-lived nuclear burning engines. The electromagnetic force could have been stronger or weaker by factors of 100. Nuclear reaction rates could have varied over many orders of magnitude. Alternative stellar physics could have produced the heavy elements that make up the basic raw material for planets and people. Clearly, the parameters that determine stellar structure and evolution are not overly fine-tuned.

    Given that our universe does not seem to be particularly fine-tuned, can we still say that our universe is the best one for life to develop? Our current understanding suggests that the answer is no. One can readily envision a universe that is friendlier to life and perhaps more logical. A universe with stronger initial density fluctuations would make denser galaxies, which could support more habitable planets than our own. A universe with stable beryllium would have straightforward channels available for carbon production and would not need the complication of the triple-alpha process. Although these issues are still being explored, we can already say that universes have many pathways for the development of complexity and biology, and some could be even more favorable for life than our own. In light of these generalizations, astrophysicists need to reexamine the possible implications of the multiverse, including the degree of fine-tuning in our universe.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

    • stewarthoughblog 1:13 am on November 10, 2017 Permalink | Reply

      The proposition that long-lived stars could last 1by and possibly be sufficient for life to evolve is not consistent with what science has observed with our solar system, planet and the origin of life. It is estimated that the first life did not appear until almost 1by after formation, making this wide speculation.

      The idea that an increased density of stars in the galaxy could support increased habitability of planets is inconsistent with astrophysical understanding of the criticality of solar radiation to not destroy all life and all biochemicals required.

It is also widely speculative to propose that any of the fundamental constants and force tolerances can be virtually arbitrarily reassigned with minimal effect without much more serious scientific analysis. In light of the fundamental fact that the understanding of the origin of life naturalistically is a chaotic mess, it is widely speculative to conjecture the fine-tuning of the universe is not critical.


    • richardmitnick 10:13 am on November 10, 2017 Permalink | Reply

      Thanks for reading and commenting. I appreciate it.


  • richardmitnick 11:34 am on November 9, 2017 Permalink | Reply
    Tags: , But what is matter exactly, Einstein: m = E/c2. This is the great insight (not E = mc2), Frank Wilczek, , Higgs field, , Nautilus, , Physics Has Demoted Mass, , Quarks are quantum wave-particles   

    From Nautilus: “Physics Has Demoted Mass” 



    November 9, 2017
    Jim Baggott

    You’re sitting here, reading this article. Maybe it’s a hard copy, or an e-book on a tablet computer or e-reader. It doesn’t matter. Whatever you’re reading it on, we can be reasonably sure it’s made of some kind of stuff: paper, card, plastic, perhaps containing tiny metal electronic things on printed circuit boards. Whatever it is, we call it matter or material substance. It has a characteristic property that we call solidity. It has mass.

    But what is matter, exactly? Imagine a cube of ice, measuring a little over one inch (or 2.7 centimeters) in length. Imagine holding this cube of ice in the palm of your hand. It is cold, and a little slippery. It weighs hardly anything at all, yet we know it weighs something.

    Let’s make our question a little more focused. What is this cube of ice made of? And, an important secondary question: What is responsible for its mass?

    Credit below

To understand what a cube of ice is made of, we need to draw on the learning acquired by the chemists. Building on a long tradition established by the alchemists, these scientists distinguished between different chemical elements, such as hydrogen, carbon, and oxygen. Research on the relative weights of these elements and the combining volumes of gases led John Dalton and Joseph Louis Gay-Lussac to the conclusion that different chemical elements consist of atoms with different weights, which combine according to a set of rules involving whole numbers of atoms.

    The mystery of the combining volumes of hydrogen and oxygen gas to produce water was resolved when it was realized that hydrogen and oxygen are both diatomic gases, H2 and O2. Water is then a compound consisting of two hydrogen atoms and one oxygen atom, H2O.

    This partly answers our first question. Our cube of ice consists of molecules of H2O organized in a regular array. We can also make a start on our second question. Avogadro’s law states that a mole of chemical substance will contain about 6 × 10^23 discrete “particles.” Now, we can interpret a mole of substance simply as its molecular weight scaled up to gram quantities. Hydrogen (in the form of H2) has a relative molecular weight of 2, implying that each hydrogen atom has a relative atomic weight of 1. Oxygen (O2) has a relative molecular weight of 32, implying that each oxygen atom has a relative atomic weight of 16. Water (H2O) therefore has a relative molecular weight of 2 × 1 + 16 = 18.

    It so happens that our cube of ice weighs about 18 grams, which means that it represents a mole of water, more or less. According to Avogadro’s law it must therefore contain about 6 × 10^23 molecules of H2O. This would appear to provide a definitive answer to our second question. The mass of the cube of ice derives from the mass of the hydrogen and oxygen atoms present in 6 × 10^23 molecules of H2O.

But, of course, we can go further. We learned from J.J. Thomson, Ernest Rutherford, Niels Bohr, and many other physicists in the early 20th century that all atoms consist of a heavy, central nucleus surrounded by light, orbiting electrons. We subsequently learned that the central nucleus consists of protons and neutrons. The number of protons in the nucleus determines the chemical identity of the element: A hydrogen atom has one proton, an oxygen atom has eight (this is called the atomic number). But the total mass or weight of the nucleus is determined by the total number of protons and neutrons in the nucleus.

    Hydrogen still has only one (its nucleus consists of a single proton—no neutrons). The most common isotope of oxygen has—guess what?—16 (eight protons and eight neutrons). It’s obviously no coincidence that these proton and neutron counts are the same as the relative atomic weights I quoted above.

    If we ignore the light electrons, then we would be tempted to claim that the mass of the cube of ice resides in all the protons and neutrons in the nuclei of its hydrogen and oxygen atoms. Each molecule of H2O contributes 10 protons and eight neutrons, so if there are 6 × 10^23 molecules in the cube and we ignore the small difference in mass between a proton and a neutron, we conclude that the cube contains in total about 18 times this figure, or 108 × 10^23 protons and neutrons.
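The bookkeeping above, from an 18-gram ice cube down to a nucleon count, can be checked in a few lines (Avogadro's number and the isotopes are those quoted in the text):

```python
# From an 18 g ice cube to a nucleon count, following the text.
N_A = 6.022e23                       # Avogadro's number, molecules per mole

molecular_weight = 2 * 1 + 16        # H2O: two hydrogens + one oxygen-16
moles = 18 / molecular_weight        # the cube weighs about 18 g
molecules = moles * N_A              # ~6 x 10^23 molecules of H2O

protons_per_molecule = 2 * 1 + 8     # each H contributes 1, O-16 has 8
neutrons_per_molecule = 8            # only the O-16 nucleus carries neutrons
nucleons = (protons_per_molecule + neutrons_per_molecule) * molecules

print(f"{molecules:.2e}")            # ~6.02e+23 molecules
print(f"{nucleons:.2e}")             # ~1.08e+25, i.e. 108 x 10^23 nucleons
```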

    So far, so good. But we’re not quite done yet. We now know that protons and neutrons are not elementary particles. They consist of quarks. A proton contains two up quarks and a down quark, a neutron two down quarks and an up quark. And the color force binding the quarks together inside these larger particles is carried by massless gluons.

Okay, so surely we just keep going. If, once again, we approximate the masses of the up and down quarks as the same, we just multiply by three and turn 108 × 10^23 protons and neutrons into 324 × 10^23 up and down quarks. We conclude that this is where all the mass resides. Yes?

No. This is where our naïve atomic preconceptions unravel. We can look up the masses of the up and down quarks on the Particle Data Group website. The up and down quarks are so light that their masses can’t be measured precisely and only ranges are quoted. The following are all reported in units of MeV/c^2. In these units the mass of the up quark is given as 2.3 with a range from 1.8 to 3.0. The down quark is a little heavier, 4.8, with a range from 4.5 to 5.3. Compare these with the mass of the electron, about 0.51 measured in the same units.

Now comes the shock. In the same units of MeV/c^2 the proton mass is 938.3, the neutron 939.6. The combination of two up quarks and a down quark gives us only 9.4, or just 1 percent of the mass of the proton. The combination of two down quarks and an up quark gives us only 11.9, or just 1.3 percent of the mass of the neutron. About 99 percent of the masses of the proton and neutron seem to be unaccounted for. What’s gone wrong?
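The one-percent arithmetic can be verified directly from the Particle Data Group central values quoted above:

```python
# Quark and nucleon masses as quoted in the text (MeV/c^2).
m_up, m_down = 2.3, 4.8
m_proton, m_neutron = 938.3, 939.6

proton_quarks = 2 * m_up + m_down    # uud: two up quarks + one down
neutron_quarks = 2 * m_down + m_up   # udd: two down quarks + one up

# ~9.4 MeV, about 1% of the proton's 938.3 MeV
print(proton_quarks, proton_quarks / m_proton)
# ~11.9 MeV, about 1.3% of the neutron's 939.6 MeV
print(neutron_quarks, neutron_quarks / m_neutron)
```

The other 99 percent has to come from somewhere else, which is the question the rest of the article takes up.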

    To answer this question, we need to recognize what we’re dealing with. Quarks are not self-contained “particles” of the kind that the Greeks or the mechanical philosophers might have imagined. They are quantum wave-particles; fundamental vibrations or fluctuations of elementary quantum fields. The up and down quarks are only a few times heavier than the electron, and we’ve demonstrated the electron’s wave-particle nature in countless laboratory experiments. We need to prepare ourselves for some odd, if not downright bizarre behavior.

    And let’s not forget the massless gluons. Or special relativity, and E = mc2. Or the difference between “bare” and “dressed” mass. And, last but not least, let’s not forget the role of the Higgs field in the “origin” of the mass of all elementary particles. To try to understand what’s going on inside a proton or neutron we need to reach for quantum chromodynamics, the quantum field theory of the color force between quarks.

    icedmocha / Shutterstock

    Quarks and gluons possess color “charge.” Just what is this, exactly? We have no way of really knowing. We do know that color is a property of quarks and gluons and there are three types, which physicists have chosen to call red, green, and blue. But, just as nobody has ever “seen” an isolated quark or gluon, so more or less by definition nobody has ever seen a naked color charge. In fact, quantum chromodynamics (QCD) suggests that if a color charge could be exposed like this it would have a near-infinite energy. Aristotle’s maxim was that “nature abhors a vacuum.” Today we might say: “nature abhors a naked color charge.”

    So, what would happen if we could somehow create an isolated quark with a naked color charge? Its energy would go up through the roof, more than enough to conjure virtual gluons out of “empty” space. Just as the electron moving through its own self-generated electromagnetic field gathers a covering of virtual photons, so the exposed quark gathers a covering of virtual gluons. Unlike photons, the gluons themselves carry color charge and they are able to reduce the energy by, in part, masking the exposed color charge. Think of it this way: The naked quark is acutely embarrassed, and it quickly dresses itself with a covering of gluons.

    This isn’t enough, however. The energy is high enough to produce not only virtual particles (like a kind of background noise or hiss), but elementary particles, too. In the scramble to cover the exposed color charge, an anti-quark is produced which pairs with the naked quark to form a meson. A quark is never—but never—seen without a chaperone.

But this still doesn’t do it. To cover the color charge completely we would need to put the anti-quark in precisely the same place at precisely the same time as the quark. Heisenberg’s uncertainty principle won’t let nature pin down the quark and anti-quark in this way. Remember that a precisely defined position implies an infinitely uncertain momentum, and a precisely defined moment in time implies an infinitely uncertain energy. Nature has no choice but to settle for a compromise. It can’t cover the color charge completely but it can mask it with the anti-quark and the virtual gluons. The energy is at least reduced to a manageable level.

    This kind of thing also goes on inside the proton and neutron. Within the confines of their host particles, the three quarks rattle around relatively freely. But, once again, their color charges must be covered, or at least the energy of the exposed charges must be reduced. Each quark produces a blizzard of virtual gluons that pass back and forth between them, together with quark–anti-quark pairs. Physicists sometimes call the three quarks that make up a proton or a neutron “valence” quarks, as there’s enough energy inside these particles for a further sea of quark–anti-quark pairs to form. The valence quarks are not the only quarks inside these particles.

    What this means is that the mass of the proton and neutron can be traced largely to the energy of the gluons and the sea of quark–anti-quark pairs that are conjured from the color field.

    How do we know? Well, it must be admitted that it is actually really rather difficult to perform calculations using QCD. The color force is extremely strong, and the corresponding energies of color-force interactions are therefore very high. Remember that the gluons also carry color charge, so everything interacts with everything else. Virtually anything can happen, and keeping track of all the possible virtual and elementary-particle permutations is very demanding.

This means that although the equations of QCD can be written down in a relatively straightforward manner, they cannot be solved analytically, on paper. Also, the mathematical sleight-of-hand used so successfully in QED no longer applies: the color coupling is so strong that the perturbative expansion fails to converge. Physicists have had no choice but to solve the equations on a computer instead.

    Considerable progress was made with a version of QCD called “QCD-lite.” This version considered only massless gluons and up and down quarks, and further assumed that the quarks themselves are also massless (so, literally, “lite”). Calculations based on these approximations yielded a proton mass that was found to be just 10 percent lighter than the measured value.

    Let’s stop to think about that for a bit. A simplified version of QCD in which we assume that no particles have mass to start with nevertheless predicts a mass for the proton that is 90 percent right. The conclusion is quite startling. Most of the mass of the proton comes from the energy of the interactions of its constituent quarks and gluons.

    John Wheeler used the phrase “mass without mass” to describe the effects of superpositions of gravitational waves which could concentrate and localize energy such that a black hole is created. If this were to happen, it would mean that a black hole—the ultimate manifestation of super-high-density matter—had been created not from the matter in a collapsing star but from fluctuations in spacetime. What Wheeler really meant was that this would be a case of creating a black hole (mass) from gravitational energy.

    But Wheeler’s phrase is more than appropriate here. Frank Wilczek, one of the architects of QCD, used it in connection with his discussion of the results of the QCD-lite calculations. If much of the mass of a proton and neutron comes from the energy of interactions taking place inside these particles, then this is indeed “mass without mass,” meaning that we get the behavior we tend to ascribe to mass without the need for mass as a property.

Does this sound familiar? Recall that in Einstein’s seminal addendum to his 1905 paper on special relativity the equation he derived is actually m = E/c^2. This is the great insight (not E = mc^2). And Einstein was surely prescient when he wrote: “the mass of a body is a measure of its energy content.”[1] Indeed, it is. In his book The Lightness of Being, Wilczek wrote:[2]

    “If the body is a human body, whose mass overwhelmingly arises from the protons and neutrons it contains, the answer is now clear and decisive. The inertia of that body, with 95 percent accuracy, is its energy content.”

    In the fission of a U-235 nucleus, some of the energy of the color fields inside its protons and neutrons is released, with potentially explosive consequences. In the proton–proton chain involving the fusion of four protons, the conversion of two up quarks into two down quarks, forming two neutrons in the process, results in the release of a little excess energy from its color fields. Mass does not convert to energy. Energy is instead passed from one kind of quantum field to another.
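The "little excess energy" of the proton–proton chain can be made concrete with standard atomic masses; the mass values below are textbook figures, not numbers quoted in the article.

```python
# Mass bookkeeping for the proton-proton chain: four hydrogen atoms
# fuse (net) into one helium-4 atom, and the "missing" mass appears
# as released energy. Atomic masses in unified atomic mass units.
u_to_MeV = 931.494        # 1 u expressed in MeV/c^2
m_H1 = 1.007825           # atomic mass of hydrogen-1, u
m_He4 = 4.002602          # atomic mass of helium-4, u

delta_m = 4 * m_H1 - m_He4            # mass deficit, u
energy_MeV = delta_m * u_to_MeV
print(round(energy_MeV, 1))           # ~26.7 MeV per helium nucleus formed
```

That 26.7 MeV is less than one percent of the mass-energy of the four hydrogen atoms, consistent with the picture that energy is shuffled between quantum fields rather than "matter" being destroyed.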

    Where does this leave us? We’ve certainly come a long way since the ancient Greek atomists speculated about the nature of material substance, 2,500 years ago. But for much of this time we’ve held to the conviction that matter is a fundamental part of our physical universe. We’ve been convinced that it is matter that has energy. And, although matter may be reducible to microscopic constituents, for a long time we believed that these would still be recognizable as matter—they would still possess the primary quality of mass.

    Modern physics teaches us something rather different, and deeply counter-intuitive. As we worked our way ever inward—matter into atoms, atoms into sub-atomic particles, sub-atomic particles into quantum fields and forces—we lost sight of matter completely. Matter lost its tangibility. It lost its primacy as mass became a secondary quality, the result of interactions between intangible quantum fields. What we recognize as mass is a behavior of these quantum fields; it is not a property that belongs or is necessarily intrinsic to them.

    Despite the fact that our physical world is filled with hard and heavy things, it is instead the energy of quantum fields that reigns supreme. Mass becomes simply a physical manifestation of that energy, rather than the other way around.

    This is conceptually quite shocking, but at the same time extraordinarily appealing. The great unifying feature of the universe is the energy of quantum fields, not hard, impenetrable atoms. Perhaps this is not quite the dream that philosophers might have held fast to, but a dream nevertheless.


    1. Einstein, A. Does the inertia of a body depend upon its energy-content? Annalen der Physik 18 (1905).

    2. Wilczek, F. The Lightness of Being. Basic Books, New York, NY (2008).

    Photocollage credits: Physicsworld.com; Thatree Thitivongvaroon / Getty Images

    See the full article here .


  • richardmitnick 8:13 am on September 3, 2017 Permalink | Reply
    Tags: , , , , , Nautilus, Neutron star mergers are the largest hadron colliders ever conceived, , What the Rumored Neutron Star Merger Might Teach Us   

    From Nautilus: “What the Rumored Neutron Star Merger Might Teach Us” 



    Aug 29, 2017
    Dan Garisto

    In a sense, neutron star mergers are the largest hadron colliders ever conceived. Image by NASA Goddard Space Flight Center / Flickr

This month, before LIGO, the Laser Interferometer Gravitational Wave Observatory, and its European counterpart Virgo closed down for a year of upgrades, the two jointly surveyed the skies.

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation

    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    It was a small observational window—the 1st to the 25th—but that may have been enough: A rumor that LIGO has detected another gravitational wave—the fourth in two years—is making the rounds. But this time, there’s a twist: The signal might have been caused by the merger of two neutron stars instead of black holes.

    If the rumor holds true, it would be an astonishingly lucky detection. To get a sense of the moment, Nautilus spoke to David Radice, a postdoctoral researcher at Princeton who simulates neutron star mergers, “one of LIGO’s main targets,” he says.

    This potential binary neutron star merger sighting reminds me of when biologists think they’ve discovered a new species. How would you describe it?

    I do agree that this is the first time something like this has been seen.

For me, a nice analogy is one of particle colliders. In a sense, neutron star mergers are the largest hadron colliders ever conceived. Instead of smashing a few nucleons, it’s like smashing 10^60 of them. So by looking at the aftermath, we can learn a lot about fundamental physics. There is a lot that can happen when these stars collide and I don’t think we have a full knowledge of all the possibilities. I think we’ll learn a lot and see new things.

What would it mean if they detected a neutron star binary merger?

I expected this neutron star merger to be detected further in the future—the possibility that this merger has been detected earlier suggests that the rate of these events is higher than we thought. There is maybe also a counterpart—an electromagnetic wave. There are many things that you can only really do with an electromagnetic counterpart. For example, even when we have, in the far future, five detectors worldwide, we will not be able to pinpoint the exact location of the source with the precision to say: “OK, this is the host galaxy.”

    Well, if you have an electromagnetic counterpart, especially in the optical region, you can really pinpoint a galaxy and say, “This merger happened in this galaxy that has these properties.”

    What makes a neutron star binary merger different from a black hole binary merger?

    One of the main things is that in a black hole binary merger, you’re just looking at the space-time effects. In this case we are looking at this extremely dense matter. There are a lot of things that you can hope to learn about neutron star mergers. We’re looking at them for a source of gamma ray bursts, or as the origin of heavy elements, or as a way to learn about physics of very high density matter.

    One idea that has been around now for a few years is that many of the heavy elements—elements, for example, like platinum or gold—may actually be produced in neutron star mergers. Material is ejected, and because of nuclear processes, it will produce these heavy elements that are otherwise difficult to produce in normal stars.

    You’ve created visual simulations of neutron star mergers, like the one below. How much power is required to run them?

It’s publicly available—anyone can download the code and do simulations similar to those…but you need to run them on a supercomputer. It typically takes weeks on thousands of processors, but it can tell you a lot about these mergers. Now the two detectors, LIGO and Virgo, are expected to shut down and go through a series of upgrades. When they come back online, their sensitivity will be significantly boosted so we can see much farther out and learn more about each event.

    See the full article here .


  • richardmitnick 11:30 am on August 6, 2017 Permalink | Reply
    Tags: and Why Does It Seem to Flow?, , , China to launch world’s first ‘cold’ atomic clock in space ... and it’ll stay accurate for a billion years., , Nautilus, Where Did Time Come From   

    From Nautilus: “Where Did Time Come From, and Why Does It Seem to Flow?” 



    Jul 18, 2017
    John Steele

We say a river flows because it moves through space with respect to time. But time can’t move with respect to time—time is time. Image by violscraper / Flickr.

    NASA Deep Space Atomic Clock

    NIST-F2 atomic clock operated by America’s National Institute of Standards and Technology in Boulder, Colorado.

    China to launch world’s first ‘cold’ atomic clock in space … and it’ll stay accurate for a billion years.

    Paul Davies has a lot on his mind—or perhaps more accurate to say in his mind. A physicist at Arizona State University, he does research on a wide range of topics, from the abstract fields of theoretical physics and cosmology to the more concrete realm of astrobiology, the study of life in places beyond Earth. Nautilus sat down for a chat with Davies, and the discussion naturally drifted to the subject of time, a long-standing research interest of his. Here is a partial transcript of the interview, edited lightly for length and clarity.

    Is the flow of time real or an illusion?

The flow of time is an illusion, and I don’t know very many scientists and philosophers who would disagree with that, to be perfectly honest. The reason that it is an illusion is when you stop to think, what does it even mean that time is flowing? When we say something flows like a river, what you mean is an element of the river at one moment is in a different place than it was at an earlier moment. In other words, it moves with respect to time. But time can’t move with respect to time—time is time. A lot of people make the mistake of thinking that the claim that time does not flow means that there is no time, that time does not exist. That’s nonsense. Time of course exists. We measure it with clocks. Clocks don’t measure the flow of time, they measure intervals of time. Of course there are intervals of time between different events; that’s what clocks measure.

    So where does this impression of flow come from?

    Well, I like to give an analogy. Suppose I stand up, twirl around a few times, and stop. Then I have the overwhelming impression that the entire universe is rotating. I feel it to be rotating—of course I know it’s not. In the same way, I feel time is flowing, but of course I know it’s not. And presumably the explanation for this illusion has to do with something up here [in your head] and is connected with memory I guess—laying down of memories and so on. So it’s a feeling we have, but it’s not a property of time itself.

    And the other thing people contemplate: They think denying the flow of time is denying time asymmetry of the world. Of course events in the world follow a directional sequence. Drop an egg on the floor and it breaks. You don’t see eggs assembling themselves. Buildings fall down after earthquakes; they don’t rise up from heaps of rubble. [There are] many, many examples in daily life of the asymmetry of the world in time; that’s a property of the world. It’s not a property of time itself, and the explanation for that is to be sought in the very early universe and its initial conditions. It’s a whole different and perfectly respectable subject.

    Is time fundamental to the Universe?

    Time and space are the framework in which we formulate all of our current theories of the universe, but there is some question as to whether these might be emergent or secondary qualities of the universe. It could be that fundamentally the laws of the universe are formulated in terms of some sort of pre-space and time, and that space-time comes out of something more fundamental.

    Now obviously in daily life we experience a three-dimensional world and one dimension of time. But back in the Big Bang—we don’t really understand exactly how the universe was born in the Big Bang, but we think that quantum physics had something to do with it—it may be that this notion of what we would call a classical space-time, where everything seems to be sort of well-defined, maybe that was all closed out. And so maybe not just the world of matter and energy, but even space-time itself is a product of the special early stage of the universe. We don’t know that. That’s work under investigation.

    So time could be emergent?

    This dichotomy between space-time being emergent, a secondary quality—that something comes out of something more primitive, or something that is at the rock bottom of our description of nature—has been floating around since before my career. John Wheeler believed in and wrote about this in the 1950s—that there might be some pre-geometry, that would give rise to geometry just like atoms give rise to the continuum of elastic bodies—and people play around with that.

    The problem is that we don’t have any sort of experimental hands on that. You can dream up mathematical models that do this for you, but testing them looks to be pretty hopeless. I think the reason for that is that most people feel that if there is anything funny sort of underpinning space and time, any departure from our notion of a continuous space and time, that probably it would manifest itself only at the so-called Planck scale, which is [20 orders of magnitude] smaller than an atomic nucleus, and our best instruments at the moment are probing scales which are many orders of magnitude above that. It’s very hard to see how we could get at anything at the Planck scale in a controllable way.
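The "20 orders of magnitude" figure follows from the Planck length, which is fixed by three fundamental constants. A quick check, using standard SI constant values (not quoted in the interview):

```python
import math

# Planck length l_P = sqrt(hbar * G / c^3), compared with a nuclear radius.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)
nucleus = 1e-15     # typical nuclear radius, m

print(f"{l_planck:.1e}")                       # ~1.6e-35 m
print(round(math.log10(nucleus / l_planck)))   # ~20 orders of magnitude
```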

    If multiple universes exist, do they have a common clock?

    The inter-comparison of time between different observers and different places is a delicate business even within one universe. When you talk about what is the rate of a clock, say, near the surface of a black hole, it’s going to be quite different from the rate of a clock here on Earth. So there isn’t even a common time in the entire universe.
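The difference in clock rates Davies describes is quantified by the standard Schwarzschild time-dilation factor, sqrt(1 − r_s/r), where r_s is the black hole's Schwarzschild radius. A minimal sketch (my illustration, with assumed function and variable names):

```python
import math

G = 6.674_30e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.997_924_58e8  # speed of light, m/s

def clock_rate(mass_kg, r_m):
    """Ticking rate of a static clock at radius r, relative to a clock far away."""
    r_s = 2 * G * mass_kg / c**2  # Schwarzschild radius
    if r_m <= r_s:
        raise ValueError("no static clocks at or inside the horizon")
    return math.sqrt(1 - r_s / r_m)

M_sun = 1.989e30  # kg
r_s_sun = 2 * G * M_sun / c**2

# Hovering at twice the Schwarzschild radius of a solar-mass black hole,
# a clock runs at sqrt(1/2) ~ 0.707 of the far-away rate.
near_hole = clock_rate(M_sun, 2 * r_s_sun)

# On Earth's surface the same effect exists but is tiny (~7 parts in 10^10).
earth_rate = clock_rate(5.972e24, 6.371e6)
```

The point of the comparison: near the horizon the factor departs dramatically from 1, while on Earth it is a sub-nanosecond-per-second correction, so there is no single "rate of time" shared across the universe, let alone across a multiverse.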

    But now if we have a multiverse with other universes, each one in a sense comes with its own time, and you could only do an inter-comparison between two of them if there were some way of sending signals from one to the other. It depends on your multiverse model. There are many on offer, but in the one cosmologists often talk about—where you have bubbles appearing in a sort of inflating superstructure—there’s no direct way of comparing a clock rate in one bubble with clock rates in another bubble.

    What do you think are the most exciting recent advances in understanding time?

    I’m particularly drawn to the work that is done in the lab on perception of time, because I think that has the ability to make rapid advances in the coming years. For example, there are famous experiments in which people apparently make free decisions at certain moments, and yet it’s found that the decision was actually made a little bit earlier, but their own perception of time and their actions within time have been sort of edited after the event. When we observe the world, what we see is an apparently consistent and smooth narrative, but actually the brain is just being bombarded with sense data from different senses and puts all this together. It integrates it and then presents a consistent narrative, as it were, to the conscious self. And so we have this impression that we’re in charge and everything is all smoothly put together. But as a matter of fact, most of this is a narrative that’s recreated after the event.

    Where it’s particularly striking, of course, is when people respond appropriately much faster than the speed of thought. You need only think of a piano player or a tennis player to see that the impression that they are making a conscious decision—“that ball is coming in this direction; I’d better move over here and hit it”—couldn’t possibly be right. The time it takes for the signals to get to the brain and then through the motor system, back to the response, couldn’t work. And yet they still have this overwhelming impression that they’re observing the world in real time and are in control. I think all of this is pretty fascinating stuff.

    In terms of fundamental physics, is there anything especially new about time? I think the answer is not really. There are new ideas that are out there. I think there are still fundamental problems; we’ve talked about one of them: Is time an emergent property or a fundamental property? And the ultimate origin of the arrow of time, which is the asymmetry of the world in time, is still a bit contentious. We know we have to trace it back to the Big Bang, but there are still different issues swirling around there that we haven’t completely resolved. But these are sort of airy-fairy philosophical and theoretical issues, rather than anything new in the measurement of time or anything being exposed about the nature of time.

    Then of course we’re always looking to our experimental colleagues to improve time measurements. At some stage these will become so good that we’ll no doubt see some peculiar effects showing up. There’s still an outstanding fundamental issue: although the laws of physics are symmetric in time for the most part, there is one set of processes, having to do with the weak interaction, where there is apparently a fundamental breakdown of this time-reversal symmetry by a small amount. It seems to play a crucial role, and exactly how it fits into the broader picture of the universe is not settled; I think there’s still something to be played out there. So there are still experiments that can be done in particle physics that might disclose this time-reversal asymmetry in the weak interaction, and how it fits in with the arrow of time.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Welcome to Nautilus. We are delighted you joined us. We are here to tell you about science and its endless connections to our lives. Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers. We follow the story wherever it leads us. Read our essays, investigative reports, and blogs. Fiction, too. Take in our games, videos, and graphic stories. Stop in for a minute, or an hour. Nautilus lets science spill over its usual borders. We are science, connected.

  • richardmitnick 7:46 am on June 15, 2017 Permalink | Reply
    Tags: , , , Nautilus, When Neurology Becomes Theology, Wilder Penfield   

    From Nautilus: “When Neurology Becomes Theology” 



    June 15, 2017
    Robert A. Burton

    A neurologist’s perspective on research into consciousness.

    Early in my neurology residency, a 50-year-old woman insisted on being hospitalized for protection from the FBI spying on her via the TV set in her bedroom. The woman’s physical examination, lab tests, EEGs, scans, and formal neuropsychological testing revealed nothing unusual. Other than being visibly terrified of the TV monitor in the ward solarium, she had no other psychiatric symptoms or past psychiatric history. Neither did anyone else in her family, though she had no recollection of her mother, who had died when the patient was only 2.

    The psychiatry consultant favored the early childhood loss of her mother as a potential cause of a mid-life major depressive reaction. The attending neurologist was suspicious of an as yet undetectable degenerative brain disease, though he couldn’t be more specific. We residents were equally divided between the two possibilities.

    Fortunately an intern, a super-sleuth more interested in data than speculation, was able to locate her parents’ death certificates. The patient’s mother had died in a state hospital of Huntington’s disease—a genetic degenerative brain disease. (At that time such illnesses were often kept secret from the rest of the family.) Case solved. The patient was a textbook example of psychotic behavior preceding the cognitive decline and movement disorders characteristic of Huntington’s disease.

    WHERE’S THE MIND?: Wilder Penfield spent decades studying how brains produce the experience of consciousness, but concluded “There is no good evidence, in spite of new methods, that the brain alone can carry out the work that the mind does.” Montreal Neurological Institute

    As a fledgling neurologist, I’d already seen a wide variety of strange mental states arising out of physical diseases. But on this particular day, I couldn’t wrap my mind around a gene mutation generating an isolated feeling of being spied on by the FBI. How could a localized excess of amino acids in a segment of DNA be transformed into paranoia?

    Though I didn’t know it at the time, I had run headlong into the “hard problem of consciousness,” the enigma of how physical brain mechanisms create purely subjective mental states. In the subsequent 50 years, what was once fodder for neurologists’ late night speculations has mushroomed into the pre-eminent question in the philosophy of mind. As an intellectual challenge, there is no equal to wondering how subatomic particles, mindless cells, synapses, and neurotransmitters create the experience of red, the beauty of a sunset, the euphoria of lust, the transcendence of music, or in this case, intractable paranoia.

    Neuroscientists have long known which general areas of the brain and their connections are necessary for the state of consciousness. By observing the effects of both localized and generalized brain insults such as anoxia and anesthesia, none of us seriously doubt that consciousness arises from discrete brain mechanisms. Because these mechanisms are consistent with general biological principles, it’s likely that, with further technical advances, we will uncover how the brain generates consciousness.

    However, such knowledge doesn’t translate into an explanation for the what of consciousness—that state of awareness of one’s surroundings and self, the experience of one’s feelings and thoughts. Imagine a hypothetical where you could mix nine parts oxytocin, 17 parts serotonin, and 11 parts dopamine into a solution that would make 100 percent of people feel a sense of infatuation 100 percent of the time. Knowing the precise chemical trigger for the sensation of infatuation (the how) tells you little about the nature of the resulting feeling (the what).

    Over my career, I’ve gathered a neurologist’s working knowledge of the physiology of sensations. I realize neuroscientists have identified neural correlates for emotional responses. Yet I remain ignorant of what sensations and responses are at the level of experience. I know the brain creates a sense of self, but that tells me little about the nature of the sensation of “I-ness.” If the self is a brain-generated construct, I’m still left wondering who or what is experiencing the illusion of being me. Similarly, if the feeling of agency is an illusion, as some philosophers of mind insist, that doesn’t help me understand the essence of my experience of willfully typing this sentence.

    Slowly, and with much resistance, it’s dawned on me that the pursuit of the nature of consciousness, no matter how cleverly couched in scientific language, is more like metaphysics and theology. It is driven by the same urges that made us dream up gods and demons, souls and afterlife. The human urge to understand ourselves is eternal, and how we frame our musings always depends upon prevailing cultural mythology. In a scientific era, we should expect philosophical and theological ruminations to be couched in the language of physical processes. We argue by inference and analogy, dragging explanations from other areas of science such as quantum physics, complexity, information theory, and math into a subjective domain. Theories of consciousness are how we wish to see ourselves in the world, and how we wish the world might be.

    My first hint of the interaction between religious feelings and theories of consciousness came from Montreal Neurological Institute neurosurgeon Wilder Penfield’s 1975 book, Mystery of the Mind: A Critical Study of Consciousness and the Human Brain. One of the great men of modern neuroscience, Penfield spent several decades stimulating the brains of conscious, non-anesthetized patients and noting their descriptions of the resulting mental states, including long-lost bits of memory, dreamy states, déjà vu, feelings of strangeness, and otherworldliness. What was most startling about Penfield’s work was his demonstration that sensations that normally qualify how we feel about our thoughts can occur in the absence of any conscious thought. For example, he could elicit feelings of familiarity and strangeness without the patient thinking of anything to which the feeling might apply. His ability to spontaneously evoke pure mental states was proof positive that these states arise from basic brain mechanisms.

    And yet, here’s Penfield’s conclusion to his end-of-career magnum opus on the nature of the mind: “There is no good evidence, in spite of new methods, that the brain alone can carry out the work that the mind does.” How is this possible? How could a man who had single-handedly elicited so much of the fabric of subjective states of mind decide that there was something to the mind beyond what the brain did?

    In the last paragraph of his book, Penfield explains, “In ordinary conversation, the ‘mind’ and ‘the spirit of man’ are taken to be the same. I was brought up in a Christian family and I have always believed, since I first considered the matter … that there is a grand design in which all conscious individuals play a role … Since a final conclusion … is not likely to come before the youngest reader of this book dies, it behooves each one of us to adopt for himself a personal assumption (belief, religion), and a way of life without waiting for a final word from science on the nature of man’s mind.”

    Front and center is Penfield’s observation that, in ordinary conversation, the mind is synonymous with the spirit of man. Further, he admits that, in the absence of scientific evidence, all opinions about the mind are in the realm of belief and religion. If Penfield is even partially correct, we shouldn’t be surprised that any theory of the “what” of consciousness would be either intentionally or subliminally infused with one’s metaphysics and religious beliefs.

    To see how this might work, take a page from Penfield’s brain stimulation studies where he demonstrates that the mental sensations of consciousness can occur independently from any thought that they seem to qualify. For instance, conceptualize thought as a mental calculation and a visceral sense of the calculation. If you add 3 + 3, you compute 6, and simultaneously have the feeling that 6 is the correct answer. Thoughts feel right, wrong, strange, beautiful, wondrous, reasonable, far-fetched, brilliant, or stupid. Collectively these widely disparate mental sensations constitute much of the contents of consciousness. But we have no control over the mental sensations that color our thoughts. No one can will a sense of understanding or the joy of an a-ha! moment. We don’t tell ourselves to make an idea feel appealing; it just is. Yet these sensations determine the direction of our thoughts. If a thought feels irrelevant, we ignore it. If it feels promising, we pursue it. Our lines of reasoning are predicated upon how thoughts feel.

    No image caption or credit.

    Shortly after reading Penfield’s book, I had the good fortune to spend a weekend with theoretical physicist David Bohm. Bohm took a great deal of time arguing for a deeper and interconnected hidden reality (his theory of implicate order). Though I had difficulty following his quantum theory-based explanations, I vividly remember him advising me that the present-day scientific approach of studying parts rather than the whole could never lead to any final answers about the nature of consciousness. According to him, all is inseparable and no part can be examined in isolation.

    In an interview in which he was asked to justify his unorthodox view of scientific method, Bohm responded, “My own interest in science is not entirely separate from what is behind an interest in religion or in philosophy—that is to understand the whole of the universe, the whole of matter, and how we originate.” If we were reading Bohm’s argument as a literary text, we would factor in his Jewish upbringing, his tragic mistreatment during the McCarthy era, the lack of general acceptance of his idiosyncratic take on quantum physics, his bouts of depression, and the close relationship between his scientific and religious interests.

    Many of today’s myriad explanations for how consciousness arises are compelling. But once we enter the arena of the nature of consciousness, there are no outright winners.

    Christof Koch, the chief scientific officer of the Allen Institute for Brain Science in Seattle, explains that a “system is conscious if there’s a certain type of complexity. And we live in a universe where certain systems have consciousness. It’s inherent in the design of the universe.”

    According to Daniel Dennett, professor of philosophy at Tufts University and author of Consciousness Explained and many other books on science and philosophy, consciousness is nothing more than a “user-illusion” arising out of underlying brain mechanisms. He argues that believing consciousness plays a major role in our thoughts and actions is the biological equivalent of being duped into believing that the icons of a smartphone app are doing the work of the underlying computer programs represented by the icons. He feels no need to postulate any additional physical component to explain the intrinsic qualities of our subjective experience.

    Meanwhile, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology, tells us consciousness “is how information feels when it is being processed in certain very complex ways.” He writes that “external reality is completely described by mathematics. If everything is mathematical, then, in principle, everything is understandable.” Rudolph E. Tanzi, a professor of neurology at Harvard University, admits, “To me the primal basis of existence is awareness and everything including ourselves and our brains are products of awareness.” He adds, “As a responsible scientist, one hypothesis which should be tested is that memory is stored outside the brain in a sea of consciousness.”

    Each argument, taken in isolation, seems logical, internally consistent, yet is at odds with the others. For me, the thread that connects these disparate viewpoints isn’t logic and evidence, but their overall intent. Belief without evidence is Richard Dawkins’ idea of faith. “Faith is belief in spite of, even perhaps because of, the lack of evidence.” These arguments are best read as differing expressions of personal faith.

    For his part, Dennett is an outspoken atheist and fervent critic of the excesses of religion. “I have absolutely no doubt that secular and scientific vision is right and deserves to be endorsed by everybody, and as we have seen over the last few thousand years, superstitious and religious doctrines will just have to give way.” As the basic premise of atheism is to deny that for which there is no objective evidence, he is forced to avoid directly considering the nature of purely subjective phenomena. Instead he settles on describing the contents of consciousness as illusions, resulting in the circularity of using the definition of mental states (illusions) to describe the general nature of these states.

    The problem compounds itself. Dennett is fond of pointing out (correctly) that there is no physical manifestation of “I,” no ghost in the machine or little homunculus that witnesses and experiences the goings on in the brain. If so, we’re still faced with asking what/who, if anything, is experiencing consciousness? All roads lead back to the hard problem of consciousness.

    Though tacitly agreeing with those who contend that we don’t yet understand the nature of consciousness, Dennett argues that we are making progress. “We haven’t yet succeeded in fully conceiving how meaning could exist in a material world … or how consciousness works, but we’ve made progress: The questions we’re posing and addressing now are better than the questions of yesteryear. We’re hot on the trail of the answers.”

    By contrast, Koch is upfront in correlating his religious upbringing with his life-long pursuit of the nature of consciousness. Raised as a Catholic, he describes being torn between two contradictory views of the world—the Sunday view reflected by his family and church, and the weekday view as reflected in his work as a scientist (the sacred and the profane).

    In an interview with Nautilus, Koch said, “For reasons I don’t understand and don’t comprehend, I find myself in a universe that had to become conscious, reflecting upon itself.” He added, “The God I now believe in is closer to the God of Spinoza than it is to Michelangelo’s paintings or the God of the Old Testament, a god that resides in this mystical notion of all-nothingness.” Koch admitted, “I’m not a mystic. I’m a scientist, but this is a feeling I have.” In short, Koch exemplifies a truth seldom admitted—that mental states such as a mystical feeling shape how one thinks about and goes about studying the universe, including mental states such as consciousness.

    Both Dennett and Koch have spent a lifetime considering the problem of consciousness; though contradictory, each point of view has a separate appeal. And I appreciate much of Dennett and Koch’s explorations in the same way that I can mull over Aquinas and Spinoza without necessarily agreeing with them. One can enjoy the pursuit without believing in or expecting answers. After all these years without any personal progress, I remain moved by the essential nature of the quest, even if it translates into Sisyphus endlessly pushing his rock up the hill.

    The spectacular advances of modern science have generated a mindset that makes potential limits to scientific inquiry intuitively difficult to grasp. Again and again we are given examples of seemingly insurmountable problems that yield to previously unimaginable answers. Just as some physicists believe we will one day have a Theory of Everything, many cognitive scientists believe that consciousness, like any physical property, can be unraveled. Overlooked in this optimism is the ultimate barrier: The nature of consciousness is in the mind of the beholder, not in the eye of the observer.

    It is likely that science will tell us how consciousness occurs. But that’s it. Although the what of consciousness is beyond direct inquiry, the urge to explain will persist. It is who we are and what we do.

    See the full article here .

