Tagged: Quanta

  • richardmitnick 2:59 pm on December 6, 2019
    Tags: "Black Hole Singularities Are as Inescapable as Expected", , , , , Quanta   

    From Quanta Magazine: “Black Hole Singularities Are as Inescapable as Expected” 



    December 2, 2019
    Steve Nadis

    In January 1916, Karl Schwarzschild, a German physicist who was stationed as a soldier on the eastern front, produced the first exact solution to the equations of general relativity, Albert Einstein’s radical, two-month-old theory of gravity. General relativity portrayed gravity not as an attractive force, as it had long been understood, but rather as the effect of curved space and time. Schwarzschild’s solution revealed the curvature of space-time around a stationary ball of matter.

    Curiously, Schwarzschild noticed that if this matter were confined within a small enough radius, there would be a point of infinite curvature and density — a “singularity” — at the center.

    Infinities cropping up in physics are usually cause for alarm, and neither Einstein, upon learning of the soldier’s result, nor Schwarzschild himself believed that such objects really exist. But starting in the 1970s, evidence mounted that the universe contains droves of these entities — dubbed “black holes” because their gravity is so strong that nothing going into them, not even light, can come out. The nature of the singularities inside black holes has been a mystery ever since.

    Recently, a team of researchers affiliated with Harvard University’s Black Hole Initiative (BHI) made significant progress on this puzzle. Paul Chesler, Ramesh Narayan and Erik Curiel probed the interiors of theoretical black holes that resemble those studied by astronomers, seeking to determine what kind of singularity is found inside. A singularity is not a place where quantities really become infinite, but “a place where general relativity breaks down,” Chesler explained. At such a point, general relativity is thought to give way to a more exact, as yet unknown, quantum-scale description of gravity. But there are three different ways in which Einstein’s theory can go haywire, leading to three different kinds of possible singularities. “Knowing when and where general relativity breaks down is useful in knowing what theory [of quantum gravity] lies beyond it,” Chesler said.

    The BHI group built on a major advance achieved in 1963, when the mathematician Roy Kerr solved Einstein’s equations for a spinning black hole — a more realistic situation than the one Schwarzschild took on, since practically everything in the universe rotates. This problem was harder than Schwarzschild’s because rotating objects bulge around the middle and therefore lack spherical symmetry. Kerr’s solution [Physical Review Letters] unambiguously described the region outside a spinning black hole, but not its interior.

    Kerr’s black hole was still somewhat unrealistic, as it occupied a space devoid of matter. This, the BHI researchers realized, had the effect of making the solution unstable; the addition of even a single particle could drastically change the black hole’s interior space-time geometry. In an attempt to make their model more realistic and more stable, they sprinkled matter of a special kind called an “elementary scalar field” in and around their theoretical black hole. And whereas the original Kerr solution concerned an “eternal” black hole that has always been there, the black holes in their analysis formed from gravitational collapse, like the ones that abound in the cosmos.

    First, Chesler, Narayan and Curiel tested their methodology on a charged, non-spinning, spherical black hole formed from the gravitational collapse of matter in an elementary scalar field. They detailed their findings in a paper posted on the scientific preprint site arxiv.org in February. Next, Chesler tackled the more complicated equations pertaining to a similarly formed rotating black hole, reporting his solo results three months later.

    Their analyses showed that both types of black holes contain two distinct kinds of singularities. A black hole is encased within a sphere called an event horizon: Once matter or light crosses this invisible boundary and enters the black hole, it cannot escape. Inside the event horizon, stationary charged black holes and rotating black holes are known to have a second spherical surface of no return, called the inner horizon. Chesler and his colleagues found that for the black holes they studied, a “null” singularity inevitably forms at the inner horizon, a finding consistent with prior results. Matter and radiation can pass through this kind of singularity for most of the black hole’s lifetime, Chesler explained, but as time goes on the space-time curvature grows exponentially, “becoming infinite at infinitely late times.”

    The physicists most wanted to find out whether their quasi-realistic black holes have a central singularity — a fact that had only been established for certain for simple Schwarzschild black holes. And if there is a central singularity, they wanted to determine whether it is “spacelike” or “timelike.” These terms derive from the fact that once a particle approaches a spacelike singularity, it is not possible to evolve the equations of general relativity forward in time; evolution is only allowed along the space direction. Conversely, a particle approaching a timelike singularity will not inexorably be drawn inside; it still has a possible future and can therefore move forward in time, although its position in space is fixed. Outside observers cannot see spacelike singularities because light waves always move into them and never come out. Light waves can come out of timelike singularities, however, making them visible to outsiders.

    Of these two types, a spacelike singularity may be preferable to physicists because general relativity only breaks down at the point of singularity itself. For a timelike singularity, the theory falters everywhere around that point. A physicist has no way of predicting, for instance, whether radiation will emerge from a timelike singularity and what its intensity or amplitude might be.

    The group found that for both types of black holes they examined, there is indeed a central singularity, and it is always spacelike. That was assumed to be the case by many, if not most, astrophysicists who held an opinion, Chesler noted, “but it was not known for certain.”

    The physicist Amos Ori, a black hole expert at the Technion in Haifa, Israel, said of Chesler’s new paper, “To the best of my knowledge, this is the first time that such a direct derivation has been given for the occurrence of a spacelike singularity inside spinning black holes.”

    Gaurav Khanna, a physicist at the University of Massachusetts, Dartmouth, who also investigates black hole singularities, called the BHI team’s studies “great progress — a quantum leap beyond previous efforts in this area.”

    While Chesler and his collaborators have strengthened the case that astrophysical black holes have spacelike singularities at their cores, they haven’t proved it yet. Their next step is to make more realistic calculations that go beyond elementary scalar fields and incorporate messier forms of matter and radiation.

    Chesler stressed that the singularities that appear in black hole calculations should disappear when physicists craft a quantum theory of gravity that can handle the extreme conditions found at those points. According to Chesler, the act of pushing Einstein’s theory to its limits and seeing exactly how it fails “can guide you in constructing the next theory.”

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 2:16 pm on June 10, 2019
    Tags: "Quantum Leaps, Bohr and Heisenberg began to develop a mathematical theory of these quantum phenomena in the 1920s., In this way what seemed to the quantum pioneers to be unavoidable randomness in the physical world is now shown to be amenable to control., Long Assumed to Be Instantaneous, Quanta, , Take Time", The abruptness of quantum jumps was a central pillar of the way quantum theory was formulated by Niels Bohr Werner Heisenberg and their colleagues in the mid-1920s, The “quantum leap.”, the Copenhagen interpretation, The researchers could spot when a quantum jump was about to appear- “catch” it halfway through and reverse it sending the system back to the state in which it started.,   

    From Quanta: “Quantum Leaps, Long Assumed to Be Instantaneous, Take Time” 


    June 5, 2019
    Philip Ball

    A quantum leap is a rapidly gradual process. Quanta Magazine; source: qoncha

    When quantum mechanics was first developed a century ago as a theory for understanding the atomic-scale world, one of its key concepts was so radical, bold and counter-intuitive that it passed into popular language: the “quantum leap.” Purists might object that the common habit of applying this term to a big change misses the point that jumps between two quantum states are typically tiny, which is precisely why they weren’t noticed sooner. But the real point is that they’re sudden. So sudden, in fact, that many of the pioneers of quantum mechanics assumed they were instantaneous.

    A new experiment [Nature] shows that they aren’t. By making a kind of high-speed movie of a quantum leap, the work reveals that the process is as gradual as the melting of a snowman in the sun. “If we can measure a quantum jump fast and efficiently enough,” said Michel Devoret of Yale University, “it is actually a continuous process.” The study, which was led by Zlatko Minev, a graduate student in Devoret’s lab, was published on Monday in Nature [noted above]. Already, colleagues are excited. “This is really a fantastic experiment,” said the physicist William Oliver of the Massachusetts Institute of Technology, who wasn’t involved in the work. “Really amazing.”

    But there’s more. With their high-speed monitoring system, the researchers could spot when a quantum jump was about to appear, “catch” it halfway through, and reverse it, sending the system back to the state in which it started. In this way, what seemed to the quantum pioneers to be unavoidable randomness in the physical world is now shown to be amenable to control. We can take charge of the quantum.

    All Too Random

    The abruptness of quantum jumps was a central pillar of the way quantum theory was formulated by Niels Bohr, Werner Heisenberg and their colleagues in the mid-1920s, in a picture now commonly called the Copenhagen interpretation. Bohr had argued earlier that the energy states of electrons in atoms are “quantized”: Only certain energies are available to them, while all those in between are forbidden. He proposed that electrons change their energy by absorbing or emitting quantum particles of light — photons — that have energies matching the gap between permitted electron states. This explained why atoms and molecules absorb and emit very characteristic wavelengths of light — why many copper salts are blue, say, and sodium lamps yellow.
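
    Bohr’s rule can be sketched numerically (a toy illustration of our own, not a calculation from the article): the energy gap between two permitted electron states fixes the wavelength of the photon that is emitted or absorbed, via E = hc/λ.

```python
H_C_EV_NM = 1239.84  # Planck's constant times the speed of light, in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy (eV) carried by light of a given wavelength (nm)."""
    return H_C_EV_NM / wavelength_nm

def gap_wavelength_nm(gap_ev: float) -> float:
    """Wavelength (nm) of the photon matching an energy gap (eV)."""
    return H_C_EV_NM / gap_ev

# The yellow sodium D lines sit near 589 nm, so the transition behind a
# sodium lamp's glow spans a gap of roughly 2.1 eV.
print(round(photon_energy_ev(589.0), 2))  # → 2.1
```

    Because only the gaps actually present in an atom’s ladder of levels can be bridged, each element’s emission spectrum is a fingerprint of sharp lines.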

    Bohr and Heisenberg began to develop a mathematical theory of these quantum phenomena in the 1920s. Heisenberg’s quantum mechanics enumerated all the allowed quantum states, and implicitly assumed that jumps between them are instant — discontinuous, as mathematicians would say. “The notion of instantaneous quantum jumps … became a foundational notion in the Copenhagen interpretation,” historian of science Mara Beller has written.


    Another of the architects of quantum mechanics, the Austrian physicist Erwin Schrödinger, hated that idea. He devised what seemed at first to be an alternative to Heisenberg’s math of discrete quantum states and instant jumps between them. Schrödinger’s theory represented quantum particles in terms of wavelike entities called wave functions, which changed only smoothly and continuously over time, like gentle undulations on the open sea. Things in the real world don’t switch suddenly, in zero time, Schrödinger thought — discontinuous “quantum jumps” were just a figment of the mind. In a 1952 paper called “Are there quantum jumps?,” [BJPS] Schrödinger answered with a firm “no,” his irritation all too evident in the way he called them “quantum jerks.”

    The argument wasn’t just about Schrödinger’s discomfort with sudden change. The problem with a quantum jump was also that it was said to just happen at a random moment — with nothing to say why that particular moment. It was thus an effect without a cause, an instance of apparent randomness inserted into the heart of nature. Schrödinger and his close friend Albert Einstein could not accept that chance and unpredictability reigned at the most fundamental level of reality. According to the German physicist Max Born, the whole controversy was therefore “not so much an internal matter of physics, as one of its relation to philosophy and human knowledge in general.” In other words, there’s a lot riding on the reality (or not) of quantum jumps.

    Seeing Without Looking

    To probe further, we need to see quantum jumps one at a time. In 1986, three teams of researchers reported them happening [Physical Review Letters] in individual atoms suspended in space by electromagnetic fields. The atoms flipped, at random moments, between a “bright” state, where they could emit a photon of light, and a “dark” state that did not emit, remaining in one state or the other for periods of between a few tenths of a second and a few seconds before jumping again. Since then, such jumps have been seen in various systems, ranging from photons switching between quantum states to atoms in solid materials jumping between quantized magnetic states. In 2007 a team in France reported [Nature] jumps that correspond to what they called “the birth, life and death of individual photons.”

    In these experiments the jumps indeed looked abrupt and random — there was no telling, as the quantum system was monitored, when they would happen, nor any detailed picture of what a jump looked like. The Yale team’s setup, by contrast, allowed them to anticipate when a jump was coming, then zoom in close to examine it. The key to the experiment is the ability to collect just about all of the available information about it, so that none leaks away into the environment before it can be measured. Only then can they follow single jumps in such detail.

    The quantum systems the researchers used are much larger than atoms, consisting of wires made from a superconducting material — sometimes called “artificial atoms” because they have discrete quantum energy states analogous to the electron states in real atoms. Jumps between the energy states can be induced by absorbing or emitting a photon, just as they are for electrons in atoms.

    Michel Devoret (left) and Zlatko Minev in front of the cryostat holding their experiment. Yale Quantum Institute

    Devoret and colleagues wanted to watch a single artificial atom jump between its lowest-energy (ground) state and an energetically excited state. But they couldn’t monitor that transition directly, because making a measurement on a quantum system destroys the coherence of the wave function — its smooth wavelike behavior — on which quantum behavior depends. To watch the quantum jump, the researchers had to retain this coherence. Otherwise they’d “collapse” the wave function, which would place the artificial atom in one state or the other. This is the problem famously exemplified by Schrödinger’s cat, which is allegedly placed in a coherent quantum “superposition” of live and dead states but becomes only one or the other when observed.

    To get around this problem, Devoret and colleagues employ a clever trick involving a second excited state. The system can reach this second state from the ground state by absorbing a photon of a different energy. The researchers probe the system in a way that only ever tells them whether the system is in this second “bright” state, so named because it’s the one that can be seen. The state to and from which the researchers are actually looking for quantum jumps is, meanwhile, the “dark” state — because it remains hidden from direct view.

    The researchers placed the superconducting circuit in an optical cavity (a chamber in which photons of the right wavelength can bounce around) so that, if the system is in the bright state, the way that light scatters in the cavity changes. Every time the bright state decays by emission of a photon, the detector gives off a signal akin to a Geiger counter’s “click.”

    The key here, said Oliver, is that the measurement provides information about the state of the system without interrogating that state directly. In effect, it asks whether the system is in, or is not in, the ground and dark states collectively. That ambiguity is crucial for maintaining quantum coherence during a jump between these two states. In this respect, said Oliver, the scheme that the Yale team has used is closely related to those employed for error correction in quantum computers. There, too, it’s necessary to get information about quantum bits without destroying the coherence on which the quantum computation relies. Again, this is done by not looking directly at the quantum bit in question but probing an auxiliary state coupled to it.

    The strategy reveals that quantum measurement is not about the physical perturbation induced by the probe but about what you know (and what you leave unknown) as a result. “Absence of an event can bring as much information as its presence,” said Devoret. He compares it to the Sherlock Holmes story in which the detective infers a vital clue from the “curious incident” in which a dog did not do anything in the night. Borrowing from a different (but often confused) dog-related Holmes story, Devoret calls it “Baskerville’s Hound meets Schrödinger’s Cat.”

    To Catch a Jump

    The Yale team saw a series of clicks from the detector, each signifying a decay of the bright state, arriving typically every few microseconds. This stream of clicks was interrupted every few hundred microseconds, apparently at random, by a hiatus in which there were no clicks. Then, after a period of typically 100 microseconds or so, the clicks resumed. During that silent time, the system had presumably undergone a transition to the dark state, since that’s the only thing that can prevent flipping back and forth between the ground and bright states.

    So here in these switches from “click” to “no-click” states are the individual quantum jumps — just like those seen in the earlier experiments on trapped atoms and the like. However, in this case Devoret and colleagues could see something new.

    Before each jump to the dark state, there would typically be a short spell where the clicks seemed suspended: a pause that acted as a harbinger of the impending jump. “As soon as the length of a no-click period significantly exceeds the typical time between two clicks, you have a pretty good warning that the jump is about to occur,” said Devoret.
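
    The warning rule lends itself to a small simulation (a hedged sketch of our own, with made-up rates rather than the Yale team’s actual numbers): clicks arrive at random with some typical spacing, and a silent stretch much longer than that spacing flags an impending jump.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

TYPICAL_GAP_US = 3.0   # assumed mean spacing between clicks (microseconds)
WARNING_FACTOR = 8.0   # a gap this many times the mean triggers a warning

def click_times(n_clicks, dark_start, dark_len):
    """Random click stream with one silent (dark-state) interval."""
    t, times = 0.0, []
    while len(times) < n_clicks:
        t += random.expovariate(1.0 / TYPICAL_GAP_US)
        if dark_start <= t < dark_start + dark_len:
            t = dark_start + dark_len  # no clicks while the system is dark
        times.append(t)
    return times

def first_warning(times, threshold=WARNING_FACTOR * TYPICAL_GAP_US):
    """Time of the last click before the first suspiciously long silence."""
    for earlier, later in zip(times, times[1:]):
        if later - earlier > threshold:
            return earlier
    return None

times = click_times(200, dark_start=300.0, dark_len=100.0)
print(first_warning(times) is not None)  # → True
```

    In this toy version the long dark gap is always caught; the experimental subtlety is doing the same in real time, fast enough to act before the jump completes.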

    That warning allowed the researchers to study the jump in greater detail. When they saw this brief pause, they switched off the input of photons driving the transitions. Surprisingly, the transition to the dark state still happened even without photons driving it — it is as if, by the time the brief pause sets in, the fate is already fixed. So although the jump itself comes at a random time, there is also something deterministic in its approach.

    With the photons turned off, the researchers zoomed in on the jump with fine-grained time resolution to see it unfold. Does it happen instantaneously — the sudden quantum jump of Bohr and Heisenberg? Or does it happen smoothly, as Schrödinger insisted it must? And if so, how?

    The team found that jumps are in fact gradual. That’s because, even though a direct observation could reveal the system only as being in one state or another, during a quantum jump the system is in a superposition, or mixture, of these two end states. As the jump progresses, a direct measurement would be increasingly likely to yield the final rather than the initial state. It’s a bit like the way our decisions may evolve over time. You can only either stay at a party or leave it — it’s a binary choice — but as the evening wears on and you get tired, the question “Are you staying or leaving?” becomes increasingly likely to get the answer “I’m leaving.”

    The techniques developed by the Yale team reveal the changing mindset of a system during a quantum jump. Using a method called tomographic reconstruction, the researchers could figure out the relative weightings of the dark and ground states in the superposition. They saw these weights change gradually over a period of a few microseconds. That’s pretty fast, but it’s certainly not instantaneous.
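
    A toy model (ours, not the team’s tomographic reconstruction) captures the shape of such a gradual jump: the dark-state weight in the superposition climbs smoothly from 0 to 1 over an assumed few-microsecond window.

```python
import math

JUMP_DURATION_US = 4.0  # assumed length of the transition window (microseconds)

def dark_weight(t_us: float) -> float:
    """Dark-state probability |b|^2 in a|ground> + b|dark>, rising smoothly 0 -> 1."""
    x = min(max(t_us / JUMP_DURATION_US, 0.0), 1.0)
    return math.sin(0.5 * math.pi * x) ** 2  # half a Rabi-style rotation

# A direct measurement mid-jump would find "dark" with steadily growing odds.
for t in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(f"t={t:3.1f} us  ground={1.0 - dark_weight(t):.2f}  dark={dark_weight(t):.2f}")
```

    The binary outcome of any single measurement is preserved; what evolves continuously is the probability of each outcome, just as in the party analogy above.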

    What’s more, this electronic system is so fast that the researchers could “catch” the switch between the two states as it is happening, then reverse it by sending a pulse of photons into the cavity to boost the system back to the dark state. They can persuade the system to change its mind and stay at the party after all.

    Flash of Insight

    The experiment shows that quantum jumps “are indeed not instantaneous if we look closely enough,” said Oliver, “but are coherent processes”: real physical events that unfold over time.

    The gradualness of the “jump” is just what is predicted by a form of quantum theory called quantum trajectories theory, which can describe individual events like this. “It is reassuring that the theory matches perfectly with what is seen,” said David DiVincenzo, an expert in quantum information at Aachen University in Germany, “but it’s a subtle theory, and we are far from having gotten our heads completely around it.”

    The possibility of predicting quantum jumps just before they occur, said Devoret, makes them somewhat like volcanic eruptions. Each eruption happens unpredictably, but some big ones can be anticipated by watching for the atypically quiet period that precedes them. “To the best of our knowledge, this precursory signal [to a quantum jump] has not been proposed or measured before,” he said.

    Devoret said that an ability to spot precursors to quantum jumps might find applications in quantum sensing technologies. For example, “in atomic clock measurements, one wants to synchronize the clock to the transition frequency of an atom, which serves as a reference,” he said. But if you can detect right at the start that the transition is about to happen, rather than having to wait for it to be completed, the synchronization can be faster and therefore more precise in the long run.

    DiVincenzo thinks that the work might also find applications in error correction for quantum computing, although he sees that as “quite far down the line.” Achieving the level of control needed for dealing with such errors, though, will require this kind of exhaustive harvesting of measurement data — rather like the data-intensive situation in particle physics, said DiVincenzo.

    The real value of the result is not, though, in any practical benefits; it’s a matter of what we learn about the workings of the quantum world. Yes, it is shot through with randomness — but no, it is not punctuated by instantaneous jerks. Schrödinger, aptly enough, was both right and wrong at the same time.

    See the full article here.



  • richardmitnick 1:38 pm on August 2, 2017
    Tags: G2 - star or gas cloud? Settled, it is a star, Quanta

    From Quanta: “Black-Hole Hunter Takes Aim at Einstein” 


    July 27, 2017
    Joshua Sokol

    Andrea Ghez at the W.M. Keck Observatory Headquarters in Waimea, Hawaii. John Hook for Quanta Magazine.

    If you cast an observational lasso into the center of the Milky Way galaxy and pull it closed, you will find a dense, dark lump: a mass totaling some four million suns, crammed into a space no wider than twice Pluto’s orbit in our solar system.

    In recent years, astronomers have come to agree that inside this region is a supermassive black hole, and that similar black holes lurk at the cores of nearly all other galaxies as well. And for those revelations, they give a lot of credit to Andrea Ghez.

    Since 1995, Ghez, an astrophysicist at the University of California, Los Angeles, has used the W.M. Keck telescope on Mauna Kea in Hawaii to see fine details at the center of the galaxy. The observations that Ghez has made of stars racing around the Milky Way’s core (alongside those of rival Reinhard Genzel, an astrophysicist at the Max Planck Institute for Astrophysics in Garching, Germany) have proven to most astronomers that the central object can be nothing but a black hole. But to be able to see these fine details, Ghez had to become a pioneering user of adaptive optics, a technology that measures distortions in the atmosphere and then adjusts the telescope in real time to cancel out those fluctuations. The technique produces images that look as if they were taken under the calmest possible skies.

    In Ghez’s mind, new discoveries require that scientists take risks. “If you have a new idea, the thing you are going to encounter first and foremost is ‘no, you can’t do it,’” she said. “I can’t tell you how many times in the course of this project I have been told ‘this won’t work.’” Her first proposal to image the galactic center was turned down; two decades later, Ghez, now 52, has received a MacArthur Fellowship, among other awards, and was the first woman to receive a Crafoord Prize from the Royal Swedish Academy of Sciences.

    Ghez maps the movements of stars around the supermassive black hole at the galaxy’s center. John Hook for Quanta Magazine.

    The supermassive black hole has been identified, but her explorations are far from over. Theories of galactic evolution suggest that the Milky Way’s center should have lots of old stars and almost no young stars. Observations show the opposite. Ghez’s group is also tracking a mysterious, glowing infrared blob called G2 that skimmed past the black hole in 2014. And now, using their decades-long data set, her team has begun testing whether the stars orbiting the black hole move according to the rules of Einstein’s general relativity or are subject to exotic deviations from theory.

    Quanta caught up with Ghez to hear about these projects and her plans. The interview has been edited and condensed for clarity.

    You use new telescope technology to address deep theoretical questions. Which one comes first for you: observation or theory?

    I think that’s a great question about creativity and discovery. Like, how do you figure out your next project? For me, what floats my boat the most is to figure out new ways of seeing things; to reveal puzzles. What makes me happiest is when observations don’t make sense. And in order for observations to not make sense in a new way — in other words to not be doing incremental work — you need to be looking in a way that’s different.

    Your team and Reinhard Genzel’s group disagreed about how to interpret the observations of G2.

    An image from W. M. Keck Observatory near infrared data shows that G2 survived its closest approach to the black hole and continues happily on its orbit. The green circle just to its right depicts the location of the invisible supermassive black hole. Credit: Andrea Ghez, Gunther Witzel/UCLA Galactic Center Group/W. M. Keck Observatory. Universe Today.

    They thought it was a gas cloud; your group suggested it was a star. Can you walk us through what happened when it passed the black hole in 2014?

    I was pretty convinced that you could explain this object with a model in which you said the object was actually intrinsically a star. One of the key determinants of whether it was a pure gas cloud or a star was whether or not it survived closest approach in 2014. It happily survived.

    The interpretation that I am most intrigued by is the idea that you are seeing an object that began its life as a binary star. And if you put very close binaries near a black hole, it turns out to induce what’s known as a three-body interaction, and the binary can merge. So black holes can drive binaries to merge more quickly than they would anywhere else in our galaxy. You end up with an object that has the characteristics of what we are looking at.

    It also explains some of the unusual observations of the center of the galaxy. We see many young stars at the center of the galaxy that are hard to explain. It turns out that when binaries merge, it’s like resetting the clock; you get a rebirth of a star, so to speak. So it will create an excess of apparently young stars really close to the black hole, and that’s exactly what we see.

    And then after we got very excited about this whole business of binaries, the detection [of gravitational waves from a black-hole merger] happened. In fact, if you take this scenario that we’re developing, where G2 at the galactic center is a binary, it actually gives a mechanism for very naturally explaining these events.

    You’re referring to the fact that the Laser Interferometer Gravitational-Wave Observatory (LIGO) found black holes of around 30 times the mass of the sun, which is heavier than astronomers expected?

    Image captions: Caltech/MIT Advanced LIGO installations at Hanford, WA, and Livingston, LA; Cornell SXS, the Simulating eXtreme Spacetimes project; gravitational waves (Credit: MPI for Gravitational Physics/W. Benger-Zib); ESA/eLISA, the future of gravitational-wave research.

    If you took two stellar-mass black holes that are the mass that we anticipated, which is 10 times the mass of the sun rather than the 30 that is being observed, and put them near a supermassive black hole, then the two would merge to become a 20 solar mass black hole. And if you do this successively you can work your way up to the 30 solar mass number.
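
    The arithmetic of that hierarchical-merger scenario is simple enough to sketch (our own illustration; real mergers also radiate away a few percent of the total mass as gravitational waves, which the quote and this toy both ignore):

```python
def merge(m1: float, m2: float) -> float:
    """Toy merger near a supermassive black hole: masses simply add."""
    return m1 + m2

EXPECTED_MASS = 10.0  # anticipated stellar-mass black hole, in solar masses

second_generation = merge(EXPECTED_MASS, EXPECTED_MASS)      # 20 solar masses
third_generation = merge(second_generation, EXPECTED_MASS)   # 30, LIGO-like
print(second_generation, third_generation)  # → 20.0 30.0
```

    Two successive pairings are enough to climb from the anticipated 10-solar-mass objects to the roughly 30-solar-mass black holes LIGO observed.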

    Again, we always start with what’s simple, and then the observations often lay out a more complicated picture. But today, the standard picture is that most if not all galaxies harbor supermassive black holes, so if you think they can play an important role in terms of driving binary stars to merge, then you need to think about that in terms of understanding LIGO. So I think G2 has this really interesting connection — potentially, let me really emphasize potential, as this is just an idea we’re playing with — but it has a lot of nice attributes of being very consistent with what we know today about the universe and the center of our galaxy specifically.

    By which you mean the young stars in the galactic center, and the LIGO observations?

    Right. There’s a third mystery that may be a bit of a stretch. We anticipate that the population of old stars should be greater near the black hole. And yet we actually don’t see that. There are all sorts of different explanations, from all different camps, but one camp is that the old stars that you are looking at have envelopes that are a little fluffy. If you think that binary stars are being driven to merge, before they merge the binaries might strip these old stars of their outer envelopes. That would make them fainter than you expect them to be, so the lack of old stars might just be an observational outcome of this binary process. Again — when you line up all your mysteries, you have to ask, well, what’s the missing element? What am I not seeing?

    You have started testing general relativity around the supermassive black hole, and you haven’t found any deviations yet from Einstein’s predictions. What are your plans for this project?

    In 2018, the star that is the strongest probe of the gravity around the black hole, S02, will make its next closest approach. And it will be the first time we have enough of a handle on its orbit for that closest passage to probe the laws of gravity. In the space of a month or so its velocity will change by more than 6,000 kilometers per second. That’s what will enable us to test general relativity.
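    The size of that velocity swing can be sanity-checked with the vis-viva equation. The orbital parameters below (semi-major axis, eccentricity, black hole mass) are rough literature values assumed for illustration, not numbers from the interview:

```python
import math

# Rough vis-viva estimate of S0-2's speed at closest approach to the
# galactic-center black hole. Assumed, approximate parameters:
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

M_bh = 4.0e6 * M_SUN   # supermassive black hole mass, ~4 million suns
a = 1000 * AU          # semi-major axis of S0-2's orbit
e = 0.88               # orbital eccentricity

r_peri = a * (1 - e)   # pericenter (closest-approach) distance
# Vis-viva equation: v^2 = G M (2/r - 1/a)
v_peri = math.sqrt(G * M_bh * (2 / r_peri - 1 / a))
print(f"pericenter speed = {v_peri / 1e3:.0f} km/s")
```

    Under these assumptions the closest-approach speed comes out around 7,500 km/s, versus a few hundred km/s near the far point of the orbit, so a swing of more than 6,000 kilometers per second over the passage is plausible.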

    Speaking of improvements in technology, you were until recently on the science advisory committee of the Thirty Meter Telescope (TMT), which is expected to be the world’s most powerful ground-based telescope.

    TMT-Thirty Meter Telescope, proposed for Mauna Kea, Hawaii, USA

    The observatory was planned for Mauna Kea, but the Hawaiian Sovereignty Movement considers that mountain a sacred place and is opposing the project. Spain’s Canary Islands have been chosen as a backup location. If TMT doesn’t go up on Mauna Kea, how does that affect your studies of the galactic center?

    Oh, you know, that’s such a can of worms. Let me tell you a side story before we get into this more political stuff.

    Today, with all these weird phenomena that we see — like G2, the young stars where there should be none, and the shortage of old stars — you’re really only looking at the brightest stars. So, in order to truly understand the population, you really need to see the typical star, because most stars are low mass or faint. As we improve our technology, both in terms of adaptive optics and in going to larger telescopes, we’ll be able to see a typical star like the sun.

    In addition, better resolution would not only let you probe gravity with better measurements of the stellar orbits, but also increase your understanding of how black holes impact the evolution of a galaxy. And this effect is a key parameter of all cosmological models. You want to be able to see not just the tip of the iceberg in terms of the stellar population.

    OK, so then let’s tackle this TMT story. I was on the TMT science advisory committee for, I don’t know, 13 years. The thing that is important is that you get a site where adaptive optics works really well. That means that you want a very smooth airflow over the site where your telescope is at; you want to be on a mountain that is surrounded by a body of water. So you always see observatories near water. Hawaii is surrounded by water, and the Canary Islands are surrounded by water, as opposed to having just water on one side. That makes for much smoother airflows. So I think that the alternative site has some interesting characteristics. Without being — can you tell my angst about talking about this?

    Yes, sorry to put you on the spot. But I had to, because it’s very interesting.

    It’s a very interesting story that goes so far beyond science. If it were only a scientific decision, today, Mauna Kea would be my preference. It’s what we chose, so we chose it for a reason. It’s a great site from the point of view of performance of adaptive optics. From my biased perspective, it’s also farther south, so it’s easier to see the center of the galaxy. But one has to be totally respectful of the cultural issues associated with Mauna Kea. It’s one thing to be an astronomer over on the mainland thinking and looking at this, but when you go over there, you understand that it’s a much more complex issue.

    I hope for the sake of science, and also for the sake of bringing science and technology to the state of Hawaii, that this project can continue. But it has to continue in a way that works for all the players. And I think the issues have risen far above the issues of astronomy.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 4:07 pm on June 20, 2017 Permalink | Reply
    Tags: , Conjectures about gravity, Cosmic censorship conjecture, , Naked singularity in a four-dimensional universe, , Quanta, , Singularities, , Then Stephen said ‘You want to bet?’, Weak gravity   

    From Quanta: “Where Gravity Is Weak and Naked Singularities Are Verboten”

    Quanta Magazine

    June 20, 2017
    Natalie Wolchover

    Mike Zeng for Quanta Magazine

    Physicists have wondered for decades whether infinitely dense points known as singularities can ever exist outside black holes, which would expose the mysteries of quantum gravity for all to see. Singularities — snags in the otherwise smooth fabric of space and time where Albert Einstein’s classical gravity theory breaks down and the unknown quantum theory of gravity is needed — seem to always come cloaked in darkness, hiding from view behind the event horizons of black holes. The British physicist and mathematician Sir Roger Penrose conjectured in 1969 that visible or “naked” singularities are actually forbidden from forming in nature, in a kind of cosmic censorship. But why should quantum gravity censor itself?

    Roger Penrose in Berkeley, California, in 1978, nine years after proposing the cosmic censorship conjecture. George M. Bergman, Berkeley. Source: Archives of the Mathematisches Forschungsinstitut Oberwolfach

    Now, new theoretical calculations provide a possible explanation for why naked singularities do not exist — in a particular model universe, at least. The findings indicate that a second, newer conjecture about gravity, if it is true, reinforces Penrose’s cosmic censorship conjecture by preventing naked singularities from forming in this model universe. Some experts say the mutually supportive relationship between the two conjectures increases the chances that both are correct. And while this would mean singularities do stay frustratingly hidden, it would also reveal an important feature of the quantum gravity theory that eludes us.

    “It’s pleasing that there’s a connection” between the two conjectures, said John Preskill of the California Institute of Technology, who in 1991 bet Stephen Hawking that the cosmic censorship conjecture would fail (though he actually thinks it’s probably true).

    The new work, reported in May in Physical Review Letters by Jorge Santos and his student Toby Crisford at the University of Cambridge and relying on a key insight by Cumrun Vafa of Harvard University, unexpectedly ties cosmic censorship to the 2006 weak gravity conjecture [JHEP], which asserts that gravity must always be the weakest force in any viable universe, as it is in ours. (Gravity is by far the weakest of the four fundamental forces; two electrons electrically repel each other 1 million trillion trillion trillion times more strongly than they gravitationally attract each other.) Santos and Crisford were able to simulate the formation of a naked singularity in a four-dimensional universe with a different space-time geometry than ours. But they found that if another force exists in that universe that affects particles more strongly than gravity, the singularity becomes cloaked in a black hole. In other words, where a perverse pinprick would otherwise form in the space-time fabric, naked for all the world to see, the relative weakness of gravity prevents it.
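    That electron-repulsion figure is easy to check with standard rounded values of the physical constants:

```python
# Ratio of the Coulomb force to the gravitational force between two
# electrons. Both forces fall off as 1/r^2, so the ratio is the same
# at any separation.
K_E = 8.988e9         # Coulomb constant, N m^2 C^-2
G = 6.674e-11         # gravitational constant, N m^2 kg^-2
E_CHARGE = 1.602e-19  # elementary charge, C
M_E = 9.109e-31       # electron mass, kg

ratio = (K_E * E_CHARGE**2) / (G * M_E**2)
print(f"F_coulomb / F_gravity = {ratio:.2e}")  # on the order of 10^42
```

    The ratio comes out near 4 x 10^42, consistent with the quoted "1 million trillion trillion trillion" (10^42).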

    Santos and Crisford are running simulations now to test whether cosmic censorship is saved at exactly the limit where gravity becomes the weakest force in the model universe, as initial calculations suggest. Such an alliance with the better-established cosmic censorship conjecture would reflect very well on the weak gravity conjecture. And if weak gravity is right, it points to a deep relationship between gravity and the other quantum forces, potentially lending support to string theory over a rival theory called loop quantum gravity. The “unification” of the forces happens naturally in string theory, where gravity is one vibrational mode of strings and forces like electromagnetism are other modes. But unification is less obvious in loop quantum gravity, where space-time is quantized in tiny volumetric packets that bear no direct connection to the other particles and forces. “If the weak gravity conjecture is right, loop quantum gravity is definitely wrong,” said Nima Arkani-Hamed, a professor at the Institute for Advanced Study who co-discovered the weak gravity conjecture.

    The new work “does tell us about quantum gravity,” said Gary Horowitz, a theoretical physicist at the University of California, Santa Barbara.

    The Naked Singularities

    In 1991, Preskill and Kip Thorne, both theoretical physicists at Caltech, visited Stephen Hawking at Cambridge. Hawking had spent decades exploring the possibilities packed into the Einstein equation, which defines how space-time bends in the presence of matter, giving rise to gravity. Like Penrose and everyone else, he had yet to find a mechanism by which a naked singularity could form in a universe like ours. Always, singularities lay at the centers of black holes — sinkholes in space-time that are so steep that no light can climb out. He told his visitors that he believed in cosmic censorship. Preskill and Thorne, both experts in quantum gravity and black holes (Thorne was one of three physicists who founded the black-hole-detecting LIGO experiment), said they felt it might be possible to detect naked singularities and quantum gravity effects. “There was a long pause,” Preskill recalled. “Then Stephen said, ‘You want to bet?’”

    The bet had to be settled on a technicality and renegotiated in 1997, after the first ambiguous exception cropped up. Matt Choptuik, a physicist at the University of British Columbia who uses numerical simulations to study Einstein’s theory, showed that a naked singularity can form in a four-dimensional universe like ours when you perfectly fine-tune its initial conditions. Nudge the initial data by any amount, and you lose it — a black hole forms around the singularity, censoring the scene. This exceptional case doesn’t disprove cosmic censorship as Penrose meant it, because it doesn’t suggest naked singularities might actually form. Nonetheless, Hawking conceded the original bet and paid his debt per the stipulations, “with clothing to cover the winner’s nakedness.” He embarrassed Preskill by making him wear a T-shirt featuring a nearly naked lady while giving a talk to 1,000 people at Caltech. The clothing was supposed to be “embroidered with a suitable concessionary message,” but Hawking’s read like a challenge: “Nature Abhors a Naked Singularity.”

    The physicists posted a new bet online, with language to clarify that only non-exceptional counterexamples to cosmic censorship would count. And this time, they agreed, “The clothing is to be embroidered with a suitable, truly concessionary message.”

    The wager still stands 20 years later, but not without coming under threat. In 2010, the physicists Frans Pretorius and Luis Lehner discovered a mechanism [Physical Review Letters] for producing naked singularities in hypothetical universes with five or more dimensions. And in their May paper, Santos and Crisford reported a naked singularity in a classical universe with four space-time dimensions, like our own, but with a radically different geometry. This latest one is “in between the ‘technical’ counterexample of the 1990s and a true counterexample,” Horowitz said. Preskill agrees that it doesn’t settle the bet. But it does change the story.

    Lucy Reading-Ikkanda/Quanta Magazine

    The Tin Can Universe

    The new discovery began to unfold in 2014, when Horowitz, Santos and Benson Way found that naked singularities could exist in a pretend 4-D universe called “anti-de Sitter” (AdS) space whose space-time geometry is shaped like a tin can. This universe has a boundary — the can’s side — which makes it a convenient testing ground for ideas about quantum gravity: Physicists can treat bendy space-time in the can’s interior like a hologram that projects off of the can’s surface, where there is no gravity. In universes like our own, which is closer to a “de Sitter” (dS) geometry, the only boundary is the infinite future, essentially the end of time. Timeless infinity doesn’t make a very good surface for projecting a hologram of a living, breathing universe.

    Despite their differences, the interiors of both AdS and dS universes obey Einstein’s classical gravity theory — everywhere outside singularities, that is. If cosmic censorship holds in one of the two arenas, some experts say you might expect it to hold up in both.

    Horowitz, Santos and Way were studying what happens when an electric field and a gravitational field coexist in an AdS universe. Their calculations suggested that cranking up the energy of the electric field on the surface of the tin can universe will cause space-time to curve more and more sharply around a corresponding point inside, eventually forming a naked singularity. In their recent paper, Santos and Crisford verified the earlier calculations with numerical simulations.

    But why would naked singularities exist in 5-D and in 4-D when you change the geometry, but never in a flat 4-D universe like ours? “It’s like, what the heck!” Santos said. “It’s so weird you should work on it, right? There has to be something here.”

    Weak Gravity to the Rescue

    In 2015, Horowitz mentioned the evidence for a naked singularity in 4-D AdS space to Cumrun Vafa, a Harvard string theorist and quantum gravity theorist who stopped by Horowitz’s office. Vafa had been working to rule out large swaths of the 10^500 different possible universes that string theory naively allows. He did this by identifying “swamplands”: failed universes that are too logically inconsistent to exist. By understanding patterns of land and swamp, he hoped to get an overall picture of quantum gravity.

    Working with Arkani-Hamed, Luboš Motl and Alberto Nicolis in 2006, Vafa proposed the weak gravity conjecture as a swamplands test. The researchers found that universes only seemed to make sense when particles were affected by gravity less than they were by at least one other force. Dial down the other forces of nature too much, and violations of causality and other problems arise. “Things were going wrong just when you started violating gravity as the weakest force,” Arkani-Hamed said. The weak-gravity requirement drowns huge regions of the quantum gravity landscape in swamplands.

    Jorge Santos (left) and Toby Crisford of the University of Cambridge have found an unexpected link between two conjectures about gravity.
    Courtesy of Jorge Santos

    Weak gravity and cosmic censorship seem to describe different things, but in chatting with Horowitz that day in 2015, Vafa realized that they might be linked. Horowitz had explained Santos and Crisford’s simulated naked singularity: When the researchers cranked up the strength of the electric field on the boundary of their tin-can universe, they assumed that the interior was classical — perfectly smooth, with no particles quantum mechanically fluctuating in and out of existence. But Vafa reasoned that, if such particles existed, and if, in accordance with the weak gravity conjecture, they were more strongly coupled to the electric field than to gravity, then cranking up the electric field on the AdS boundary would cause sufficient numbers of particles to arise in the corresponding region in the interior to gravitationally collapse the region into a black hole, preventing the naked singularity.

    Subsequent calculations by Santos and Crisford supported Vafa’s hunch; the simulations they’re running now could verify that naked singularities become cloaked in black holes right at the point where gravity becomes the weakest force. “We don’t know exactly why, but it seems to be true,” Vafa said. “These two reinforce each other.”

    Quantum Gravity

    The full implications of the new work, and of the two conjectures, will take time to sink in. Cosmic censorship imposes an odd disconnect between quantum gravity at the centers of black holes and classical gravity throughout the rest of the universe. Weak gravity appears to bridge the gap, linking quantum gravity to the other quantum forces that govern particles in the universe, and possibly favoring a stringy approach over a loopy one. Preskill said, “I think it’s something you would put on your list of arguments or reasons for believing in unification of the forces.”

    However, Lee Smolin of the Perimeter Institute, one of the developers of loop quantum gravity, has pushed back, arguing that if weak gravity is true, there might be a loopy reason for it. And he contends that there is a path to unification [J.Phys.A] of the forces within his theory — a path that would need to be pursued all the more vigorously if the weak gravity conjecture holds.

    Given the apparent absence of naked singularities in our universe, physicists will take hints about quantum gravity wherever they can find them. They’re as lost now in the endless landscape of possible quantum gravity theories as they were in the 1990s, with no prospects for determining through experiments which underlying theory describes our world. “It is thus paramount to find generic properties that such quantum gravity theories must have in order to be viable,” Santos said, echoing the swamplands philosophy.

    Weak gravity might be one such property — a necessary condition for quantum gravity’s consistency that spills out and affects the world beyond black holes. These may be some of the only clues available to help researchers feel their way into the darkness.

    See the full article here.


  • richardmitnick 12:35 pm on June 17, 2017 Permalink | Reply
    Tags: , , , , , Helen Quinn and Roberto Peccei, Peccei-Quinn symmetry, Quanta, ,   

    From Quanta: “Roberto Peccei and Helen Quinn, Driving Around Stanford in a Clunky Jeep” 

    Quanta Magazine

    June 15, 2017
    Thomas Lin
    Olena Shmahalo, Art Director
    Lucy Reading-Ikkanda, graphics

    Ryan Schude for Quanta Magazine
    Helen Quinn and Roberto Peccei walking toward Stanford University’s new science and engineering quad. Behind them is the main quad, the oldest part of the campus. “If you look at a campus map,” said Quinn, who along with Peccei proposed Peccei-Quinn symmetry, “you will see the axis that goes through the middle of both quadrangle areas. We are on that line between the two.”

    Four decades ago, Helen Quinn and Roberto Peccei took on one of the great problems in theoretical particle physics: the strong charge-parity (CP) problem. Why does the symmetry between matter and antimatter break in weak interactions, which are responsible for nuclear decay, but not in strong interactions, which hold matter together?

    “The academic year 1976-77 was particularly exciting for me because Helen Quinn and Steven Weinberg were visiting the Stanford department of physics,” Peccei told Quanta in an email. “Helen and I had similar interests and we soon started working together.”

    Encouraged by Weinberg, who would go on to win a Nobel Prize in physics in 1979 for his work on the unification of electroweak interactions, Quinn and Peccei zeroed in on a CP-violating interaction whose strength can be characterized by an angular variable, theta. They knew theta had to be small, but no one had an elegant mechanism for explaining its smallness.

    “Steve liked to discuss physics over lunch, and Helen and I often joined him,” Peccei said. “Steve invariably brought up the theta problem in our lunch discussions, urging us to find a natural solution for why it was so small.”

    Quinn said by email that she and Peccei knew two things: The problem goes away if any quarks have zero mass (which seems to make theta irrelevant), and “in the very early hot universe all the quarks have zero mass.” They wondered how it could be that “theta is irrelevant in the early universe but matters once it cools enough that the quarks get their masses?”

    They proceeded to draft a “completely wrong paper based on conclusions we drew from this set of facts,” Quinn said. They went to Weinberg, whose comments helped clarify their thinking and, she said, “put us on the right track.”

    They realized they could naturally arrive at a zero value for theta by requiring a new symmetry, now known as the Peccei-Quinn mechanism. Besides being one of the popular proposed solutions to the strong CP problem, Peccei-Quinn symmetry also predicts the existence of a hypothetical “axion” particle, which has become a mainstay in theories of supersymmetry and cosmic inflation and has been proposed as a candidate for dark matter.

    Peccei and Quinn discussing their proposed symmetry with the aid of a sombrero. Ryan Schude for Quanta Magazine

    That year at Stanford, Quinn and Peccei regularly interacted with the theory group at the Stanford Linear Accelerator Center (SLAC) as well as with another group from the University of California, Santa Cruz.

    “We formed a large and active group of theorists, which created a wonderful atmosphere of open discussion and collaboration,” Quinn said, adding that she recalls “riding with Roberto back and forth from Stanford to SLAC in his yellow and clunky Jeep, talking physics ideas as we went.”

    See the full article here.


  • richardmitnick 5:00 pm on June 13, 2017 Permalink | Reply
    Tags: A different kind of dark matter could help to resolve an old celestial conundrum, , , , , , , Dark matter superfluid, Dark matter vortices, , Kent Ford, Quanta, ,   

    From Quanta: “Dark Matter Recipe Calls for One Part Superfluid” 

    Quanta Magazine

    June 13, 2017
    Jennifer Ouellette

    A different kind of dark matter could help to resolve an old celestial conundrum.

    Markos Kay for Quanta Magazine

    For years, dark matter has been behaving badly. The term was first invoked nearly 80 years ago by the astronomer Fritz Zwicky, who realized that some unseen gravitational force was needed to stop individual galaxies from escaping giant galaxy clusters. Later, Vera Rubin and Kent Ford used unseen dark matter to explain why galaxies themselves don’t fly apart.

    Yet even though we use the term “dark matter” to describe these two situations, it’s not clear that the same kind of stuff is at work. The simplest and most popular model holds that dark matter is made of weakly interacting particles that move about slowly under the force of gravity. This so-called “cold” dark matter accurately describes large-scale structures like galaxy clusters. However, it doesn’t do a great job at predicting the rotation curves of individual galaxies. Dark matter seems to act differently at this scale.

    In the latest effort to resolve this conundrum, two physicists have proposed that dark matter is capable of changing phases at different size scales. Justin Khoury, a physicist at the University of Pennsylvania, and his former postdoc Lasha Berezhiani, who is now at Princeton University, say that in the cold, dense environment of the galactic halo, dark matter condenses into a superfluid — an exotic quantum state of matter that has zero viscosity. If dark matter forms a superfluid at the galactic scale, it could give rise to a new force that would account for the observations that don’t fit the cold dark matter model. Yet at the scale of galaxy clusters, the special conditions required for a superfluid state to form don’t exist; here, dark matter behaves like conventional cold dark matter.

    “It’s a neat idea,” said Tim Tait, a particle physicist at the University of California, Irvine. “You get to have two different kinds of dark matter described by one thing.” And that neat idea may soon be testable. Although other physicists have toyed with similar ideas, Khoury and Berezhiani are nearing the point where they can extract testable predictions that would allow astronomers to explore whether our galaxy is swimming in a superfluid sea.

    Impossible Superfluids

    Here on Earth, superfluids aren’t exactly commonplace. But physicists have been cooking them up in their labs since 1938. Cool down particles to sufficiently low temperatures and their quantum nature will start to emerge. Their matter waves will spread out and overlap with one another, eventually coordinating themselves to behave as if they were one big “superatom.” They will become coherent, much like the light particles in a laser all have the same energy and vibrate as one. These days even undergraduates create so-called Bose-Einstein condensates (BECs) in the lab, many of which can be classified as superfluids.

    Superfluids don’t occur naturally in the everyday world — it’s too warm for the necessary quantum effects to hold sway. Because of that, “probably ten years ago, people would have balked at this idea and just said ‘this is impossible,’” said Tait. But recently, more physicists have warmed to the possibility of superfluid phases forming naturally in the extreme conditions of space. Superfluids may exist inside neutron stars, and some researchers have speculated that space-time itself may be a superfluid. So why shouldn’t dark matter have a superfluid phase, too?

    To make a superfluid out of a collection of particles, you need to do two things: Pack the particles together at very high densities and cool them down to extremely low temperatures. In the lab, physicists (or undergraduates) confine the particles in an electromagnetic trap, then zap them with lasers to remove the kinetic energy and lower the temperature to just above absolute zero.

    Lucy Reading-Ikkanda/Quanta Magazine

    The dark matter particles that would make Khoury and Berezhiani’s idea work are emphatically not WIMP-like. WIMPs should be pretty massive as fundamental particles go — about as massive as 100 protons, give or take. For Khoury’s scenario to work, the dark matter particle would have to be a billion times less massive. Consequently, there should be billions of times as many of them zipping through the universe — enough to account for the observed effects of dark matter and to achieve the dense packing required for a superfluid to form. In addition, ordinary WIMPs don’t interact with one another. Dark matter superfluid particles would require strongly interacting particles.

    The closest candidate is the axion, a hypothetical ultralight particle with a mass that could be 10,000 trillion trillion times as small as the mass of the electron. According to Chanda Prescod-Weinstein, a theoretical physicist at the University of Washington, axions could theoretically condense into something like a Bose-Einstein condensate.

    But the standard axion doesn’t quite fit Khoury and Berezhiani’s needs. In their model, particles would need to experience a strong, repulsive interaction with one another. Typical axion models have interactions that are both weak and attractive. That said, “I think everyone thinks that dark matter probably does interact with itself at some level,” said Tait. It’s just a matter of determining whether that interaction is weak or strong.

    Cosmic Superfluid Searches

    The next step for Khoury and Berezhiani is to figure out how to test their model — to find a telltale signature that could distinguish this superfluid concept from ordinary cold dark matter. One possibility: dark matter vortices. In the lab, rotating superfluids give rise to swirling vortices that keep going without ever losing energy. Superfluid dark matter halos in a galaxy should rotate sufficiently fast to also produce arrays of vortices. If the vortices were massive enough, it would be possible to detect them directly.

    Inside galaxies, the role of the electromagnetic trap would be played by the galaxy’s gravitational pull, which could squeeze dark matter together enough to satisfy the density requirement. The temperature requirement is easier: Space, after all, is naturally cold.

    Outside of the “halos” found in the immediate vicinity of galaxies, the pull of gravity is weaker, and dark matter wouldn’t be packed together tightly enough to go into its superfluid state. It would act as dark matter ordinarily does, explaining what astronomers see at larger scales.

    But what’s so special about having dark matter be a superfluid? How can this special state change the way that dark matter appears to behave? A number of researchers over the years have played with similar ideas. But Khoury’s approach is unique because it shows how the superfluid could give rise to an extra force.

    In physics, if you disturb a field, you’ll often create a wave. Shake some electrons — for instance, in an antenna — and you’ll disturb an electric field and get radio waves. Wiggle the gravitational field with two colliding black holes and you’ll create gravitational waves. Likewise, if you poke a superfluid, you’ll produce phonons — sound waves in the superfluid itself. These phonons give rise to an extra force in addition to gravity, one that’s analogous to the electrostatic force between charged particles. “It’s nice because you have an additional force on top of gravity, but it really is intrinsically linked to dark matter,” said Khoury. “It’s a property of the dark matter medium that gives rise to this force.” The extra force would be enough to explain the puzzling behavior of dark matter inside galactic halos.

    A Different Dark Matter Particle

Dark matter hunters have been at work for a long time. Their efforts have focused on so-called weakly interacting massive particles, or WIMPs. WIMPs have been popular because not only would the particles account for the majority of astrophysical observations, but they also pop out naturally from hypothesized extensions of the Standard Model of particle physics.

    Yet no one has ever seen a WIMP, and those hypothesized extensions of the Standard Model haven’t shown up in experiments either, much to physicists’ disappointment.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    With each new null result, the prospects dim even more, and physicists are increasingly considering other dark matter candidates. “At what point do we decide that we’ve been barking up the wrong tree?” said Stacy McGaugh, an astronomer at Case Western Reserve University.

Unfortunately, detecting dark matter vortices directly is unlikely: Khoury’s most recent computer simulations suggest that vortices in the dark matter superfluid would be “pretty flimsy,” he said, and unlikely to offer researchers clear-cut evidence that they exist. He speculates it might be possible to exploit the phenomenon of gravitational lensing to see if there are any scattering effects, similar to how a crystal will scatter X-ray light that passes through it.

    Gravitational Lensing NASA/ESA

    Astronomers could also search for indirect evidence that dark matter behaves like a superfluid. Here, they’d look to galactic mergers.

    The rate that galaxies collide with one another is influenced by something called dynamical friction. Imagine a massive body passing through a sea of particles. Many of the small particles will get pulled along by the massive body. And since the total momentum of the system can’t change, the massive body must slow down a bit to compensate.
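The momentum bookkeeping behind dynamical friction can be sketched in a few lines of Python. This is a toy model, not code from the article: the masses, speeds and the fraction of speed transferred in each encounter are all arbitrary illustration values.

```python
# Toy model of dynamical friction: a massive body plowing through a sea of
# light particles drags them along, and momentum conservation forces it
# to slow down. All numbers are arbitrary illustration values.
M, V = 1000.0, 10.0      # massive body: mass and initial speed
m = 1.0                  # mass of each light particle (initially at rest)
p_particles = 0.0        # total momentum handed to the particles
p_total_start = M * V

for _ in range(200):     # 200 encounters with individual particles
    dp = m * 0.5 * V     # a particle is pulled from rest to half the body's speed
    p_particles += dp
    V -= dp / M          # the body sheds the same momentum, so it slows

# The body has slowed (V < 10), yet the total momentum is unchanged.
print(V)                    # ≈ 9.05
print(M * V + p_particles)  # ≈ 10000.0, the starting momentum
```

The drag on the massive body comes entirely from the particles absorbing momentum one by one, which is why, as described below, it vanishes when the particles instead move in sync.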

    That’s what happens when two galaxies start to merge. If they get sufficiently close, their dark matter halos will start to pass through each other, and the rearrangement of the independently moving particles will give rise to dynamical friction, pulling the halos even closer. The effect helps galaxies to merge, and works to increase the rate of galactic mergers across the universe.

But if the dark matter halo is in a superfluid phase, the particles move in sync. There would be no friction pulling the galaxies together, so it would be more difficult for them to merge. This should leave behind a telltale signature: rippling interference patterns in how matter is distributed in the galaxies.

    Perfectly Reasonable Miracles

    While McGaugh is mostly positive about the notion of superfluid dark matter, he confesses to a niggling worry that in trying so hard to combine the best of both worlds, physicists might be creating what he terms a “Tycho Brahe solution.” The 16th-century Danish astronomer invented a hybrid cosmology in which the Earth was at the center of the universe but all the other planets orbited the sun. It attempted to split the difference between the ancient Ptolemaic system and the Copernican cosmology that would eventually replace it. “I worry a little that these kinds of efforts are in that vein, that maybe we’re missing something more fundamental,” said McGaugh. “But I still think we have to explore these ideas.”

    Tait admires this new superfluid model intellectually, but he would like to see the theory fleshed out more at the microscopic level, to a point where “we can really calculate everything and show why it all works out the way it’s supposed to. At some level, what we’re doing now is invoking a few miracles” in order to get everything to fit into place, he said. “Maybe they’re perfectly reasonable miracles, but I’m not fully convinced yet.”

    One potential sticking point is that Khoury and Berezhiani’s concept requires a very specific kind of particle that acts like a superfluid in just the right regime, because the kind of extra force produced in their model depends upon the specific properties of the superfluid. They are on the hunt for an existing superfluid — one created in the lab — with those desired properties. “If you could find such a system in nature, it would be amazing,” said Khoury, since this would essentially provide a useful analog for further exploration. “You could in principle simulate the properties of galaxies using cold atoms in the lab to mimic how superfluid dark matter behaves.”

While researchers have been playing with superfluids for many decades, particle physicists are only just beginning to appreciate the usefulness of ideas coming from subjects like condensed matter physics. Combining tools from those disciplines and applying them to gravitational physics might just resolve the longstanding dispute over dark matter — and who knows what other breakthroughs might await?

    “Do I need superfluid models? Physics isn’t really about what I need,” said Prescod-Weinstein. “It’s about what the universe may be doing. It may be naturally forming Bose-Einstein condensates, just like masers naturally form in the Orion nebula. Do I need lasers in space? No, but they’re pretty cool.”

See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Formerly known as Simons Science News, Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science. Why Quanta? Albert Einstein called photons “quanta of light.” Our goal is to “illuminate science.” At Quanta Magazine, scientific accuracy is every bit as important as telling a good story. All of our articles are meticulously researched, reported, edited, copy-edited and fact-checked.

  • richardmitnick 9:19 pm on June 6, 2017 Permalink | Reply
Tags: Latest Black Hole Collision Comes With a Twist, Quanta

    From Quanta: “Latest Black Hole Collision Comes With a Twist” 

Quanta Magazine

    June 1, 2017
    Natalie Wolchover

    An illustration of a newly detected black-hole merger, whose gravitational-wave signal suggests that at least one of the black holes was misaligned with its orbital motion before merging with its partner. LIGO/Caltech/MIT/Sonoma State (Aurore Simonnet)

    Once again, a gust of gravitational waves coming from the faraway collision of black holes has tickled the instruments of the Advanced Laser Interferometer Gravitational-Wave Observatory (Advanced LIGO), bringing the count of definitive gravitational-wave detections up to three. The new signal, detected in January and reported today in Physical Review Letters, deepens the riddle of how black holes come to collide.

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation

    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    Before Advanced LIGO switched on in the fall of 2015 and almost immediately detected gravitational waves from a black-hole merger, no one knew whether it would see merging black holes, merging neutron stars, black holes merging with neutron stars or none of the above. (As Albert Einstein figured out a century ago, pairs of dense, tightly orbiting objects are needed to generate ripples in the fabric of space-time, or gravitational waves.) But the three signals spotted by LIGO so far have all come from merging black holes, suggesting pairs of these ultradense, invisible objects abundantly populate the universe.

    Astronomers have since been struggling mightily to understand how black holes (which, for the most part, are remnants of collapsed stars) can wind up so close to each other, without having been close enough to have merged during their stellar lifetimes. It’s a puzzle that has forced experts to think anew about many aspects of stars.

    They’ll now have to think even harder.

    Lucy Reading-Ikkanda for Quanta Magazine

    The LIGO team’s estimate for the abundance of black-hole mergers in the universe, based on its first two detections, favored one of two competing scenarios for how stars might end up colliding as black holes: The “common-envelope” and “chemically homogeneous” scenarios both involve pairs of massive stars that form near each other in the otherwise empty expanse of space, gravitationally collapse into black holes and collide. The methods differ in their details, but either is theoretically capable of producing enough black-hole mergers to account for Advanced LIGO’s signals. Scientists think it’s likely that one scenario is dominant in the universe and accounts for almost all observed events, since it would be strange for multiple scenarios to produce equal numbers of events in a fine-tuned balance.

    Meanwhile, the high rate of mergers disfavored a third scenario called “dynamical formation,” which has the black holes forming far apart inside a dense stellar cluster. According to this theory, over time, the black holes sink to the center of the cluster, perturbing one another’s orbits in complicated ways and, occasionally, entering tight enough orbits to collide. Considering the relative rareness of stellar clusters and dynamical collisions, the expected rate associated with this scenario seemed too low to account for Advanced LIGO’s data.

    That’s where the new gravitational-wave signal comes in. It originated from black holes weighing 31 times and 19 times the mass of the sun that merged roughly 3 billion light-years away. The signal also indicates that at least one of the black holes may have been spinning in a direction that was not aligned with the pair’s common axis of rotation. Black holes that formed and evolved from a close-knit pair of stars — as in the common-envelope and chemically homogeneous scenarios — would spin in the same direction as their common axis (if at all), so misaligned spins would disfavor these scenarios and favor dynamical formation in a stellar cluster, which does not require any connection between the black holes’ spins.

    The spin measurement is difficult to do and carries some uncertainty — it’s within the margin of error that the black holes weren’t spinning at all. “We’ll need more events to be able to statistically disentangle what is happening,” said Daniel Holz, an astrophysicist at the University of Chicago and a LIGO member who has worked on the common-envelope scenario. “If it turns out that there is significant support for high misaligned spins, then we do indeed face a conundrum,” he said. “The cluster model would be favored, but it’s not straightforward for it to produce the rates we are seeing.”

    Duncan Brown, a LIGO member and professor of physics at Syracuse University, told Quanta that it might be possible after all that multiple scenarios produce black-hole mergers. But he’s waiting for more statistics. “As LIGO’s sensitivity improves — we’re still a factor of three away from design sensitivity — and as we see more signals, we’ll get a much better understanding of the spins of the black hole population. [The new signal] contains hints of something interesting, but right now I’m looking forward to future detections before making a call.”

See the full article here.

  • richardmitnick 6:28 pm on June 1, 2017 Permalink | Reply
Tags: A Theory of Reality as More Than the Sum of Its Parts, Causal emergence, Consciousness as information, Quanta, The gray scale leading between nonlife and life, The mathematical language of information theory

    From Quanta: “A Theory of Reality as More Than the Sum of Its Parts” 

Quanta Magazine

    June 1, 2017
    Natalie Wolchover

    Olena Shmahalo/Quanta Magazine

    New math shows how, contrary to conventional scientific wisdom, conscious beings and other macroscopic entities might have greater influence over the future than does the sum of their microscopic components.

    In his 1890 opus, The Principles of Psychology, William James invoked Romeo and Juliet to illustrate what makes conscious beings so different from the particles that make them up.

    “Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they,” James wrote. “But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings. … Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet’s lips directly.”

Erik Hoel, a 29-year-old theoretical neuroscientist and writer, quoted the passage in a recent essay [FQXi] in which he laid out his new mathematical explanation of how consciousness and agency arise. The existence of agents — beings with intentions and goal-oriented behavior — has long seemed profoundly at odds with the reductionist assumption that all behavior arises from mechanistic interactions between particles. Agency doesn’t exist among the atoms, and so reductionism suggests agents don’t exist at all: that Romeo’s desires and psychological states are not the real causes of his actions, but merely approximate the unknowably complicated causes and effects between the atoms in his brain and surroundings.

    Hoel’s theory, called “causal emergence,” roundly rejects this reductionist assumption.

“Causal emergence is a way of claiming that your agent description is really real,” said Hoel, a postdoctoral researcher at Columbia University who first proposed [PNAS] the idea with Larissa Albantakis and Giulio Tononi of the University of Wisconsin, Madison. “If you just say something like, ‘Oh, my atoms made me do it’ — well, that might not be true. And it might be provably not true.”

    Erik Hoel, a theoretical neuroscientist at Columbia University. Julia Buntaine

    Using the mathematical language of information theory, Hoel and his collaborators claim to show that new causes — things that produce effects — can emerge at macroscopic scales. They say coarse-grained macroscopic states of a physical system (such as the psychological state of a brain) can have more causal power over the system’s future than a more detailed, fine-grained description of the system possibly could. Macroscopic states, such as desires or beliefs, “are not just shorthand for the real causes,” explained Simon DeDeo, an information theorist and cognitive scientist at Carnegie Mellon University and the Santa Fe Institute who is not involved in the work, “but it’s actually a description of the real causes, and a more fine-grained description would actually miss those causes.”

    “To me, that seems like the right way to talk about it,” DeDeo said, “because we do want to attribute causal properties to higher-order events [and] things like mental states.”

Hoel and collaborators have been developing the mathematics behind their idea since 2013. In a May paper in the journal Entropy, Hoel placed causal emergence on a firmer theoretical footing by showing that macro scales gain causal power in exactly the same way, mathematically, that error-correcting codes increase the amount of information that can be sent over information channels. Just as codes reduce noise (and thus uncertainty) in transmitted data — Claude Shannon’s 1948 insight that formed the bedrock of information theory — Hoel claims that macro states also reduce noise and uncertainty in a system’s causal structure, strengthening causal relationships and making the system’s behavior more deterministic.
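Shannon’s noise-reduction idea can be seen in a minimal simulation (my own sketch, not code from Hoel’s paper): a 3-bit repetition code sent over a binary symmetric channel cuts the error rate from roughly p to roughly 3p².

```python
import random

def send(bit, p_flip, rng):
    """Binary symmetric channel: flip the bit with probability p_flip."""
    return bit ^ (rng.random() < p_flip)

def send_repetition(bit, p_flip, rng):
    """Encode the bit as three copies and decode by majority vote."""
    votes = sum(send(bit, p_flip, rng) for _ in range(3))
    return int(votes >= 2)

rng = random.Random(0)
p, trials = 0.1, 100_000
raw = sum(send(1, p, rng) != 1 for _ in range(trials)) / trials
coded = sum(send_repetition(1, p, rng) != 1 for _ in range(trials)) / trials
print(raw)    # ≈ 0.10: the channel's raw flip rate
print(coded)  # ≈ 0.03: the majority vote fails only when 2+ copies flip
```

Redundancy makes the transmitted message more reliable at the cost of sending more bits; Hoel’s claim is that coarse-grained macro states buy causal reliability in an analogous way.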

    “I think it’s very significant,” George Ellis, a South African cosmologist who has also written about top-down causation in nature, said of Hoel’s new paper. Ellis thinks causal emergence could account for many emergent phenomena such as superconductivity and topological phases of matter. Collective systems like bird flocks and superorganisms — and even simple structures like crystals and waves — might also exhibit causal emergence, researchers said.

    The work on causal emergence is not yet widely known among physicists, who for centuries have taken a reductionist view of nature and largely avoided further philosophical thinking on the matter. But at the interfaces between physics, biology, information theory and philosophy, where puzzles crop up, the new ideas have generated excitement. Their ultimate usefulness in explaining the world and its mysteries — including consciousness, other kinds of emergence, and the relationships between the micro and macro levels of reality — will come down to whether Hoel has nailed the notoriously tricky notion of causation: Namely, what’s a cause? “If you brought 20 practicing scientists into a room and asked what causation was, they would all disagree,” DeDeo said. “We get mixed up about it.”

    A Theory of Cause

    In a fatal drunk driving accident, what’s the cause of death? Doctors name a ruptured organ, while a psychologist blames impaired decision-making abilities and a sociologist points to permissive attitudes toward alcohol. Biologists, chemists and physicists, in turn, see ever more elemental causes. “Famously, Aristotle had a half-dozen notions of causes,” DeDeo said. “We as scientists have rejected all of them except things being in literal contact, touching and pushing.”

    The true causes, to a physicist, are the fundamental forces acting between particles; all effects ripple out from there. Indeed, these forces, when they can be isolated, appear perfectly deterministic and reliable — physicists can predict with high precision the outcomes of particle collisions at the Large Hadron Collider, for instance.


    CERN/LHC Map

    CERN LHC Tunnel

    CERN LHC particles

    In this view, causes and effects become hard to predict from first principles only when there are too many variables to track.

    Furthermore, philosophers have argued that causal power existing at two scales at once would be twice what the world needs; to avoid double-counting, the “exclusion argument” says all causal power must originate at the micro level. But it’s almost always easier to discuss causes and effects in terms of macroscopic entities. When we look for the cause of a fatal car crash, or Romeo’s decision to start climbing, “it doesn’t seem right to go all the way down to microscopic scales of neurons firing,” DeDeo said. “That’s where Erik [Hoel] is jumping in. It’s a bit of a bold thing to do to talk about the mathematics of causation.”

    Friendly and large-limbed, Hoel grew up reading books at Jabberwocky, his family’s bookstore in Newburyport, Massachusetts. He studied creative writing as an undergraduate and planned to become a writer. (He still writes fiction and has started a novel.) But he was also drawn to the question of consciousness — what it is, and why and how we have it — because he saw it as an immature scientific subject that allowed for creativity. For graduate school, he went to Madison, Wisconsin, to work with Tononi — the only person at the time, in Hoel’s view, who had a truly scientific theory of consciousness.

    Tononi conceives of consciousness as information: bits that are encoded not in the states of individual neurons, but in the complex networking of neurons, which link together in the brain into larger and larger ensembles. Tononi argues that this special “integrated information” corresponds to the unified, integrated state that we experience as subjective awareness. Integrated information theory has gained prominence in the last few years, even as debates have ensued about whether it is an accurate and sufficient proxy for consciousness. But when Hoel first got to Madison in 2010, only the two of them were working on it there.

    Giulio Tononi, a neuroscientist and psychiatrist at the University of Wisconsin, Madison, best known for his research on sleep and consciousness. John Maniaci/UW Health

    Tononi tasked Hoel with exploring the general mathematical relationship between scales and information. The scientists later focused on how the amount of integrated information in a neural network changes as you move up the hierarchy of spatiotemporal scales, looking at links between larger and larger groups of neurons. They hoped to figure out which ensemble size might be associated with maximum integrated information — and thus, possibly, with conscious thoughts and decisions. Hoel taught himself information theory and plunged into the philosophical debates around consciousness, reductionism and causation.

    Hoel soon saw that understanding how consciousness emerges at macro scales would require a way of quantifying the causal power of brain states. He realized, he said, that “the best measure of causation is in bits.” He also read the works of the computer scientist and philosopher Judea Pearl, who developed a logical language for studying causal relationships in the 1990s called causal calculus. With Albantakis and Tononi, Hoel formalized a measure of causal power called “effective information,” which indicates how effectively a particular state influences the future state of a system. (Effective information can be used to help calculate integrated information, but it is simpler and more general and, as a measure of causal power, does not rely on Tononi’s other ideas about consciousness.)

    The researchers showed that in simple models of neural networks, the amount of effective information increases as you coarse-grain over the neurons in the network — that is, treat groups of them as single units. The possible states of these interlinked units form a causal structure, where transitions between states can be mathematically modeled using so-called Markov chains. At a certain macroscopic scale, effective information peaks: This is the scale at which states of the system have the most causal power, predicting future states in the most reliable, effective manner. Coarse-grain further, and you start to lose important details about the system’s causal structure. Tononi and colleagues hypothesize that the scale of peak causation should correspond, in the brain, to the scale of conscious decisions; based on brain imaging studies, Albantakis guesses that this might happen at the scale of neuronal microcolumns, which consist of around 100 neurons.

    Causal emergence is possible, Hoel explained, because of the randomness and redundancy that plagues the base scale of neurons. As a simple example, he said to imagine a network consisting of two groups of 10 neurons each. Each neuron in group A is linked to several neurons in group B, and when a neuron in group A fires, it usually causes one of the B neurons to fire as well. Exactly which linked neuron fires is unpredictable. If, say, the state of group A is {1,0,0,1,1,1,0,1,1,0}, where 1s and 0s represent neurons that do and don’t fire, respectively, the resulting state of group B can have myriad possible combinations of 1s and 0s. On average, six neurons in group B will fire, but which six is nearly random; the micro state is hopelessly indeterministic. Now, imagine that we coarse-grain over the system, so that this time, we group all the A neurons together and simply count the total number that fire. The state of group A is {6}. This state is highly likely to lead to the state of group B also being {6}. The macro state is more reliable and effective; calculations show it has more effective information.
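That two-group story is easy to simulate. In the sketch below (my own; the wiring of four links per neuron and the one-neuron-triggers-one-neuron firing rule are assumptions for illustration), each firing neuron in group A activates one of its linked neurons in group B at random. Over many runs, the micro state of B scatters across a huge number of firing patterns, while the macro state — the count of fired neurons — takes only a handful of values.

```python
import random
from collections import Counter

rng = random.Random(42)
# Each of the 10 A neurons is linked to 4 of the 10 B neurons (arbitrary wiring).
links = {a: rng.sample(range(10), 4) for a in range(10)}
state_a = (1, 0, 0, 1, 1, 1, 0, 1, 1, 0)   # the article's example: 6 neurons firing

micro_states, macro_states = Counter(), Counter()
for _ in range(10_000):
    fired_b = set()
    for a, firing in enumerate(state_a):
        if firing:
            fired_b.add(rng.choice(links[a]))           # triggers one linked B neuron
    micro = tuple(int(b in fired_b) for b in range(10))  # which B neurons fired
    micro_states[micro] += 1
    macro_states[len(fired_b)] += 1                      # how many B neurons fired

print(len(micro_states))   # many distinct micro firing patterns
print(len(macro_states))   # only a few distinct macro counts
```

The macro description is far more predictable than the micro one, which is the sense in which the coarse-grained state is “more reliable and effective.”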

    A real-world example cements the point. “Our life is very noisy,” Hoel said. “If you just give me your atomic state, it may be totally impossible to guess where your future [atomic] state will be in 12 hours. Try running that forward; there’s going to be so much noise, you’d have no idea. Now give a psychological description, or a physiological one: Where are you going to be in 12 hours?” he said (it was mid-day). “You’re going to be asleep — easy. So these higher-level relationships are the things that seem reliable. That would be a super simple example of causal emergence.”

    For any given system, effective information peaks at the scale with the largest and most reliable causal structure. In addition to conscious agents, Hoel says this might pick out the natural scales of rocks, tsunamis, planets and all other objects that we normally notice in the world. “And the reason why we’re tuned into them evolutionarily [might be] because they are reliable and effective, but that also means they are causally emergent,” Hoel said.

    Brain-imaging experiments are being planned in Madison and New York, where Hoel has joined the lab of the Columbia neuroscientist Rafael Yuste. Both groups will examine the brains of model organisms to try to home in on the spatiotemporal scales that have the most causal control over the future. Brain activity at these scales should most reliably predict future activity. As Hoel put it, “Where does the causal structure of the brain pop out?” If the data support their hypothesis, they’ll see the results as evidence of a more general fact of nature. “Agency or consciousness is where this idea becomes most obvious,” said William Marshall, a postdoctoral researcher in the Wisconsin group. “But if we do find that causal emergence is happening, the reductionist assumption would have to be re-evaluated, and that would have to be applied broadly.”

    New Philosophical Thinking

    Sara Walker, a physicist and astrobiologist at Arizona State University who studies the origins of life, hopes measures like effective information and integrated information will help define what she sees as the gray scale leading between nonlife and life (with viruses and cell cycles somewhere in the gray area). Walker has been collaborating with Tononi’s team on studies of real and artificial cell cycles, with preliminary indications that integrated information might correlate with being alive.

    In other recent work, the Madison group has developed a way of measuring causal emergence called “black-boxing” that they say works well for something like a single neuron. A neuron isn’t simply the average of its component atoms and so isn’t amenable to coarse-graining. Black-boxing is like putting a box around a neuron and measuring the box’s overall inputs and outputs, instead of assuming anything about its inner workings. “Black-boxing is the truly general form of causal emergence and is especially important for biological and engineering systems,” Tononi said in an email.

    Walker is also a fan of Hoel’s new work tracing effective information and causal emergence to the foundations of information theory and Shannon’s noisy-channel theorem. “We’re in such deep conceptual territory it’s not really clear which direction to go,” she said, “so I think any bifurcations in this general area are good and constructive.”

Robert Bishop, a philosopher and physicist at Wheaton College, said, “My take on EI” — effective information — “is that it can be a useful measure of emergence but likely isn’t the only one.” Hoel’s measure has the charm of being simple, reflecting only reliability and the number of causal relationships, but according to Bishop, it could be one of several proxies for causation that apply in different situations.

    Hoel’s ideas do not impress Scott Aaronson, a theoretical computer scientist at the University of Texas, Austin. He says causal emergence isn’t radical in its basic premise. After reading Hoel’s recent essay for the Foundational Questions Institute, “Agent Above, Atom Below” (the one that featured Romeo and Juliet), Aaronson said, “It was hard for me to find anything in the essay that the world’s most orthodox reductionist would disagree with. Yes, of course you want to pass to higher abstraction layers in order to make predictions, and to tell causal stories that are predictively useful — and the essay explains some of the reasons why.”

    It didn’t seem so obvious to others, given how the exclusion argument has stymied efforts to get a handle on higher-level causation. Hoel says his arguments go further than Aaronson acknowledges in showing that “higher scales have provably more information and causal influence than their underlying ones. It’s the ‘provably’ part that’s hard and is directly opposite to most reductionist thinking.”

    Larissa Albantakis, a theoretical neuroscientist at the University of Wisconsin, Madison.

    Moreover, causal emergence isn’t merely a claim about our descriptions or “causal stories” about the world, as Aaronson suggests. Hoel and his collaborators aim to show that higher-level causes — as well as agents and other macroscopic things — ontologically exist. The distinction relates to one that the philosopher David Chalmers makes about consciousness: There’s the “easy problem” of how neural circuitry gives rise to complex behaviors, and the “hard problem,” which asks, essentially, what distinguishes conscious beings from lifeless automatons. “Is EI measuring causal power of the kind that we feel that we have in action, the kind that we want our conscious experiences or selves to have?” said Hedda Hassel Mørch, a philosopher at New York University and a protégé of Chalmers’. She says it’s possible that effective information could “track real ontological emergence, but this requires some new philosophical thinking about the nature of laws, powers and how they relate.”

    The criticism that hits Hoel and Albantakis the hardest is one physicists sometimes make upon hearing the idea: They assert that noise, the driving force behind causal emergence, doesn’t really exist; noise is just what physicists call all the stuff that their models leave out. “It’s a typical physics point of view,” Albantakis said, that if you knew the exact microscopic state of the entire universe, “then I can predict what happens until the end of time, and there is no reason to talk about something like cause-effect power.”

    One rejoinder is that perfect knowledge of the universe isn’t possible, even in principle. But even if the universe could be thought of as a single unit evolving autonomously, this picture wouldn’t be informative. “What is left out there is to identify entities — things that exist,” Albantakis said. Causation “is really the measure or quantity that is necessary to identify where in this whole state of the universe do I have groups of elements that make up entities? … Causation is what you need to give structure to the universe.” Treating causes as real is a necessary tool for making sense of the world.

    Maybe we sort of knew all along, as Aaronson contends, that higher scales wrest the controls from lower scales. But if these scientists are right, then causal emergence might be how that works, mathematically. “It’s like we cracked the door open,” Hoel said. “And actually proving that that door is a little bit open is very important. Because anyone can hand-wave and say, yeah, probably, maybe, and so on. But now you can say, ‘Here’s a system [that has these higher-level causal events]; prove me wrong on it.’”

See the full article here.

  • richardmitnick 12:35 pm on January 29, 2017 Permalink | Reply
    Tags: Einstein's theory of general relativity and the Metric Tensor or Riemann Metric, , , Minkowski developed the formalism of spacetime, Quanta, What Is Spacetime?   

    From Ethan Siegel: “What Is Spacetime?” 

    Ethan Siegel
    Jan 28, 2017

    The fabric of the Universe, spacetime, is a tricky concept to understand. But we’re up to the challenge. Image credit: Pixabay user JohnsonMartin.

    When it comes to understanding the Universe, there are a few things everyone’s heard of: Schrödinger’s cat, the Twin Paradox and E = mc^2. But despite being around for over 100 years now, General Relativity — Einstein’s greatest achievement — is largely mysterious to everyone from the general public to undergraduate and graduate students in physics. For this week’s Ask Ethan, Katia Moskvitch wants that cleared up:

    Could you one day write a story explaining to a lay person what the metric is in GR?

    Before we get to “the metric,” let’s start at the beginning, and talk about how we conceptualize the Universe in the first place.

    Quanta, whether waves, particles or anything in between, have properties that define what they are. But they require a stage on which to interact and play out the Universe’s story. Image credit: Wikimedia Commons user Maschen.

    At a fundamental level, the Universe is made up of quanta — entities with physical properties like mass, charge, momentum, etc. — that can interact with each other. A quantum can be a particle, a wave, or anything in some weird in-between state, depending on how you look at it. Two or more quanta can bind together, building up complex structures like protons, atoms, molecules or human beings, and all of that is fine. Quantum physics might be relatively new, having been developed mostly in the 20th century, but the idea that the Universe was made of indivisible entities that interacted with each other goes back more than 2000 years, to at least Democritus of Abdera.

    But no matter what the Universe is made of, the things it’s composed of need a stage to move on if they’re going to interact.

    Newton’s law of Universal Gravitation has been superseded by Einstein’s general relativity, but relied on the concept of an instantaneous action (force) at a distance. Image credit: Wikimedia commons user Dennis Nilsson.

    In Newton’s Universe, that stage was flat, empty, absolute space. Space itself was a fixed entity, sort of like a Cartesian grid: a 3D structure with an x, y and z axis. Time always passed at the same rate, and was absolute as well. Any observer, particle, wave or quantum anywhere should experience space and time exactly the same as any other. But by the end of the 19th century, it was clear that Newton’s conception was flawed. Particles that moved close to the speed of light experienced time differently (it dilates) and space differently (it contracts) compared to a particle that was either slow-moving or at rest. A particle’s energy or momentum was suddenly frame-dependent, meaning that space and time weren’t absolute quantities; the way you experienced the Universe was dependent on your motion through it.
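The frame dependence described above is quantified by the Lorentz factor γ = 1/√(1 − v²/c²): moving clocks tick slower by γ, and moving rods contract by 1/γ. A minimal sketch (the 0.8c speed is an illustrative choice, not a figure from the article):

```python
import math

def lorentz_factor(beta: float) -> float:
    """Return gamma = 1/sqrt(1 - (v/c)^2) for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# Time dilation: a clock moving at 80% of the speed of light
# ticks slower by a factor gamma as seen from the "rest" frame.
gamma = lorentz_factor(0.8)
print(gamma)        # 5/3, about 1.667

# Length contraction: a 1-meter rod moving at 0.8c measures
# only 1/gamma meters along its direction of motion.
print(1.0 / gamma)  # about 0.6 m
```

Note how both effects vanish (γ → 1) for slow speeds, recovering Newton's absolute space and time as a low-velocity limit.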

    A “light clock” will appear to run differently for observers moving at different relative speeds, but this is due to the constancy of the speed of light. Einstein’s law of special relativity governs how these time and distance transformations take place. Image credit: John D. Norton, via http://www.pitt.edu/~jdnorton/teaching/HPS_0410/chapters/Special_relativity_clocks_rods/.

    That was where the notion of Einstein’s theory of special relativity came from: some things were invariant, like a particle’s rest mass or the speed of light, but others transformed depending on how you moved through space and time. In 1907, Einstein’s former professor, Hermann Minkowski, made a brilliant breakthrough: he showed that you could conceive of space and time in a single formulation. In one fell swoop, he had developed the formalism of spacetime. This provided a stage for particles to move through the Universe (relative to one another) and interact with one another, but it didn’t include gravity. The spacetime he had developed — still today known as Minkowski space — describes all of special relativity, and also provides the backdrop for the vast majority of the quantum field theory calculations we do.
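Minkowski's unification can be checked numerically: the spacetime interval s² = c²t² − x² is invariant, identical in every inertial frame, even though t and x individually change under a Lorentz boost. A short sketch in natural units (c = 1); the event coordinates and boost speed are made up for illustration:

```python
import math

def boost(t: float, x: float, v: float) -> tuple:
    """Lorentz-boost an event (t, x) into a frame moving at speed v (units where c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v ** 2)
    return gamma * (t - v * x), gamma * (x - v * t)

def interval_squared(t: float, x: float) -> float:
    """The invariant interval s^2 = t^2 - x^2 (signature +, -)."""
    return t ** 2 - x ** 2

t, x = 3.0, 2.0
tp, xp = boost(t, x, 0.6)
# (tp, xp) differ from (t, x), yet s^2 comes out the same in both frames.
print(interval_squared(t, x), interval_squared(tp, xp))
```

Invariants like s² (and the rest mass) are what all observers agree on; the coordinates themselves are not.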

    Quantum field theory calculations are normally done in flat space, but general relativity goes beyond that to include curved space. QFT calculations are far more complex there. Image credit: SLAC National Accelerator Laboratory.

    If there were no such thing as the gravitational force, Minkowski spacetime would do everything we needed. Spacetime would be simple, uncurved, and would simply provide a stage for matter to move through and interact. The only way you’d ever accelerate would be through an interaction with another particle. But in our Universe, we do have the gravitational force, and it was Einstein’s principle of equivalence that told us that so long as you can’t see what’s accelerating you, gravitation treats you the same as any other acceleration.

    The identical behavior of a ball falling to the floor in an accelerated rocket (left) and on Earth (right) is a demonstration of Einstein’s equivalence principle. Image credit: Wikimedia Commons user Markus Poessel, retouched by Pbroks13.

    It was this revelation, and the effort to link it, mathematically, to Minkowski’s concept of spacetime, that led to general relativity. The major difference between special relativity’s Minkowski space and the curved space that appears in general relativity is the mathematical formalism known as the Metric Tensor, sometimes called Einstein’s Metric Tensor or the Riemann Metric. Riemann was a pure mathematician in the 19th century (and a former student of Gauss, perhaps the greatest mathematician of them all), and he gave a formalism for how any fields, lines, arcs, distances, etc., can exist and be well-defined in an arbitrarily curved space of any number of dimensions. It took Einstein (and a number of collaborators) nearly a decade to cope with the complexities of the math, but when all was said and done, we had general relativity: a theory that described our three-space-and-one-time dimensional Universe, where gravitation existed.
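The relationship just described can be written compactly. The metric tensor g_{μν} converts coordinate displacements into distances via the line element, reduces to the flat Minkowski metric in special relativity, and is determined by matter and energy through Einstein's field equations (standard textbook forms, summarizing rather than quoting the article):

```latex
% Line element: the metric converts coordinate displacements into distances
ds^2 = \sum_{\mu,\nu=0}^{3} g_{\mu\nu}\, dx^{\mu}\, dx^{\nu}

% Flat (Minkowski) spacetime of special relativity:
g_{\mu\nu} = \mathrm{diag}(-1,\,+1,\,+1,\,+1)
\quad\Rightarrow\quad
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2

% Einstein's field equations: curvature on the left,
% sourced by the matter/energy content on the right
G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```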

    The warping of spacetime by gravitational masses, as illustrated to represent General Relativity. Image credit: LIGO/T. Pyle.

    Conceptually, the metric tensor defines how spacetime itself is curved. Its curvature is dependent on the matter, energy and stresses present within it; the contents of your Universe define its spacetime curvature. By the same token, how your Universe is curved tells you how the matter and energy is going to move through it. We like to think that an object in motion will continue in motion: Newton’s first law. We conceptualize that as a straight line, but what curved space tells us is that instead an object in motion continuing in motion follows a geodesic, which is a particularly-curved line that corresponds to unaccelerated motion. Ironically, it’s a geodesic, not necessarily a straight line, that is the shortest distance between two points. This shows up even on cosmic scales, where the curved spacetime due to the presence of extraordinary masses can curve the background light from behind it, sometimes into multiple images.

    An example/illustration of gravitational lensing, and the bending of starlight due to mass. Image credit: NASA / STScI, via http://hubblesite.org/newscenter/archive/releases/2000/07/image/c/.
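The lensing effect illustrated above has a famous weak-field limit: light grazing a mass M at impact parameter b is deflected by an angle θ ≈ 4GM/(c²b). A quick sketch using standard physical constants (the solar values here are approximate textbook numbers, not figures from the article):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def deflection_arcsec(mass_kg: float, impact_parameter_m: float) -> float:
    """Weak-field light-bending angle 4GM/(c^2 b), converted to arcseconds."""
    theta_rad = 4.0 * G * mass_kg / (C ** 2 * impact_parameter_m)
    return math.degrees(theta_rad) * 3600.0

# Starlight grazing the Sun's limb is bent by roughly 1.75 arcseconds,
# the effect measured during the 1919 solar eclipse.
print(deflection_arcsec(1.989e30, 6.957e8))
```

For galaxy-cluster masses the same formula gives bending strong enough to produce the multiple images the article mentions.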

    Physically, there are a number of different pieces that contribute to spacetime curvature in general relativity. We think of gravity as due to masses: the locations and magnitudes of different masses determine the gravitational force. In general relativity, this corresponds to the mass density, and it does contribute, but it’s only one of the 16 components of the stress-energy tensor, the source that determines the Metric Tensor through Einstein’s equations! There are also pressure components (such as radiation pressure, vacuum pressure or pressures created by fast-moving particles), which add three more contributions (one for each of the three spatial directions). And finally, there are six other components that tell us how volumes change and deform in the presence of masses and tidal forces, along with how the shape of a moving body is distorted by those forces. This applies to everything from a planet like Earth to a neutron star to a massless wave moving through space: gravitational radiation.

    As masses move through spacetime relative to one another, they cause the emission of gravitational waves: ripples through the fabric of space itself. These ripples are mathematically encoded in the Metric Tensor. Image credit: ESO/L. Calçada.

    You might have noticed that 1 + 3 + 6 isn’t 16 but 10, and if you did, good eye! A 4 × 4 tensor has 16 components, but both the metric and the stress-energy tensor that sources it are symmetric, leaving only 10 independent ones: the four “diagonal” components (the density and the pressure components) and six independent off-diagonal components (the volume/deformation components); the remaining six off-diagonal entries are fixed by symmetry. The metric tells us the relationship between all the matter/energy in the Universe and the curvature of spacetime itself. In fact, the unique power of general relativity tells us that if you knew where all the matter/energy in the Universe was and what it was doing at any instant, you could determine the entire evolutionary history of the Universe — past, present and future — for all eternity.
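The counting in the paragraph above is just the component count of a symmetric 4 × 4 tensor, which a few lines of code can verify: n(n+1)/2 independent entries, splitting into 1 density-like, 3 pressure-like and 6 volume/deformation pieces.

```python
def independent_components(n: int) -> int:
    """A symmetric n x n tensor has n*(n+1)//2 independent entries:
    n on the diagonal plus n*(n-1)//2 above it (the lower triangle
    is fixed by symmetry)."""
    return n * (n + 1) // 2

n = 4  # one time dimension plus three space dimensions
print(independent_components(n))               # 10, not 16
# The article's split: 1 (energy density) + 3 (pressures) + 6 (deformations)
print(1 + 3 + 6 == independent_components(n))  # True
```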

    The four possible fates of the Universe, with the bottom example fitting the data best: a Universe with dark energy. Image credit: E. Siegel.

    This is how my sub-field of theoretical physics, cosmology, got its start! The discovery of the expanding Universe, its emergence from the Big Bang and the dark-energy domination that will lead to a cold, empty fate are all only understandable in the context of general relativity, and that means understanding this key relationship: between matter/energy and spacetime. The Universe is a play, unfolding every time a particle interacts with another, and spacetime is the stage on which it all takes place. The one key counterintuitive thing you’ve got to keep in mind? The stage isn’t a constant backdrop for everyone, but it, too, evolves along with the Universe itself.
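Cosmology applies this matter-energy/curvature relationship to the Universe as a whole. Under the standard assumption of a homogeneous, isotropic Universe, Einstein's equations reduce to the first Friedmann equation, which ties the expansion rate H to the contents (a standard textbook form, not quoted from the article):

```latex
H^2 \equiv \left(\frac{\dot a}{a}\right)^2
  = \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^2}{a^2} \;+\; \frac{\Lambda c^2}{3}
```

Here a is the scale factor, ρ the energy density, k the spatial curvature and Λ the cosmological constant; which term dominates at late times is what selects among the possible fates of the Universe.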

    See the full article here.


    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan
