Tagged: NOVA Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 6:41 am on April 30, 2016 Permalink | Reply
    Tags: NOVA, SN 1006 supernova

    From NOVA: “Ancient Philosophers Help Scientists Decode Brightest Supernova on Record” 



    29 Apr 2016
    Allison Eck

    In April of the year 1006 A.D., spectators around the world were treated to a resplendent display—a supernova, now called SN 1006, that reportedly shone brighter than Venus.

    Astronomers from China, Japan, Europe, and the Middle East all looked to the sky and wrote down what they saw. A few months later, the fireball faded from view.

    Now, Ralph Neuhäuser, an astrophysicist at Friedrich Schiller University Jena in Germany, has uncovered clues* about the supernova (in apparent magnitude, the brightest in recorded history) in the writings of the Persian scientist Ibn Sina, also known as Avicenna.

    SN1006, NASA Chandra 2011

    Here’s Jesse Emspak, reporting for National Geographic:

    “One section of his multipart opus Kitab al-Shifa, or “Book of Healing,” makes note of a transient celestial object that changed color and “threw out sparks” as it faded away. According to Neuhäuser and his colleagues, this object—long mistaken for a comet—is really a record of SN 1006, which Ibn Sina could have witnessed when he lived in northern Iran.

    While SN 1006 was relatively well documented at the time, the newly discovered text adds some detail not seen in other reports. According to the team’s translation, Ibn Sina saw the supernova start out as a faint greenish yellow, twinkle wildly at its peak brightness, then become a whitish color before it ultimately vanished.”

    This text describes an evolution of color unlike anything in other accounts of this celestial event. Scientists use this category of supernova—called type Ia—to calculate distances across the universe, because they are “standard candles” that emit the same amount of energy in the form of light no matter how far they are from Earth. By comparing a supernova’s apparent brightness with its intrinsic brightness, researchers can accurately measure how far away the ghostly imprints, or nebulae, of type Ia supernovae lie. Thus, recorded changes in hue and brightness during an individual supernova event can help scientists refine the “standard candle” approach.

    When two stars orbit each other and one of them collapses into a small but massive white dwarf, the white dwarf pulls gas from its partner star; eventually, the white dwarf explodes as a type Ia supernova.

    Sag A* NASA Chandra X-Ray Observatory 23 July 2014, the supermassive black hole at the center of the Milky Way

    SN 1006 appears to have worked differently: in this case, two white dwarfs revolved around one another—then both lost energy in the form of gravitational waves and collided. Unusual supernovae like SN 1006 help scientists understand the full spectrum of supernova characteristics.

    Still, some of Neuhäuser’s colleagues say that, while interesting, the color evolution Ibn Sina described may not actually be that useful. Ibn Sina would have observed SN 1006 close to the horizon, so the colors he saw might have been distorted by atmospheric effects. What’s potentially more telling is another source that Neuhäuser has uncovered—writings from the historian al-Yamani of Yemen suggesting that the supernova appeared earlier than previously thought. Taken together, these accounts from a millennium ago will aid scientists’ efforts to decipher the universe.

    *Science paper:
    An Arabic report about supernova SN 1006 by Ibn Sina (Avicenna)

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 4:58 pm on April 17, 2016 Permalink | Reply
    Tags: NOVA

    From NOVA: “Can Quantum Computing Reveal the True Meaning of Quantum Mechanics?” 



    24 Jun 2015 [NOVA just put this up in social media.]
    Scott Aaronson

    Quantum mechanics says not merely that the world is probabilistic, but that it uses rules of probability that no science fiction writer would have had the imagination to invent. These rules involve complex numbers, called “amplitudes,” rather than just probabilities (which are real numbers between 0 and 1). As long as a physical object isn’t interacting with anything else, its state is a huge wave of these amplitudes, one for every configuration that the system could be found in upon measuring it. Left to itself, the wave of amplitudes evolves in a linear, deterministic way. But when you measure the object, you see some definite configuration, with a probability equal to the squared absolute value of its amplitude. The interaction with the measuring device “collapses” the object to whichever configuration you saw.
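The rules just described can be sketched in a few lines of code. This is a toy illustration (the amplitudes are made-up values, not from the essay), showing the Born rule: each configuration’s probability is the squared absolute value of its complex amplitude, and measurement picks one definite configuration from that distribution.

```python
import random

# Toy state of two qubits: one complex amplitude per configuration.
# (Illustrative values chosen so the squared magnitudes sum to 1.)
amplitudes = {
    "00": 0.5 + 0.0j,
    "01": 0.0 + 0.5j,
    "10": -0.5 + 0.0j,
    "11": 0.0 - 0.5j,
}

# Born rule: probability of observing a configuration is |amplitude|^2.
probs = {cfg: abs(a) ** 2 for cfg, a in amplitudes.items()}
assert abs(sum(probs.values()) - 1.0) < 1e-9  # the state is normalized

# "Measurement": sample one definite configuration from the distribution.
outcome = random.choices(list(probs), weights=list(probs.values()))[0]
print(outcome)  # one of "00", "01", "10", "11", each with probability 0.25
```

Note that the phases (the signs and the factors of j) make no difference to this single measurement; they only matter once amplitudes are allowed to interfere, as discussed later in the essay.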

    Those, more or less, are the alien laws that explain everything from hydrogen atoms to lasers and transistors, and from which no hint of an experimental deviation has ever been found, from the 1920s until today. But could this really be how the universe operates? Is the “bedrock layer of reality” a giant wave of complex numbers encoding potentialities—until someone looks? And what do we mean by “looking,” anyway?

    Could quantum computing help reveal what the laws of quantum mechanics really mean? Adapted from an image by Flickr user Politropix under a Creative Commons license.

    There are different interpretive camps within quantum mechanics, which have squabbled with each other for generations, even though, by design, they all lead to the same predictions for any experiment that anyone can imagine doing. One interpretation is Many Worlds, which says that the different possible configurations of a system (when far enough apart) are literally parallel universes, with the “weight” of each universe given by its amplitude.

    Multiverse. Image credit: public domain, retrieved from https://pixabay.com/

    In this view, the whole concept of measurement—and of the amplitude waves collapsing on measurement—is a sort of illusion, playing no fundamental role in physics. All that ever happens is linear evolution of the entire universe’s amplitude wave—including a part that describes the atoms of your body, which (the math then demands) “splits” into parallel copies whenever you think you’re making a measurement. Each copy would perceive only itself and not the others. While this might surprise people, Many Worlds is seen by many (certainly by its proponents, who are growing in number) as the conservative option: the one that adds the least to the bare math.

    A second interpretation is Bohmian mechanics, which agrees with Many Worlds about the reality of the giant amplitude wave, but supplements it with a “true” configuration that a physical system is “really” in, regardless of whether or not anyone measures it. The amplitude wave pushes around the “true” configuration in a way that precisely matches the predictions of quantum mechanics. A third option is Niels Bohr’s original “Copenhagen Interpretation,” which says—but in many more words!—that the amplitude wave is just something in your head, a tool you use to make predictions. In this view, “reality” doesn’t even exist prior to your making a measurement of it—and if you don’t understand that, well, that just proves how mired you are in outdated classical ways of thinking, and how stubbornly you insist on asking illegitimate questions.

    But wait: if these interpretations (and others that I omitted) all lead to the same predictions, then how could we ever decide which one is right? More pointedly, does it even mean anything for one to be right and the others wrong, or are these just different flavors of optional verbal seasoning on the same mathematical meat? In his recent quantum mechanics textbook, the great physicist Steven Weinberg reviews the interpretive options, ultimately finding all of them wanting. He ends with the hope that new developments in physics will give us better options. But what could those new developments be?

    In the last few decades, the biggest new thing in quantum mechanics has been the field of quantum computing and information. The goal here, you might say, is to “put the giant amplitude wave to work”: rather than obsessing over its true nature, simply exploit it to do calculations faster than is possible classically, or to help with other information-processing tasks (like communication and encryption). The key insight behind quantum computing was articulated by Richard Feynman in 1982: to write down the state of n interacting particles, each of which could be in either of two states, quantum mechanics says you need 2^n amplitudes, one for every possible configuration of all n of the particles. Chemists and physicists have known for decades that this can make quantum systems prohibitively difficult to simulate on a classical computer, since 2^n grows so rapidly as a function of n.
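Feynman’s observation is easy to make concrete with back-of-the-envelope arithmetic (the memory figures below are standard estimates, not from the essay):

```python
# One complex amplitude per configuration of n two-level particles: 2**n total.
for n in (10, 30, 50):
    count = 2 ** n
    bytes_needed = count * 16  # 16 bytes per double-precision complex number
    print(f"n = {n:2d}: {count} amplitudes, {bytes_needed / 1e15:.3f} petabytes")

# Already at n = 50 the state vector needs about 18 petabytes of memory,
# which is why classical simulation of quantum systems hits a wall so quickly.
```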

    But if so, then why not build computers that would themselves take advantage of giant amplitude waves? If nothing else, such computers could be useful for simulating quantum physics! What’s more, in 1994, Peter Shor discovered that such a machine would be useful for more than physical simulations: it could also be used to factor large numbers efficiently, and thereby break most of the cryptography currently used on the Internet. Genuinely useful quantum computers are still a ways away, but experimentalists have made dramatic progress, and have already demonstrated many of the basic building blocks.

    I should add that, for my money, the biggest application of quantum computers will be neither simulation nor codebreaking, but simply proving that this is possible at all! If you like, a useful quantum computer would be the most dramatic demonstration imaginable that our world really does need to be described by a gigantic amplitude wave, that there’s no way around that, no simpler classical reality behind the scenes. It would be the final nail in the coffin of the idea—which many of my colleagues still defend—that quantum mechanics, as currently understood, must be merely an approximation that works for a few particles at a time; and when systems get larger, some new principle must take over to stop the exponential explosion.

    But if quantum computers provide a new regime in which to probe quantum mechanics, that raises an even broader question: could the field of quantum computing somehow clear up the generations-old debate about the interpretation of quantum mechanics? Indeed, could it do that even before useful quantum computers are built?

    At one level, the answer seems like an obvious “no.” Quantum computing could be seen as “merely” a proposed application of quantum mechanics as that theory has existed in physics books for generations. So, to whatever extent all the interpretations make the same predictions, they also agree with each other about what a quantum computer would do. In particular, if quantum computers are built, you shouldn’t expect any of the interpretive camps I listed before to concede that its ideas were wrong. (More likely that each camp will claim its ideas were vindicated!)

    At another level, however, quantum computing makes certain aspects of quantum mechanics more salient—for example, the fact that it takes 2^n amplitudes to describe n particles—and so might make some interpretations seem more natural than others. Indeed that prospect, more than any application, is why quantum computing was invented in the first place. David Deutsch, who’s considered one of the two founders of quantum computing (along with Feynman), is a diehard proponent of the Many Worlds interpretation, and saw quantum computing as a way to convince the world (at least, this world!) of the truth of Many Worlds. Here’s how Deutsch put it in his 1997 book “The Fabric of Reality”:

    “Logically, the possibility of complex quantum computations adds nothing to a case [for the Many Worlds Interpretation] that is already unanswerable. But it does add psychological impact. With Shor’s algorithm, the argument has been writ very large. To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?”

    As you might imagine, not all researchers agree that a quantum computer would be “psychological evidence” for Many Worlds, or even that the two things have much to do with each other. Yes, some researchers reply, a quantum computer would take exponential resources to simulate classically (using any known algorithm), but all the interpretations agree about that. And more pointedly: thinking of the branches of a quantum computation as parallel universes might lead you to imagine that a quantum computer could solve hard problems in an instant, by simply “trying each possible solution in a different universe.” That is, indeed, how most popular articles explain quantum computing, but it’s also wrong!

    The issue is this: suppose you’re facing some arbitrary problem—like, say, the Traveling Salesman problem, of finding the shortest path that visits a collection of cities—that’s hard because of a combinatorial explosion of possible solutions. It’s easy to program your quantum computer to assign every possible solution an equal amplitude. At some point, however, you need to make a measurement, which returns a single answer. And if you haven’t done anything to boost the amplitude of the answer you want, then you’ll see merely a random answer—which, of course, you could’ve picked for yourself, with no quantum computer needed!

    For this reason, the only hope for a quantum-computing advantage comes from interference: the key aspect of amplitudes that has no classical counterpart, and indeed, that taught physicists that the world has to be described with amplitudes in the first place. Interference is customarily illustrated by the double-slit experiment, in which we shoot a photon at a screen with two slits in it, and then observe where the photon lands on a second screen behind it. What we find is that there are certain “dark patches” on the second screen where the photon never appears—and yet, if we close one of the slits, then the photon can appear in those patches. In other words, decreasing the number of ways for the photon to get somewhere can increase the probability that it gets there! According to quantum mechanics, the reason is that the amplitude for the photon to land somewhere can receive a positive contribution from the first slit, and a negative contribution from the second. In that case, if both slits are open, then the two contributions cancel each other out, and the photon never appears there at all. (Because the probability is the amplitude squared, both negative and positive amplitudes correspond to positive probabilities.)
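The dark-patch arithmetic fits in a two-line calculation (the 0.1 amplitude is an arbitrary illustrative value):

```python
# Amplitude contributions for a photon reaching one "dark patch":
via_slit1 = 0.1    # positive contribution through slit 1
via_slit2 = -0.1   # negative contribution through slit 2

both_open = abs(via_slit1 + via_slit2) ** 2  # contributions cancel exactly
one_open = abs(via_slit1) ** 2               # close slit 2: no cancellation

print(both_open, one_open)  # both slits: probability 0; one slit: ~0.01
```

Closing a slit removes the negative contribution, so the probability rises from zero, which is the sense in which fewer paths can mean more arrivals.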

    Likewise, when designing algorithms for quantum computers, the goal is always to choreograph things so that, for each wrong answer, some of the contributions to its amplitude are positive and others are negative, so on average they cancel out, leaving an amplitude close to zero. Meanwhile, the contributions to the right answer’s amplitude should reinforce each other (being, say, all positive, or all negative). If you can arrange this, then when you measure, you’ll see the right answer with high probability.
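The simplest worked instance of this choreography (a standard textbook identity, not taken from the essay) is applying the Hadamard gate twice: after the second application, the two contributions to the |1> amplitude have opposite signs and cancel, while the contributions to |0> reinforce.

```python
import math

s = 1 / math.sqrt(2)
H = [[s, s],
     [s, -s]]  # the Hadamard gate

def apply(gate, state):
    """Multiply a 2x2 gate into a length-2 amplitude vector."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

state = [1.0, 0.0]       # start in |0>
state = apply(H, state)  # equal amplitudes: [~0.707, ~0.707]
state = apply(H, state)  # |1> contributions cancel; |0> ones reinforce

probs = [abs(a) ** 2 for a in state]
print([round(p, 6) for p in probs])  # measuring now gives |0> with certainty
```

Here the “wrong answer” |1> receives contributions s·s and s·(−s), which sum to zero, exactly the cancellation pattern a quantum algorithm designer tries to engineer at scale.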

    It was precisely by orchestrating such a clever interference pattern that Peter Shor managed to devise his quantum algorithm for factoring large numbers. To do so, Shor had to exploit extremely specific properties of the factoring problem: it was not just a matter of “trying each possible divisor in a different parallel universe.” In fact, an important 1994 theorem of Bennett, Bernstein, Brassard, and Vazirani shows that what you might call the “naïve parallel-universe approach” never yields an exponential speed improvement. The naïve approach can reveal solutions in only the square root of the number of steps that a classical computer would need, an important phenomenon called the Grover speedup. But that square-root advantage turns out to be the limit: if you want to do better, then like Shor, you need to find something special about your problem that lets interference reveal its answer.
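To put rough numbers on the Grover speedup (a standard scaling fact; the values of N are arbitrary): unstructured search over N candidates takes about N classical queries, but only about the square root of N quantum queries, and by the Bennett-Bernstein-Brassard-Vazirani bound no better without exploiting problem structure.

```python
import math

# Query counts for unstructured search over N candidate solutions:
for N in (10 ** 6, 10 ** 12):
    classical = N                  # check candidates one at a time
    grover = round(math.sqrt(N))   # ~sqrt(N) quantum queries (Grover)
    print(f"N = {N}: classical ~{classical}, Grover ~{grover}")
```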

    What are the implications of these facts for Deutsch’s argument that only Many Worlds can explain how a quantum computer works? At the least, we should say that the “exponential cornucopia of parallel universes” almost always hides from us, revealing itself only in very special interference experiments where all the “universes” collaborate, rather than any one of them shouting above the rest. But one could go even further. One could say: To whatever extent the parallel universes do collaborate in a huge interference pattern to reveal (say) the factors of a number, to that extent they never had separate identities as “parallel universes” at all—even according to the Many Worlds interpretation! Rather, they were just one interfering, quantum-mechanical mush. And from a certain perspective, all the quantum computer did was to linearly transform the way in which we measured that mush, as if we were rotating it to see it from a more revealing angle. Conversely, whenever the branches do act like parallel universes, Many Worlds itself tells us that we only observe one of them—so from a strict empirical standpoint, we could treat the others (if we liked) as unrealized hypotheticals. That, at least, is the sort of reply a modern Copenhagenist might give, if she wanted to answer Deutsch’s argument on its own terms.

    There are other aspects of quantum information that seem more “Copenhagen-like” than “Many-Worlds-like”—or at least, for which thinking about “parallel universes” too naïvely could lead us astray. So for example, suppose Alice sends n quantum-mechanical bits (or qubits) to Bob, then Bob measures qubits in any way he likes. How many classical bits can Alice transmit to Bob that way? If you remember that n qubits require 2^n amplitudes to describe, you might conjecture that Alice could achieve an incredible information compression—“storing one bit in each parallel universe.” But alas, an important result called Holevo’s Theorem says that, because of the severe limitations on what Bob learns when he measures the qubits, such compression is impossible. In fact, by sending n qubits to Bob, Alice can reliably communicate only n bits (or 2n bits, if Alice and Bob shared quantum correlations in advance), essentially no better than if she’d sent the bits classically. So for this task, you might say, the amplitude wave acts more like “something in our heads” (as the Copenhagenists always said) than like “something out there in reality” (as the Many-Worlders say).

    But the Many-Worlders don’t need to take this lying down. They could respond, for example, by pointing to other, more specialized communication problems, which it’s been proven Alice and Bob can solve using exponentially fewer qubits than classical bits. Here’s one example of such a problem, drawing on a 1999 theorem of Ran Raz and a 2010 theorem of Boaz Klartag and Oded Regev: Alice knows a vector in a high-dimensional space, while Bob knows two orthogonal subspaces. Promised that the vector lies in one of the two subspaces, can Bob figure out which one holds the vector? Quantumly, Alice can encode the components of her vector as amplitudes—in effect, squeezing n numbers into exponentially fewer qubits. And crucially, after receiving those qubits, Bob can measure them in a way that doesn’t reveal everything about Alice’s vector, but does reveal which subspace it lies in, which is the one thing Bob wanted to know.

    So, do the Many Worlds become “real” for these special problems, but retreat back to being artifacts of the math for ordinary information transmission?

    To my mind, one of the wisest replies came from the mathematician and quantum information theorist Boris Tsirelson, who said: “a quantum possibility is more real than a classical possibility, but less real than a classical reality.” In other words, this is a new ontological category, one that our pre-quantum intuitions simply don’t have a good slot for. From this perspective, the contribution of quantum computing is to delineate for which tasks the giant amplitude wave acts “real and Many-Worldish,” and for which other tasks it acts “formal and Copenhagenish.” Quantum computing can give both sides plenty of fresh ammunition, without handing an obvious victory to either.

    So then, is there any interpretation that flat-out doesn’t fare well under the lens of quantum computing? While some of my colleagues will strongly disagree, I’d put forward Bohmian mechanics as a candidate. Recall that David Bohm’s vision was of real particles, occupying definite positions in ordinary three-dimensional space, but which are jostled around by a giant amplitude wave in a way that perfectly reproduces the predictions of quantum mechanics. A key selling point of Bohm’s interpretation is that it restores the determinism of classical physics: all the uncertainty of measurement, we can say in his picture, arises from lack of knowledge of the initial conditions. I’d describe Bohm’s picture as striking and elegant—as long as we’re only talking about one or two particles at a time.

    But what happens if we try to apply Bohmian mechanics to a quantum computer—say, one that’s running Shor’s algorithm to factor a 10,000-digit number, using hundreds of thousands of particles? We can do that, but if we do, talking about the particles’ “real locations” will add spectacularly little insight. The amplitude wave, you might say, will be “doing all the real work,” with the “true” particle positions bouncing around like comically-irrelevant fluff. Nor, for that matter, will the bouncing be completely deterministic. The reason for this is technical: it has to do with the fact that, while particles’ positions in space are continuous, the 0’s and 1’s in a computer memory (which we might encode, for example, by the spins of the particles) are discrete. And one can prove that, if we want to reproduce the predictions of quantum mechanics for discrete systems, then we need to inject randomness at many times, rather than only at the beginning of the universe.

    But it gets worse. In 2005, I proved a theorem that says that, in any theory like Bohmian mechanics, if you wanted to calculate the entire trajectory of the “real” particles, you’d need to solve problems that are thought to be intractable even for quantum computers. One such problem is the so-called collision problem, where you’re given a cryptographic hash function (a function that maps a long message to a short “hash value”) and asked to find any two messages with the same hash. In 2002, I proved that, at least if you use the “naïve parallel-universe” approach, any quantum algorithm for the collision problem requires at least ~H^(1/5) steps, where H is the number of possible hash values. (This lower bound was subsequently improved to ~H^(1/3) by Yaoyun Shi, exactly matching an upper bound of Brassard, Høyer, and Tapp.) By contrast, if (with godlike superpower) you could somehow see the whole histories of Bohmian particles, you could solve the collision problem almost instantly.
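For concreteness, here is the query scaling being discussed, with the classical birthday-paradox baseline of ~H^(1/2) added for comparison (the specific value of H is arbitrary):

```python
# Collision problem over H possible hash values: rough query counts.
H = 10 ** 12
classical_birthday = round(H ** 0.5)    # ~H^(1/2): classical birthday search
quantum_bht = round(H ** (1 / 3))       # ~H^(1/3): Brassard-Hoyer-Tapp bound
print(classical_birthday, quantum_bht)  # about a million vs. about ten thousand
```

The gap between the cube-root cost and the near-instant Bohmian-trajectory shortcut is what makes the theorem bite: whole-history access would beat even the optimal quantum algorithm.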

    What makes this interesting is that, if you ask to see the locations of Bohmian particles at any one time, you won’t find anything that you couldn’t have easily calculated with a standard, garden-variety quantum computer. It’s only when you ask for the particles’ locations at multiple times—a question that Bohmian mechanics answers, but that ordinary quantum mechanics rejects as meaningless—that you’re able to see multiple messages with the same hash, and thereby solve the collision problem.

    My conclusion is that, if you believe in the reality of Bohmian trajectories, you believe that Nature does even more computational work than a quantum computer could efficiently simulate—but then it hides the fruits of its labor where no one can ever observe it. Now, this sits uneasily with a principle that we might call “Occam’s Razor with Computational Aftershave.” Namely: In choosing a picture of physical reality, we should be loath to posit computational effort on Nature’s part that vastly exceeds what could ever in principle be observed. (Admittedly, some people would probably argue that the Many Worlds interpretation violates my “aftershave principle” even more flagrantly than Bohmian mechanics does! But that depends, in part, on what we count as “observation”: just our observations, or also the observations of any parallel-universe doppelgängers?)

    Could future discoveries in quantum computing theory settle once and for all, to every competent physicist’s satisfaction, “which interpretation is the true one”? To me, it seems much more likely that future insights will continue to do what the previous ones did: broaden our language, strip away irrelevancies, clarify the central issues, while still leaving plenty to argue about for people who like arguing. In the end, asking how quantum computing affects the interpretation of quantum mechanics is sort of like asking how classical computing affects the debate about whether the mind is a machine. In both cases, there was a range of philosophical positions that people defended before a technology came along, and most of those positions still have articulate defenders after the technology. So, by that standard, the technology can’t be said to have “resolved” much! Yet the technology is so striking that even the idea of it—let alone the thing itself—can shift the terms of the debate, which analogies people use in thinking about it, which possibilities they find natural and which contrived. This might, more generally, be the main way technology affects philosophy.

    See the full article here.


  • richardmitnick 8:39 am on April 14, 2016 Permalink | Reply
    Tags: Dark Energy/Dark Matter, NOVA

    From NOVA: “Dark Matter’s Invisible Hand” 



    13 Apr 2016
    Charles Q. Choi

    Dark matter is currently one of the greatest mysteries in the universe.

    Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al

    It’s thought to be an invisible substance that makes up roughly five-sixths of all matter in the cosmos, a dark fog suffusing the universe that rarely interacts with ordinary matter. But when it does, according to an unexpected finding by theoretical physicist Lisa Randall, the consequences could be momentous.

    Astronomers first detected dark matter through its gravitational pull, which apparently keeps the Milky Way and other galaxies from ripping themselves apart given the speeds at which they spin.

    Scientists have mostly ruled out all known ordinary materials as candidates for dark matter. The current consensus is that dark matter lies outside the Standard Model of particle physics, currently the best description of how all known subatomic particles behave.

    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Specifically, physicists have suggested that dark matter is composed of new kinds of particles that have very weak interactions—not just with ordinary matter but also with themselves.

    Gamma rays from the Fermi Gamma-ray Space Telescope could be produced by proposed dark matter interactions.

    NASA/Fermi Gamma Ray Telescope

    However, Randall and other scientists have suggested that dark matter might interact more strongly with itself than we suspect, experiencing as-yet undetected “dark forces” that would influence dark matter particles alone. Just as electromagnetism can make particles of ordinary matter attract or repel each other and emit and absorb light, so too might “dark electromagnetism” cause similar interactions between dark matter particles and cause them to emit “dark light” that’s invisible to ordinary matter.

    Differentiated Dark Matter

    The evidence for this theory can be seen in potential discrepancies between predictions and observations of the way matter is distributed in the universe on relatively modest scales, such as that of dwarf galaxies, says Randall, a professor at Harvard University.

    Dwarf Galaxies with Messier 101. Allison Merritt, Dragonfly Telephoto Array

    For example, repulsive interactions between dark matter particles might keep those particles apart and reduce their overall density, explaining why the predicted density of the innermost portion of galaxies is higher than what is actually observed.

    Most dark matter models suggest that dark matter particles are all of one type—they either all interact with each other or they all do not. However, Randall and her colleagues propose a more complex version that they call “partially-interacting dark matter,” where dark matter has both a non-interacting component and a self-interacting one. Ordinary matter offers an analogy in protons, electrons, and neutrons: positively charged protons and negatively charged electrons attract one another, while electrically neutral neutrons are attracted to neither.

    “There’s no reason to think that dark matter is composed of all the same type of particle,” Randall says. “We certainly see a diversity of particles in the one sector of matter we do know about, ordinary matter. Why shouldn’t we think the same of dark matter?”

    In this model, Randall and her colleagues suggest that only a small portion of dark matter—maybe about 5%—experiences interactions reminiscent of those seen in ordinary matter. However, this fraction of dark matter could influence not only the evolution of the Milky Way, but of life on Earth as well, an idea Randall explores in her latest book, Dark Matter and the Dinosaurs: The Astounding Interconnectedness of the Universe.

    Standard dark matter models predict that dwarf galaxies orbiting larger galaxies should be scattered in spherical patterns around their parents. However, astronomical data suggest that many dwarf galaxies orbiting the Milky Way and Andromeda lie roughly in the same plane as each other. Randall and her colleagues suggest that if dark matter particles can interact with each other, they can shed energy, potentially creating a structure that could not only solve this dwarf galaxy mystery, but also have triggered the cosmic disruption that doomed the dinosaurs.

    Dark Disks

    In the partially interacting dark matter scenario, the non-interacting component would still form spherical clouds around galaxies, consistent with what astronomers know of their general structure. However, self-interacting dark matter particles would lose energy and cool as they jostled with each other. Cooling would slow these particles down, and gravity would make them cluster together. If these clouds were relatively immobile, they would simply shrink into smaller balls.

    However, since they likely rotate—just like the rest of the matter in their galaxies—this rotation would make these clouds of self-interacting dark matter collapse into flat disks, in much the same way that spherical clouds of ordinary matter collapsed to form the spiral disks of the Milky Way and many other galaxies. Conservation of angular momentum causes these would-be spheres to flatten out: cooling still lets them collapse vertically, but it cannot pull them inward within the plane of their rotation.

    If dark matter in large galaxies was concentrated in disks, it’s likely that at least some of the orbiting dwarf galaxies would be concentrated in flat planes because of the gravitational pull of dark matter on the dwarf galaxies, Randall and her colleagues say. The researchers suggest these “dark disks” should be embedded in the visible disk of larger galaxies.

    But here’s where dark matter begins to exert its influence. The relationship between the dark disk and the stars in the galaxy is not entirely stable. The Sun, for example, completes a circuit around the Milky Way’s core roughly every 240 million years. During its orbit, it bobs up and down in a wavy motion through the galactic plane about every 32 million years. Coincidentally, some researchers previously suggested that meteor impacts on Earth rise and fall in cycles about 30 million to 35 million years long, leading to regular mass extinctions.

    Earlier researchers proposed a cosmic trigger for this deadly cycle, such as a potential companion star for the Sun dubbed “Nemesis” that would ensnare meteoroids and send them hurtling toward Earth. Instead, Randall and her colleagues suggest that the Sun’s regular passage through the Milky Way’s dark disk might have warped the orbits of comets in the outer solar system, flinging them inward. Such disruption may have then led to disastrous cosmic impacts on Earth, including the collision about 66 million years ago that likely caused the Cretaceous-Tertiary extinction event, the most recent and most familiar mass extinction, which killed off all dinosaurs except those that would evolve into birds.

    The main suspect behind this disaster is an impact from an asteroid or comet that left behind a gargantuan crater more than 110 miles wide near the town of Chicxulub in Mexico. The collision, likely caused by an impactor about 6 miles across, would have released as much energy as 100 trillion tons of TNT—more than a billion times that of the atomic bombs that destroyed Hiroshima and Nagasaki—killing off at least 75% of life on Earth.
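    Those energy figures are easy to check. A quick calculation (the Chicxulub estimate is from the article; the roughly 15-kiloton Hiroshima yield is the commonly quoted figure, not from the article):

    ```python
    # Chicxulub impact energy, as quoted above
    chicxulub_tons_tnt = 100e12  # 100 trillion tons of TNT

    # Commonly quoted Hiroshima yield (assumption: ~15 kilotons of TNT)
    hiroshima_tons_tnt = 15e3

    ratio = chicxulub_tons_tnt / hiroshima_tons_tnt
    print(f"Chicxulub / Hiroshima energy ratio: {ratio:.1e}")  # several billion
    assert ratio > 1e9  # "more than a billion times" checks out
    ```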

    In research* detailed in 2014 in the journal Physical Review Letters, Randall and her colleague Matthew Reece analyzed craters more than 12.4 miles in diameter that were created in the past 250 million years. When they compared the ages of these craters against the 35-million-year cycle they proposed, they found it was three times more likely that the craters matched the dark matter cycle than that they simply occurred randomly.
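    The comparison Randall and Reece made is, in essence, a model-selection problem: is a catalog of crater ages better explained by impacts clustered around a roughly 35-million-year cycle, or by impacts spread uniformly in time? A toy version of that comparison might look like the sketch below. This is not the authors' actual analysis, and the crater ages are invented for illustration; it only shows the shape of the likelihood-ratio argument.

    ```python
    import numpy as np

    def periodic_loglike(ages, period, sigma, t_max=250.0):
        """Log-likelihood of crater ages (Myr) under a model where impacts
        cluster around multiples of `period`, as Gaussian peaks of width
        `sigma`. Normalization over [0, t_max] is approximate; fine for a toy."""
        peaks = np.arange(0.0, t_max + period, period)
        dens = np.zeros_like(ages, dtype=float)
        for p in peaks:
            dens += np.exp(-0.5 * ((ages - p) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        dens /= len(peaks)  # equal weight on each peak
        return np.log(dens).sum()

    def uniform_loglike(ages, t_max=250.0):
        """Log-likelihood under the null model: impacts uniform in time."""
        return -len(ages) * np.log(t_max)

    # Invented crater ages (Myr) for illustration only, not the real catalog
    ages = np.array([2., 35., 38., 66., 70., 105., 140., 142., 175., 210., 214.])

    lr = periodic_loglike(ages, period=35.0, sigma=5.0) - uniform_loglike(ages)
    print(f"log-likelihood ratio, periodic vs. random: {lr:.2f}")
    ```

    A positive log-likelihood ratio favors the periodic model; the “three times more likely” figure in the article corresponds roughly to a factor of 3 between the two models’ likelihoods.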

    “I want to be clear that I did not set out to explain the extinction of the dinosaurs,” Randall says. “This work was about exploring the story of how our universe came about, to explore one possible connection between many different levels of the universe, from the universe down to the Milky Way, the solar system, Earth, and life on Earth.”

    Geologist Michael Rampino at New York University, who did not participate in this study, finds a potential link between a dark disk and mass extinctions “an interesting idea,” he says. “If true, it ties together events that happened on Earth to large-scale cycles in the rest of the solar system and even the galaxy in general.”

    Dark Life?

    However, not everyone agrees that Randall and her colleagues present a convincing case. “They tie mass extinctions to the cratering record, but there are all kinds of estimates one can make with the cratering record depending on which craters one thinks make the cut—do you accept all craters above a certain size, or craters out to a certain age?” says astrobiophysicist Adrian Melott at the University of Kansas, who did not take part in this research. “You can get all different kinds of answers, from cycles 20 million to 37 million years long.”

    Moreover, other research suggests that the cycle of mass extinctions on Earth is actually roughly 27 million years long, Melott says. “That’s way too short a duration for motion oscillating back and forth through the disk.”

    Randall notes that data from the European Space Agency’s satellite Hipparcos, launched in 1989 to precisely measure the positions and velocities of stars, allowed for the theoretical existence of a dark disk. She adds that ESA’s Gaia mission, launched in 2013 to create a precise 3D map of matter throughout the Milky Way, could reveal or refute the dark disk’s existence.

    ESA/Gaia satellite

    One intriguing possibility raised by interacting dark matter models is the existence of dark atoms that might have given rise to dark life, neither of which would be easily detected, Randall says. Although she admits that the concept of dark life might be far-fetched, “life is complicated, and we have yet to understand life and what’s necessary for it.”

    *Science paper:
    Dark Matter as a Trigger for Periodic Comet Impacts

    Science team:
    Lisa Randall and Matthew Reece

    Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 6:58 pm on March 23, 2016 Permalink | Reply
    Tags: NOVA, Phages

    From NOVA: “The Virus That Could Cure Alzheimer’s, Parkinson’s, and More” 



    23 Mar 2016
    Jon Palfreman

    In 2004, the British chemist Chris Dobson speculated that there might be a universal elixir out there that could combat not just the alpha-synuclein behind Parkinson’s but the amyloids of many protein-misfolding diseases at once. Remarkably, in that same year an Israeli scientist named Beka Solomon discovered an unlikely candidate for this elixir: a naturally occurring virus called a phage.

    Solomon, a professor at Tel Aviv University, made a serendipitous discovery one day while testing a new class of agents against Alzheimer’s disease. If it pans out, her discovery might mark the beginning of the end of Alzheimer’s, Parkinson’s, and many other neurodegenerative diseases. It’s a remarkable story, and the main character isn’t Solomon or any other scientist but a humble virus that scientists refer to as M13.

    Alzheimer’s disease can cause brain tissues to atrophy, seen here in blue. No image credit.

    Among the many varieties of viruses, there is a kind that infects only bacteria. Known as bacteriophages, or just phages, these viruses are ancient (over three billion years old) and ubiquitous: they’re found everywhere from the ocean floor to human stomachs. The phage M13 infects just one species of bacterium, Escherichia coli, or E. coli, which can be found in copious amounts in the intestines of mammals. Like other parasites, phages such as M13 have only one purpose: to pass on their genes. To do this, they have developed weapons that enable them to invade, take over, and even kill their bacterial hosts. Before the advent of antibiotics, in fact, doctors occasionally used phages to fight otherwise incurable bacterial infections.

    To understand Solomon’s interest in M13 requires a little background about her research. Solomon is a leading Alzheimer’s researcher, renowned for pioneering so-called immunotherapy treatments for the disease. Immunotherapy employs specially made antibodies, rather than small molecule drugs, to target the disease’s plaques and tangles. As high school students learn in biology class, antibodies are Y-shaped proteins that are part of the body’s natural defense against infection. These proteins are designed to latch onto invaders and hold them so that they can be destroyed by the immune system. But since the 1970s, molecular biologists have been able to genetically engineer human-made antibodies, fashioned to attack undesirable interlopers like cancer cells. In the 1990s, Solomon set out to prove that such engineered antibodies could be effective in attacking amyloid-beta plaques in Alzheimer’s as well.

    In 2004, she was running an experiment on a group of mice that had been genetically engineered to develop Alzheimer’s disease plaques in their brains. She wanted to see if human-made antibodies delivered through the animals’ nasal passages would penetrate the blood-brain barrier and dissolve the amyloid-beta plaques in their brains. Seeking a way to get more antibodies into the brain, she decided to attach them to M13 phages in the hope that the two acting in concert would better penetrate the blood-brain barrier, dissolve more of the plaques, and improve the symptoms in the mice—as measured by their ability to run mazes and perform similar tasks.

    Solomon divided the rodents into three groups. She gave the antibody to one group. The second group got the phage-antibody combination, which she hoped would have an enhanced effect in dissolving the plaques. And as a scientific control, the third group received the plain phage M13.

    Because M13 cannot infect any organism except E. coli, she expected that the control group of mice would get absolutely no benefit from the phage. But, surprisingly, the phage by itself proved highly effective at dissolving amyloid-beta plaques, and in laboratory tests it improved the cognition and sense of smell of the mice. She repeated the experiment again and again, and the same thing happened. “The mice showed very nice recovery of their cognitive function,” Solomon says. And when Solomon and her team examined the brains of the mice, the plaques had been largely dissolved. She ran the experiment for a year and found that the phage-treated mice had 80% fewer plaques than untreated ones. Solomon had no clear idea how a simple phage could dissolve Alzheimer’s plaques, but given even a remote chance that she had stumbled across something important, she decided to patent M13’s therapeutic properties for Tel Aviv University. According to her son Jonathan, she even “joked about launching a new company around the phage called NeuroPhage. But she wasn’t really serious about it.”

    The following year, Jonathan Solomon—who’d just completed more than a decade in Israel’s special forces, during which time he got a BS in physics and an MS in electrical engineering—traveled to Boston to enroll at the Harvard Business School. While he studied for his MBA, Jonathan kept thinking about the phage his mother had investigated and its potential to treat terrible diseases like Alzheimer’s. At Harvard, he met many brilliant would-be entrepreneurs, including the Swiss-educated Hampus Hillerstrom, who, after studying at the University of St. Gallen near Zurich, had worked for a European biotech venture capital firm called HealthCap.

    Following the first year of business school, both students won summer internships: Solomon at the medical device manufacturer Medtronic and Hillerstrom at the pharmaceutical giant AstraZeneca. But as Hillerstrom recalls, they returned to Harvard wanting more: “We had both spent…I would call them ‘weird summers’ in large companies, and we said to each other, ‘Well, we have to do something more dynamic and more interesting.’ ”

    In their second year of the MBA, Solomon and Hillerstrom took a class together in which students were tasked with creating a new company on paper. The class, Solomon says, “was called a field study, and the idea was you explore a technology or a new business idea by yourself while being mentored by a Harvard Business School professor. So, I raised the idea with Hampus of starting a new company around the M13 phage as a class project. At the end of that semester, we developed a mini business plan. And we got on so well that we decided that it was worth a shot to do this for real.”

    In 2007, with $150,000 in seed money contributed by family members, a new venture, NeuroPhage Pharmaceuticals, was born. After negotiating a license with Tel Aviv University to explore M13’s therapeutic properties, Solomon and Hillerstrom reached out to investors willing to bet on the phage’s potential. By January 2008, they had raised over $7 million and started hiring staff.

    Their first employee—NeuroPhage’s chief scientific officer—was Richard Fisher, a veteran of five biotech start-ups. Fisher recalls feeling unconvinced when he first heard about the miraculous phage. “But the way it’s been in my life is that it’s really all about the people, and so first I met Jonathan and Hampus and I really liked them. And I thought that within a year or so we could probably figure out if it was an artifact or whether there was something really to it, but I was extremely skeptical.”

    Fisher set out to repeat Beka Solomon’s mouse experiments and, with some difficulty, was able to show that the M13 phage dissolved amyloid-beta plaques when delivered through the rodents’ nasal passages. Over the next two years, Fisher and his colleagues then discovered something totally unexpected: the humble M13 virus could also dissolve other amyloid aggregates—the tau tangles found in Alzheimer’s as well as the aggregates of proteins associated with other diseases, including alpha-synuclein (Parkinson’s), huntingtin (Huntington’s disease), and superoxide dismutase (amyotrophic lateral sclerosis). The phage even worked against the amyloids in prion diseases (a class that includes Creutzfeldt-Jakob disease). Fisher and his colleagues demonstrated this first in test tubes and then in a series of animal experiments. Astonishingly, the simple M13 virus appeared in principle to possess the properties of a “pan therapy,” a universal elixir of the kind the chemist Chris Dobson had imagined.

    This phage’s unique capacity to attack multiple targets attracted new investors in a second round of financing in 2010. Solomon recalls feeling a mix of exuberance and doubt: “We had something interesting that attacks multiple targets, and that was exciting. On the other hand, we had no idea how the phage worked.”

    The Key

    That wasn’t their only problem. Their therapeutic product, a live virus, it turned out, was very difficult to manufacture. It was also not clear how sufficient quantities of viral particles could be delivered to human beings. The methods used in animal experiments—inhaled through the nose or injected directly into the brain—were unacceptable, so the best option available appeared to be a so-called intrathecal injection into the spinal canal. As Hillerstrom says, “It was similar to an epidural; this was the route we had decided to deliver our virus with.”

    While Solomon and Hillerstrom worried about finding an acceptable route of administration, Fisher spent long hours trying to figure out the phage’s underlying mechanism of action. “Why would a phage do this to amyloid fibers? And we really didn’t have a very good idea, except that under an electron microscope the phage looked a lot like an amyloid fiber; it had the same dimensions.”

    Boston is a town with enormous scientific resources. Less than a mile away from NeuroPhage’s offices was MIT, a world center of science and technology. In 2010, Fisher recruited Rajaraman Krishnan—an Indian postdoctoral student working in an MIT laboratory devoted to protein misfolding—to investigate the M13 puzzle. Krishnan says he was immediately intrigued. The young scientist set about developing some new biochemical tools to investigate how the virus worked and also devoured the scientific literature about phages. It turned out that scientists knew quite a lot about the lowly M13 phage. Virologists had even created libraries of mutant forms of M13. By running a series of experiments to test which mutants bound to the amyloid and which ones didn’t, Krishnan was able to figure out that the phage’s special abilities involved a set of proteins displayed on the tip of the virus, called GP3. “We tested the different variants for examples of phages with or without tip proteins, and we found that every time we messed around with the tip proteins, it lowered the phage’s ability to attach to amyloids,” Krishnan says.

    Virologists, it turned out, had also visualized the phage’s structure using X-ray crystallography and nuclear magnetic resonance imaging. Based on this analysis, those microbiologists had predicted that the phage’s normal mode of operation in nature was to deploy the tip proteins as molecular keys; the keys in effect enabled the parasite to “unlock” E. coli bacteria and inject its DNA. Sometime in 2011, Krishnan became convinced that the phage was doing something similar when it bound to toxic amyloid aggregates. The secret of the phage’s extraordinary powers, he surmised, lay entirely in the GP3 protein.

    As Fisher notes, this is serendipitous. Just by “sheer luck, M13’s keys not only unlock E. coli; they also work on clumps of misfolded proteins.” The odds of this happening by chance, Fisher says, are very small. “Viruses have exquisite specificity in their molecular mechanisms, because they’re competing with each other…and you need to have everything right, and the two locks need to work exactly the way they are designed. And this one way of getting into bacteria also works for binding to the amyloid plaques that cause many chronic diseases of our day.”

    Having proved the virus’s secret lay in a few proteins at the tip, Fisher, Krishnan, and their colleagues wondered if they could capture the phage’s amyloid-busting power in a more patient-friendly medicine that did not have to be delivered by epidural. So over the next two years, NeuroPhage’s scientists engineered a new antibody (a so-called fusion protein, because it is made up of genetic material from different sources) that displayed the critical GP3 protein on its surface so that, like the phage, it could dissolve amyloid plaques. Fisher hoped this novel manufactured product would stick to toxic aggregates just like the phage.

    By 2013, NeuroPhage’s researchers had tested the new compound, which they called NPT088, in test tubes and in animals, including nonhuman primates. It performed spectacularly, simultaneously targeting multiple misfolded proteins such as amyloid beta, tau, and alpha-synuclein at various stages of amyloid assembly. According to Fisher, NPT088 didn’t stick to normally folded individual proteins; it left normal alpha-synuclein alone. It stuck only to misfolded proteins, not just dissolving them directly, but also blocking their prion-like transmission from cell to cell: “It targets small aggregates, those oligomers, which some scientists consider to be toxic. And it targets amyloid fibers that form aggregates. But it doesn’t stick to normally folded individual proteins.” And as a bonus, it could be delivered by intravenous infusion.

    The Trials

    There was a buzz of excitement in the air when I visited NeuroPhage’s offices in Cambridge, Massachusetts, in the summer of 2014. The 18 staff, including Solomon, Hillerstrom, Fisher, and Krishnan, were hopeful that their new discovery, which they called the general amyloid interaction motif, or GAIM, platform, might change history. A decade after his mother had made her serendipitous discovery, Jonathan Solomon was finalizing a plan to get the product into the clinic. As Solomon says, “We now potentially have a drug that does everything that the phage could do, which can be delivered systemically and is easy to manufacture.”

    Will it work in humans? While NPT088, being made up of large molecules, is relatively poor at penetrating the blood-brain barrier, the medicine persists in the body for several weeks, and so Fisher estimates that over time enough gets into the brain to effectively take out plaques. The concept is that this antibody could be administered to patients once or twice a month by intravenous infusion for as long as necessary.

    NeuroPhage must now navigate the FDA’s regulatory system and demonstrate that its product is safe and effective. So far, NPT088 has proved safe in nonhuman primates. But the big test will be the phase 1A trial expected to be under way this year. This first human study is a single-dose trial looking for adverse effects in healthy volunteers. If all goes well, NeuroPhage will launch a phase 1B study involving some 50 patients with Alzheimer’s to demonstrate proof of the drug’s activity. Patients will have their brains imaged at the start to determine the amount of amyloid-beta and tau. Then, after taking the drug for six months, they will be reimaged to see if the drug has reduced the aggregates below the baseline.

    “If our drug works, we will see it working in this trial,” Hillerstrom says. “And then we may be able to go straight to phase 2 trials for both Alzheimer’s and Parkinson’s.” There is as yet no imaging test for alpha-synuclein, but because their drug simultaneously lowers amyloid-beta, tau, and alpha-synuclein levels in animals, a successful phase 1B test in Alzheimer’s may be acceptable to the FDA. “In mice, the same drug lowers amyloid beta, tau, and alpha-synuclein,” Hillerstrom says. “Therefore, we can say if we can reduce in humans the tau and amyloid-beta, then based on the animal data, we can expect to see a reduction in humans in alpha-synuclein as well.”

    Along the way, the company will have to prove its GAIM system is superior to the competition. Currently, there are several drug and biotech companies testing products in clinical trials for Alzheimer’s disease, against both amyloid-beta (Lilly, Pfizer, Novartis, and Genentech) and tau (TauRx) and also corporations with products against alpha-synuclein for Parkinson’s disease (AFFiRiS and Prothena/Roche). But Solomon and Hillerstrom think they have two advantages: multi-target flexibility (their product is the only one that can target multiple amyloids at once) and potency (they believe that NPT088 eliminates more toxic aggregates than their competitors’ products). Potency is a big issue. PET imaging has shown that existing Alzheimer’s drugs like crenezumab reduce amyloid loads only modestly, by around 10%. “One weakness of existing products,” Solomon says, “is that they tend to only prevent new aggregates. You need a product potent enough to dissolve existing aggregates as well. You need a potent product because there’s a lot of pathology in the brain and a relatively short space of time in which to treat it.”

    Future Targets

    NeuroPhage’s rise is an extraordinary example of scientific entrepreneurship. While I am rooting for Solomon, Hillerstrom, and their colleagues, and would be happy to volunteer for one of their trials (I was diagnosed with Parkinson’s in 2011), there are still many reasons why NeuroPhage has a challenging road ahead. Biotech is a brutally risky business. At the end of the day, NPT088 may prove unsafe. And it may still not be potent enough. Even if NPT088 significantly reduces amyloid beta, tau, and alpha-synuclein, it’s possible that this may not lead to measurable clinical benefits in human patients, as it has done in animal models.

    But if it works, then, according to Solomon, this medicine will indeed change the world: “A single compound that effectively treats Alzheimer’s and Parkinson’s could be a twenty billion-dollar-a-year blockbuster drug.” And in the future, a modified version might also work for Huntington’s, ALS, prion diseases like Creutzfeldt-Jakob disease, and more.

    I asked Jonathan about his mother, who launched this remarkable story in 2004. According to him, she has gone on to other things. “My mother, Beka Solomon, remains a true scientist. Having made the exciting scientific discovery, she was happy to leave the less interesting stuff—the engineering and marketing things for bringing it to the clinic—to us. She is off looking for the next big discovery.”

    See the full article here.

  • richardmitnick 10:58 am on March 19, 2016 Permalink | Reply
    Tags: NOVA

    From NOVA: “Why Quantize Gravity?” 



    24 Feb 2016
    Sabine Hossenfelder

    A good question is one that you know has an answer—if only you could find it. What’s her name again? Where are my keys? How do I quantize gravity? We’ve all been there.

    Science is the art of asking questions. Scientists often have questions to which they would like an answer, yet aren’t sure there is one. For instance, why is the mass of the proton 1.67 × 10⁻²⁷ kilograms? Maybe there is an answer—but then again, maybe the masses of elementary particles are what they are, without deeper explanation. And maybe the four known forces are independent of each other, not aspects of one unified “Theory of Everything.”

    Spacetime with Gravity Probe B. NASA

    The quest for a theory of quantum gravity is different. To remove contradictions in the known laws of nature, physicists need a theory that can resolve the clash between the laws of gravity and those of quantum mechanics. Gravity and quantum mechanics have been developed and confirmed separately in countless experiments over the last century, but when applied together they produce nonsense. A working theory of quantum gravity would resolve these contradictions by applying the rules of quantum mechanics to gravity, thereby endowing the gravitational field with the irreducible randomness and uncertainty characteristic of quantization. We know there must be a way: if only we could find it.

    Take the double-slit experiment: In quantum mechanics an electron is able to pass through two slits at once, creating a wave-like interference pattern on a detection screen. Yet the electron is neither a wave nor a particle. Instead, it is described by a wave-function—a mathematical in-between of particle and wave—that allows it to act like a particle in some respects and a wave in others. This way, the electron can exist in a quantum superposition: It can be in two different places at once and go through both the right and the left slit. It remains in a superposition until a measurement forces it to “decide” on one location. This behavior of the electron, unintuitive as it seems, has been tested and verified over and over again. Strange or not, we know it’s real.

    But what about the electron’s gravitational field? Electrons have mass, and mass creates a gravitational field. So if the electron goes through both the left and the right slit, its gravitational field should go through both slits, too. But in general relativity the gravitational field cannot do this: General relativity is no quantum theory, and the gravitational field cannot behave like a wave-function. Unlike the electron itself, the electron’s gravitational field must be either here or there, which means that electrons don’t always have their gravitational pull in the right place. We must conclude then that the existing theories just cannot describe what the gravitational field does when the electron goes through a double-slit. There has to be an answer to this, but what?

    At first theorists thought there would be a simple fix: Just modify general relativity to allow the gravitational field to be in two places at once. Physicists Bryce DeWitt and Richard Feynman [collaborated] on just such a theory in the 1960s, but they quickly realized that it worked only at small energies, whereas at high energies, when space-time becomes strongly curved, it produces nonsensical infinite results. This straightforward quantization, it turned out, is only an approximation to a more complete theory, one which should not suffer from the problem of infinities. It is this complete, still unknown, theory that physicists refer to as “quantum gravity.”

    These first attempts at quantization break down when the gravitational force becomes very strong, as happens when large amounts of energy are compressed into a small region of [spacetime]. Without a full theory of quantum gravity, then, physicists cannot understand what happens in the early universe or inside black holes.

    Indeed, the black hole information loss problem is another strong indication that we need a theory of quantum gravity. As Stephen Hawking demonstrated in 1974, quantum fluctuations of matter fields close to a black hole’s horizon lead to the production of particles, now called Hawking radiation, that make the black hole lose mass and shrink until nothing is left. Today, the amount of radiation leaking out of the black holes in the Milky Way and other galaxies is minuscule; they gain more mass from swallowing surrounding matter and gas than they can lose by Hawking radiation. But once the universe has cooled down sufficiently, which will inevitably happen, black holes will begin to evaporate. The process is almost inconceivably slow—a black hole of the Sun’s mass would need on the order of 10⁶⁷ years—but eventually they will be gone, leaving behind nothing but radiation.
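    For a sense of the timescales involved, the standard textbook formulas for a black hole’s Hawking temperature and evaporation time (not given in the article) can be evaluated for a black hole of one solar mass:

    ```python
    import math

    # Standard results: T_H = hbar c^3 / (8 pi G M k_B)
    #                   t_evap = 5120 pi G^2 M^3 / (hbar c^4)
    hbar = 1.0546e-34   # reduced Planck constant, J s
    c = 2.998e8         # speed of light, m/s
    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    k_B = 1.381e-23     # Boltzmann constant, J/K
    M_sun = 1.989e30    # solar mass, kg

    T_H = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
    t_evap_s = 5120 * math.pi * G**2 * M_sun**3 / (hbar * c**4)
    t_evap_yr = t_evap_s / 3.156e7  # seconds per year

    print(f"Hawking temperature of a solar-mass black hole: {T_H:.2e} K")
    print(f"evaporation time: {t_evap_yr:.2e} years")
    ```

    The temperature comes out around 10⁻⁸ K, far colder than today’s cosmic microwave background, which is why such black holes currently absorb more than they emit; the evaporation time is on the order of 10⁶⁷ years.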

    This radiation does not carry any information besides its temperature. All the information about what fell into the black hole is irretrievably destroyed during the evaporation. The problem? In quantum mechanics, all processes are reversible, at least in principle, and information about the initial state of any process can always be retrieved. The information might be very scrambled and unrecognizable, such as when you burn a book and are left with smoke and ashes, but in principle the remains still contain the book’s information. Not so for a black hole. A book that crosses the horizon is gone for good, in direct conflict with quantum mechanics, which demands that information always be conserved. The information loss problem is not a practical concern that affects observational predictions, but it is a deep conceptual worry about the soundness of our theories. It’s the kind of problem that keeps physicists up at night, and it shows once again that leaving gravity unquantized results in a conundrum that has to be resolved by quantum gravity.

    Black holes and the Big Bang pose another problem for unquantized gravity because they lead to singularities, locations in [spacetime] with a seemingly infinite energy density. Similar singularities appear in other theories too, and in these cases physicists understand that singularities signal the theories’ breakdown. The equations of fluid dynamics, for example, can have singularities. But these equations are no longer useful on distances below the size of atoms, where they must be corrected by a more fundamental theory. Physicists therefore interpret the singularities in general relativity as signs that the theory is no longer applicable and must be corrected.

    Many physicists believe that a theory of quantum gravity will also shed light on other puzzles, such as the nature of dark energy or the unification of the other three known forces, the strong, electromagnetic, and weak [interaction].

    Given that thousands of the brightest minds have tried their hand at it, 80 years seems a long time for a question to remain unanswered. But physicists are not giving up. They know there must be an answer—if only they could find it.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    NOVA is the highest rated science series on television and the most watched documentary series on public television. It is also one of television’s most acclaimed series, having won every major television award, most of them many times over.

  • richardmitnick 9:34 am on March 17, 2016 Permalink | Reply
    Tags: , , , NOVA   

    From NOVA: “The ‘Dark’ Universe May Be Full of Strange Interactions” 



    16 Mar 2016
    Charles Q. Choi

    Most of the universe may not be as constant as physicists think it is.

    Dark energy and dark matter—the two main ingredients of the universe, and two of the great mysteries of science—can turn into each other over time, according to cosmologist Elisa Ferreira at McGill University in Montreal and her colleagues. If further data supports this scenario, the discovery would controversially suggest that dark energy is not, as is largely thought now, an immutable force of nature—a cosmological constant [Λ] that evenly controls the expansion of the universe—but instead changes over time.

    Dark matter halo
    Dark matter halo Image credit: Virgo consortium / A. Amblard / ESA

    If true, it could upend our understanding of the universe itself. Ever since the Big Bang, the universe has been expanding, a fact that astronomers first discovered nearly a century ago when they found that galaxies were hurtling away from us. Scientists had initially assumed that the attractive force of gravity would slow the universe’s expansion over time, eventually either halting it or even collapsing everything back together in a Big Crunch. However, nearly two decades ago, researchers unexpectedly discovered that cosmic expansion was not slowing down at all; it was speeding up.

    Shedding Light on Dark Energy

    Scientists call the force driving this mysterious acceleration dark energy and suggest that it could make up roughly 70% of all matter and energy in the universe. In comparison, they estimate that matter makes up only about 30% of the universe. However, physicists don’t really have a clue as to what dark energy is—so much remains unknown about dark energy that some researchers wonder if it even exists.

    Dark matter, on the other hand, is thought to be an invisible material that makes up roughly five-sixths of all matter in the universe. This means that dark matter composes about 25% of the universe, while ordinary matter only makes up about 5%. Currently, the consensus among scientists is that dark matter is made of unknown particles that lie outside the Standard Model, which is the best description we have to date of how subatomic particles behave.
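    The quoted percentages fit together as a simple budget check (rounded figures from the article, not precision cosmological fits):

```python
# If matter is ~30% of the universe's total energy budget and dark
# matter makes up five-sixths of all matter, the remaining fractions
# follow directly. Values are the article's rounded figures.
matter_fraction = 0.30
dark_matter = matter_fraction * 5 / 6      # five-sixths of all matter
ordinary_matter = matter_fraction * 1 / 6  # the remaining one-sixth
dark_energy = 1.0 - matter_fraction        # everything that isn't matter
print(dark_matter, ordinary_matter, dark_energy)  # ~0.25, ~0.05, ~0.70
```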

    Standard model with Higgs New
    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    The new research from Ferreira and her colleagues suggesting a link between dark energy and dark matter relies on data from the [LBL]Baryon Oscillation Spectroscopic Survey (BOSS), which was designed to map the history of the universe’s expansion. Baryons are particles such as protons and neutrons, the building blocks of atoms, and astronomers use the term as shorthand for the ordinary matter that makes up stars, planets, and people. When the universe was very young and small, all of its matter was very hot and densely packed together, and sound waves zipping through it led to ripples of density, causing material to clump together in spots. These initial baryon acoustic oscillations are visible today as clusters of galaxies.

    As the largest program in the third Sloan Digital Sky Survey (SDSS-III)—which seeks to create one of the most detailed maps of objects in the sky—BOSS has mapped the positions of roughly 1.5 million galaxies using the Sloan Foundation Telescope at the Apache Point Observatory in New Mexico. By pinpointing the locations of galaxies of different ages, scientists can deduce the rate at which the universe has expanded over time. This in turn helps explain the effects of dark energy over the course of cosmic history.
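    The chain of inference runs through the distance–redshift relation: in a homogeneous cosmology, the comoving distance to an object at redshift z is an integral of c/H(z'), so mapping where galaxies of different redshifts sit constrains the expansion history H(z). A minimal sketch for a flat Lambda-CDM model, with illustrative parameter values rather than BOSS’s fitted ones:

```python
import math

H0 = 70.0            # Hubble constant in km/s/Mpc (illustrative value)
OMEGA_M = 0.3        # matter fraction today (illustrative value)
OMEGA_L = 0.7        # dark-energy fraction (flat universe assumed)
C_KM_S = 299792.458  # speed of light in km/s

def hubble(z):
    """Expansion rate H(z) for flat Lambda-CDM with constant dark energy."""
    return H0 * math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def comoving_distance(z, steps=10000):
    """Midpoint-rule integral of c/H(z') from 0 to z, in megaparsecs."""
    dz = z / steps
    return sum(C_KM_S / hubble((i + 0.5) * dz) * dz for i in range(steps))

# A typical BOSS galaxy redshift is around z ~ 0.5-0.6; this model puts
# such a galaxy roughly two gigaparsecs away (comoving).
print(f"{comoving_distance(0.57):.0f} Mpc")
```

    Comparing distances like these against the fixed clustering scale imprinted by baryon acoustic oscillations is, in outline, how surveys constrain what dark energy has done over cosmic history.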

    Sloan Digital Sky Survey Telescope
    Sloan Digital Sky Survey Telescope at Apache Point, NM, USA

    Beyond a distance of about 6 billion light years, galaxies become fainter and more difficult to see.

    Universe map  2MASS Extended Source Catalog XSC
    Universe map 2MASS Extended Source Catalog XSC

    Caltech 2MASS telescope interior
    Caltech 2MASS Telescope

    To map baryon acoustic oscillations beyond this distance, BOSS relied on roughly 160,000 quasars, the brightest objects in the universe, whose light probably comes from supermassive black holes feeding on surrounding matter.

    As light from distant quasars passes through hydrogen gas in the interstellar and intergalactic void, pockets of greater density absorb more light. The absorption lines that hydrogen leaves behind in the spectrum of light from these quasars are known as Lyman-alpha lines, which are so numerous that they resemble a forest. Astronomers can use this so-called “Lyman-alpha forest” to determine the locations of these pockets of hydrogen gas and help create a 3D map of the universe.

    Lyman-Alpha Forest U Pitt
    Lyman-Alpha Forest U Pitt

    Ferreira and her colleagues say there are hints in BOSS’s data that undermine the current leading theory of how dark energy and dark matter should have shaped the universe’s evolution, technically known as Lambda-CDM. One potential explanation is that dark energy possesses a bizarre quality known as negative energy density, in which dark energy stores more and more energy as it stretches out. However, Ferreira notes that both normal matter and dark matter, the other major components of the universe, have positive energy density, so it would be strange if dark energy did not.

    Instead, the researchers suggest a simpler model to explain these BOSS findings—that dark energy and dark matter are linked. “The possibility of interaction between the two largest components of the universe is allowed and even favorable in some contexts in physics,” Ferreira says. “Interacting dark energy models, in which the dark components interact with each other and their evolution is linked, are possible and should be considered.”

    Specifically, this new model suggests that dark energy changes over time, decaying to become dark matter. “There is less dark energy in the past than we have today,” Ferreira says. In contrast, the current leading model of dark energy and dark matter holds that dark energy is a cosmological constant, meaning its strength has remained the same over time, just like other constants such as the speed of light.
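    Schematically, interacting dark-energy models modify the standard energy-conservation (continuity) equations by a coupling term Q that transfers energy between the two dark components. This is a generic sketch of that class of models, not the specific equations of Ferreira’s paper:

```latex
\dot{\rho}_{\mathrm{dm}} + 3H\rho_{\mathrm{dm}} = +Q, \qquad
\dot{\rho}_{\mathrm{de}} + 3H(1+w)\rho_{\mathrm{de}} = -Q
```

    Here H is the Hubble expansion rate and w the dark-energy equation of state. Setting Q = 0 recovers the uncoupled Lambda-CDM behavior, while a nonzero Q links the evolution of the two components, with its sign setting the direction of the energy transfer.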

    Einstein’s ‘Biggest Blunder’

    Albert Einstein first proposed the existence of a cosmological constant in his equations for general relativity to support the accepted view at the time that the universe was static—while gravity would cause the universe to contract, a cosmological constant would push the universe apart and keep such a collapse from occurring. However, after [Edwin] Hubble discovered that the universe was expanding and not static, physicist George Gamow said that Einstein called the cosmological constant his “biggest blunder.” Still, interest in the concept revived when evidence began accumulating that dark energy might behave like a cosmological constant.

    Ferreira notes that their interacting dark energy model is consistent with past data from the Wilkinson Microwave Anisotropy Probe (WMAP) launched in 2001 and the Planck satellite launched in 2009. These two spacecraft analyzed the cosmic microwave background [CMB], the heat left over from the Big Bang, to provide some of the most accurate measurements yet of a number of key cosmological parameters, such as what the universe is made of. “The consistency of this result with other observations, like Planck, WMAP, and others is very encouraging to me,” she says.

    NASA WMAP satellite

    Cosmic Microwave Background WMAP
    CMB per WMAP

    ESA Planck

    Cosmic Background Radiation Planck
    CMB per Planck

    Hydrogen gas in the intergalactic void, seen here scattered throughout this portrait of the Milky Way by the Planck Observatory, helps physicists map the universe in 3D.

    “I would say that the evidence now is overwhelming that dark energy cannot be a cosmological constant,” says theoretical astrophysicist Fulvio Melia at the University of Arizona at Tucson, who did not participate in this study.

    However, the notion that dark energy is not a cosmological constant is a controversial one, and many disagree with it. “I think overall that the BOSS data is very nicely consistent with a cosmological constant model,” says cosmologist Daniel Eisenstein at Harvard University, director of SDSS-III. “We do find mild discrepancies, but I don’t see them as being statistically significant enough to argue against a cosmological constant.”

    If dark energy is not a cosmological constant, “it’s a very important discovery,” Eisenstein says. “It would mean that whatever is causing the large-scale acceleration of the expansion is actually changing on cosmologically measurable time scales. That could signal a breakdown in general relativity or the presence of a very pervasive low-energy component of the universe that is still evolving.”

    In addition, if dark energy’s properties do change over time, that means that dark energy has to have particle-like characteristics, Melia says. “Dark energy would presumably be made up of particles like electrons or quarks or the Higgs,” Melia says, though he points out that dark energy particles, if they exist, would definitely lie beyond the Standard Model. Such dark energy particles might interact with other particles in ways besides gravity, perhaps through as-yet unknown forces, he adds.

    Melia says that while the idea that dark energy is not a cosmological constant is controversial, the converse—that dark energy is a cosmological constant—is “very problematic. Its measured value is some 10^120 times different from what it should be in the context of quantum mechanics. That’s 120 orders of magnitude. So it would not disappoint anyone if it turns out that there is no cosmological constant and that dark energy is instead something more understandable in the context of particle physics.”

    Ferreira stresses that their conclusions are not definitive—the BOSS data that suggests a departure from the current leading model of dark energy and dark matter falls short of the five-sigma level of confidence that physicists often rely on to confirm a result. Plus, while the paper has been submitted to the journal Physical Review D, it has not yet been accepted. “We need other experiments to confirm this effect, and also, we need a better precision in the measurements so that we can say more precisely that such deviations from Lambda-CDM are real,” Ferreira says. Future models and data could also examine the strength and duration of these interactions, she says.

    Ferreira notes there are now many projects investigating dark energy that could support or refute their interacting dark energy model, such as the Dark Energy Survey operating in Chile, the Javalambre Physics of the Accelerating Universe Survey operating in Spain, and the proposed BINGO telescope in Uruguay.

    Dark Energy Icon
    Dark Energy Camera
    CTIO Victor M Blanco 4m Telescope
    Dark Energy Survey, DECam built at FNAL, and the NOAO/CTIO/Victor M Blanco 4 meter telescope in Chile which houses the DECam

    “I think that the prospects of this area of research for the future are very good and promising,” Ferreira says.

    While Eisenstein says that current astronomical observations suggest that dark energy is in fact a cosmological constant, he nevertheless supports research to rigorously test that idea.

    “It’s very reasonable to keep exploring all the possibilities about the dark sector,” Eisenstein says. “We’re all very interested in testing the very simple model of the cosmological constant at higher precision. It’d be great if we could find something to disprove it.”

    See the full article here.


  • richardmitnick 8:03 pm on March 16, 2016 Permalink | Reply
    Tags: , , , , NOVA   

    From NOVA: “Are Black Holes Real?” This is a MUST READ 



    10 Mar 2016
    Kate Becker

    Not so long ago, black holes were like unicorns: fantastical creatures that flourished on paper, not in life. Today, there is wide scientific consensus that black holes are real. Even though they can’t be observed directly—by definition, they give off no light—astronomers can infer their hidden presence by watching how stars, gas, and dust swirl and glow around them.

    But what if they’re wrong? Could something else—massive, dense, all-but-invisible—be concealed in the darkness?

    While black holes have gone mainstream, a handful of researchers are investigating exotic ultra-compact stars that, they argue, would look exactly like black holes from afar. Well, almost exactly. Though their ideas have been around for many years, researchers are now putting them to the most stringent tests ever, looking to show once and for all that what looks and quacks like a black hole really is a black hole. And if not? Well, it could just spark the next revolution in physics.

    The game-changer is a new experiment called the Event Horizon Telescope (EHT).

    Event Horizon Telescope map
    EHT map

    The EHT is a network of telescopes that are sensitive to radio waves about a millimeter long and linked together using a technique called very long baseline interferometry. Baseline refers to the distance between the networked telescopes: the longer the distance, the finer the details the telescope can pick out. It’s impossible—or at least impractical—to build a single telescope as big as planet Earth, but astronomers can achieve the same “zoom” factor by linking telescopes on opposite continents. Just like that, the universe goes from standard-definition to HD: a switch powerful enough to tell a black hole from an exotic imposter.
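    The baseline argument can be made concrete with the diffraction limit, θ ≈ λ/D: resolution improves as the wavelength shrinks and the baseline grows. A back-of-envelope estimate using Earth’s diameter as the baseline and the EHT’s roughly 1.3 mm observing wavelength (round numbers, not the exact EHT configuration):

```python
import math

WAVELENGTH = 1.3e-3   # observing wavelength in meters (~1.3 mm)
BASELINE = 1.2742e7   # Earth's diameter in meters

theta_rad = WAVELENGTH / BASELINE                 # diffraction-limited resolution
theta_uas = math.degrees(theta_rad) * 3600 * 1e6  # to microarcseconds
print(f"{theta_uas:.0f} microarcseconds")
```

    That works out to roughly 20 microarcseconds, comparable to the predicted angular size of the shadow of the Milky Way’s central black hole, which is why nothing short of an Earth-sized baseline will do.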

    Meanwhile, scientists have directly detected gravitational waves for the first time using the Laser Interferometer Gravitational-Wave Observatory, also known as LIGO.

    MIT Caltech  Advanced aLIGO Hanford Washington USA installation
    MIT Caltech Advanced aLIGO, Hanford, Washington, USA installation

    Gravitational waves—ripples in the fabric of space-time that [Albert] Einstein predicted should radiate out from the site of any gravitational disturbance—represent an entirely new way to see the cosmos, and with enough data, they could finally confirm—or contradict—the existence of black holes.

    Black Hole Anatomy

    On its own, a black hole looks like nothing: black-on-black, indistinguishable from the empty space that surrounds it. But supermassive black holes, which are believed to sit at the core of almost every galaxy in the universe, are surrounded by stars and other galactic detritus that accumulates around the edge like soap suds circling the bathtub drain. By studying those “suds,” astronomers can answer questions about the central black hole.

    The best-studied black hole candidate in the universe is the one called Sagittarius A* [Sag A*], which lives at the center of our very own Milky Way galaxy.

    Sag A prime
    Sag A*

    By tracking the orbits of stars circling around Sagittarius A*, astronomers have deduced that Sagittarius A* packs some 4 million times the mass of the Sun into a region of space much smaller than the solar system. Their conclusion: it could only be a supermassive black hole.
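    That compactness argument can be checked against the Schwarzschild radius, r_s = 2GM/c^2, the size a 4-million-solar-mass object would have to fit inside to possess an event horizon. A quick estimate with standard rounded constants:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m

mass = 4.0e6 * M_SUN           # Sagittarius A*'s estimated mass
r_s = 2.0 * G * mass / C ** 2  # Schwarzschild radius
print(f"{r_s:.2e} m, about {r_s / AU:.2f} AU")
```

    About 0.08 AU, vastly smaller than the solar system, so packing that much mass into the observed region is at least consistent with a horizon-scale object.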

    To confirm that suspicion, astronomers would like to see right up to the edge of the black hole—the event horizon, a sort of line in the sand that separates the “inside” of the black hole from the “outside” and beyond which nothing can escape. From the perspective of a telescope on Earth, the event horizon should look like a dark shadow surrounded by a bright ring of light. The exact shape of this ring and shadow are predicted by the equations of general relativity, plus the properties of the black hole and its surroundings.

    An Earth-Sized Telescope

    That’s where the EHT comes in. Since the EHT first started taking data, it has been building its telescope roster, and with each new member, it gets closer to making the first true image of a black hole shadow.

    Arizona Radio Observatory/Submillimeter-wave Astronomy (ARO/SMT)
    Arizona Radio Observatory

    Atacama Pathfinder EXperiment (APEX)


    Atacama Submillimeter Telescope Experiment (ASTE)

    Atacama Submillimeter Telescope Experiment (ASTE)

    Combined Array for Research in Millimeter-wave Astronomy (CARMA)

    CARMA Array

    Caltech Submillimeter Observatory (CSO)

    Caltech Submillimeter Observatory

    Institut de Radioastronomie Millimetrique (IRAM) 30m

    IRAM 30m Radio telescope

    James Clerk Maxwell Telescope (JCMT)

    East Asia Observatory James Clerk Maxwell telescope

    The Large Millimeter Telescope (LMT) Alfonso Serrano

    Large Millimeter Telescope Alfonso Serrano

    The Submillimeter Array (SMA)

    CfA Submillimeter Array Hawaii SAO

    Future Array/Telescopes

    Atacama Large Millimeter/submillimeter Array (ALMA)

    ALMA Array

    Plateau de Bure interferometer

    The EHT is like an all-star team of telescopes: Most days, its millimeter-wave dishes run their own experiments independently, but for one or two weeks a year, they team up to become the EHT, taking new data and running tests during the brief window when astronomers can expect clear weather at sites from Hawaii to Europe to the South Pole.

    “It sounds too good to be true that you just drop telescopes around the world and ‘poof!’ you have an Earth-sized telescope,” says Avery Broderick, a theoretical astrophysicist at University of Waterloo and the Perimeter Institute. And in a way, it is. The EHT doesn’t make pictures. Instead, it turns out a kind of mathematical cipher called a Fourier transform, which is like the graphic equalizer on your stereo: it divvies up the incoming signal, whether it’s an image of space or a piece of music, into the different frequencies that make it up and tells you how much power is stored in each frequency. So far, the EHT has only given astronomers a look at a few scattered pixels of the Fourier transform. When they compare those pixels to what they expect to see in the case of a true black hole, they find a good match. But the job is like trying to figure out whether you’re listening to Beethoven or the Beastie Boys based only on a few slivers of the graphic equalizer curve.
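    The sparse-sampling point can be illustrated with a toy discrete Fourier transform: two different one-dimensional “images” can agree exactly at a handful of sampled frequencies, just as two different songs can match on a few equalizer bands. This is purely illustrative pure-Python, not EHT analysis code:

```python
import cmath

def dft_coeff(signal, k):
    """The k-th coefficient of the discrete Fourier transform."""
    n = len(signal)
    return sum(signal[j] * cmath.exp(-2j * cmath.pi * k * j / n)
               for j in range(n))

n = 8
image_a = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
# image_b adds a pure high-frequency ripple (k = 4) on top of image_a,
# which is invisible at the low frequencies we happen to sample
image_b = [image_a[j] + 0.5 * (-1) ** j for j in range(n)]

for k in (0, 1):  # the only "baselines" our toy interferometer measured
    assert abs(dft_coeff(image_a, k) - dft_coeff(image_b, k)) < 1e-9
print("the two images match at every sampled Fourier pixel")
```

    With more sampled frequencies (more baselines, as ALMA provides), the degeneracy between the two images disappears, which is the leap from model fitting to imaging.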

    Now, the EHT is about to add a superstar player: the [ESO/NRAO/NAOJ]Atacama Large Millimeter Array, a telescope made up of 66 high-precision dishes sited 16,000 feet above sea level in Chile’s clear, dry Atacama desert. With ALMA on board, the EHT will finally be able to make the leap from fitting models to seeing a complete picture of the black hole’s shadow. EHT astronomers are now rounding up time at all of the telescopes so that they can take new data and assemble that first coveted image in 2017.

    And if they don’t see what they expect? It could mean that the black hole isn’t really a black hole at all.

    That would come as a relief to many theorists. Black holes are mothers of cosmic paradox, keeping physicists up at night with the puzzles they present: Do black holes really destroy information? Do they really contain infinitely dense points called singularities? Black holes are also the battlefield on which general relativity and quantum mechanics clash most dramatically. If it turns out that they don’t actually exist, some physicists might sleep a little better.

    But if they’re not black holes, then what could they be? One possibility is that they are dark stars made up of bosons, subatomic particles that, unlike more familiar electrons and protons, obey strange rules that allow more than one of them to be in the same place at the same time. Boson stars are highly speculative—astronomers have never seen one, as far as they know—but theorists like Vitor Cardoso, a professor of physics at Técnico in Lisbon and a distinguished visiting researcher at Sapienza University of Rome, hypothesize that some or all of the objects we think are supermassive black holes could actually be boson stars in disguise.

    Physicists classify particles into two different categories: fermions, which include protons, electrons, neutrons, and their components; and bosons, like photons (light particles), gluons, and Higgs particles. Every star that we’ve ever seen shining is dominated by fermions. But, Cardoso says, given a starting environment rich in bosons, bosons could “clump” together gravitationally to form stars, just as fermions do. The early universe might have had a high enough density of bosons for boson stars to form.

    But not every boson is a suitable building block for a boson star. Gravity won’t hold together a clump of massless photons, for instance. Higgs particles are massive enough to be bound together by gravity, but they aren’t stable—they exist for only a tiny fraction of a second before decaying away. Theorists have speculated about ways to stabilize Higgs particles, but Cardoso is more intrigued by the prospect that other, yet-undiscovered heavy bosons, like axions, could make up boson stars. In fact, some physicists hypothesize that massive bosons like these could be responsible for dark matter—meaning that boson stars wouldn’t just be a solution to the riddle of black holes; they could also tell us what, exactly, dark matter is.


    Boson stars aren’t the only black hole doppelgänger that theorists have dreamed up. In 2001, researchers proposed an even more speculative oddity called a gravastar. In the gravastar model, as a would-be black hole collapses under its own weight, extreme gravity combines with quantum fluctuations that are constantly jiggling through space to create a bubble of exotic spacetime that halts the cave-in.

    Theorists don’t really know what’s inside that bubble, which is both good and bad news for gravastars: Good news because it gives theorists the flexibility to revise the model as new observations come in, bad news because scientists are rightly skeptical of any model that can be patched up to match the data.

    When the data does come in, physicists have a checklist of sorts that should help them know which of the three—black hole, boson star, or gravastar—they’re looking at. A gravastar should have a bright surface that’s distinguishable from the glowing ring predicted to loop around a black hole. Meanwhile, if the object at the center of the Milky Way is actually a boson star, Cardoso predicts, it will look more like a “normal” star. “Black holes are black all the way through,” Cardoso says. “If really the object is a boson star, then the luminous material can in principle pile up at its center. A bright spot should be detected right at the center of the object.”

    A New View

    Most physicists have placed their bets on Sagittarius A* and other candidates being black holes, though. Boson stars and gravastars already have a few strikes against them. First, when it comes to scientific credibility, black holes have a major head start. Astronomers have a solid understanding of the process by which black holes form and have direct evidence that other ultra-dense objects, like white dwarfs and neutron stars, which could merge to form black holes, really do exist. The alternatives are more speculative on every count.

    Furthermore, Broderick says, astronomers have looked for the telltale signature of boson stars and gravastars at the center of the Milky Way—and haven’t found it. “The stuff raining down on the object will give up all its kinetic energy—all the gravitational binding energy tied up in the kinetic energy of its fall—resulting in a thermal bump in the spectrum,” Broderick says—that is, a signature spike in infrared emission. In 2009, astrophysicists reported that they had found no such bump coming from Sagittarius A*, and in 2015, they announced that it was missing from the nearby massive galaxy [Messier]87, too.

    Cardoso doesn’t see this as a death-knell for the boson star model, though. “The field that makes up the boson star hardly interacts with matter,” he says. To ordinary matter, the surface of a boson star would feel like frothed milk. “We do not yet have a complete model of how these objects accrete luminous matter,” Cardoso says, “so I think that it’s fair to say that this is still an open question.” He is less optimistic about gravastars, which he describes as “artificial constructs” that are likely ruled out by the latest observations.

    As the LIGO experiment gathers more data, theorists will get more opportunities to test their exotic hypotheses with gravitational waves. As two massive objects—say, a supermassive black hole and a star—spiral toward each other on the way toward a collision, gravitational waves carry away the energy of their motion. If one member of the spiraling pair is a black hole, the gravitational wave signal will cut off abruptly as the star passes through the black hole’s event horizon. “It gives rise to a very characteristic ringdown in the final stages of the inspiral,” Cardoso says. Because the alternative models have no such horizon, the gravitational wave signal would keep on reverberating.
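    The “characteristic ringdown” is, generically, a damped sinusoid: the signal oscillates at a frequency set by the final object while its amplitude decays exponentially. A toy waveform with arbitrary illustration values (not LIGO or Sagittarius A* numbers):

```python
import math

def ringdown(t, amp=1.0, freq=250.0, tau=0.004):
    """Damped sinusoid h(t) = A exp(-t/tau) sin(2 pi f t), t in seconds."""
    return amp * math.exp(-t / tau) * math.sin(2.0 * math.pi * freq * t)

# Sample the first 10 ms, then a window starting 20 ms in: after five
# damping times the oscillation has all but vanished.
early = max(abs(ringdown(i * 1e-4)) for i in range(100))
late = max(abs(ringdown(0.02 + i * 1e-4)) for i in range(100))
print(f"early peak {early:.3f}, late peak {late:.5f}")
```

    A horizon makes this decay fast and clean; a horizonless object such as a boson star would instead keep reverberating, which is the observable difference Cardoso describes.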

    Most astronomers believe that the waves LIGO detected were given off by the collision of two black holes, but Cardoso thinks that boson stars shouldn’t be ruled out just yet. “The data is, in principle, compatible with the two colliding objects being each a boson star,” he says. The end result, though, is probably a black hole “because it rings down very fast.”

    LIGO is not designed to pick up signals at the frequency at which supermassive objects like Sagittarius A* are expected to “ring.” (LIGO is tuned to recognize gravitational waves from smaller black holes and dense stars like neutron stars.) But supermassive black holes and boson stars are in the sweet spot for the planned space-based gravitational wave telescope ESA/LISA (the Evolved Laser Interferometer Space Antenna), slated for launch in 2034.

    ESA LISA Pathfinder

    “To confirm or rule out boson stars entirely, we need ‘louder’ observations,” Cardoso says. “EHT or eLISA are probably our best bet.”

    Taking the Pulse

    In the meantime, astronomers could measure waves from these extremely massive objects by precisely clocking the arrival times of radio pulses from a special class of dead stars called pulsars. If astronomers spot pulses arriving systematically off-beat, that could be a sign that the space they’ve been traveling across is being stretched and squeezed by gravitational waves. Three collaborations—NANOGrav in North America, the European Pulsar Timing Array, and the Parkes Pulsar Timing Array in Australia—are already scanning for these signals using radio telescopes scattered around the globe.

    To Broderick, though, the big question isn’t which model will win out, it’s whether these new experiments can find a flaw in general relativity. “For 100 years, general relativity has been enormously successful, and there’s no hint of where it breaks,” he says. Yet general relativity and quantum mechanics, which appears equally shatterproof, are fundamentally incompatible. Somewhere, one or both must break down. But where? Boson stars and gravastars might not be the answer. Still, exploring these exotic possibilities forces physicists to ask the questions that might lead them to something even more profound.

    “We expect that general relativity will pass the EHT’s tests with flying colors,” Broderick says. “But the great hope is that it won’t, that we’ll finally find the loose thread to pull on that will unravel the next great revolution in physics.”

    See the full article here.


  • richardmitnick 8:39 am on March 12, 2016 Permalink | Reply
    Tags: , , BOSS Great Wall Supercluster, NOVA,   

    From NOVA: “BOSS Supercluster Is So Big It Could Rewrite Cosmological Theory” 



    11 Mar 2016
    Conor Gearin

    BOSS Supercluster Baryon Oscillation Spectroscopic Survey (BOSS)
    BOSS Supercluster

    Astronomers just observed the biggest collection of star stuff that we’ve seen so far.

    An international team of scientists described a huge wall of galaxies in a little-explored part of the cosmos. It’s over a billion light years long, bristling with 830 galaxies. They have dubbed it the BOSS Great Wall, after the BOSS survey that spotted it.

    At the largest scale, matter in the universe forms long threads and dense clusters—like a net with big knots. Between the threads are voids drained of almost all matter. In 2014, astronomers learned that our Milky Way galaxy is just one of many in the Laniakea supercluster, which is a web of 100,000 galaxies.

    Laniakea supercluster no image credit
    Laniakea supercluster. No image credit

    But superclusters can stick together and form even bigger structures. The BOSS Great Wall is a tight network of four superclusters. The largest two form a stretched-out wall of galaxies that’s about 1.2 billion light years long. This is one of only a few supercluster systems ever found. Only one other system, the Sloan Great Wall, comes close in size, but not quite close enough—BOSS has over twice as many galaxies and is 170% wider than Sloan.

    Sloan Great Wall SDSS
    Sloan Great Wall SDSS

    “It looks like we have a structure that is bigger than anything else: like two Sloan Great Wall scale structures right next to each other,” said Heidi Lietzen of the Institute of Astrophysics at the University of La Laguna in Spain, who was the lead author of the new study. “The question now is: is it too big for our cosmological theories?”

    Scientists are still figuring out what shapes supercluster systems like this one can take, said Elmo Tempel, an astronomer at the Tartu Observatory in Estonia and a co-author on the study. Since they’ve only found a few systems of this scale, astrophysicists aren’t sure whether such systems always form wall-like structures or whether the ones they’ve seen are special cases. The next step is to run simulations of the shapes that superstructures this massive tend to form, Tempel said.

    Superclusters have their origins in pools of dark matter that formed early in the universe’s history, said Brent Tully of the Institute for Astronomy at the University of Hawaii. Normal matter flows towards the wells of dark matter, giving the universe its web-like structure.

    While Tully agreed that the BOSS Great Wall is indeed the biggest structure in the universe we’ve found so far, he doesn’t think it will change our theories of how the cosmos gets its shape. (There is another contender for largest structure in the universe, but instead of being made of something, it’s made of nothing.)

    “It is not surprising that if we look at a bigger patch of the universe we find something bigger,” Tully said. “But not so much bigger that it disrupts the generally held view of structure formation.”

    What’s more, Tully said that the BOSS Great Wall won’t be the last word on giant superstructures—there’s plenty of universe left to explore. “Look in a new place and you’ll find something new,” Tully said.

    Astronomers already know where to look next. Lietzen explained that the survey data the team used to figure out the large-scale structure of objects only covered a quarter of the night sky. “There could very well be another equally big system of superclusters somewhere in the Southern sky, for example,” Lietzen said.

    *Sloan refers to the Sloan Digital Sky Survey, SDSS, using the SDSS telescope at Apache Point, NM, USA

    **Baryon Oscillation Spectroscopic Survey also ran on the SDSS telescope.

    SDSS telescope at Apache Point, NM, USA

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition


  • richardmitnick 5:39 pm on January 13, 2016 Permalink | Reply
    Tags: Cosmic inflation and contraction theory, NOVA

    From NOVA: “Do We Live in an Anamorphic Universe?” 



    12 Jan 2016
    Anna Ijjas
    Paul Steinhardt

    Anamorphic is a term often used in art or film for images that can be interpreted two ways, depending on your vantage point. Önarckép Albert Einsteinnel/Self portrait with Albert Einstein, Copyright Istvan Orosz

    A century ago, we knew virtually nothing about the large-scale structure of the universe, not even that there exist galaxies beyond our Milky Way. Today, cosmologists have the tools to image the universe as it is now and as it was in the past, stretching all the way back to its infancy, when the first atoms were forming. These images reveal that the complex universe we see today, full of galaxies, black holes, planets, and dust, emerged from a remarkably featureless universe: a uniform hot soup of elemental constituents immersed in a space that exhibits no curvature. (1)

    How did the universe evolve from this featureless soup to the finely detailed hierarchy of stars, galaxies, and galaxy clusters we see today? A closer look reveals that the primordial soup was not precisely uniform. Exquisitely sensitive detectors, such as those aboard the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck satellites, produced a map that shows the soup had a distribution of hot and cold spots arranged in a pattern with particular statistical properties.

    NASA WMAP satellite

    ESA Planck

    For example, if one only considers spots of a certain size and measures the distribution of temperatures for those spots only, it turns out the distribution has two notable properties: it is nearly a bell curve (“Gaussian”) and it is nearly the same for any size (“scale-invariant”). Thanks to high-resolution computer simulations, we can reproduce the story of how the hot and cold spots evolved into the structure we see today. But we are still struggling to understand how the universe came to be flat and uniform and where the tiny but critical hot and cold spots came from in the first place.
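    A toy simulation makes these two properties concrete (a sketch only: the 70-microkelvin spread and the spot sizes below are illustrative numbers, not measured CMB values). Fluctuations drawn for several spot sizes each follow a bell curve, and every size class has roughly the same spread:

    ```python
    import random
    import statistics

    random.seed(0)

    # Toy model: temperature fluctuations (in microkelvin) for spots of three
    # different angular sizes. "Gaussian" means each size class follows a bell
    # curve; "scale-invariant" means each class has roughly the same spread.
    spot_sizes_deg = [0.5, 1.0, 2.0]
    fluctuations = {size: [random.gauss(0.0, 70.0) for _ in range(100_000)]
                    for size in spot_sizes_deg}

    for size, dT in fluctuations.items():
        mean = statistics.fmean(dT)
        std = statistics.pstdev(dT)
        # Sample skewness: close to 0 for a symmetric bell curve.
        skew = statistics.fmean(((x - mean) / std) ** 3 for x in dT)
        print(f"{size:.1f}-degree spots: std = {std:5.1f} uK, skew = {skew:+.3f}")
    ```

    In the real analysis the test is run on sky maps rather than synthetic draws, but the statistics checked are the same: a near-zero skewness (Gaussianity) and a spread that barely changes with spot size (scale invariance).
    
    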

    Cosmic microwave background, per Planck

    Looking Beyond Inflation

    One leading idea is that, right after the big bang, a period of rapid expansion known as inflation set in, smoothing and flattening the observable universe.

    Credit: NASA/WMAP Science Team

    However, there are serious flaws with inflation. It requires adding special forms of energy to the simple big bang picture, arranged in a very particular way for inflation to start, so the big bang is very unlikely to trigger a period of inflation. And even if inflation were to start, it would amplify quantum fluctuations into large volumes of space, resulting in a wildly varying multiverse made up of regions that are generally neither smooth nor flat. Although inflation was originally thought to give firm predictions about the structure of our universe, the discovery of the multiverse effect renders the theory unpredictive: literally any outcome, any kind of universe, is possible.

    Another leading approach, known as the ekpyrotic picture, proposes that the smoothing and flattening of the universe occurs during a period of slow contraction. This may seem counterintuitive at first. To understand how this could work, imagine a film showing the original big bang picture. The universe would be slowly expanding and become increasingly non-uniform and curved over time. Now imagine running this film backwards. It would show a slowly contracting universe becoming more uniform and less curved over time. Of course, if the smoothing and flattening occur during a period of slow contraction, there must be a bounce followed by slow expansion leading up to the present epoch. In one version of this picture, the evolution of the universe is cyclic, with periods of expansion, contraction, and bounce repeating at regular intervals. In contrast to inflation, smoothing by ekpyrotic contraction does not require special arrangements of energy and is easy to trigger. Furthermore, contraction prevents quantum fluctuations from evolving into large patches that would generate a multiverse. However, making the scale-invariant spectrum of variations in density requires more ingredients than in inflation.

    The Best of Both Worlds?

    While experimentalists have been feverishly working to determine which scenario is responsible for the large-scale properties of the universe—rapid expansion or slow contraction—a novel third possibility has been proposed: Why not expand and contract at the same time? This, in essence, is the idea behind anamorphic cosmology. Anamorphic is a term often used in art or film for images that can be interpreted two ways, depending on your vantage point. In anamorphic cosmology, whether you view the universe as contracting or expanding during the smoothing and flattening phase depends on what measuring stick you use.

    If you are measuring the distance between two points, you can use the Compton wavelength of a particle, such as an electron or proton, as your fundamental unit of length. Another possibility is to use the Planck length, the distance formed by combining three fundamental physical “constants”: Planck’s constant, the gravitational constant and the speed of light [in a vacuum]. In [Albert] Einstein’s theory of general relativity, both lengths are fixed for all times, so measuring contraction or expansion with respect to either the particle Compton wavelength or the Planck length gives the same result. However, in many theories of quantum gravity—that is, extensions of Einstein’s theory aimed at combining quantum mechanics and general relativity—one length varies in time with respect to the other. In the anamorphic smoothing phase, the Compton wavelength is fixed in time and, as measured by rulers made of matter, space is contracting. Simultaneously, the Planck length is shrinking so rapidly that space is expanding relative to it. And so, surprisingly, it is really possible to have contraction (with respect to the Compton wavelength) and expansion (with respect to the Planck length) at the same time!
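    The enormous gap between these two measuring sticks can be made concrete with a quick calculation from their defining formulas, using standard CODATA constants (a sketch for orientation, not part of the original article):

    ```python
    import math

    # CODATA values (SI units)
    h    = 6.62607015e-34      # Planck constant, J*s
    hbar = h / (2 * math.pi)   # reduced Planck constant
    G    = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
    c    = 2.99792458e8        # speed of light in vacuum, m/s
    m_e  = 9.1093837015e-31    # electron mass, kg

    # Planck length: the unit built from the three fundamental constants.
    l_planck = math.sqrt(hbar * G / c**3)    # ~1.616e-35 m

    # Compton wavelength of the electron: a matter-based unit of length.
    lam_compton = h / (m_e * c)              # ~2.426e-12 m

    print(f"Planck length:               {l_planck:.3e} m")
    print(f"Electron Compton wavelength: {lam_compton:.3e} m")
    print(f"Ratio (Compton / Planck):    {lam_compton / l_planck:.3e}")
    ```

    The two length scales differ by roughly 23 orders of magnitude, which is why a slow relative drift between them can reshape cosmic history without being obvious to observers using matter-based rulers.
    
    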

    The anamorphic smoothing phase is temporary. It ends with a bounce from contraction to expansion (with respect to the Compton wavelength). As the universe expands and cools afterwards, both the particle Compton wavelengths and the Planck length become fixed, as observed in the present phase of the universe.

    By combining contraction and expansion, anamorphic cosmology potentially incorporates the advantages of the inflationary and ekpyrotic scenarios and avoids their disadvantages. Because the universe is contracting with respect to ordinary rulers, like in ekpyrotic models, there is no multiverse problem. And because the universe is expanding with respect to the Planck length, as in inflationary models, generating a scale-invariant spectrum of density variations is relatively straightforward. Furthermore, the conditions needed to produce the bounce are simple to obtain, and, notably, the anamorphic scenario can generate a detectable spectrum of primordial gravitational waves, which cannot occur in models with slow ekpyrotic contraction. International efforts currently underway to detect primordial gravitational waves from land-based, balloon-borne and space-based observatories may prove decisive in distinguishing these possibilities.

    (1) According to Einstein’s theory of general relativity, space can be bent so that parallel light rays converge or diverge, yet observations indicate that their separation remains fixed, as occurs in ordinary Euclidean geometry. Cosmologists refer to this special kind of unbent space as “flat.”

    See the full article here.

  • richardmitnick 4:57 pm on January 12, 2016 Permalink | Reply
    Tags: Nearly Two-Thirds of Earth’s Minerals Were Created by Life, NOVA, Oxidation

    From NOVA: “Nearly Two-Thirds of Earth’s Minerals Were Created by Life” 



    12 Jan 2016
    Tim De Chant

    Goethite—chemical formula FeO(OH)—is formed by the oxidation of iron

    Planet Earth’s stunning diversity of 4,500 minerals may be thanks to its stunning diversity of life, according to a recent theory proposed by mineralogists.

    Rocks helped give life its start—serving as storehouses of chemicals and workbenches atop which the key processes sparked the complex reactions that now power living things—so it only seems fair that life may have returned the favor. “Rocks create, life creates rocks. They’re intertwined in ways that are just now coming into focus,” Robert Hazen, a research scientist at the Carnegie Institution of Washington’s Geophysical Laboratory, told NOVA.

    According to Hazen and his colleagues, who have published a slew of papers on the theory over the past several years, up to two-thirds of minerals on Earth may be the result of oxidation, a chemical reaction that occurs when one element loses electrons to another. The reaction was first discovered with oxygen as the oxidizing agent, hence the name, though other elements such as chlorine (Cl2) can also act as oxidizers.

    But it was oxygen that played an outsize role in Earth’s history. About 2.5 billion years ago, O2 was released as a waste product by newly photosynthesizing algae. Within the span of about 300 million years, those microbes had boosted oxygen from nothing to 1% of the atmosphere, Hazen said. It was a rapid shift that would have wide-reaching consequences.

    As O2 came into contact with iron dissolved in the ocean, it precipitated a rusty rain that sank to the bottom. Today, those vast swaths of Precambrian rust are still found in the trillions of tons of iron ore that are locked in banded formations around the world.
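    To get a feel for how much iron those oxidized minerals lock up, here is a back-of-the-envelope stoichiometry sketch (an illustration, not from the article: hematite, Fe2O3, is taken as the representative “rust” mineral of banded iron formations, alongside the goethite, FeO(OH), pictured at the top of this post):

    ```python
    # Mass fraction of iron in two oxidized iron minerals, from standard
    # atomic masses (g/mol): hematite (Fe2O3) and goethite (FeO(OH)).
    M_FE, M_O, M_H = 55.845, 15.999, 1.008

    hematite = 2 * M_FE + 3 * M_O          # molar mass of Fe2O3
    goethite = M_FE + 2 * M_O + M_H        # molar mass of FeO(OH)

    fe_in_hematite = 2 * M_FE / hematite   # ~0.70: hematite is ~70% iron by mass
    fe_in_goethite = M_FE / goethite       # ~0.63

    print(f"Fe2O3:   {hematite:.2f} g/mol, {fe_in_hematite:.1%} Fe by mass")
    print(f"FeO(OH): {goethite:.2f} g/mol, {fe_in_goethite:.1%} Fe by mass")
    ```

    Roughly two-thirds of the mass of these minerals is iron, which is why the banded formations precipitated from that Precambrian “rusty rain” remain such rich ore today.
    
    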

    Other elements were similarly affected. Two-thirds of Earth’s minerals are the result of oxidation, Hazen said, and most oxygen on Earth was created by life.

    “As a mineralogist, when I look at Earth history, I see big new transitions: I see the Moon-forming impact, I see the formation of oceans, and so forth,” Hazen said. “But nothing, nothing matches what life and oxygen did to create new minerals.”

    See the full article here.
