Tagged: Ethan Siegel

  • richardmitnick 2:52 pm on November 18, 2017
    Tags: Could Matter Escape The Event Horizon During A Black Hole Merger?, Ethan Siegel

    From Ethan Siegel: ” Could Matter Escape The Event Horizon During A Black Hole Merger?” 

    Ethan Siegel

    Nov 18, 2017

    Nothing can escape from a black hole… but could another black hole pull something out?

    1
    Even though black holes should have accretion disks, and matter falling in from them, it doesn’t appear to be possible to escape from inside the event horizon once you cross over. Could anything change that? Image credit: NASA / Dana Berry (Skyworks Digital).

    Once you fall into the event horizon of a black hole, you can never escape. There’s no speed you could travel at, not even the speed of light, that would enable you to get out. But in General Relativity, space gets curved by the presence of mass and energy, and merging black holes are one of the most extreme scenarios of all. Is there any way that you could fall into a black hole, cross the event horizon, and then escape as your black hole’s event horizon gets distorted from a massive merger? That’s the question of Chris Mitchell, who asks:

    If two black holes merge, is it possible for matter that was within the event horizon of one black hole to escape? Could it escape and migrate to the other (more massive black hole)? What about escape to outside of both horizons?

    It’s a crazy idea, to be certain. But is it crazy enough to work? Let’s find out.

    2
    When a massive enough star ends its life, or two massive enough stellar remnants merge, a black hole can form, with an event horizon proportional to its mass and an accretion disk of infalling matter surrounding it. Image credit: NASA/ESA Hubble, ESO, M. Kornmesser.

    A black hole typically forms from the collapse of a massive star’s core: in the aftermath of a supernova explosion, in a neutron star merger, or via direct collapse. As far as we know, every black hole is formed out of matter that was once part of a star, and so in many ways black holes are the ultimate stellar remnant. Some black holes form in isolation; others form as part of a binary system, or even one with multiple stars. Over time, black holes can not only inspiral and merge, but also devour other matter that falls inside the event horizon.

    3
    In a Schwarzschild black hole, falling in leads you to the singularity, and darkness. No matter which direction you travel in, how you accelerate, etc., a crossover into the event horizon means an inevitable encounter with a singularity. Image credit: (Illustration) ESO, NASA/ESA Hubble, M. Kornmesser.

    When anything crosses into a black hole’s event horizon from the outside, that matter is immediately doomed. Inevitably, in a matter of mere seconds, it will find itself encountering the singularity at the black hole’s center: a single point for a non-rotating black hole, and a ring for a rotating one. The black hole itself will have no memory of which particles fell in or what their quantum state was. Instead, all that will remain, information-wise, is the black hole’s new total mass, charge, and angular momentum.

    3
    In the final pre-merger stages, the spacetime surrounding a black hole pair will be distorted, as matter continues to fall into both black holes from the surrounding environment. At no point does it appear that anything will have the opportunity to escape from the inside to the outside of an event horizon. Image credit: NASA/Ames Research Center/C. Henze.

    So you might envision a scenario, then, where matter falls into a black hole during the final pre-merger stages, when one black hole is about to combine with another. Since black holes are always expected to have accretion disks, and throughout interstellar space there’s material simply zipping through, you should have particles crossing the event horizon all the time. That part’s a no-brainer, and so it makes sense to consider a particle that’s just entered the event horizon prior to the final moments of a merger.

    Could it possibly escape? Could it “jump” from one black hole to the other? Let’s examine the situation from a spacetime perspective.

    4
    Computer simulation of two merging black holes and the spacetime distortions that they cause. While gravitational waves are copiously emitted, matter itself isn’t expected to escape. Image credit: MPI for Gravitational Physics Werner Benger, cc by-sa 4.0.

    When two black holes merge, they do so only after a long period of inspiral, during which energy is radiated away via gravitational waves. But that radiated energy doesn’t cause the event horizon of either black hole to shrink; rather, it comes from spacetime in the center-of-mass region getting more and more heavily deformed. It’s the same as if you stole energy away from the planet Mercury: it would orbit closer to the Sun, but no properties of Mercury or the Sun themselves would need to change.

    However, when the final moments of the merger are upon us, the event horizons of the two black holes do get deformed by the gravitational presence of one another. Fortunately, numerical relativists [Physical Review D] have already worked out exactly how this merger affects the event horizons, and it’s spectacularly informative.

    Despite the fact that up to ~5% of the total pre-merger mass of the black holes can be radiated away in the form of gravitational waves, you’ll notice that the event horizons never shrink; they simply grow a connection, distort a little bit, and then increase in total volume. That last point is important: if I have two black holes of equal mass, their event horizons take up a certain amount of volume in space. If I merge them into a single black hole with the combined mass of the two originals, the volume taken up by the new event horizon is four times the total volume of the two original horizons. The radius of a black hole’s event horizon is directly proportional to its mass, but volume is proportional to radius cubed.
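
    To see that scaling in numbers, here’s a minimal back-of-the-envelope sketch (my own illustration, not from the article), using the Schwarzschild radius R = 2GM/c² and treating each horizon as a sphere; the 30-solar-mass value is just an assumed example.

```python
import math

# A rough back-of-the-envelope check of the volume scaling described above.
# The 30-solar-mass value is just an assumed example; any equal masses work.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating black hole, in meters: R = 2GM/c^2."""
    return 2 * G * mass_kg / c**2

def horizon_volume(mass_kg):
    """Volume enclosed by the event horizon, treated as a sphere."""
    return 4.0 / 3.0 * math.pi * schwarzschild_radius(mass_kg)**3

m = 30 * M_SUN                       # two equal-mass black holes, 30 solar masses each
v_before = 2 * horizon_volume(m)     # combined volume of the two separate horizons
v_after = horizon_volume(2 * m)      # merged horizon (ignoring the ~5% radiated away)

print(v_after / v_before)            # 4.0: the merged horizon encloses four times the volume
```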

    5
    While we’ve discovered a great many black holes, note that the radius of each one’s event horizon is directly proportional to its mass. Double the mass, double the radius, but that means the area increases fourfold and the volume increases eightfold! Image credit: LIGO/Caltech/Sonoma State (Aurore Simonnet).

    As it turns out, even if you kept a particle as close to stationary inside a black hole as possible, and made it fall towards the singularity as slowly as possible, there’s no way for it to get out. The total volume of the combined event horizons during a black hole merger goes up, not down, and no matter what the trajectory of an event-horizon-crossing particle is, it’s forever destined to be swallowed by the combined singularity of both black holes together.

    In many collision scenarios in astrophysics, there are ejecta, where matter from inside an object escapes during a cataclysmic event. But in the case of merging black holes, everything from inside remains inside; most of what was outside gets sucked in; and only a little bit of what was outside could conceivably escape. Once you fall in, you’re doomed, and nothing you throw at that black hole — even another black hole — will change that!

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 12:04 pm on November 16, 2017
    Tags: Ethan Siegel, The Bullet Cluster Proves Dark Matter Exists But Not For The Reason Most Physicists Think

    From Ethan Siegel: “The Bullet Cluster Proves Dark Matter Exists, But Not For The Reason Most Physicists Think” 

    Ethan Siegel

    Nov 16, 2017

    If the gravity isn’t where the matter is, things get into trouble very, very quickly.

    1
    The gravitational lensing map (blue), overlaid on the optical and X-ray (pink) data of the Bullet cluster. The mismatch is undeniable. Image credit: X-ray: NASA/CXC/CfA/M.Markevitch et al.; Lensing Map: NASA/STScI; ESO WFI; Magellan/U.Arizona/D.Clowe et al.; Optical: NASA/STScI; Magellan/U.Arizona/D.Clowe et al.

    NASA/Chandra Telescope

    NASA/ESA Hubble Telescope

    ESO WFI on the MPG/ESO 2.2-meter telescope at La Silla, 600 km north of Santiago de Chile, at an altitude of 2400 metres

    Carnegie 6.5 meter Magellan Baade and Clay Telescopes located at Carnegie’s Las Campanas Observatory, Chile.

    The above image, a composite of optical data, X-ray data, and a reconstructed mass map, is one of the most famous and informative ones in all of astronomy. Known as the Bullet Cluster, it showcases two galaxy clusters that have recently collided. The individual galaxies present within the clusters, like two guns filled with bird shot fired at one another, passed right through one another, as the odds of a collision were exceedingly low. However, the intergalactic gas within each cluster, largely diffuse and making up the majority of the normal matter, collided and heated up, emitting X-rays that we can see today. But when we used our knowledge of General Relativity and the bending of background light to reconstruct where the mass must be, we found it alongside the galaxies, not with the intra-cluster matter. Hence, dark matter must exist.

    According to the standard line of reasoning, the clusters are composed of dark matter and normal matter in a 5:1 ratio. When they collide, the diffuse normal matter collides, sticks together, and heats up, while the clumps (galaxies) and dark matter pass through, creating the observed effects.

    2
    A merging galaxy cluster in MACS J0416.1–2403 exhibits a different, smaller separation of X-ray gas from the gravitational signal, but this is expected, as this cluster is in a different stage of its merger, and there is still an offset. Image credit: X-ray: NASA/CXC/SAO/G.Ogrean et al.; Optical: NASA/STScI; Radio: NRAO/AUI/NSF.

    NRAO/Karl G. Jansky VLA, on the Plains of San Agustin fifty miles west of Socorro, NM, USA, at an elevation of 6970 ft (2124 m)

    But, as with any great idea, all the alternatives must be considered. The X-rays don’t lie: there really is that much matter in between the two separated clusters today, so any argument to the contrary must be discarded. The idea that there are ultra-compact, invisible clumps of normal matter within the clusters is intriguing, but a systematic set of observations and analyses indicates that such clumps can’t come close to explaining the observed effects. And the idea that this is unique to this one cluster in the Universe is disproven by the large number of other colliding clusters that have since been found and observed to exhibit the same effects.

    3
    Four colliding galaxy clusters, showing the separation between X-rays (pink) and gravitation (blue), indicative of dark matter. Image credit: X-ray: NASA/CXC/UVic./A.Mahdavi et al. Optical/Lensing: CFHT/UVic./A. Mahdavi et al. (top left); X-ray: NASA/CXC/UCDavis/W.Dawson et al.; Optical: NASA/ STScI/UCDavis/ W.Dawson et al. (top right); ESA/XMM-Newton/F. Gastaldello (INAF/ IASF, Milano, Italy)/CFHTLS (bottom left); X-ray: NASA, ESA, CXC, M. Bradac (University of California, Santa Barbara), and S. Allen (Stanford University) (bottom right).


    CFHT Telescope, Maunakea, Hawaii, USA, at 4,207 m (13,802 ft) above sea level

    ESA/XMM Newton X-ray telescope

    Perhaps, then, there’s an intriguing alternative that we have to consider: that under the right conditions, gravity exhibits non-local effects. This sounds crazy, but non-locality is a hallmark of many physically important phenomena, such as those in the quantum Universe. The basic idea is that the effects of gravitation occur in different locations from where the majority of the matter is. Non-local gravity theories are excellent at reproducing the successes of modified gravity ideas, such as the rotation curves of galaxies. While a recent paper used constraints from gravitational waves and gamma rays to rule out some variants of modified gravity where the two signals travel along different paths, one of the survivors is MOG: a theory of gravity with non-local effects. (Non-local means its effects are not wholly located where the sources are.)

    But as compelling as this is, it cannot be right, and it only requires a thought experiment to understand why.

    4
    Clumps and clusters of galaxies exhibit gravitational effects on the light-and-matter behind them due to the effects of weak gravitational lensing. This enables us to reconstruct their mass distributions, which should line up with the observed matter. Image credit: ESA, NASA, K. Sharon (Tel Aviv University) and E. Ofek (Caltech).

    Imagine what it would take to have two galaxy clusters, post-collision, exhibit an effect where most of the matter is in the central region where the collision took place, but where most of the gravitational effects are centered elsewhere. It would require that gravity and mass not line up together. This is, in fact, what we see when we look at the normal matter alone in galaxy clusters: the places where we see and trace the gas, and the places where we reconstruct the mass from gravitational lensing, don’t perfectly align.

    5
    (a) Projected distribution of dark matter in the COSMOS field from the analysis of Massey et al. (2007a). The blue map reveals the density of dark matter as inferred from the pattern of weak distortions viewed in background galaxies by the Hubble Space Telescope. (b) Equivalent map for the baryonic matter as revealed by a combination of the stellar mass in galaxies imaged with the Hubble Space Telescope and hot gas imaged with the X-ray satellite XMM–Newton. Image credit: R. Ellis, Philos Trans A Math Phys Eng Sci. 2010 Mar 13; 368(1914): 967–987.

    Either gravity behaves non-locally, or there’s some unseen form of mass: dark matter. But there’s an easy way to tell these two apart! Simply take a look at galaxy clusters that aren’t in the process of colliding, or look at two nearby clusters that are headed towards one another, but haven’t merged yet. If dark matter is the correct explanation, the gravitational lensing signal should trace the matter distribution: everything should be local. But if non-local gravity is the answer, there should be gravitational effects seen where the matter isn’t located.

    Thankfully, we have that data, and we have an answer.

    6
    The contours, above, show the reconstructed mass of the galaxy cluster from gravitational lensing, while the points show observed galaxies, color-coded for a variety of redshifts. Where the cluster is quiescent, there’s no separation of matter from gravitation. Image credit: H.S. Hwang et al., ApJ, 797, 2, 106.

    When your cluster is undisturbed, the gravitational effects are located where the matter is distributed. It’s only after a collision or interaction has taken place that we see what appears to be a non-local effect. This indicates that something happens during the collision process to separate normal matter from where we see the gravitational effects. Adding dark matter makes this work, but non-local gravity would make differing before-and-after predictions that can’t both match up, simultaneously, with what we observe.

    Interestingly, this argument has been made for over a decade, now, with no satisfactory counterargument coming from detractors of dark matter. It isn’t the displacement of gravitation from normal matter that “proves” dark matter exists, but rather the fact that the displacement only occurs in environments where dark matter and normal matter would be separated by astrophysical processes. This is a fundamental issue that must be addressed, if alternatives to dark matter are to be taken seriously as complete theories, rather than ideas in their infancy. That time is not yet at hand.

    See the full article here.


     
  • richardmitnick 12:18 pm on November 12, 2017
    Tags: Ethan Siegel, Why Was The Universe Dark For So Long?

    From Ethan Siegel: “Why Was The Universe Dark For So Long?” 

    From Ethan Siegel

    Nov 11, 2017

    1
    The expanding Universe, full of galaxies and the complex structure we observe today, arose from a smaller, hotter, denser, more uniform state. Once neutral atoms form, however, it takes roughly 550 million years for the ‘dark ages’ to end. Image credit: C. Faucher-Giguère, A. Lidz, and L. Hernquist, Science 319, 5859 (47).

    The first stars formed almost half a billion years before we could see their light. Here’s why.

    At the moment of the Big Bang, the Universe was full of matter and radiation, but there were no stars. As it expanded and cooled, it formed protons and neutrons in the first fraction of a second, atomic nuclei in the first 3–4 minutes, and neutral atoms after about 380,000 years. After another 50–100 million years, the very first stars formed. But the Universe remains dark, and observers within it are unable to see that starlight, until 550 million years after the Big Bang. Why so long? Iustin Pop wants to know:

    One thing I wonder though is why did the dark ages last hundreds of millions of years? I would have expected an order of magnitude smaller, or more.

    Forming stars and galaxies is a huge step in the creation of light, but it isn’t enough to end the “dark ages” on its own. Here’s the story.

    2
    The early Universe was full of matter and radiation, and was so hot and dense that it prevented protons and neutrons from stably forming for the first fraction-of-a-second. Once they do, however, and the antimatter annihilates away, we wind up with a sea of matter and radiation particles, zipping around close to the speed of light. Image credit: RHIC collaboration, Brookhaven.

    BNL RHIC Campus

    BNL/RHIC Star Detector

    BNL RHIC PHENIX

    Try to imagine the Universe as it was when it was only a few minutes old: before the formation of neutral atoms. Space is full of protons, light nuclei, electrons, neutrinos, and radiation. Three important things happen at this early stage:

    1. The Universe is very uniform in terms of how much matter there is in any location, with the densest regions only a few parts in 100,000 more dense than the least dense regions.
    2. Gravitation works hard to pull matter in, with overdense regions exerting an extra, attractive force to make that happen.
    3. And radiation, mostly in the form of photons, pushes outwards, resisting the gravitating effects of the matter.

    As long as we have radiation that’s energetic enough, it prevents neutral atoms from stably forming. It’s only when the expansion of the Universe cools the radiation enough that neutral atoms won’t immediately get reionized.

    3
    In the hot, early Universe, prior to the formation of neutral atoms, photons scatter off of electrons (and to a lesser extent, protons) at a very high rate, transferring momentum when they do. After neutral atoms form, the photons simply travel in a straight line. Image credit: Amanda Yoho.

    After this occurs, 380,000 years into the history of the Universe, that radiation (mostly photons) simply free-streams in whatever direction it was traveling last, through the now-neutral matter. 13.8 billion years later, we can view this leftover glow from the Big Bang: the Cosmic Microwave Background [CMB].

    CMB per ESA/Planck

    ESA/Planck

    It’s in the “microwave” part of the spectrum today because of the stretching-of-wavelengths due to the Universe’s expansion. But more importantly, there’s a pattern of fluctuations in there of hot-and-cold spots, corresponding to overdense and underdense regions of the Universe.

    4
    The overdense, average density, and underdense regions that existed when the Universe was just 380,000 years old now correspond to cold, average, and hot spots in the CMB. Image credit: E. Siegel / Beyond The Galaxy.

    Once you form neutral atoms, it becomes much easier for gravitational collapse to ensue, since photons interact very easily with free electrons, but much less so with neutral atoms. As the photons cool to lower and lower energies, the matter becomes more important to the Universe, and so gravitational growth begins to occur. It takes roughly 50–100 million years for gravity to pull enough matter together, and for the gas to cool enough to allow collapse, so that the very first stars form. When they do, nuclear fusion ignites, and the first heavy elements in the Universe come into existence.

    5
    The large-scale structure of the Universe changes over time, as tiny imperfections grow to form the first stars and galaxies, then merge together to form the large, modern galaxies we see today. Looking to great distances reveals a younger Universe, similar to how our local region was in the past. Image credit: Chris Blake and Sam Moorfield.

    But even with those stars, we’re still in the dark ages. The culprit? All those neutral atoms spread throughout the Universe. There are some 10^80 of them, and while this normal matter is transparent to the low-energy photons left over from the Big Bang, it’s opaque to the higher-energy starlight. This is the same reason why you can’t see the stars in the galactic center in visible light, but at longer (infrared, for example) wavelengths, you can see right through the neutral gas and dust.

    6
    This four-panel view shows the Milky Way’s central region in four different wavelengths of light, with the longer (submillimeter) wavelengths at top, going through the far-and-near infrared (2nd and 3rd) and ending in a visible-light view of the Milky Way. Note that the dust lanes and foreground stars obscure the center in visible light. Image credit: ESO/ATLASGAL consortium/NASA/GLIMPSE consortium/VVV Survey/ESA/Planck/D. Minniti/S. Guisard Acknowledgement: Ignacio Toledo, Martin Kornmesser.

    Milky Way map, ATLASGAL. Image credit: ESO/APEX/ATLASGAL consortium/NASA/GLIMPSE consortium/ESA/Planck

    ESO/APEX high on the Chajnantor plateau in Chile’s Atacama region, at an altitude of over 4,800 m (15,700 ft)

    NASA/GLIMPSE

    In order for the Universe to become transparent to starlight, these neutral atoms need to become ionized. They were ionized once a long time ago: before the Universe was 380,000 years old, so we call the process of ionizing them one more time reionization. It’s only when you’ve formed enough new stars, and emitted enough high-energy, ultraviolet photons, that you can complete this process of reionization and bring the dark ages to an end. While the very first stars may form just 50–100 million years after the Big Bang, our detailed observations have shown us that reionization isn’t complete until the Universe is around 550 million years old.

    8
    Schematic diagram of the Universe’s history, highlighting reionization, which occurs in earnest only after the formation of the first stars and galaxies. Before stars or galaxies formed, the Universe was full of light-blocking, neutral atoms. While most of the Universe doesn’t become reionized until 550 million years afterwards, a few fortunate regions are mostly reionized at earlier times. Image credit: S. G. Djorgovski et al., Caltech Digital Media Center.

    How is it, then, that the earliest galaxies we see are from when the Universe was only 400 million years old? And how is it the case that the James Webb Space Telescope will see even farther back than that? There are two factors that come into play:

    NASA/ESA/CSA Webb Telescope annotated

    1.) Reionization is non-uniform. The Universe is full of clumps, imperfections, and inhomogeneities. This is great, as it allows us to form stars, galaxies, planets, and also human beings. But it also means that some regions of space, and some directions on the sky, experience total reionization before others. The farthest galaxy we’ve ever seen, GN-z11, is a bright and spectacular galaxy for as young as it is, but it also happens to be located in a direction where the Universe is already mostly reionized. It’s mere serendipity that this occurred 150 million years before the “average” reionization time.

    9
    Only because this distant galaxy, GN-z11, is located in a region where the intergalactic medium is mostly reionized, can Hubble reveal it to us at the present time. James Webb will go much farther. Image credit: NASA, ESA, and A. Feild (STScI).

    10
    Hubble Space Telescope astronomers, studying the northern hemisphere field from the Great Observatories Origins Deep Survey (GOODS), have measured the distance to the farthest galaxy ever seen. The survey field contains tens of thousands of galaxies stretching far back into time. Galaxy GN-z11, shown in the inset, is seen as it was 13.4 billion years in the past, just 400 million years after the big bang, when the universe was only three percent of its current age. The galaxy is ablaze with bright, young, blue stars, but looks red in this image because its light has been stretched to longer spectral wavelengths by the expansion of the universe.

    NASA/ESA Hubble Telescope

    2.) Neutral atoms are transparent to longer wavelengths. While the Universe at these early times is dark as far as visible and ultraviolet light goes, those neutral atoms let longer wavelengths pass right through. For example, the “Pillars of Creation” are famously opaque to visible light, but if we view them in infrared light, we can easily see the stars inside.

    11
    The visible light (L) and infrared (R) wavelength views of the same object: the Pillars of Creation. Note how much more transparent the gas-and-dust is to infrared radiation, and how that affects the background and interior stars that we can detect. Image credit: NASA/ESA/Hubble Heritage Team.

    The James Webb Space Telescope will not only be a primarily infrared observatory; it’s designed to view light that was infrared when it was emitted from these early stars. By extending out to wavelengths of 30 microns, well into the mid-infrared, it will be able to view objects during the dark ages themselves.
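
    As a rough illustration of that redshifting (my numbers, not the article’s), here’s the simple arithmetic showing why a mid-infrared cutoff near 30 microns reaches light emitted deep in the dark ages:

```python
# Observed wavelength is stretched by a factor (1 + z) relative to the emitted one.
def observed_wavelength_microns(emitted_microns, z):
    """Wavelength we receive, in microns, for light emitted at redshift z."""
    return emitted_microns * (1 + z)

# Ultraviolet Lyman-alpha light (0.1216 microns) from a galaxy at z = 15,
# deep in the dark ages, arrives in the near-infrared:
print(observed_wavelength_microns(0.1216, 15))   # ~1.9 microns

# Rest-frame near-infrared starlight (2 microns) from a similar epoch arrives
# right around the ~30 micron limit quoted above:
print(observed_wavelength_microns(2.0, 14))      # 30 microns
```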

    12
    As we’re exploring more and more of the Universe, we’re becoming sensitive to not only less faint objects, but objects that are ‘blocked’ by the neutral atoms intervening. But with infrared observatories, we can see them, after all. Image credit: NASA / JWST and HST teams.

    The Universe was dark for so long because the atoms within it were neutral for so long. Even a 98% reionized Universe is still opaque to visible light, and it takes roughly 500 million years of starlight to completely ionize all the atoms and give us a Universe that’s truly transparent. When the dark ages end, we can see everything in all wavelengths of light, but prior to that, we need to either get lucky or look in longer, less-well-absorbed wavelengths.

    Letting there be light, by forming stars and galaxies, isn’t enough to end the dark ages in the Universe. Creating light is only half the story; creating an environment where it can propagate all the way to your eyes is just as important. For that, we need lots of ultraviolet light, and that requires time. Yet by looking in just the right way, we can peer into the darkness, and see what we’ve never observed before. In less than two years, that story will begin.

    See the full article here.


     
  • richardmitnick 9:00 am on October 29, 2017
    Tags: Ethan Siegel, How Sure Are We That The Universe Is 13.8 Billion Years Old? Very sure. Here’s how we know.

    From Ethan Siegel: “How Sure Are We That The Universe Is 13.8 Billion Years Old?” 

    Ethan Siegel

    Oct 28, 2017

    1
    If you look farther and farther away, you also look farther and farther into the past. The farthest we can see back in time is 13.8 billion years: our estimate for the age of the Universe. But is it correct? Image credit: NASA / STScI / A. Feild.

    Very sure. Here’s how we know.

    You’ve no doubt heard that the Universe itself has been around for 13.8 billion years since the Big Bang, and that scientists are extremely confident of that figure. In fact, the uncertainty on that figure is under 100 million years: less than 1% of the estimated age. But science has been wrong in the past. Could it be wrong again, about this? That’s the question of John Deer, who asks:

    Lord Kelvin estimated the age of the Sun between 20 and 40 million years because his model didn’t (couldn’t) include quantum mechanics and relativity. How probable is it we’re doing a similar mistake when looking at the universe at large?

    Let’s take a look at the historical problem, and then jump to the modern-day situation to understand more.

    2
    The clusters, stars, and nebulae in our Milky Way are useful for coming up with an age estimate for the Universe, but just as our lack-of-understanding of stellar processes led to big mistakes in our estimate for the age of the Solar System, could we be fooling ourselves about the age of the Universe? Image credit: ESO / VST survey.


    ESO VST telescope, at ESO’s Cerro Paranal Observatory, with an elevation of 2,635 metres (8,645 ft) above sea level

    Back at the end of the 19th century, there was a huge controversy over the age of the Universe. Charles Darwin, looking at the evidence from biology and geology, concluded that the Earth itself must be at least hundreds of millions, if not billions of years old. But Lord Kelvin, looking at the stars and how they worked, concluded that the Sun itself needed to be far younger. The only energy sources he knew of were chemical reactions, such as combustion, and gravitational contraction. The latter turns out to be how white dwarf stars get their energy, but for it to put out as much energy as the Sun does would imply a lifetime of only tens of millions of years. The two pictures didn’t add up.

    3
    A solar flare from our Sun, which ejects matter out away from our parent star and into the Solar System, is dwarfed in terms of ‘mass loss’ by nuclear fusion, which has reduced the Sun’s mass by a total of 0.03% of its starting value: a loss equivalent to the mass of Saturn. Until we discovered nuclear fusion, however, we could not accurately estimate the Sun’s age. Image credit: NASA’s Solar Dynamics Observatory / GSFC.

    NASA/SDO

    Of course, this was resolved decades later, with the discovery of nuclear reactions, and the application of Einstein’s E = mc² to the hydrogen fusion that occurs in the Sun. When the calculations were fully worked out, we realized that the Sun’s lifetime would be something more like 10–12 billion years, and that we were about 4.5 billion years into our Solar System’s existence. The ages of the Sun (from astronomy), the Earth (from geology), and life (from biology) all lined up in a consistent, coherent picture.
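
    To see just how stark Kelvin’s mismatch was, here’s a quick order-of-magnitude sketch (my own rounded values, not from the article) comparing the gravitational-contraction timescale with the nuclear-fusion timescale implied by E = mc²; the 10% fuel fraction and 0.7% mass-to-energy efficiency are assumed round numbers.

```python
# Order-of-magnitude comparison of the two energy sources discussed above.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M = 1.989e30         # solar mass, kg
R = 6.957e8          # solar radius, m
L = 3.828e26         # solar luminosity, W
YEAR = 3.156e7       # seconds per year

# Kelvin-Helmholtz timescale: how long gravitational contraction alone could
# power the Sun at its present luminosity.
t_contraction = G * M**2 / (R * L) / YEAR
print(f"contraction: {t_contraction:.1e} yr")   # ~3e7 yr: tens of millions, as Kelvin found

# Nuclear timescale: assume ~10% of the hydrogen gets fused, with ~0.7% of
# that mass converted into energy via E = mc^2.
t_fusion = 0.1 * 0.007 * M * c**2 / L / YEAR
print(f"fusion:      {t_fusion:.1e} yr")        # ~1e10 yr: the 10-12 billion year figure above
```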

    We have two ways to calculate the age of the Universe today: to look at the ages of the individual stars and galaxies within it, and to look at the physics of the expanding Universe. The stars themselves are the less precise metric, as we can only view them at one instant in time, and then extrapolate stellar evolution backwards. This is useful when we have large populations of stars, like globular clusters, but is more difficult for individual stars. The method is simple: when large populations of stars are born together, they come in all different sizes and colors, from hot, massive, and blue, to cool, small, and red. As time goes on, the more massive stars burn through their fuel the fastest, and so they begin to evolve and, later, die.

    4
    The life cycles of stars can be understood in the context of the color/magnitude diagram shown here. As the population of stars age, they ‘turn off’ the diagram, allowing us to date the age of the cluster. Image credit: Richard Powell under c.c.-by-s.a.-2.5 (L); R. J. Hall under c.c.-by-s.a.-1.0 (R).

    If we look at the survivors, therefore, we can date how old a population of stars is. Many globular clusters have ages in excess of 12 billion years, with some even exceeding 13 billion years. With advances in observational techniques and capabilities, we can measure not only the carbon, oxygen, and iron content of individual stars, but also their abundances of radioactive elements like uranium and thorium; in conjunction with the elements created in the Universe’s first supernovae, these let us date the ages of individual stars directly.
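
    As a schematic illustration of that dating technique (the numbers below are hypothetical, not measurements of any particular star), the decay law alone turns a surviving fraction of a radioactive isotope into an age:

```python
import math

# Schematic decay-law arithmetic: if models of the r-process tell us how much
# of a radioactive isotope a star was born with, the surviving fraction gives its age.
def decay_age_gyr(half_life_gyr, surviving_fraction):
    """Age implied by N/N0 = surviving_fraction for an isotope with the given half-life."""
    return half_life_gyr / math.log(2) * math.log(1.0 / surviving_fraction)

# Hypothetical example: a star retaining ~13% of its initial uranium-238
# (half-life 4.47 billion years) would be roughly 13 billion years old.
print(decay_age_gyr(4.47, 0.13))   # ~13.2 Gyr
```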

    5
    Located around 4,140 light-years away in the galactic halo, SDSS J102915+172927 is an ancient star that contains just 1/20,000th the heavy elements the Sun possesses, and should be over 13 billion years old: one of the oldest in the Universe, and having possibly formed before even the Milky Way. Image credit: ESO, Digitized Sky Survey 2.

    The star HE 1523–0901, which is about 80% of the Sun’s mass, contains only 0.1% of the Sun’s iron, and is measured to be 13.2 billion years old from its radioactive element abundances.

    6
    Artist’s impression of “the oldest star of our Galaxy”: HE 1523-0901. ESO, European Southern Observatory.

    In 2015, a set of nine stars near the Milky Way’s center were dated to have formed 13.5 billion years ago: just 300,000,000 years after the Big Bang, and before the initial formation of the Milky Way, with one of them having less than 0.001% of the Sun’s iron: the most pristine star ever found. And controversially, there’s the Methuselah star, which comes in at a surprising 14.46 billion years, albeit with a large uncertainty of around 800 million years.

    But there’s a better, more precise way to measure the age of the Universe: through its cosmic expansion.

    7
    The four possible fates of our Universe into the future; the last one appears to be the Universe we live in, dominated by dark energy. What’s in the Universe, along with the laws of physics, determines not only how the Universe evolves, but how old it is. Image credit: E. Siegel / Beyond The Galaxy.

    By measuring what’s in the Universe today, how distant objects appear to move, and how the light from them behaves nearby, at intermediate distances, and at the greatest distances observable, we can reconstruct the expansion history of the Universe. We now know our Universe consists of approximately 68% dark energy, 27% dark matter, 4.9% normal matter, 0.1% neutrinos, and 0.01% radiation, today. We also know how these components evolve in time, and that the Universe obeys the laws of General Relativity. Combine those pieces of information, and a single, compelling picture of our cosmic origins emerges.
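
    Here’s a minimal numerical sketch of how those ingredients translate into an age (my own implementation with rounded, Planck-like parameters; none of the exact values come from the article): for a flat Universe, you integrate dt = dz / [(1 + z) H(z)] over the whole expansion history.

```python
import math

# Integrate dt = dz / [(1 + z) H(z)] for a flat Universe with the rough
# ingredients quoted above (all parameter values are approximate).
H0_KM_S_MPC = 67.7     # assumed Hubble constant (the CMB-preferred value)
OMEGA_M = 0.31         # matter: dark plus normal, approximately
OMEGA_L = 0.69         # dark energy, treated as a cosmological constant
OMEGA_R = 9e-5         # radiation plus light neutrinos, approximately

MPC_IN_KM = 3.0857e19
GYR_IN_S = 3.156e16

def hubble(z):
    """H(z) in 1/s for a flat Universe with the densities above."""
    h0 = H0_KM_S_MPC / MPC_IN_KM
    return h0 * math.sqrt(OMEGA_R * (1 + z)**4 + OMEGA_M * (1 + z)**3 + OMEGA_L)

def age_gyr(z_max=1e6, steps=200_000):
    """Age of the Universe today, via midpoint integration in u = ln(1 + z)."""
    u_max = math.log(1 + z_max)
    du = u_max / steps
    total = 0.0
    for i in range(steps):
        z = math.exp((i + 0.5) * du) - 1
        total += du / hubble(z)        # dz / [(1+z) H] = du / H when u = ln(1+z)
    return total / GYR_IN_S

print(age_gyr())   # ~13.8 Gyr; a larger assumed H0 (with re-fit densities) nudges this downward
```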

    8
    Three different types of measurements, distant stars and galaxies, the large scale structure of the Universe, and the fluctuations in the CMB, tell us the expansion history of the Universe. Image credit: NASA/ESA Hubble (top L), SDSS (top R), ESA and the Planck Collaboration (bottom).

    NASA/ESA Hubble Telescope

    SDSS Telescope at Apache Point Observatory, NM, USA, at an altitude of 2,788 meters (9,147 ft)

    ESA/Planck

    For a few seconds, the Universe was an ionized mess of particles and antiparticles, which eventually cooled and allowed the formation of leftover atomic nuclei after a few minutes. After 380,000 years, the first stable, neutral atoms formed. Over tens to hundreds of millions of years, gravitational attraction brought this matter together into stars and then galaxies. And over billions of years further, galaxies merged and grew to give us the Universe we see today. With the data collected from a variety of sources, including the cosmic microwave background, the large-scale clustering of galaxies, distant supernovae, and baryon acoustic oscillations, we arrive at a single, compelling picture: a Universe that’s 13.8 billion years old today.

    9
    The cosmic history of the entire known Universe shows that we owe the origin of all the matter within it, and all the light, ultimately, to the end of inflation and the beginning of the Hot Big Bang. Since then, we’ve had 13.8 billion years of cosmic evolution, a picture confirmed by multiple sources. Image credit: ESA and the Planck Collaboration / E. Siegel (corrections).

    There are some uncertainties that go beyond what gets reported by, say, Wikipedia, which quotes our Universe as being 13.799 ± 0.021 billion years old. That 21 million year uncertainty could easily get five-to-ten times as large if there’s a systematic mistake that’s been made somewhere. There’s presently a controversy over the expansion rate (the Hubble constant) today, with the CMB indicating it’s closer to 67 km/s/Mpc, while stars and supernovae point towards a figure more like 74 km/s/Mpc. There are uncertainties in the dark matter/dark energy mix, with some measurements favoring a ratio as low as 1:2, while others favor 1:3 or anything in between. Depending on the resolution to these puzzles, it’s conceivable that the Universe could be as young as 13.6 billion years, or as old as 14 billion.

    10
    One way of measuring the Universe’s expansion history involves going all the way back to the first light we can see, when the Universe was just 380,000 years old. The other ways don’t go backwards nearly as far, but also have a lesser potential to be contaminated by systematic errors. Image credit: European Southern Observatory.

    What’s unlikely, however, is that there’s going to be a major revision of this 13.8 billion year figure. Even if there is more fundamental physics than the forces, particles, and interactions that we know of, it is unlikely to change the physics of how stars work, how gravity works over time, how the Universe expands, or how radiation, matter, and dark energy make up our Universe. These things are well-measured, well-constrained, and as well-understood as one could reasonably ask for. Even if dark energy evolves, fundamental constants like G or c or h change over time, or the Standard Model particles can be further broken up, the age of the Universe, from the Big Bang until the present, won’t change by very much.

    Revisions and surprises may certainly be coming, but when it comes to the age of the Universe, after millennia of wondering, humanity finally has an answer it can trust.

    See the full article here.


     
  • richardmitnick 12:08 pm on October 27, 2017
    Tags: Ethan Siegel, The Hubble Space Telescope Is Falling And if we don’t prepare to catch it now it’ll be too late

    From Ethan Siegel: “The Hubble Space Telescope Is Falling…” 

    From Ethan Siegel

    Oct 25, 2017

    …And if we don’t prepare to catch it now, it’ll be too late.

    1
    NASA/ESA Hubble

    Since 1990, the Hubble Space Telescope has been redefining how we view our Universe. From hundreds of miles above the surface of the Earth, it orbits the entire world every 97 minutes. Multiple servicing missions, including the final one in 2009, have corrected its optics, enhanced its cameras, replaced worn-out parts, and boosted it to higher orbits. With the decommissioning of the space shuttle, however, the telescope that changed the world is now looking ahead to its inevitable end-of-life. Even if the fine-guidance sensors never fail; even if the reaction wheels remain operational; even if the communications equipment never dies, Hubble is in trouble. It’s presently falling back towards Earth, and there are no plans in place to stop its orbital decay.

    2
    When a spacecraft re-enters Earth’s atmosphere, it almost always inevitably breaks up into many pieces. If the deorbiting isn’t done in a controlled fashion, the debris could land over populated areas, causing catastrophic damage. Image credit: NASA/ESA/Bill Moede and Jesse Carpenter.

    Hubble is currently orbiting Earth at a mean altitude of 353 miles, or 568 kilometers. We typically define the border between Earth’s atmosphere and outer space as 60 miles (about 100 kilometers) up, but in reality the situation is far more complicated. The atmosphere never truly ends, but simply gets more and more diffuse the higher up you go, with atoms and molecules that are gravitationally bound to the Earth extending to altitudes up to 10,000 km (6,200 miles). Beyond that point, the Earth’s atmosphere is indistinguishable from the solar wind, with both consisting of tenuous, hot atoms and ionized particles.

    3
    The layers of Earth’s atmosphere, as shown here to scale, go up far higher than the typically-defined boundary of space. Every object in low-Earth orbit is subject to atmospheric drag at some level. Image credit: Wikimedia Commons user Kelvinsong.

    The overwhelming majority of our atmosphere (by mass) is contained in the lowest layers, with the troposphere containing 75% of Earth’s atmosphere, the stratosphere containing another 20%, and the mesosphere containing almost all of the remaining 5%. Beyond that, atmospheric drag drops off significantly, and long-term orbits are possible. From space, those lowest three layers are the only ones that are optically visible, with most satellites in low-Earth orbit located above them all: in the thermosphere. Up at these incredible altitudes, a typical atmospheric molecule (of oxygen, for instance) might travel for a kilometer or more before colliding with another one.

    5
    The troposphere (orange), stratosphere (white), and mesosphere (blue) are where the overwhelming majority of the molecules in Earth’s atmosphere lie. But beyond that, air is still present, causing satellites to fall and eventually de-orbit if left alone. Image credit: NASA/Crew of Expedition 22.

    But the Hubble Space Telescope is much larger than an oxygen molecule, and is moving much more quickly than one, too. Moving at about 5 miles per second, it collides with these high-altitude air molecules on a continuous basis, with each collision stripping it of a tiny, imperceptible bit of speed. Over the course of an hour, a day, or even a month, the changes aren’t noticeable. Give it enough time, though, and those changes add up to something big. The loss of altitude and speed means that, very slowly, Hubble will start to spiral closer to Earth.
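
    To get a feel for the numbers, here’s a heavily simplified sketch of how drag shrinks a circular orbit; every input below, from the air density to the drag area, is an assumed round value, and the real density swings enormously with solar activity.

```python
import math

# For a circular orbit shrinking under drag: da/dt = -sqrt(mu * a) * rho * Cd * A / m.
MU_EARTH = 3.986e14      # GM of Earth, m^3/s^2
R_EARTH = 6.371e6        # m

def decay_rate_km_per_year(altitude_km, rho, cd, area_m2, mass_kg):
    """Approximate rate of change of a circular orbit's altitude (negative = decaying)."""
    a = R_EARTH + altitude_km * 1e3
    da_dt = -math.sqrt(MU_EARTH * a) * rho * cd * area_m2 / mass_kg   # m/s
    return da_dt * 3.156e7 / 1e3                                      # km per year

# Assumed inputs: ~3e-13 kg/m^3 of air near 540 km (this swings by an order of
# magnitude with solar activity), a drag coefficient of ~2.2, ~40 m^2 of
# effective area, and Hubble's roughly 11,000 kg of mass.
print(decay_rate_km_per_year(540, 3e-13, 2.2, 40.0, 11000.0))   # ~ -4: a few km of altitude lost per year
```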

    That’s too bad, because the science we’ve gotten, and continue to get, from Hubble is unlike anything else in human history. As the observatory falls to lower altitudes, collisions with air molecules become more frequent, accelerating the process. Additionally, it’s an uneven effect, as Hubble spends half of every 97-minute orbit in sunlight and half in darkness, which will cause the highly asymmetrical Hubble Space Telescope to begin to tumble. If we do nothing, these drag forces will add up until Hubble becomes a fireball in the atmosphere, disintegrating into a multitude of parts and experiencing what’s known as an “uncontrolled re-entry.” The telescope is far too big to simply burn up entirely, and the fiery debris could literally land anywhere.

    7
    An uncontrolled re-entry, as illustrated here, could cause large, massive chunks to land pretty much anywhere on Earth. Heavy, solid objects, like Hubble’s primary mirror, could easily cause significant amounts of damage or even kill, depending on where those chunks landed. Image credit: ESA.

    During the previous servicing missions, Hubble has been “boosted” to higher orbits, in order to maintain it for longer. Without a crewed, reusable servicing vehicle like the shuttle, however, this is no longer feasible. Unless we develop some new technology and heavily invest in the training necessary to complete a life-saving mission, Hubble’s stint as humanity’s greatest optical observatory will unceremoniously come to an end. An uncrewed mission could be sent to robotically program a controlled re-entry, where the surviving components would land in the ocean, but this will only shorten its lifespan.

    8
    This image shows Hubble Servicing Mission 4 astronauts practicing on a Hubble model underwater at the Neutral Buoyancy Lab in Houston, under the watchful eyes of NASA engineers and safety divers.

    If we keep the status quo, it’s conceivable that the components on Hubble will last for decades to come. But its orbit won’t. As Michael Massimino, one of the astronauts who serviced Hubble aboard the Space Shuttle for the final time in 2009, related:

    Its orbit will decay. The telescope will be fine, but its orbit will be bringing it closer and closer to Earth. That’s when it’s game over.

    That final mission, therefore, included a docking mechanism that was installed onto the telescope: the Soft Capture and Rendezvous System. Any properly-outfitted rocket could safely take it home.

    9
    The soft capture mechanism installed on Hubble (illustration) uses a Low Impact Docking System (LIDS) interface and associated relative navigation targets for future rendezvous, capture, and docking operations. The system’s LIDS interface is designed to be compatible with the rendezvous and docking systems to be used on the next-generation space transportation vehicle. Image credit: NASA.

    But time is of the essence to develop the technology that can either save it and prolong its life, or to take it safely out of orbit. If it continues on its current path, it will likely come crashing down to Earth in an uncontrolled fashion by the mid-2030s at the latest, and possibly in just over a decade, depending on a number of unpredictable factors. The only planned apparatus capable of servicing or boosting Hubble, NASA’s Space Launch System, has already seen its first planned flight slip behind schedule. If things slip far enough, we may have no option but to de-orbit.

    10
    Unless NASA’s Space Launch System is ready in time, and the space administration decides to invest the resources in servicing and boosting Hubble once again, a de-orbit will be the only way to prevent an uncontrolled potential disaster. Image credit: NASA/Marshall Space Flight Center.

    The truth is that, more than any other observatory in history, the Hubble Space Telescope has changed how we view the Universe. Although other ground-based and space-based observatories have been built and will be flying that surpass Hubble on a number of fronts, for some classes of observing, it’s still the best tool humanity has ever created. But by the very nature of its orbit, not only is its lifetime finite, but its demise will come in a horrific, potentially dangerous fashion if we do nothing. Saving it for further use is a long-term project that requires planning now. Hubble is falling, and if we don’t take the steps to catch it soon, it will be too late.

    See the full article here.


     
  • richardmitnick 11:58 am on October 26, 2017
    Tags: Ethan Siegel, Merging Neutron Stars Deliver Deathblow To Dark Matter And Dark Energy Alternatives

    From Ethan Siegel: “Merging Neutron Stars Deliver Deathblow To Dark Matter And Dark Energy Alternatives” 

    From Ethan Siegel

    Oct 25, 2017

    1
    In the final moments of merging, two neutron stars don’t merely emit gravitational waves, but a catastrophic explosion that echoes across the electromagnetic spectrum. The arrival time difference between light and gravitational waves enables us to learn a lot about the Universe. University of Warwick / Mark Garlick

    If you ask an astrophysicist what the greatest puzzle in the Universe is today, two of the most common answers you’ll get are dark matter and dark energy. The stuff that makes up everything we know of here on Earth, atoms, themselves made of more fundamental particles, adds up to only around 5% of the cosmic energy budget. Either 95% of the energy in the Universe is in these two forms, dark matter and dark energy, that have never been directly detected, or something is wrong with our current picture of the Universe. The alternatives have been explored at length, with many options leading to slightly different physical consequences. With the first observation of merging neutron stars, and signals arriving in both gravitational waves and light from across the electromagnetic spectrum, a huge slew of these options have just been ruled out. When put to the test, dark matter and dark energy both survive.

    2
    The ultramassive, merging dynamical galaxy cluster Abell 370, with gravitational mass (mostly dark matter) inferred in blue. NASA, ESA, D. Harvey (Swiss Federal Institute of Technology), R. Massey (Durham University, UK), the Hubble SM4 ERO Team and ST-ECF

    There are a few major puzzles in astrophysics and cosmology that dark matter and dark energy were designed to solve. For dark matter, they largely relate to how galaxies form, rotate, and cluster together; for dark energy, they’re about the expansion rate of the Universe and how it evolves over time. If you make an appropriate modification to your theory of gravity, you can alter some of those observables without introducing dark matter and/or dark energy. The hope of those working on these alternatives is that the right modification will be found — one that also makes new predictions distinct from those of dark matter/dark energy — and they can be put to the test.

    3
    The cosmic web is driven by dark matter, with the largest-scale structure set by the expansion rate and dark energy. The small structures along the filaments form by the collapse of normal, electromagnetically-interacting matter. Ralf Kaehler, Oliver Hahn and Tom Abel (KIPAC)

    But modifying gravity, either to account for dark matter or dark energy (much less both), is a game you have to play very carefully. Einstein’s theory of General Relativity has already been tested quite rigorously, and its predictions have been borne out every time. If you modify gravity, you’re altering that theory, so you have to do it in a way that doesn’t impinge upon the observations and measurements that have already taken place. Many of the options out there have, therefore, ventured into a regime that hadn’t been tested very well: one that allowed the speed of gravity to vary. In Einstein’s theory, the speed of gravity equals the speed of light, exactly, and at all times. But in many alternatives, that assumption gets tweaked.

    4
    Large scale projection through the Illustris volume at z=0, centered on the most massive cluster, 15 Mpc/h deep. Shows dark matter density (left) transitioning to gas density (right). The large-scale structure of the Universe cannot be explained without dark matter, though many modified gravity attempts exist. Illustris Collaboration / Illustris Simulation

    Dark energy is generally assumed to be a cosmological constant, where the speed of light and the speed of gravity are both constants (and equal to one another) as well. Alternative formulations instead add something slightly more complex: a scalar field or a set of additional fields. This is a generic feature of modified-gravity models such as the covariant Galileon, massive gravity, Einstein-Aether theories, TeVeS, and Hořava gravity. Many scenarios, depending on how the scalar field interacts with the standard “gravity” (tensor) field of General Relativity, give a speed of gravity that’s either different from the speed of light or that varies in time. But the fact that gamma rays and gravitational waves from the merging neutron star event GW170817 arrived within 1.7 seconds of one another means that the speed of gravity must equal the speed of light to better than 1 part in 10^15.
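
    That precision follows from simple arithmetic; here’s the one-line version, using my own rough figure for the light-travel time:

```python
# An arrival gap of at most 1.7 seconds over a ~130-million-year journey.
travel_time_s = 130e6 * 3.156e7          # ~130 million light years, in light-travel seconds
max_fractional_speed_difference = 1.7 / travel_time_s
print(max_fractional_speed_difference)   # ~4e-16, i.e. better than 1 part in 10^15
```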

    5
    All massless particles travel at the speed of light, including the photon, gluon and gravitational waves, which carry the electromagnetic, strong nuclear and gravitational interactions, respectively. The near-identical arrival time of gravitational waves and electromagnetic waves from GW170817 are incredibly important. NASA/Sonoma State University/Aurore Simonnet

    As a result, a huge slew of alternatives to standard General Relativity with standard dark energy are ruled out. The fact that an arrival-time difference of 1.7s for a light signal and a gravitational wave signal over a distance of 130 million light years is so minuscule means that the speed of gravity cannot vary with time, nor can it be systematically higher or lower than the speed of light. If you add a scalar field to a tensor theory of gravity, you get two generic effects:

    1. There’s generally a tensor speed excess term, which modifies (increases) the propagation speed of gravitational waves.
    2. The scale of the effective Planck mass changes over cosmic times, which alters the damping of the gravitational wave signal as the Universe expands.

    The fact that the speed of light and the speed of gravity are equal to such high precision means that all theories that have this type of modification are highly constrained, and that most such models are largely ruled out.

    6
    Jose María Ezquiaga and Miguel Zumalacárregui, ‘Dark Energy after GW170817’
    Many modifications of gravity that attempt to do away with dark energy have been ruled out as a result of the arrival time of gravitational and electromagnetic waves.

    For dark matter, attempts to modify gravity get even worse. What most modifications generically do is change the force law between massive objects, which alters the gravitational potential in regions of spacetime containing mass. When objects traveling at the speed of light, like photons or gravitational waves, pass through that space, those signals get delayed according to the rules of General Relativity: the Shapiro time delay. From 130 million light years away, the amount of intervening matter ought to delay that signal by about three years, if the standard dark matter picture is correct. But if you’re modifying gravity in such a way that you get rid of dark matter, you vastly change the propagation properties of gravitational waves through space.

    7
    When light, gravitational waves, or any massless particle passes through a region of space containing large amounts of matter, that space gets distorted and the light path bends, causing a delay in the arrival time. In most modified gravity theories, the delay for light and gravitational waves would be different. ALMA (ESO/NRAO/NAOJ), L. Calçada (ESO), Y. Hezaveh et al.

    No-dark-matter modified gravity theories like Bekenstein’s TeVeS or Moffat’s MoG/Scalar-Tensor-Vector ideas have the property that gravitational waves propagate on different geodesics — a.k.a. different spacetime paths — from those followed by photons and neutrinos. In short, gravitational waves should travel along the paths defined by the normal matter alone, while the photons and neutrinos should travel along paths defined by the effective mass: the normal matter plus the effects that emulate dark matter. This would give a difference in arrival times between photons and gravitational waves of approximately 800 days, instead of the 1.7 seconds observed.

    With the cross-correlation of gravitational waves and electromagnetic signals, these no-dark-matter scenarios are busted.
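    Just how badly busted? A quick order-of-magnitude comparison makes it plain; the ~800-day prediction is simply the figure quoted above, taken as an assumed input to this sketch.

```python
# Comparing the predicted photon lag in "dark matter emulator" theories
# (~800 days, as quoted above) with the 1.7 s that was actually observed.

predicted_delay = 800 * 86_400   # ~800 days, in seconds
observed_delay = 1.7             # seconds

print(f"Predicted photon lag: {predicted_delay:.1e} s")
print(f"Observed photon lag:  {observed_delay} s")
print(f"Off by a factor of ~{predicted_delay / observed_delay:.0e}")
# The prediction misses by roughly seven orders of magnitude.
```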

    8
    Sibel Boran, Shantanu Desai, Emre Kahya, and Richard Woodard, ‘GW170817 Falsifies Dark Matter Emulators’
    The various mass sources in between NGC 4993, where the neutron star-neutron star merger occurred, and the quantified delay that they cause in light/gravitational wave travel time.

    When gravitational waves and photons (electromagnetic waves) pass through space, they’re affected by the curvature and expansion of space in the exact same way. That is, as long as General Relativity is your theory of gravity. If you modify your theory of gravity — to try and eliminate the need for dark matter and/or dark energy, for example — then gravitational waves are affected only by the matter/mass part, while the modification effects hit photons and other particles. Because the gravitational waves and light signals from merging neutron stars arrived at the same time, they traveled at the same speeds through space, and were delayed by the same amounts: to within 1 part in a quadrillion. This level of precision is enough to rule out the leading contenders for a modified theory of gravity without dark matter.

    9
    The X-ray (pink) and overall matter (blue) maps of various colliding galaxy clusters show a clear separation between normal matter and gravitational effects, some of the strongest evidence for dark matter. Alternative theories now need to be so contrived that they are considered by many to be quite ridiculous. X-ray: NASA/CXC/Ecole Polytechnique Federale de Lausanne, Switzerland/D.Harvey NASA/CXC/Durham Univ/R.Massey; Optical/Lensing Map: NASA, ESA, D. Harvey (Ecole Polytechnique Federale de Lausanne, Switzerland) and R. Massey (Durham University, UK)

    There are still a few contrived models out there that may hold hope for modified gravity, like non-local theories of gravity (where gravitational effects and the locations of masses don’t match up) or theories where gravitational waves and electromagnetic waves obey two different sets of rules. But even these ideas are severely constrained by our new gravitational wave observations, and are required to become closer-and-closer mimics to the effects of dark matter and dark energy in order to survive. Modified gravity isn’t dead yet, but many of its greatest hopes have just been dashed. Einstein, however, with his theory in its original, unmodified form, still survives.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 12:57 pm on October 17, 2017 Permalink | Reply
    Tags: , , , , , , Ethan Siegel,   

    From Ethan Siegel: “Why Neutron Stars, Not Black Holes, Show The Future Of Gravitational Wave Astronomy” 

    From Ethan Siegel

    Oct 17, 2017

    2
    In the final moments of merging, two neutron stars don’t merely emit gravitational waves, but a catastrophic explosion that echoes across the electromagnetic spectrum. University of Warwick / Mark Garlick

    On August 17, the signals from two merging neutron stars reached Earth after a journey of 130 million light years. After an 11 billion year dance, these remnants of once-massive, blue stars that died in supernovae so long ago spiraled into one another after emitting enough gravitational radiation to see their orbits decay. As each one moves through the changing spacetime created by the gravitational field and motion of the other, its momentum changes, causing the two masses to orbit one another more closely over time. Eventually, they meet, and when they do, they undergo a catastrophic reaction: a kilonova. For the first time, we’ve recorded the inspiral and merger in the gravitational wave sky, noticing it in all three detectors (LIGO Livingston, LIGO Hanford, and Virgo), as well as in the electromagnetic sky, from gamma rays all the way through the optical and into the radio. At last, gravitational wave astronomy is now a part of astronomy.


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    ESA/eLISA the future of gravitational wave research

    1
    Skymap showing how adding Virgo to LIGO helps in reducing the size of the source-likely region in the sky. (Credit: Giuseppe Greco (Virgo Urbino group)

    3
    From the very first binary neutron star system ever discovered, we knew that gravitational radiation was carrying energy away. It was only a matter of time before we found a system in the final stages of inspiral and merger. NASA (L), Max Planck Institute for Radio Astronomy / Michael Kramer

    We knew this had to happen eventually. Neutron stars have very large masses, estimated at over the mass of the Sun each, and very small sizes. Imagine an atomic nucleus that didn’t contain a handful, a few dozen, or even a few hundred protons and neutrons inside, but rather a star’s worth: 10^57 of them. These incredible objects swoop through space, faster and faster, as the fabric of space itself bends and radiates due to their mutual presence. Pulsars in binary systems coalesce, and in the very final stages of inspiral, the strain they impose on a detector even a hundred million light years away can be detectable. We’ve seen the indirect evidence for decades: the decay of their mutual orbits. But the direct evidence, now available, changes everything.

    4
    The strain on the detectors, from the inspiral of the two neutron stars, can be clearly seen even visibly from the twin LIGO detectors. The less-sensitive Virgo detector provides incredibly accurate location information as well. B.P. Abbott et al., PRL 119, 161101 (2017)

    Each time these waves pass through your detector, they cause a slight expanding-and-contracting of the laser arms. Because the neutron star system is so thoroughly predictable, decaying at the rate predicted by Einstein’s equations, we know exactly how the frequency and amplitude of the inspiral ought to behave. Unlike black hole systems of higher masses, the frequency of these low-mass systems falls in the detectable range of the LIGO and Virgo detectors for much longer time periods. While the overwhelming majority of black hole-black hole mergers registered in the LIGO detectors for only a fraction of a second, these neutron stars, even at a distance of over 100 million light years, had their signals detected for almost half a minute!
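    A rough way to see why is the leading-order (Newtonian) chirp formula for how much time remains before merger once the signal reaches a given frequency. The component masses and the 30 Hz starting frequency below are illustrative assumptions, not the published values from the detection papers.

```python
# Minimal sketch: time left before merger once the gravitational wave
# frequency reaches f_gw, using tau ~ (5/256) (G*Mchirp/c^3)^(-5/3) (pi*f)^(-8/3).
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
MSUN = 1.989e30      # kg

def chirp_mass(m1, m2):
    """Chirp mass in solar masses."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def time_to_merger(m1, m2, f_gw=30.0):
    """Approximate seconds remaining once the GW frequency hits f_gw."""
    mc = chirp_mass(m1, m2) * MSUN
    tm = G * mc / C**3    # chirp mass expressed as a time
    return (5.0 / 256.0) * tm ** (-5.0 / 3.0) * (math.pi * f_gw) ** (-8.0 / 3.0)

print(f"1.4 + 1.4 Msun neutron stars: ~{time_to_merger(1.4, 1.4):.0f} s in band")
print(f"30 + 30 Msun black holes:     ~{time_to_merger(30.0, 30.0):.1f} s in band")
# Tens of seconds for the neutron stars, a fraction of a second for the black holes.
```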

    5
    This figure shows reconstructions of the four confident and one candidate (LVT151012) gravitational wave signals detected by LIGO and Virgo to date, including the most recent black hole detection GW170814 (which was observed in all three detectors). LIGO/Virgo/B. Farr (University of Oregon)

    This time, the Fermi gamma-ray satellite detected a transient burst of gamma rays, consistent with previously seen short gamma-ray bursts, just 1.7 seconds after the arrival of the final “chirp” of the gravitational wave signal.

    NASA/Fermi Telescope


    NASA/Fermi LAT

    By the time 11 hours had passed, the LIGO/Virgo team had pinpointed an area on the sky just 28 square degrees in size: the smallest localized region ever seen. Even though the neutron star signal was so much less intense in magnitude than the black hole signals were, the fact that the detectors had caught so many orbits gave the team the strongest signal to date: a signal-to-noise ratio of more than 32!

    6
    By adding in the data from the Virgo detector, even though the signal-to-noise ratio was low, we were able to make the greatest-precision detection of a gravitational wave source of all time. B.P. Abbott et al., PRL 119, 161101 (2017)

    By knowing where this signal was, we could then train our greatest optical, infrared, and radio telescopes on this site in the sky, where the galaxy NGC 4993 was located (at the correct distance). Over the next two weeks, we saw an electromagnetic counterpart to the gravitational wave source, and the afterglow of the gamma-ray burst that Fermi saw. For the first time, we had observed a neutron star merger in gravitational waves and across the light spectrum, confirming what theorists had suspected in spectacular fashion: that this is where the majority of the heaviest elements in the Universe originate.

    7
    Just hours after the gravitational wave signal arrived, optical telescopes were able to hone in on the galaxy home to the merger, watching the site of the blast brighten and fade in practically real-time.

    Dark Energy Survey


    Dark Energy Camera [DECam], built at FNAL


    NOAO/CTIO Victor M Blanco 4m Telescope, which houses the DECam, at Cerro Tololo, Chile

    But also encoded in this merger are a few incredible facts that you may not realize; facts that point the way to the future of gravitational wave astronomy.

    1.) Binary neutron stars barely spin at all! In isolation, neutron stars can be some of the most rapidly spinning objects in the Universe, up to a significant percentage of the speed of light. The fastest rotate over 700 times per second… but not in a binary system! The close presence of another large mass means that tidal forces are large, and hence the friction of one rotating body on another causes them both to slow down. By the time they merge, neither one can be rotating at any appreciable speed, allowing us to constrain the orbital parameters from the gravitational wave signal extremely tightly.

    7
    Some of the most important parameters of the merging gravitational wave system were reported quite precisely, owing to the non-rotating nature of the neutron star-neutron star system. B.P. Abbott et al., PRL 119, 161101 (2017)

    2.) At least 28 Jupiter masses’ worth of material was converted into energy via E = mc^2. We’ve never seen neutron star-neutron star mergers in gravitational waves before. In black hole-black hole systems of equivalent mass, up to 5% of the total mass gets converted into energy. In neutron star systems, it’s expected to be less, because the collision occurs between nuclei, not between singularities; the two masses can’t get as close. Still, at least 1% of the total mass was converted into pure energy via Einstein’s mass-energy equivalence, a very impressive and large amount of energy!
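    As a sanity check, here is a sketch (not the collaboration’s own calculation) of what 28 Jupiter masses corresponds to in energy, and how it compares with roughly 1% of two neutron stars of about 1.4 solar masses each, which are assumed values for illustration.

```python
# E = mc^2 for ~28 Jupiter masses, and the fraction of the total system mass
# that this represents for an assumed 1.4 + 1.4 solar mass binary.

C = 2.998e8            # m/s
M_JUPITER = 1.898e27   # kg
M_SUN = 1.989e30       # kg

m_radiated = 28 * M_JUPITER
energy = m_radiated * C ** 2
print(f"Energy radiated: ~{energy:.1e} J")          # ~5 x 10^45 J

total_mass = 2 * 1.4 * M_SUN
print(f"Fraction of total mass: {m_radiated / total_mass:.1%}")   # ~1%
```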

    8
    All massless particles travel at the speed of light, including the photon, gluon and gravitational waves, which carry the electromagnetic, strong nuclear and gravitational interactions, respectively. NASA/Sonoma State University/Aurore Simonnet

    3.) Gravitational waves move at exactly the speed of light! Before this detection, we never had a gravitational wave and a light signal simultaneously identifiable to compare with one another. After a journey of 130 million light years, the first electromagnetic signal from this detection arrived just 1.7 seconds after the peak of the gravitational wave signal. That means, at most, the difference between the speed of gravity and the speed of light is about 0.12 microns-per-second, or 0.00000000000004%. It’s anticipated that these two speeds are exactly equal, and the delay of the light signal comes from the fact that the light-producing reactions in the neutron star take a second or two to reach the surface.

    8
    The galaxy NGC 4993, located 130 million light years away, had been imaged many times before. But just after the August 17, 2017 detection of gravitational waves, a new transient source of light was seen: the optical counterpart of a neutron star-neutron star merger. P.K. Blanchard / E. Berger / Pan-STARRS / DECam

    Pan-STARRS telescope, U Hawaii, Haleakalā, Maui, Hawaii, USA

    4.) A faster response time is possible! By the time we first located the three-dimensional position on the sky where the electromagnetic signal originated, twelve hours had passed. Sure, we were able to observe the optical counterpart immediately, but it would’ve been better to get in on the ground floor. As automated analysis and the synchronization of all three detectors improve, we’ll do better. Over the coming years, LIGO will get slightly more sensitive, Virgo will do better, and two additional LIGO-like detectors, KAGRA in Japan and LIGO-India, will come online. Instead of half a day, we may soon be talking about response times of minutes or even seconds.

    KAGRA gravitational wave detector, Kamioka mine in Kamioka-cho, Hida-city, Gifu-prefecture, Japan

    LIGO-India in the Hingoli district in western India

    9
    On the ground, a noise ‘glitch’ in the LIGO Livingston detector meant that the automated software failed to extract the signal, requiring manual intervention. B.P. Abbott et al., PRL 119, 161101 (2017)

    5.) Going to space will be the ultimate in gravitational wave observing. Here on the ground, part of the reason it took so long to find the location was that in Livingston, LA, there was a “noise” glitch: something caused the detector on the ground to vibrate. As a result, the automated software couldn’t extract the true signal, and manual intervention was required. The LIGO-Virgo team did an amazing job, but were these detectors in space, this wouldn’t even have been an issue in the first place. There is no seismic noise in the abyss of interplanetary space.

    ESA/eLISA space based the future of gravitational wave research

    10
    Neutron stars, when they merge, can exhibit gravitational wave and electromagnetic signals simultaneously, unlike black holes.
    Dana Berry / Skyworks Digital, Inc.

    Unlike merging black holes, inspiraling and merging neutron stars:

    Can be seen for a much longer time, due to their low masses,
    Will emit electromagnetic counterparts, allowing for the gravitational and electromagnetic skies to be unified,
    Are far more numerous, with the only reason we’ve seen more black holes so far being the greater range at which we can detect them,
    And can be used to learn information about the Universe, such as the speed of gravity, that black holes cannot teach us.
    The delay of around 11 hours from the merger to the first optical and infrared signatures isn’t due to physics, but due to our own instrumental limitations here. As our analysis techniques improve, and more events are discovered, we’ll learn exactly how long it takes before visible light signatures are created by neutron star-neutron star mergers.

    At last, the origin of the heavy elements is confirmed; the speed of gravity is definitively known; and the gravitational wave and electromagnetic skies are one. Any doubters of LIGO now have the independent confirmation they’ve been clamoring for, and there is no ambiguity left. The future of astronomy includes gravitational waves, and that future is here, today. Congratulations, one and all. Today, all of Earth is the beneficiary of this incredible knowledge.

    See the full article here .


     
  • richardmitnick 1:08 pm on October 12, 2017 Permalink | Reply
    Tags: , , , , , , , Ethan Siegel   

    From Ethan Siegel: “Inflation Isn’t Just Science, It’s The Origin Of Our Universe” 

    From Ethan Siegel

    Oct 12, 2017

    1
    The stars and galaxies we see today didn’t always exist, and the farther back we go, the closer to an apparent singularity the Universe gets, but there is a limit to that extrapolation. To go all the way back, we need a modification to the Big Bang: cosmological inflation. Image credit: NASA, ESA, and A. Feild (STScI).

    “There’s no obvious reason to assume that the very same rare properties that allow for our existence would also provide the best overall setting to make discoveries about the world around us. We don’t think this is merely coincidental.” -Guillermo Gonzalez

    In order to be considered a scientific theory, there are three things your idea needs to do. First off, you have to reproduce all of the successes of the prior, leading theory. Second, you need to explain a new phenomenon that isn’t presently explained by the theory you’re seeking to replace. And third, you need to make a new prediction that you can then go out and test: where your new idea predicts something entirely different or novel from the pre-existing theory. Do that, and you’re science. Do it successfully, and you’re bound to become the new, leading scientific theory in your area. Many prominent physicists have recently come out against inflation, with some claiming that it isn’t even science. But the facts say otherwise. Not only is inflation science, it’s now the leading scientific theory about where our Universe comes from.

    2
    The expanding Universe, full of galaxies and the complex structure we observe today, arose from a smaller, hotter, denser, more uniform state. But even that initial state had its origins, with cosmic inflation as the leading candidate for where that all came from. Image credit: C. Faucher-Giguère, A. Lidz, and L. Hernquist, Science 319, 5859 (47).

    Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe) Date 2010 Credit: Alex Mittelmann, Coldcreation

    The Big Bang was first confirmed in the 1960s, with the observation of the Cosmic Microwave Background [CMB].

    Cosmic Microwave Background NASA/WMAP

    NASA/WMAP

    CMB per ESA/Planck

    ESA/Planck

    Since that first detection of the leftover glow, predicted from an early, hot, dense state, we’ve been able to validate and confirm the Big Bang’s predictions in a number of important ways. The large-scale structure of the Universe is consistent with having formed from a nearly-uniform past state, under the influence of gravity over billions of years. The Hubble expansion and the temperature in the distant past are consistent with an expanding, cooling Universe filled with matter and energy of various types. The abundances of hydrogen, helium, lithium, and their various isotopes match the predictions from an early, hot, dense state. And the blackbody spectrum of the Big Bang’s leftover glow matches our observations precisely.

    3
    The light from the cosmic microwave background and the pattern of fluctuations from it gives us one way to measure the Universe’s curvature. To the best of our measurements, to within 1 part in about 400, the Universe is perfectly spatially flat. Image credit: Smoot Cosmology Group / Lawrence Berkeley Labs.

    But there are a number of things that we observe that the Big Bang doesn’t explain. The fact that the Universe is the exact same temperature in all directions, to better than 99.99%, is an observation without a theoretical cause. The fact that the Universe, in all directions, appears to be spatially flat (rather than positively or negatively curved) is another fact without an explanation. And the fact that there are no leftover high-energy relics, like magnetic monopoles, is a curiosity that we wouldn’t expect if the Universe began from an arbitrarily hot, dense state.

    In other words, the implication is that despite all of the Big Bang’s successes, it doesn’t explain everything about the origin of the Universe. Either we can look at these unexplained phenomena and conjecture, “maybe the Universe was simply born this way,” or we can look for an explanation that meets our requirements for a scientific theory. That’s exactly what Alan Guth did in 1979, when he first stumbled upon the idea of cosmological inflation.

    4
    Alan Guth, Highland Park High School and M.I.T., who first proposed cosmic inflation

    HPHS Owls

    Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe) Date 2010 Credit: Alex Mittelmann, Coldcreation

    5
    Alan Guth’s notes. http://www.bestchinanews.com/Explore/4730.html

    The big idea of cosmic inflation was that the matter-and-radiation-filled Universe, the one that has been expanding and cooling for billions of years, arose from a very different state that existed prior to what we know as our observable Universe. Instead of being filled with matter-and-radiation, space was full of vacuum energy, which caused it to expand not just rapidly, but exponentially, meaning the expansion rate doesn’t fall with time as long as inflation goes on. It’s only when inflation comes to an end that this vacuum energy gets converted into matter, antimatter, and radiation, and the hot Big Bang results.
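    The distinction is easy to see numerically. In the minimal sketch below (with an arbitrary, assumed expansion rate in dimensionless units), the scale factor during inflation grows exponentially because the expansion rate stays constant, while a radiation-dominated Universe only grows as a power law.

```python
# Exponential (inflationary) growth of the scale factor versus power-law
# (radiation-dominated) growth. All numbers are in arbitrary units.
import math

H_INFLATION = 1.0     # assumed constant expansion rate during inflation

def a_inflation(t, a0=1.0):
    """Scale factor during inflation: a(t) = a0 * exp(H t)."""
    return a0 * math.exp(H_INFLATION * t)

def a_radiation(t, a0=1.0):
    """Scale factor in a radiation-dominated Universe: a(t) ~ t^(1/2)."""
    return a0 * math.sqrt(t)

for t in (1.0, 10.0, 60.0):
    print(f"t = {t:5.1f}: inflation a = {a_inflation(t):10.3e},  "
          f"radiation a = {a_radiation(t):7.3f}")
# After 60 "e-folding times" the inflating patch has grown by e^60 ~ 10^26,
# while the power-law case has grown by less than a factor of 8.
```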

    5
    This illustration shows regions where inflation continues into the future (blue), and where it ends, giving rise to a Big Bang and a Universe like ours (red X). Note that this could go back indefinitely, and we’d never know. Image credit: E. Siegel / Beyond The Galaxy.

    It was generally recognized that inflation, if true, would solve those three puzzles that the Big Bang could only posit as initial conditions: the horizon (temperature), flatness (curvature), and monopole (lack-of-relics) problems. In the early-to-mid 1980s, lots of work went into meeting that first criterion: reproducing the successes of the Big Bang. The key was to arrive at an isotropic, homogeneous Universe with conditions that matched what we observed.

    6
    The two simplest classes of inflationary potentials, with chaotic inflation (L) and new inflation (R) shown. Image credit: E. Siegel / Google Graph.

    After a few years, we had two generic classes of models that worked:

    “New inflation” models, where vacuum energy starts off at the top of a hill and rolls down it, with inflation ending when the ball rolls into the valley, and
    “Chaotic inflation” models, where vacuum energy starts out high on a parabola-like potential, rolling into the valley to end inflation.

    Both of these classes of models reproduced the successes of the Big Bang, but also made a number of similar, quite generic predictions for the observable Universe. They were as follows:

    7
    The earliest stages of the Universe, before the Big Bang, are what set up the initial conditions that everything we see today has evolved from. Image credit: E. Siegel, with images derived from ESA/Planck and the DoE/NASA/ NSF interagency task force on CMB research.

    1. The Universe should be nearly perfectly flat. Yes, the flatness problem was one of the original motivations for it, but at the time, we had very weak constraints. 100% of the Universe could be in matter and 0% in curvature; 5% could be matter and 95% could be curvature, or anywhere in between. Inflation, quite generically, predicted that 100% needed to be “matter plus whatever else,” but curvature should be between 0.01% and 0.0001%. This prediction has been validated by our ΛCDM model, where 5% is normal matter, 27% is dark matter and 68% is dark energy; curvature is constrained to be 0.25% or less. As observations continue to improve, we may, in fact, someday be able to measure the non-zero curvature predicted by inflation.

    2. There should be an almost scale-invariant spectrum of fluctuations. If quantum physics is real, then the Universe should have experienced quantum fluctuations even during inflation. These fluctuations should be stretched, exponentially, across the Universe. When inflation ends, these fluctuations should get turned into matter and radiation, giving rise to overdense and underdense regions that grow into stars and galaxies, or great cosmic voids. Because of how inflation proceeds in the final stages, the fluctuations should be slightly greater on either small scales or large scales, depending on the model of inflation, which means there should be a slight departure from perfect scale invariance. If scale invariance were exact, a parameter we call n_s would equal 1; n_s is observed to be 0.96, and wasn’t measured until WMAP in the 2000s.

    3. There should be fluctuations on scales larger than light could have traveled since the Big Bang. This is another consequence of inflation, but there’s no way to get a coherent set of fluctuations on large scales like this without something stretching them across cosmic distances. The fact that we see these fluctuations in the cosmic microwave background and in the large-scale structure of the Universe — and didn’t know about them until the COBE and WMAP satellites in the 1990s and 2000s — further validates inflation.

    NASA/COBE

    Cosmic Infrared Background, Credit: Michael Hauser (Space Telescope Science Institute), the COBE/DIRBE Science Team, and NASA

    4. These quantum fluctuations, which translate into density fluctuations, should be adiabatic. Fluctuations could have come in different types: adiabatic, isocurvature, or a mixture of the two. Inflation predicted that these fluctuations should have been 100% adiabatic, which should leave unique signatures in both the cosmic microwave background and the Universe’s large-scale structure. Observations bear out that yes, in fact, the fluctuations were adiabatic: of constant entropy everywhere.

    5. There should be an upper limit, smaller than the Planck scale, to the temperature of the Universe in the distant past. This is also a signature that shows up in the cosmic microwave background: how high a temperature the Universe reached at its hottest. Remember, if there were no inflation, the Universe should have gone up to arbitrarily high temperatures at early times, approaching a singularity. But with inflation, there’s a maximum temperature that must be at energies lower than the Planck scale (~10^19 GeV). What we see, from our observations, is that the Universe achieved temperatures no higher than about 0.1% of that (~10^16 GeV) at any point, further confirming inflation. This is an even better solution to the monopole problem than the one initially envisioned by Guth.

    6. And finally, there should be a set of primordial gravitational waves, with a particular spectrum. Just as we had an almost perfectly scale-invariant spectrum of density fluctuations, inflation predicts a spectrum of tensor fluctuations in General Relativity, which translate into gravitational waves. The magnitude of these fluctuations depends on the specific model of inflation, but the spectrum has a set of unique predictions. This sixth prediction is the only one that has not been verified observationally in any way.

    7
    The contribution of gravitational waves left over from inflation to the B-mode polarization of the Cosmic Microwave background has a known shape, but its amplitude is dependent on the specific model of inflation. These B-modes from gravitational waves from inflation have not yet been observed. Image credit: Planck science team.
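    To make the second prediction above concrete, here is a sketch of the standard slow-roll numbers for the simplest chaotic model, with a quadratic potential. That particular potential is chosen purely for illustration (it is now disfavored by limits on primordial gravitational waves), but the slight departure from perfect scale invariance it predicts is generic.

```python
# Leading-order slow-roll predictions for a quadratic potential, V ~ phi^2,
# where both slow-roll parameters reduce to 1/(2N) with N e-folds remaining.

def quadratic_inflation_predictions(N=60):
    epsilon = 1.0 / (2.0 * N)               # from (V'/V)^2
    eta = 1.0 / (2.0 * N)                   # from V''/V
    n_s = 1.0 - 6.0 * epsilon + 2.0 * eta   # scalar spectral index
    r = 16.0 * epsilon                      # tensor-to-scalar ratio
    return n_s, r

n_s, r = quadratic_inflation_predictions(N=60)
print(f"n_s ~ {n_s:.3f}, r ~ {r:.2f}")
# n_s ~ 0.967: close to, but not exactly, 1 -- in line with the measured ~0.96.
```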

    On all three counts — of reproducing the successes of the non-inflationary Big Bang, of explaining observations that the Big Bang cannot, and of making new predictions that can be (and, in large number, have been) verified — inflation undoubtedly succeeds as science. It does so in a way that other theories which only give rise to non-observable predictions, such as string theory, do not. Yes, when critics talk about inflation and mention a huge amount of model-building, that is a problem; inflation is a theory in search of a single, unique, definitive model. It’s true that you can contrive as complex a model as you want, and it’s virtually impossible to rule such models out.

    8
    A variety of inflationary models and the scalar and tensor fluctuations predicted by cosmic inflation. Note that the observational constraints leave a huge variety of inflationary models as still valid. Image credit: Kamionkowski and Kovetz, ARAA, 2016, via http://lanl.arxiv.org/abs/1510.06042.

    But that is not a flaw inherent to the theory of inflation; it is an indicator that we don’t yet know enough about the mechanics of inflation to discern which models have the features our Universe requires. It is an indicator that the inflationary paradigm itself has limits to its predictive power, and that a further advance will be necessary to move the needle forward. But simply because inflation isn’t the ultimate answer to everything doesn’t mean it isn’t science. Rather, it’s exactly in line with what science has always shown itself to be: humanity’s best toolkit for understanding the Universe, one incremental improvement at a time.

    See the full article here .


     
  • richardmitnick 7:53 pm on October 10, 2017 Permalink | Reply
    Tags: , , , , Ethan Siegel, Missing Matter Found But Doesn't Dent Dark Matter   

    From Ethan Siegel: “Missing Matter Found, But Doesn’t Dent Dark Matter” 

    From Ethan Siegel

    Oct 10, 2017

    1
    A nearly uniform Universe, expanding over time and under the influence of gravity, will create a cosmic web of structure. The web contains both dark and normal matter. Western Washington University

    Look out at the Universe as deeply as possible, and everywhere you look, there they are: stars and galaxies, beautiful, distant, and in all directions. All told, there are some two trillion galaxies in the observable Universe, each one with hundreds of billions of stars, on average. But if we take all that light, even knowing how stars work, it only explains a tiny fraction of the Universe’s mass. Looking within the galaxies themselves for gas, dust, black holes, nebulae, and more, we still don’t get close to enough mass to make up our Universe. A recent set of studies has revealed, for the first time, some of the “missing matter” in between the galaxies, inching us closer. But even so, over 80% is completely unknown. Until we find dark matter, this mystery won’t be solved.

    2
    The full UV-visible-IR composite of the XDF; the greatest image ever released of the distant Universe. Note that these spectacular images only showcase the emitted light from the normal matter that’s formed stars, but that doesn’t account for the overwhelming majority of matter. NASA, ESA, H. Teplitz and M. Rafelski (IPAC/Caltech), A. Koekemoer (STScI), R. Windhorst (Arizona State University), and Z. Levay (STScI)

    We know how much total matter there has to be in the Universe. The expansion rate is dependent on what’s present in the Universe, so measuring the Hubble flow of variable stars, galaxies, supernovae, etc., tells us how much matter, radiation, and other forms of energy need to be present. We can also measure the large-scale structure of the Universe, and from the clustering of galaxies on a variety of scales, determine how much total matter, as well as how much is normal and how much is dark, there needs to be. And the fluctuations in the cosmic microwave background, the Big Bang’s leftover glow, tell us a whole lot about not just the total amount of matter the Universe must contain, but how much of it is normal matter and how much is dark matter.

    3
    The fluctuations in the Cosmic Microwave Background were first measured accurately by COBE in the 1990s, then more accurately by WMAP in the 2000s and Planck (above) in the 2010s.

    NASA/COBE

    NASA/WMAP

    This image encodes a huge amount of information about the early Universe, including its composition, age, and history. ESA and the Planck Collaboration

    ESA/Planck

    Finally, looking at the light elements left over from the Big Bang offers a completely independent piece of data: the total amount of normal (i.e., atom-based) matter that must exist. From all the different lines of evidence, we see the same picture. The fact that about 5% of the Universe’s energy is in normal matter, 27% is dark matter, and the other 68% is dark energy has been known for nearly 20 years now, but it remains as puzzling as ever. For instance:

    We still don’t know what dark energy is, or what causes it.
    We know from a slew of observations that dark matter exists, and we know its generic properties, but we have yet to directly detect it or find the particle(s) responsible for it.
    And even the normal matter — the stuff made of protons, neutrons, and electrons — isn’t fully accounted for.

    In fact, if we add up all the normal matter we know about, we’re still missing the majority of it.

    4
    Constraints on dark energy from three independent sources: supernovae, the CMB and BAO. Note that even without supernovae, we’d need dark energy, and that only 1/6th of the matter found can be normal matter; the rest must be dark matter. Supernova Cosmology Project, Amanullah, et al., Ap.J. (2010)

    There are two ways to measure the Universe that are completely independent of one another: through the light that objects emit or absorb, and through the gravitational effects of matter. The earlier methods described — the expansion of the Universe, the large-scale structure, and the cosmic microwave background — all use gravity to make their measurements. But light plays a major role, too. Stars shine because of the internal physics that causes nuclear reactions inside them, and so measuring the light coming from all of them tells you how much mass there is. Measure the absorption and emission of other wavelengths of light, and you can calculate how much mass there is in not only stars, but gas, dust, nebulae, and black holes. Go to high energies, and you’ll even be able to measure hot plasmas within galaxies. But we’re still missing more than half, perhaps even up to 90%, of the total normal matter. In other words, of that 5%, we’re missing most of it.

    5
    An illustration of a slice of the cosmic web, as viewed by Hubble. The missing matter we can detect through electromagnetic signals is the normal matter alone; the dark matter is unaffected. NASA/ESA Hubble and A. Feild (STScI)

    NASA/ESA Hubble Telescope

    So where should the rest of it be? Not in galaxies at all, but between them. Dark matter should clump and cluster together in large-scale filaments, but so should normal matter. When the high-energy radiation from the first stars passes through intergalactic space, the dark matter and light completely ignore each other, but the normal matter is vulnerable. Neutral atoms formed when the Universe was a mere 380,000 years old; after hundreds of millions of years, the hot, ultraviolet light from those early stars hits those intergalactic atoms. When it does, those photons get absorbed, kicking the electrons out of their atoms entirely, and creating an intergalactic plasma: the warm-hot intergalactic medium (WHIM).

    6
    The warm-hot intergalactic medium (WHIM) has been seen before, but only along incredibly overdense regions, like the Sculptor wall, illustrated above. Spectrum: NASA/CXC/Univ. of California Irvine/T. Fang. Illustration: CXC/M. Weiss

    Up until now, the WHIM has been mostly theoretical, as our tools haven’t been good enough to measure it except in a few rare locations. The WHIM should be very low in density, located along dark matter filaments, and at very high temperatures: between 100,000 K and 10,000,000 K. For the first time, now, there’s a signal that exceeds the 5σ statistical significance mark, thanks to research by two independent teams. One, led by Anna de Graaff, looked at the cosmic web; the other, led by Hideki Tanimura, looked at the space between luminous red galaxies. Both of them detected the WHIM to greater than 5σ significance, and both used the same method to do it: the Sunyaev-Zel’dovich effect.

    7
    By scattering lower-energy photons to higher energies, ionized plasmas found throughout the Universe bump lower-energy light to higher energies, increasing their temperatures. J.E. Carlstrom, G.P. Holder and E.D. Reese, ARAA, 2002, V40

    What is the Sunyaev-Zel’dovich effect? Imagine you’re sending light uniformly, in all directions, throughout the Universe. As it travels, the expansion of the Universe stretches it, causing it to shift to longer wavelengths and lower energies. But in some places, it will pass through a hot, ionized plasma. When photons pass through a plasma, there’s a slight effect due to the scattering of light off the energetic electrons: the photons get shifted to slightly higher energies, due to both the temperature and the motion of the plasma.
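    To get a feel for how small this effect is, here is a rough order-of-magnitude sketch of the Compton-y parameter for a single filament. The temperature, electron density, and path length below are assumed illustrative values, not the numbers reported in the two papers.

```python
# Thermal Sunyaev-Zel'dovich signal from a filament of the WHIM, estimated via
# y ~ (k_B T_e / m_e c^2) * n_e * sigma_T * L.

K_B = 1.381e-23        # J/K
M_E_C2 = 8.187e-14     # electron rest energy, J (~511 keV)
SIGMA_T = 6.652e-29    # Thomson cross-section, m^2
MPC = 3.086e22         # meters per megaparsec

T_e = 1e7              # assumed WHIM temperature, K (upper end of 10^5-10^7 K)
n_e = 1e-6 * 1e6       # assumed electron density: 1e-6 per cm^3, converted to m^-3
L = 5 * MPC            # assumed path length through a filament

y = (K_B * T_e / M_E_C2) * n_e * SIGMA_T * L
print(f"Compton y ~ {y:.1e}")
# y ~ 10^-8: a tiny distortion of the CMB, which is why the signal only
# crossed 5 sigma once many filaments were stacked together.
```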

    It was way back in 1969 that the Sunyaev-Zel’dovich paper predicting this effect, “The interaction of matter and radiation in a hot-model universe” (SAO/NASA ADS Astronomy Abstract Service), came out, but it would be decades before the effect was first detected. In fact, the paper was written almost entirely by Sunyaev, with Zel’dovich merely adding in how difficult the effect would be to detect. Nearly 50 years later, we’ve used it to detect the missing normal matter in the Universe.

    8
    The cosmic web is driven by dark matter, but the small structures along the filaments form by the collapse of normal, electromagnetically-interacting matter. For the first time, normal matter overdensities along the filaments, without stars or galaxies, have been detected. Ralf Kaehler, Oliver Hahn and Tom Abel (KIPAC)

    But this doesn’t eliminate the need for dark matter; it doesn’t touch that undiscovered 27% of matter in the Universe, not in the slightest. It’s another piece of that 5% that we know is out there, that we’re struggling to put together. It’s just protons, neutrons, and electrons, existing in about six times the abundance within these filaments as compared to the cosmic average. The fact that this filamentary structure contains normal matter at all is further evidence for dark matter, since without it there’d be no gravitationally overdense regions to hold the extra normal matter in place. In this case, the WHIM traces the dark matter, further confirming what we know must be out there.

    9
    The cosmic web of dark matter and the large-scale structure it forms. Normal matter is present, but is only 1/6th of the total matter. The other 5/6ths is dark matter, and no amount of normal matter will get rid of that. The Millennium Simulation, V. Springel et al.

    Yes, we’ve found some of the missing matter in the Universe, and that’s incredible! But the missing matter we found was part of the normal matter — part of the 5% of the Universe that includes us — and leaves all of the dark matter untouched. The latest discovery suggests something incredible: that the missing baryon problem might be solved by looking to the great cosmic web that gave rise to everything we see. But that remaining 27% of the Universe must still be out there, and we still don’t know what that is. We can see its effects, but no amount of missing normal matter is going to make a dent in the dark matter problem. We still need it, and no matter how much normal matter we find, even if we get all of it, we’ll still only be 1/6th of the way to understanding all of the matter in our Universe.
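    For completeness, that “1/6th” figure is just the following bit of bookkeeping, using the 5%/27%/68% energy budget quoted above.

```python
# Normal matter as a fraction of all matter, from the quoted energy budget.
normal, dark_matter, dark_energy = 0.05, 0.27, 0.68
assert abs(normal + dark_matter + dark_energy - 1.0) < 1e-9  # budget sums to 100%

normal_fraction_of_matter = normal / (normal + dark_matter)
print(f"Normal matter is ~{normal_fraction_of_matter:.0%} of all matter "
      f"(about 1 part in {1/normal_fraction_of_matter:.1f})")
# ~16%, i.e. roughly 1/6th: finding every last baryon still leaves 5/6ths dark.
```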

    See the full article here .


     
  • richardmitnick 6:44 am on October 8, 2017 Permalink | Reply
    Tags: , , CCD array, , Ethan Siegel, Why don’t we build a telescope without mirrors or lenses?   

    From Ethan Siegel: “Ask Ethan: Why don’t we build a telescope without mirrors or lenses?” 

    Ethan Siegel

    Oct 7, 2017

    1
    Placing a CCD array at the prime focus of a telescope or observatory is a surefire way to get an outstanding image; a technique that’s been in use for well over 100 years. But is it possible to use CCDs in place of a mirror or lens entirely? Image credit: Large Area Imager for Calar Alto (LAICA) / J.W. Fried.

    Calar Alto 3.5 meter Telescope, located in Almería province in Spain on Calar Alto, a 2,168-meter-high (7,113 ft) mountain in Sierra de Los Filabres

    Calar Alto Observatory located in Almería province in Spain on Calar Alto, a 2,168-meter-high (7,113 ft) mountain in Sierra de Los Filabres

    Why not just put your detectors in place of a giant mirror?

    “Look and think before opening the shutter. The heart and mind are the true lens of the camera.” -Yousuf Karsh

    For hundreds of years, the principle behind the telescope has been as simple as it gets: build a lens or mirror to collect a large amount of light, focus that light onto a detector (like an eye, a photographic plate, or an electronic device), and see far beyond the capabilities of your unaided vision. Over time, lenses and mirrors have gotten bigger in diameter and have been crafted to higher precision, while detectors have advanced to the point where they can collect and make good use of every single incoming photon. The quality of detectors might make you wonder why we bother with mirrors at all! That’s what Pedro Teixeira wants to know:

    “Why do we need a lens and a mirror to make a telescope now that we have CCD sensors? Instead of having a 10m mirror and lens that focus the light on a small sensor, why not have a 10m sensor instead?”

    It’s a very astute question, because if we could do this, it would be revolutionary.

    2
    A comparison of the mirror sizes of various existing and proposed telescopes. When GMT comes online, it will be the world’s largest, and will be the first 25 meter+ class optical telescope in history, later to be surpassed by the ELT. But all of these telescopes have mirrors. Image credit: Wikimedia Commons user Cmglee.

    Giant Magellan Telescope, to be at Las Campanas Observatory, to be built some 115 km (71 mi) north-northeast of La Serena, Chile, over 2,500 m (8,200 ft) high

    ESO/E-ELT, to be on top of Cerro Armazones in the Atacama Desert of northern Chile, located at the summit of the mountain at an altitude of 3,060 metres (10,040 ft)

    No matter how reflective we make our surfaces, no matter how finely we grind and polish our lenses, no matter how uniformly and carefully we coat the top layers, and no matter how well we repel and eliminate dust, no mirror or lens will ever be 100% optically perfect. Some fraction of light will be lost at every step and with every reflection. Given that the largest, modern designs require multiple stages of mirrors, including a large hole in the primary mirror to let the reflected light through to the instruments, there’s an inherent limitation to the design of using mirrors and lenses to gather information about the Universe.

    The goal is clear and admirable: to cut out any unnecessary steps and eliminate any losses when it comes to your light. It might seem like a simple idea, and as CCD sensors become more widespread and come down in cost, perhaps this will someday be involved in the future of astronomy. But implementing a dream like this won’t be very simple, because there are a few very important obstacles that need to be overcome in order to have a telescope without a mirror or lens. Let’s go over exactly what they are.

    3
    This 1887 picture of the Great Nebula in Andromeda was the first to show the spiral armed structure of the nearest large galaxy to the Milky Way. The fact that it appears so thoroughly white is because this was simply taken in unfiltered light, rather than looking in red, green, and blue, and then adding those colors together. Image credit: Isaac Roberts.

    1.) CCDs are outstanding at measuring light, but they don’t sort or filter by wavelength. Have you ever wondered why the old photographs you see of stars and galaxies are all monochrome, even though the stars and galaxies themselves have definite colors? It’s because they didn’t collect light in multiple, separate wavelength filters. Even modern telescopes place a filter between the incoming light and the CCDs/cameras to hone in on a particular wavelength or set of wavelengths, so that multiple images with multiple filters can be taken, reconstructing either a true-color or false-color image in the end.

    4
    The Andromeda galaxy (M31), as imaged from a ground-based telescope with multiple filters and reconstructed to show a colorized portrait. Image credit: Adam Evans / cc-by-2.0.

    This could be overcome by creating a complete set of filters for each individual CCD element, but that would be cumbersome and expensive, and it would require that these filters be integrated with the CCD elements themselves rather than placed as a single large filter in front of the array, since you want to keep the complete collecting area, where a mirror or lens would normally go, open to the sky. It’s not a dealbreaker, but it’s an element that we don’t have a solution for at present.
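    The filter step itself is conceptually simple. In the toy sketch below (hypothetical arrays, not real telescope data), three monochrome exposures taken through different filters are stacked into one color image, which is exactly the reconstruction a bare, unfiltered CCD cannot perform on its own.

```python
# Combining three filtered, monochrome exposures into a single color image.
import numpy as np

def color_composite(red, green, blue):
    """Stack three filtered exposures into an RGB image, normalized to [0, 1]."""
    rgb = np.stack([red, green, blue], axis=-1).astype(float)
    return rgb / rgb.max()

# Hypothetical 4x4-pixel exposures through three different filters.
rng = np.random.default_rng(0)
r, g, b = (rng.uniform(0, 1000, (4, 4)) for _ in range(3))

image = color_composite(r, g, b)
print(image.shape)   # (4, 4, 3): one color image built from three monochrome ones
```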

    5
    Large-area CCDs are incredibly useful for gathering and detecting light, and for maximizing each individual photon that comes in. But without a mirror or lens to previously focus the light, the omnidirectional nature of CCDs will fail to produce a meaningful image of the object being observed. Image credit: Large Area Imager for Calar Alto (LAICA) / J.W. Fried.

    2.) CCDs don’t measure the direction of the incoming light. To produce those meaningful images they create so well, telescopes don’t just need to measure the intensity and wavelength of the incoming light, but its direction as well. Lenses and mirrors have the wonderful property that light coming in from an ultra-distant source that’s perpendicular to the plane of the mirror gets focused in such a way that it reaches your camera/photographic plate/eye/CCD, while the light from other directions is reflected away. Not so for a CCD alone: if light comes in from any direction, it gets registered. Unless you can collimate/focus the light ahead of time, you’re simply going to see a brilliant, white sky everywhere, because you won’t have direction-based information in there.

    5
    A schematic diagram of the McMath-Pierce Solar Telescope Facility, the longest telescope shaft/optical tunnel in the world. Even this requires a mirror at the end in order to do high-quality imaging. Image credit: NOAO / AURA / NSF.

    6
    McMath-Pierce Solar Telescope at Kitt Peak National Observatory in Arizona, USA

    You might think a possible solution to this is to build an extremely long, opaque tube that’s perpendicular to the plane of your CCD array, but even this has a problem: without a lens or mirror, the light from anything in your field-of-view can still strike every pixel in your array. Even the longest tunnel shaft ever built for these purposes, the McMath-Pierce Solar Telescope, still requires an actual mirror or a lens to focus the light. This is the biggest dealbreaker to using a CCD alone to measure light, and the biggest reason you need a mirror or lens.
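    Here is a toy sketch of the problem (hypothetical numbers, not a real optics simulation): with ideal optics, each direction on the sky maps to its own pixel, but a bare sensor adds up light from every direction at every pixel, so all of the spatial information washes out.

```python
# Imaging with optics versus a bare, direction-blind sensor.
import numpy as np

rng = np.random.default_rng(1)
sky = rng.uniform(0, 1, (8, 8))              # a hypothetical patch of sky
sky[2, 5] = 50.0                             # one bright "star"

focused_image = sky                          # ideal optics: each direction -> its own pixel
bare_sensor = np.full_like(sky, sky.mean())  # no optics: every pixel collects the whole sky

print(f"Focused image contrast: {focused_image.max() / focused_image.mean():.1f}")
print(f"Bare sensor contrast:   {bare_sensor.max() / bare_sensor.mean():.1f}")
# With optics the star stands out tens of times above the average pixel;
# without them, every pixel reads the same value and the "image" is a featureless glow.
```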

    7
    This photo, taken at the Astrium France facility in Toulouse, shows the complete set of 106 CCDs that make up Gaia’s focal plane. The CCDs are bolted to the CCD support structure (CSS). The CSS (the grey plate underneath the CCDs in this photo) weighs about 20 kg and is made of silicon carbide (SiC), a material that provides remarkable thermal and mechanical stability. The focal plane measures 1 × 0.5 metres. Image credit: ESA’s Gaia / Astrium.

    3.) CCDs are far too expensive to cover a 10-meter diameter array. The CCDs themselves are a very expensive piece of equipment; a state-of-the-art 12 MegaPixel CCD, with each pixel (and a microlens that covers it) just 3.1 microns across, retails for around $3,700 today. To cover an area equivalent to a 10 meter diameter mirror would require around 700,000 of them: a cost approaching a prohibitive 3 billion dollars. For comparison, the European Extremely Large Telescope (ELT), with a 39 meter primary mirror diameter, has an estimated cost for the entire facility and equipment of less than half of that, at just 1083 million euros.

    8
    This diagram shows the novel 5-mirror optical system of ESO’s Extremely Large Telescope (ELT). Before reaching the science instruments the light is first reflected from the telescope’s giant concave 39-metre segmented primary mirror (M1), it then bounces off two further 4-metre-class mirrors, one convex (M2) and one concave (M3). The final two mirrors (M4 and M5) form a built-in adaptive optics system to allow extremely sharp images to be formed at the final focal plane. Image credit: ESO.

    The extra amount of light you would gain by using CCDs without mirrors is tiny, as you only lose about 5–10% of your light per reflection, but gain an extra 1500% (that is not a typo!) by going from a 10-meter diameter to a 39-meter diameter telescope. Put simply, there are better ways to spend your money if your goal is to gather more light and obtain higher resolution.
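    The arithmetic behind those numbers goes roughly like this, in a sketch that takes the retail price and pixel size quoted above as inputs; everything here is approximate.

```python
# How many 12-megapixel, 3.1-micron-pixel sensors would tile a 10 m aperture,
# what they would cost at ~$3,700 each, and the 10 m -> 39 m area gain.
import math

pixels = 12e6
pixel_size = 3.1e-6                      # meters
chip_area = pixels * pixel_size ** 2     # ~1.15e-4 m^2 of silicon per sensor

aperture_10m = math.pi * (10 / 2) ** 2   # ~78.5 m^2
n_chips = aperture_10m / chip_area
cost = n_chips * 3700

print(f"Sensors needed: ~{n_chips:,.0f}")              # ~700,000
print(f"Sensor cost alone: ~${cost / 1e9:.1f} billion")

aperture_39m = math.pi * (39 / 2) ** 2
print(f"Area gain, 10 m -> 39 m: {(aperture_39m / aperture_10m - 1):.0%}")  # ~1400%
```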

    8
    Lick Observatory’s Great Lick 91-centimeter (36-inch) telescope housed in the South (large) Dome of main building. On the ground, large, massive telescopes don’t particularly pose a problem, so long as the shape of the mirror remains ideal for reflecting light. But in space, your launch costs are determined by size and weight, so every bit that you can save makes all the difference.

    4.) If your goal is to save on weight, there’s a better solution. The Hubble Space Telescope was an incredible challenge to launch and deploy, not simply because of its size, but because of its weight.

    NASA/ESA Hubble Telescope

    The heaviness of the primary mirror was one of the biggest obstacles facing the mission. By contrast, James Webb will have more than seven times the light-collecting area of Hubble, but will barely weigh half as much as its much smaller predecessor.

    NASA/ESA/CSA Webb Telescope annotated

    The secret? Cast your mirror, shape it, polish it, and then drill out the material on the back.

    9
    The installation of the 18th and final segment of the JWST primary mirror. The black covers protect the gold-coated mirror segments, while the rear of the mirrors have already had 92% of their original material removed. Image credit: NASA / Chris Gunn.

    When you’re in space and you don’t have to fight with gravity, you don’t need nearly as much structure to support the telescope. After each of the 18 segments was manufactured for James Webb, the rear side had 92% of the original mass drilled out of it, maintaining the front shape of the mirror while saving tremendously on weight.

    10
    The interior and the primary mirror of the GTC, the largest single optical telescope in the world today. Image credit: Miguel Briganti (SMM/IAC).

    Gran Telescopio Canarias at the Roque de los Muchachos Observatory on the island of La Palma, in the Canaries, Spain, sited on a volcanic peak 2,267 metres (7,438 ft) above sea level

    There are lots of reasons why you might want to build a telescope without a lens or mirror, as optimizing for weight, cost, materials, light-gathering power, image quality, and resolution are always going to necessitate a trade-off. But the fact that CCDs, on their own, cannot measure the direction of incoming light is a hard dealbreaker for a mirror-free telescope. Although each mirrored surface that you reflect off of will necessitate some loss of signal, mirrors are still the best way to get a high-resolution, pristine-quality, large-collecting-area, (relatively) low-cost look at the Universe. If costs for CCDs come down, if an array as large as a telescope mirror can be built, and if the direction of incoming photons can be measured in real-time as well, we just might have something to talk about. But for right now, there’s no substitute for the science of optics. More than 300 years after he first published his groundbreaking treatise on the science of light, Newton’s rules are still undefeated when it comes to single telescopes!

    See the full article here .


     