Tagged: Ethan Siegel

  • richardmitnick 2:54 pm on February 24, 2019 Permalink | Reply
    Tags: "Ask Ethan: How Can We Measure The Curvature Of Spacetime?", A difference in the height of two atomic clocks of even ~1 foot (33 cm) can lead to a measurable difference in the speed at which those clocks run, A team of physicists working in Europe were able to conjugate three atom interferometers simultaneously, At every point you can infer the force of gravity or the amount of spacetime curvature, Decades before Newton put forth his law of universal gravitation Italian scientists Francesco Grimaldi and Giovanni Riccioli made the first calculations of the gravitational constant G, Ethan Siegel, In the future it may be possible to extend this technique to measure the curvature of spacetime not just on Earth but on any worlds we can put a lander on, It’s been over 100 years since Einstein and over 300 since Newton. We’ve still got a long way to go, Making multiple measurements of the field gradient simultaneously allows you to measure G between multiple locations which eliminates a source of error: the error induced when you move the apparatus, Pound-Rebka experiment, The same law of gravity governs the entire Universe, We can do even better than the Pound-Rebka experiment today by using the technology of atomic clocks, You can even infer G the gravitational constant of the Universe

    From Ethan Siegel: “Ask Ethan: How Can We Measure The Curvature Of Spacetime?” 

    From Ethan Siegel
    Feb 23, 2019

    Instead of an empty, blank, 3D grid, putting a mass down causes what would have been ‘straight’ lines to instead become curved by a specific amount. In General Relativity, we treat space and time as continuous, but all forms of energy, including but not limited to mass, contribute to spacetime curvature. For the first time, we can measure the curvature at Earth’s surface, as well as how that curvature changes with altitude. (CHRISTOPHER VITALE OF NETWORKOLOGIES AND THE PRATT INSTITUTE)

    It’s been over 100 years since Einstein, and over 300 since Newton. We’ve still got a long way to go.

    From measuring how objects fall on Earth to observing the motion of the Moon and planets, the same law of gravity governs the entire Universe. From Galileo to Newton to Einstein, our understanding of the most universal force of all still has some major holes in it. It’s the only force without a quantum description. The fundamental constant governing gravitation, G, is so poorly known that many find it embarrassing. And the curvature of the fabric of spacetime itself went unmeasured for a century after Einstein put forth the theory of General Relativity. But much of that has the potential to change dramatically, as our Patreon supporter Nick Delroy realized, asking:

    Can you please explain to us how awesome this is, and what you hope the future holds for gravity measurement. The instrument is obviously localized but my imagination can’t stop coming up with applications for this.

    The big news he’s excited about, of course, is a new experimental technique that measured the curvature of spacetime due to gravity for the first time [Physical Review Letters].

    The identical behavior of a ball falling to the floor in an accelerated rocket (left) and on Earth (right) is a demonstration of Einstein’s equivalence principle. Although you cannot tell whether an acceleration is due to gravity or any other acceleration from a single measurement, measuring differing accelerations at different points can show whether there’s a gravitational gradient along the direction of acceleration. (WIKIMEDIA COMMONS USER MARKUS POESSEL, RETOUCHED BY PBROKS13)

    Think about how you might design an experiment to measure the strength of the gravitational force at any location in space. Your first instinct might be something simple and straightforward: take an object at rest, release it so it’s in free-fall, and observe how it accelerates.

    By measuring the change in position over time, you can reconstruct what the acceleration at this location must be. If you know the rules governing the gravitational force — i.e., you have the correct law of physics, like Newton’s or Einstein’s theories — you can use this information to determine even more information. At every point, you can infer the force of gravity or the amount of spacetime curvature. Beyond that, if you know additional information (like the relevant matter distribution), you can even infer G, the gravitational constant of the Universe.
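The free-fall logic above can be sketched in a few lines. This is a hypothetical illustration (all names and numbers here are mine, not from the article): fit position-versus-time data to a parabola and read the acceleration off the quadratic coefficient.

```python
import numpy as np

# Hypothetical sketch: infer local gravitational acceleration from the
# positions of a dropped object, assuming constant acceleration.
g_true = 9.81                       # m/s^2, the value we pretend to measure
t = np.linspace(0.0, 1.0, 50)       # timestamps over one second of free fall
x = 0.5 * g_true * t**2             # distance fallen (noise-free, for clarity)

# Fit x(t) = x0 + v0*t + (a/2)*t^2; the quadratic coefficient is a/2.
coeffs = np.polyfit(t, x, 2)
g_measured = 2.0 * coeffs[0]
print(f"inferred g = {g_measured:.3f} m/s^2")   # -> inferred g = 9.810 m/s^2
```

With real data you would add measurement noise and report an uncertainty on the fit, but the principle is exactly this: position over time pins down the acceleration, and the acceleration plus a law of gravity pins down the rest.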

    Newton’s law of Universal Gravitation relied on the concept of an instantaneous action (force) at a distance, and is incredibly straightforward. The gravitational constant in this equation, G, along with the values of the two masses and the distance between them, are the only factors in determining a gravitational force. Although Newton’s theory has since been superseded by Einstein’s General Relativity, G also appears in Einstein’s theory. (WIKIMEDIA COMMONS USER DENNIS NILSSON)

    This simple approach was the first one taken to investigate the nature of gravity. Building on the work of others, Galileo determined the gravitational acceleration at Earth’s surface. Decades before Newton put forth his law of universal gravitation, Italian scientists Francesco Grimaldi and Giovanni Riccioli made the first calculations of the gravitational constant, G.

    But experiments like this, as valuable as they are, are limited. They can only give you information about gravitation along one dimension: towards the center of the Earth. Acceleration is based on either the sum of all the net forces (Newton) acting on an object, or the net curvature of spacetime (Einstein) at one particular location in the Universe. Since you’re observing an object in free-fall, you’re only getting a simplistic picture.

    According to legend, the first experiment to show that all objects fell at the same rate, irrespective of mass, was performed by Galileo Galilei atop the Leaning Tower of Pisa. Any two objects dropped in a gravitational field, in the absence of (or neglecting) air resistance, will accelerate down to the ground at the same rate. This was later codified as part of Newton’s investigations into the matter. (GETTY IMAGES)

    Thankfully, there’s a way to get a multidimensional picture as well: perform an experiment that’s sensitive to changes in the gravitational field/potential as an object changes its position. This was first accomplished, experimentally, in 1959 by the Pound-Rebka experiment [ Explanation of the Pound-Rebka experiment http://vixra.org/pdf/1212.0035v1.pdf ].

    What the experiment did was cause a nuclear emission at a low elevation, and note that the corresponding nuclear absorption didn’t occur at a higher elevation, presumably due to gravitational redshift, as predicted by Einstein. Yet if you gave the low-elevation emitter a positive boost to its speed, through attaching it to a speaker cone, that extra energy would balance the loss of energy that traveling upwards in a gravitational field extracted. As a result, the arriving photon has the right energy, and absorption occurs. This was one of the classical tests of General Relativity, confirming Einstein where his theory’s predictions departed from Newton’s.
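In numbers, the effect Pound and Rebka chased is tiny. Here is my own back-of-the-envelope estimate, using the weak-field redshift formula Δf/f ≈ gΔh/c² and the approximate 22.5 m height of the Harvard tower:

```python
g = 9.81        # m/s^2, surface gravitational acceleration
dh = 22.5       # m, approximate height of the Jefferson tower at Harvard
c = 2.998e8     # m/s, speed of light

# Weak-field gravitational redshift: fractional frequency shift over height dh
shift = g * dh / c**2
print(f"fractional frequency shift: {shift:.2e}")   # roughly 2.5e-15
```

A shift of a few parts in 10^15 is why the experiment needed the Mössbauer effect and a moving emitter to resolve it at all.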

    Physicist Glen Rebka, at the lower end of the Jefferson Towers, Harvard University, calling Professor Pound on the phone during setup of the famed Pound-Rebka experiment. (CORBIS MEDIA / HARVARD UNIVERSITY)

    We can do even better than the Pound-Rebka experiment today, by using the technology of atomic clocks. These clocks are the best timekeepers in the Universe, having surpassed the best natural clocks (pulsars) decades ago. Now capable of monitoring time differences to some 18 significant figures between clocks, Nobel Laureate David Wineland led a team that demonstrated that raising an atomic clock by barely a foot (about 33 cm in the experiment) above another one caused a measurable frequency shift in what the clock registered as a second.

    If we were to take these two clocks to any location on Earth, and adjust the heights as we saw fit, we could understand how the gravitational field changes as a function of elevation. Not only can we measure gravitational acceleration, but the changes in acceleration as we move away from Earth’s surface.
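For scale, the same weak-field formula gives the size of the effect Wineland's team measured. This is my own estimate, assuming Δf/f ≈ gΔh/c² with a 33 cm height difference:

```python
g = 9.81        # m/s^2
dh = 0.33       # m, roughly one foot of height difference
c = 2.998e8     # m/s

frac = g * dh / c**2            # fractional rate difference between the clocks
per_day = frac * 86400.0        # accumulated offset per day, in seconds
print(f"fractional shift: {frac:.1e}, offset per day: {per_day:.1e} s")
```

A few parts in 10^17, or a few picoseconds per day: well within reach of modern optical clocks, and hopeless for anything less precise.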

    A difference in the height of two atomic clocks of even ~1 foot (33 cm) can lead to a measurable difference in the speed at which those clocks run. This allows us to measure not only the strength of the gravitational field, but the gradient of the field as a function of altitude/elevation. (DAVID WINELAND AT PERIMETER INSTITUTE, 2015)

    But even these achievements cannot map out the true curvature of space. That next step wouldn’t be achieved until 2015: exactly 100 years after Einstein first put forth his theory of General Relativity. In addition, there was another problem that has cropped up in the interim, which is the fact that various methods of measuring the gravitational constant, G, appear to give different answers.

    Three different experimental techniques have been used to determine G: torsion balances, torsion pendulums, and atom interferometry experiments. Over the past 15 years, measured values of the gravitational constant have ranged from as high as 6.6757 × 10⁻¹¹ N·m²/kg² to as low as 6.6719 × 10⁻¹¹ N·m²/kg². This difference of roughly 0.05%, for a fundamental constant, makes it one of the most poorly-determined constants in all of nature.
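The quoted spread is easy to check; this is my own arithmetic on the two endpoint values above (the exact percentage depends on which central value you divide by):

```python
G_high = 6.6757e-11   # N*m^2/kg^2, high end of recent measurements
G_low = 6.6719e-11    # N*m^2/kg^2, low end

# Fractional spread relative to the midpoint of the two endpoints
spread = (G_high - G_low) / ((G_high + G_low) / 2.0)
print(f"fractional spread in G: {spread:.2e}")   # about 0.05-0.06%
```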

    In 1997, the team of Bagley and Luther performed a torsion balance experiment that yielded a result of 6.674 × 10⁻¹¹ N·m²/kg², which was taken seriously enough to cast doubt on the previously reported significance of the determination of G. Note the relatively large variations in the measured values, even since the year 2000. (DBACHMANN / WIKIMEDIA COMMONS)

    But that’s where the new study, first published in 2015 but refined many times over the past four years, comes in. A team of physicists working in Europe was able to operate three conjugated atom interferometers simultaneously. Instead of using just two locations at different heights, they measured the mutual differences between three different heights at a single location on the surface. That yields not simply a single difference, or even the gradient of the gravitational field, but the change in the gradient as a function of distance.

    When you explore how the gravitational field changes as a function of distance, you can understand the shape of the change in spacetime curvature. When you measure the gravitational acceleration in a single location, you’re sensitive to everything around you, including what’s underground and how it’s moving. Measuring the gradient of the field is more informative than just a single value; measuring how that gradient changes gives you even more information.

    The scheme of the experiment: three atomic groupings are launched in rapid sequence and then excited by lasers, measuring not only the gravitational acceleration but also the effects of the changes in curvature that had never been measured before. (G. ROSI ET AL., PHYS. REV. LETT. 114, 013001, 2015)

    That’s what makes this new technique so powerful. We’re not simply going to a single location and finding out what the gravitational force is. Nor are we going to a location and finding out what the force is and how that force is changing with elevation. Instead, we’re determining the gravitational force, how it changes with elevation, and how the change in the force is changing with elevation.

    “Big deal,” you might say, “we already know the laws of physics. We know what those laws predict. Why should I care that we’re measuring something that confirms to slightly better accuracy what we’ve known should be true all along?”

    Well, there are multiple reasons. One is that making multiple measurements of the field gradient simultaneously allows you to measure G between multiple locations in a way that eliminates a source of error: the error induced when you move the apparatus. By making three simultaneous measurements rather than two, you get three differences (between 1 and 2, 2 and 3, and 1 and 3) rather than just one (between 1 and 2).
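The counting argument can be made concrete with a toy model (all numbers are mine: k1 is close to the real free-air gradient of g, and the quadratic term k2 is invented purely to give the gradient something to change by):

```python
# Simulated simultaneous readings of g at three heights, 1 m apart, using a
# made-up model g(h) = g0 - k1*h + k2*h^2.
g0, k1, k2 = 9.81, 3.086e-6, 5e-10

def g_at(h):
    return g0 - k1 * h + k2 * h**2

readings = [g_at(h) for h in (0.0, 1.0, 2.0)]

# Three simultaneous readings give three pairwise differences...
d12 = readings[1] - readings[0]
d23 = readings[2] - readings[1]
d13 = readings[2] - readings[0]

# ...and the difference of the differences recovers the change in the
# gradient (2*k2 for this model), with no apparatus ever moved.
second_diff = d23 - d12
print(d12, d23, d13, second_diff)
```

Two heights would give you only d12, the gradient; the third height is what buys you second_diff, the curvature-like term, in a single simultaneous shot.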

    The top of the Makkah royal clock tower runs a few quadrillionths of a second faster than the same clock would at the base, due to differences in the gravitational field. Measuring the changes in the gradient of the gravitational field provides even more information, enabling us to finally measure the curvature of space directly. (AL JAZEERA ENGLISH C/O: FADI EL BENNI)

    But another reason, perhaps even more important, is to better understand the gravitational pull of the objects we’re measuring. We do know the rules governing gravity, but we only know what the gravitational force should be if we know the magnitude and distribution of all the masses relevant to our measurement. The Earth, for example, is not a uniform structure at all. There are fluctuations in the gravitational strength we experience everywhere we go, dependent on factors like:

    the density of the crust beneath your feet,
    the location of the crust-mantle boundary,
    the extent of isostatic compensation that takes place at that boundary,
    the presence or absence of oil reservoirs or other density-varying deposits underground,

    and so on. If we can implement this technique of three-atom interferometry wherever we like on Earth, we can better understand our planet’s interior simply by making measurements at the surface.

    Various geologic zones in the Earth’s mantle create and move magma chambers, leading to a variety of geological phenomena. It’s possible that external intervention could trigger a catastrophic event. Improvements in geodesy could improve our understanding of what’s happening, existing, and changing beneath Earth’s surface. (KDS4444 / WIKIMEDIA COMMONS)

    In the future, it may be possible to extend this technique to measure the curvature of spacetime not just on Earth, but on any worlds we can put a lander on. This includes other planets, moons, asteroids and more. If we want to do asteroid mining, this could be the ultimate prospecting tool. We could improve our geodesy experiments significantly, and improve our ability to monitor the planet. We could better track internal changes in magma chambers, as just one example. If we applied this technology to upcoming spacecraft, it could even help correct for Newtonian noise in next-generation gravitational wave observatories like LISA or beyond.

    ESA/NASA eLISA, a space-based observatory and the future of gravitational wave research

    The gold-platinum alloy cubes, of central importance to the upcoming LISA mission, have already been built and tested in the proof-of-concept LISA Pathfinder mission.

    ESA/LISA Pathfinder

    This image shows the assembly of one of the Inertial Sensor Heads for the LISA Technology Package (LTP). Improved techniques for accounting for Newtonian noise in the experiment might improve LISA’s sensitivity significantly. (CGS SPA)

    The Universe is not simply made of point masses, but of complex, intricate objects. If we ever hope to tease out the most sensitive signals of all and learn the details that elude us today, we need to become more precise than ever. Thanks to three-atom interferometry, we can, for the first time, directly measure the curvature of space.

    Understanding the Earth’s interior better than ever is the first thing we’re going to gain, but that’s just the beginning. Scientific discovery isn’t the end of the game; it’s the starting point for new applications and novel technologies. Come back in a few years; you might be surprised at what becomes possible based on what we’re learning for the first time today.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 3:02 pm on February 16, 2019 Permalink | Reply
    Tags: Ask Ethan: What Will Our First Direct Image Of An Earth-Like Exoplanet Look Like?, Ethan Siegel, You’d be amazed at what you can learn from even one single pixel

    From Ethan Siegel: “Ask Ethan: What Will Our First Direct Image Of An Earth-Like Exoplanet Look Like?” 

    From Ethan Siegel
    Feb 16, 2019

    You’d be amazed at what you can learn from even one single pixel.

    Left, an image of Earth from the DSCOVR-EPIC camera. Right, the same image degraded to a resolution of 3 x 3 pixels, similar to what researchers will see in future exoplanet observations. (NOAA/NASA/STEPHEN KANE)

    NOAA DSCOVR Deep Space Climate Observatory

    NOAA Deep Space Climate Observatory

    NASA EPIC (Earth Polychromatic Imaging Camera) on NOAA DSCOVR (Deep Space Climate Observatory)

    Over the past decade, owing largely to NASA’s Kepler mission, our knowledge of planets around star systems beyond our own has increased tremendously.

    NASA/Kepler Telescope

    From just a few worlds — mostly massive, with quick, inner orbits, and around lower-mass stars — to literally thousands of widely-varying sizes, we now know that Earth-sized and slightly larger worlds are extremely common. With the next generation of coming observatories from both space (like the James Webb Space Telescope) and the ground (with observatories like GMT and ELT), the closest such worlds will be able to be directly imaged. What will that look like? That’s what Patreon supporter Tim Graham wants to know, asking:

    “[W]hat kind of resolution can we expect? [A] few pixels only or some features visible?”

    The picture itself won’t be impressive. But what it will teach us is everything we could reasonably dream of.

    NASA/ESA/CSA Webb Telescope annotated

    Giant Magellan Telescope, to be built at the Carnegie Institution for Science’s Las Campanas Observatory, some 115 km (71 mi) north-northeast of La Serena, Chile, at over 2,500 m (8,200 ft) altitude

    ESO/E-ELT, to be built atop Cerro Armazones in the Atacama Desert of northern Chile, at the summit of the mountain at an altitude of 3,060 metres (10,040 ft)

    An artist’s rendition of Proxima b orbiting Proxima Centauri. With 30-meter class telescopes like GMT and ELT, we’ll be able to directly image it, as well as any outer, yet-undetected worlds. However, it won’t look anything like this through our telescopes. (ESO/M. KORNMESSER)

    Let’s get the bad news out of the way first. The closest star system to us is the Alpha Centauri system, itself located just over 4 light years away. It consists of three stars:

    Alpha Centauri A, which is a Sun-like (G-class) star,
    Alpha Centauri B, which is a little cooler and less massive (K-class), but orbits Alpha Centauri A at a distance comparable to that of the gas giants in our Solar System, and
    Proxima Centauri, which is much cooler and less massive (M-class), and is known to have at least one Earth-sized planet.

    Alpha, Beta, and Proxima Centauri, 27 February 2012. (SKATEBIKER)

    While there might be many more planets around this trinary star system, the fact is that planets are small and the distances to them, particularly beyond our own Solar System, are tremendous.

    This diagram shows the novel 5-mirror optical system of ESO’s Extremely Large Telescope (ELT). Before reaching the science instruments, the light is first reflected from the telescope’s giant concave 39-metre segmented primary mirror (M1); it then bounces off two further 4-metre-class mirrors, one convex (M2) and one concave (M3). The final two mirrors (M4 and M5) form a built-in adaptive optics system to allow extremely sharp images to be formed at the final focal plane. This telescope will have more light-gathering power and better angular resolution, down to 0.005″, than any telescope in history. (ESO)

    The largest telescope being built of all, the ELT, will be 39 meters in diameter, meaning it has a maximum angular resolution of 0.005 arc seconds, where 60 arc seconds make up 1 arc minute, and 60 arc minutes make up 1 degree. If you put an Earth-sized planet at the distance of Proxima Centauri, the nearest star beyond our Sun at 4.24 light years, it would have an angular diameter of 67 micro-arc seconds (μas), meaning that even our most powerful upcoming telescope would be about a factor of 74 too small to fully resolve an Earth-sized planet.
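The factor-of-~74 claim follows from simple geometry. Here is my own check (numbers approximate; the article's 67 μas presumably uses slightly different values for Earth's diameter or the distance):

```python
LY_M = 9.4607e15                 # metres per light year
RAD_TO_UAS = 206265.0 * 1e6      # radians to micro-arc seconds

distance = 4.24 * LY_M           # distance to Proxima Centauri
earth_diameter = 1.2742e7        # m

# Small-angle approximation: angular size = physical size / distance
angular_size = earth_diameter / distance * RAD_TO_UAS
elt_limit = 0.005 * 1e6          # ELT's 0.005 arc sec limit, in micro-arc sec

print(f"Earth at Proxima: {angular_size:.0f} uas")          # roughly 65 uas
print(f"shortfall factor: {elt_limit / angular_size:.0f}")  # roughly 75x
```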

    The best we can hope for is a single, saturated pixel, with the light bleeding into the surrounding, adjacent pixels on our most advanced, highest-resolution cameras. Visually, it’s a tremendous disappointment for anyone hoping to get a spectacular view like the illustrations NASA has been putting out.

    Artist’s conception of the exoplanet Kepler-186f, which may exhibit Earth-like (or early, life-free Earth-like) properties. As imagination-sparking as illustrations like this are, they’re mere speculations, and the incoming data won’t provide any views akin to this at all. (NASA AMES/SETI INSTITUTE/JPL-CALTECH)

    But that’s where the letdown ends. By using coronagraph technology, we’ll be able to block out the light from the parent star, viewing the light from the planet directly. Sure, we’ll only get a pixel’s worth of light, but it won’t be one continuous, steady pixel at all. Instead, we’ll get to monitor that light in three different ways:

    In a variety of colors, photometrically, teaching us what the overall optical properties of any imaged planet are.

    Spectroscopically, which means we can break that light up into its individual wavelengths, and look for signatures of particular molecules and atoms on its surface and in its atmosphere.

    Over time, meaning we can measure how both of the above change as the planet both rotates on its axis and revolves, seasonally, around its parent star.

    From just a single pixel’s worth of light, we can determine a whole slew of properties about any world in question. Here are some of the highlights.

    Illustration of an exoplanetary system, potentially with an exomoon orbiting it. (NASA/DAVID HARDY, VIA ASTROART.ORG)

    By measuring the light reflecting off of a planet over the course of its orbit, we’ll be sensitive to a variety of phenomena, some of which we already see on Earth. If the world has a difference in albedo (reflectivity) from one hemisphere to another, and rotates in any fashion other than one that’s tidally locked to its star in a 1-to-1 resonance, we’ll be able to see a periodic signal emerging as the star-facing side changes with time.

    A world with continents and oceans, for example, would display a signal that rose and fell in a variety of wavelengths, corresponding to the portion in direct sunlight reflecting that light back to our telescopes here in the Solar System.
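A toy model of such a rotational signal, with every number invented for illustration: one hemisphere brighter than the other, observed as a single pixel's worth of flux.

```python
import numpy as np

# Toy single-pixel light curve: hemispheres of different albedo modulate the
# reflected flux sinusoidally as the planet rotates (made-up amplitudes).
phase = np.linspace(0.0, 4.0 * np.pi, 200)    # two full rotations
mean_albedo, contrast = 0.30, 0.10
flux = mean_albedo + contrast * np.cos(phase)

# A tidally locked planet (1-to-1 resonance) would show a flat light curve.
peak_to_trough = flux.max() - flux.min()
print(f"peak-to-trough modulation: {peak_to_trough:.2f}")   # ~0.20
```

The period of the modulation gives the rotation rate; the amplitude and shape constrain how uneven the surface reflectivity is.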

    Hundreds of candidate planets have been discovered so far in the data collected and released by NASA’s Transiting Exoplanet Survey Satellite (TESS), with eight of them having been confirmed thus far by follow-up measurements.


    Three of the most unique, interesting exoplanets are illustrated here, with many more to come. Some of the closest worlds to be discovered by TESS will be candidates for being Earth-like and within the reach of direct imaging. (NASA/MIT/TESS)

    Owing to the power of direct imaging, we could directly measure changes in the weather on a planet beyond our own Solar System.

    The 2001–2002 composite images of the Blue Marble, constructed with NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS) data.

    NASA Terra MODIS schematic

    NASA Terra satellite

    As an exoplanet rotates and its weather changes, we can tease out or reconstruct variations in the planetary continent/ocean/icecap ratios, as well as the signal of cloud cover. (NASA)

    Life may be a more difficult signal to tease out, but if there were an exoplanet with life on it, similar to Earth, we would see some very specific seasonal changes. On Earth, the fact that our planet is tilted on its axis means that in winter, when our hemisphere is tipped away from the Sun, the icecaps grow larger, the continents grow more reflective with snow extending down to lower latitudes, and the world becomes less green in its overall color.

    Conversely, in the summer, our hemisphere is tipped towards the Sun. The icecaps shrink while the continents turn green: the dominant color of plant life on our planet. Similar seasonal changes will affect the light coming from any exoplanet we image, allowing us to tease out not only seasonal variations, but the specific percent changes in color distribution and reflectivity.

    In this image of Titan, the methane haze and atmosphere is shown in a near-transparent blue, with surface features beneath the clouds displayed. A composite of ultraviolet, optical, and infrared light was used to construct this view. By combining similar data sets over time for a directly imaged exoplanet, even with just a single pixel, we could reconstruct a huge slew of its atmospheric, surface, and seasonal properties. (NASA/JPL/SPACE SCIENCE INSTITUTE)

    Overall planetary and orbital characteristics should emerge as well. Unless we’ve observed a planetary transit from our point of view — where the planet in question passes between us and the star it orbits — we cannot know the orientation of its orbit.

    Planet transit. NASA/Ames

    This means we can’t know what the planet’s mass is; we can only know some combination of its mass and the angle of its orbit’s tilt.

    But if we can measure how the light from it changes over time, we can infer what its phases must look like, and how those change over time. We can use that information to break that degeneracy, and determine its mass and orbital tilt, as well as the presence or absence of any large moons around that planet. From even just a single pixel, the way the brightness changes once color, cloud cover, rotation, and seasonal changes are subtracted out should allow us to learn all of this.
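One common way to model the phase information is a Lambertian phase function, which gives the reflected flux as a function of the star-planet-observer angle α. This is a standard textbook form, not necessarily the model any particular survey uses:

```python
import math

def lambert_phase(alpha):
    """Lambertian phase function: 1 at full phase (alpha=0), 0 at new phase."""
    return (math.sin(alpha) + (math.pi - alpha) * math.cos(alpha)) / math.pi

# Full phase, quarter phase, and new phase:
for deg in (0, 90, 180):
    print(deg, round(lambert_phase(math.radians(deg)), 3))
```

Fitting an observed single-pixel light curve against a family of such phase curves, over many orbits, is what lets you disentangle the orbital tilt from the other brightness variations.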

    The phases of Venus, as viewed from Earth, are analogous to an exoplanet’s phases as it orbits its star. If the ‘night’ side exhibits certain temperature/infrared properties, exactly the ones that James Webb [above] will be sensitive to, we can determine whether they have atmospheres, as well as spectroscopically determining what the atmospheric contents are. This remains true even without measuring them directly via a transit. (WIKIMEDIA COMMONS USERS NICHALP AND SAGREDO)

    This will be important for a huge number of reasons. Yes, the big, obvious hope is that we’ll find an oxygen-rich atmosphere, perhaps even coupled with an inert but common molecule like nitrogen gas, creating a truly Earth-like atmosphere. But we can go beyond that and look for the presence of water. Other signatures of potential life, like methane and carbon dioxide, can be sought out as well. And another fun advance that’s greatly underappreciated today will come in the direct imaging of super-Earth worlds. Which ones have giant hydrogen and helium gas envelopes and which ones don’t? In a direct fashion, we’ll finally be able to draw a conclusive line.

    The classification scheme of planets as either rocky, Neptune-like, Jupiter-like or stellar-like. The border between Earth-like and Neptune-like is murky, but direct imaging of candidate super-Earth worlds should enable us to determine whether there’s a gas envelope around each planet in question or not. (CHEN AND KIPPING, 2016, VIA ARXIV.ORG/PDF/1603.08614V2.PDF)

    If we truly wanted to image features on a planet beyond our Solar System, we’d need a telescope hundreds of times as large as the largest ones currently being planned: multiple kilometers in diameter. Until that day comes, however, we can look forward to learning so many important things about the nearest Earth-like worlds in our galaxy. TESS is out there, finding those planets right now. James Webb is complete, waiting for its 2021 launch date. Three 30-meter class telescopes are in the works, with the first one (GMT) slated to come online in 2024 and the largest one (ELT) to see first light in 2025. By this time a decade from now, we’ll have direct image (optical and infrared) data on dozens of Earth-sized and slightly larger worlds, all beyond our Solar System.

    A single pixel may not seem like much, but when you think about how much we can learn — about seasons, weather, continents, oceans, icecaps, and even life — it’s enough to take your breath away.

    See the full article here.



  • richardmitnick 11:24 am on February 13, 2019 Permalink | Reply
    Tags: Ethan Siegel

    From Ethan Siegel: “We Must Not Give Up On Answering The Biggest Scientific Questions Of All” 

    From Ethan Siegel
    Feb 12, 2019

    The doubly charmed baryon, Ξcc++, contains two charm quarks and one up quark, and was first experimentally discovered at CERN. Now, researchers have simulated how to synthesize it from other charmed baryons that ‘melt’ together, and the energy yields are tremendous. To uncover yet-unrevealed truths about the Universe requires investing in experiments that have never yet been performed. (DANIEL DOMINGUEZ, CERN)

    Theoretical work tells you where to look, but only experiments can reveal what you’ll find.

    There are fundamental mysteries out there about the nature of the Universe itself, and it’s our inherent curiosity about those unanswered questions that drives science forward. There’s an incredible amount we’ve learned already, and the successes of our two leading theories — the quantum field theory describing the Standard Model and General Relativity for gravity — is a testament to how far we’ve come in understanding reality itself.

    Many people are pessimistic about our current attempts and future plans to try and solve the great cosmic mysteries that stymie us today. Our best hypotheses for new physics, including supersymmetry, extra dimensions, technicolor, string theory and more, have all failed to yield any experimental confirmation at all. But that doesn’t mean physics is in crisis. It means it’s working exactly as we’d expect: by telling the truth about the Universe. Our next steps will show us how well we’ve been listening.

    From macroscopic scales down to subatomic ones, the sizes of the fundamental particles play only a small role in determining the sizes of composite structures. Whether the building blocks are truly fundamental and/or point-like particles is still not known. (MAGDALENA KOWALSKA / CERN / ISOLDE TEAM)


    The ALPHA-g detector, built at Canada’s particle accelerator facility, TRIUMF, is the first of its kind designed to measure the effect of gravity on antimatter. When oriented vertically, it should be able to measure in which direction antimatter falls, and at what magnitude. Experiments such as this were unfathomable a century ago, as antimatter’s existence was not even known. (STU SHEPHERD/TRIUMF)

    In nuclear fusion, two lighter nuclei fuse together to create a heavier one, but where the final products have less mass than the initial reactants, and where energy is therefore released via E = mc². In the ‘melting quark’ scenario, two baryons with heavy quarks produce a doubly-heavy baryon, releasing energy via the same mechanism. (GERALD A. MILLER / NATURE)
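    The arithmetic behind that energy release is simple mass bookkeeping. Here is a rough Python sketch; the baryon masses are published PDG/LHCb values, but treat the exact reaction and figures as illustrative rather than authoritative:

```python
# Energy release in the 'melting quark' reaction, via E = mc^2:
#   Lambda_c + Lambda_c -> Xi_cc++ + n
# Masses in MeV/c^2 (approximate published values).
m_lambda_c = 2286.46   # charmed baryon Lambda_c+
m_xi_cc    = 3621.40   # doubly charmed baryon Xi_cc++ (LHCb, 2017)
m_neutron  = 939.57

# Q-value: total mass of reactants minus total mass of products.
# The "missing" mass comes out as kinetic energy.
q_value = 2 * m_lambda_c - (m_xi_cc + m_neutron)
print(f"Energy released: roughly {q_value:.0f} MeV per fusion")
```

    For comparison, deuterium-tritium fusion releases about 17.6 MeV per reaction, so the yield per event is of a similar order, but from far heavier, harder-to-make reactants.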

    With everything we know about the fundamental particles, we know there should be more to the Universe than just the ones we know of. We cannot explain dark matter’s apparent existence, nor do we understand dark energy or why the Universe expands with the properties it does.

    We do not know why the particles have the masses that they do, why matter dominates the Universe and not antimatter, or why neutrinos have mass at all. We do not know if the proton is stable or will someday decay, or whether gravity is an inherently quantum force in nature. And even though we know the Big Bang was preceded by inflation, we do not know whether inflation itself had a beginning, or was eternal to the past.

    There is certainly new physics beyond the Standard Model, but it might not show up until energies far, far greater than what a terrestrial collider could ever reach. Still, whether this scenario is true or not, the only way we’ll know is to look. In the meantime, properties of the known particles can be better explored with a future collider than any other tool. (UNIVERSE-REVIEW.CA)

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    Most of the ideas one can concoct in physics have already been either ruled out or highly constrained by the data we already have in our coffers. If you want to discover a new particle, field, interaction, or phenomenon, it doesn’t do you any good to postulate something that’s inconsistent with what we already know to be true today. Sure, there might be assumptions we’ve made that later turn out to be incorrect, but the data itself must be in agreement with any new theory.

    The vertices shown in the above Feynman diagrams all contain three Higgs bosons meeting at a single point, which would enable us to measure the Higgs self-coupling, a key parameter in understanding fundamental physics. (ALAIN BLONDEL AND PATRICK JANOT / ARXIV:1809.10041)

    That’s why the greatest amount of effort in physics goes not into new theories or new ideas, but into experiments that push past the regimes we’ve already explored. Sure, finding the Higgs boson may make tremendous headlines, but how strongly does the Higgs couple to the Z-boson? What are all the couplings between those two particles and the others in the Standard Model? How easy are they to create? And once you create them, are there any mutual decays that are different from a standard Higgs decay plus a standard Z-boson decay?

    There’s a technique you can use to probe this: create electron-positron collisions at a center-of-mass energy exactly equal to the combined mass of the Higgs and the Z-boson. Instead of the few dozen to perhaps 100 events that create both a Higgs and a Z-boson, which is what the LHC has yielded, you can create thousands, hundreds of thousands, or even millions.

    When you collide electrons at high energies with hadrons (such as protons) moving in the opposite direction at high energies, you can gain the ability to probe the internal structure of the hadrons as never before. This was a tremendous advance of the DESY (German Electron Synchrotron) experiment. (JOACHIM MEYER; DESY / HERA)

    H1 detector at DESY HERA ring

    Not every experiment is designed to make new particles, nor should they be. Some are designed to probe matter that we already know exists, and to study its properties in detail as never before. LEP, the Large Electron-Positron collider and the predecessor to the LHC, never found a single new fundamental particle. Neither did the DESY experiment, which collided electrons with protons. Neither did RHIC, the Relativistic Heavy Ion Collider.

    CERN LEP Collider


    And that’s to be expected; that wasn’t the point of those colliders. Their purpose was to study the matter that we know exists to never-before-studied precisions.

    With six quarks and six antiquarks to choose from, where their spins can sum to 1/2, 3/2 or 5/2, there are expected to be more pentaquark possibilities than all baryon and meson possibilities combined. (CERN / LHC / LHCB COLLABORATION)

    CERN/LHCb detector

    The purpose of the next great science experiment isn’t to simply look for one new thing or test one new theory. It’s to gather a huge suite of otherwise unattainable data, and to let that data guide the development of the field.

    A hypothetical new accelerator, either a long linear one or one inhabiting a large tunnel beneath the Earth, could dwarf the LHC’s energies. Even at that, there’s no guarantee we’ll find anything new, but we’re certain to find nothing new if we fail to try. (ILC COLLABORATION)

    Linear Collider Collaboration

    CERN FCC Future Circular Collider, details of the proposed 100-km-circumference successor to the LHC

    Proposed Future Colliders

    Sure, we can design and build experiments or observatories with an eye towards what we anticipate might be there. But the best bet for the future of science is a multi-purpose machine that can gather large and varied amounts of data that could never be collected without such a tremendous investment. It’s why Hubble was so successful, why Fermilab and the LHC have pushed boundaries as never before, and why future missions such as the James Webb Space Telescope, future 30-meter class observatories like the GMT or the ELT, or future colliders beyond the LHC such as the FCC, CLIC, or the ILC are required if we ever hope to answer the most fundamental questions of all.

    NASA/ESA Hubble Telescope


    CERN map

    CERN LHC Tunnel

    CERN LHC particles

    NASA/ESA/CSA Webb Telescope annotated

    Giant Magellan Telescope, to be at the Carnegie Institution for Science’s Las Campanas Observatory, to be built some 115 km (71 mi) north-northeast of La Serena, Chile, over 2,500 m (8,200 ft) high

    ESO/E-ELT, to be located at the summit of Cerro Armazones in the Atacama Desert of northern Chile, at an altitude of 3,060 metres (10,040 ft)

    CLIC collider

    ILC schematic, being planned for the Kitakami highland, in the Iwate prefecture of northern Japan

    There’s an old saying in business that applies to science just as well: “Faster. Better. Cheaper. Pick two.” The world is moving faster than ever before. If we start pinching pennies and don’t invest in “better,” it’s tantamount to already having given up.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 1:32 pm on February 2, 2019 Permalink | Reply
    Tags: Big Bang Observer, Ethan Siegel, Gravity is talking. Lisa will listen   

    From Ethan Siegel: “Ask Ethan: How Can LISA, Without Fixed-Length Arms, Ever Detect Gravitational Waves?” 

    From Ethan Siegel

    LIGO, here on Earth, has exquisitely precise, fixed distances for its lasers to travel. With three spacecraft in motion, how could LISA work?

    Since it began operating in 2015, advanced LIGO has ushered in a new era of astronomy: one based on gravitational wave signals. The way we detect them, however, is through a very special technique known as laser interferometry. By splitting a laser and sending each half of the beam down a perpendicular path, reflecting them back, and recombining them, we can create an interference pattern. If the lengths of those paths change, the interference pattern changes, enabling us to detect those waves. And that leads to the best question I got about science during my recent Astrotour in Iceland, courtesy of Ben Turner, who asked:

    LIGO works by having these exquisitely precise lasers, reflected down perfectly length-calibrated paths, to detect these tiny changes in distance (less than the width of a proton) induced by a passing gravitational wave. With LISA, we plan on having three independent, untethered spacecrafts freely-floating in space. They’ll be affected by all sorts of phenomena, from gravity to radiation to the solar wind. How can we possibly get a gravitational wave signal out of this?

    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    Caltech/MIT Advanced aLigo Hanford, WA, USA installation

    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger

    Gravity is talking. Lisa will listen. Dialogos of Eide

    ESA/eLISA the future of gravitational wave research

    Localizations of gravitational-wave signals detected by LIGO (GW150914, LVT151012, GW151226, GW170104) and, more recently, by the LIGO-Virgo network (GW170814, GW170817) after Virgo came online in August 2017.

    Skymap showing how adding Virgo to LIGO helps in reducing the size of the source-likely region in the sky. (Credit: Giuseppe Greco, Virgo Urbino group)

    It’s a great question, and the toughest one posed to me all year thus far. Let’s explore the answer.

    3D rendering of the gravitational waves emitted from a binary neutron star system at merger. The central region (in density) is stretched by a factor of ~5 for better visibility. The orientation of the merger itself determines how the signal will be polarized. (AEI POTSDAM-GOLM)

    Since the dawn of time, humanity has been practicing astronomy with light, which has progressed from naked-eye viewing to the use of telescopes, cameras, and wavelengths that go far beyond the limits of human vision. We’ve detected cosmic particles from space in a wide variety of flavors: electrons, protons, atomic nuclei, antimatter, and even neutrinos.

    But gravitational waves are an entirely new way for humanity to view the Universe. Instead of some detectable, discrete quantum particle that interacts with another, leading to a detectable signal in some sort of electronic device, gravitational waves act as ripples in the fabric of space itself. With a certain set of properties, including:

    propagation speed,
    frequency, and
    amplitude,

    they affect everything occupying the space that they pass through.

    Gravitational waves propagate in one direction, alternately expanding and compressing space in mutually perpendicular directions, defined by the gravitational wave’s polarization. Gravitational waves themselves, in a quantum theory of gravity, should be made of individual quanta of the gravitational field: gravitons. (M. PÖSSEL/EINSTEIN ONLINE)

    When one of these gravitational waves passes through a LIGO-like detector, it does exactly what you might suspect. The gravitational wave, along the direction it propagates at the speed of gravity (which equals the speed of light), doesn’t affect space at all. Along the plane perpendicular to its propagation, however, it alternately causes space to expand and contract in mutually perpendicular directions. There are multiple types of polarization that are possible:

    “plus” (+) polarization, where the up-down and left-right directions expand and contract,
    “cross” (×) polarization, where the left-diagonal and right-diagonal directions expand and contract,
    or “circularly” polarized waves, similar to the way light can be circularly polarized; this is a different parameterization of the plus and cross polarizations.

    Whatever the physical case, the polarization is determined by the nature of the source.
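    The effect of the plus and cross polarizations above can be sketched numerically on a ring of free test masses. This is a minimal illustration of the standard linear-order displacement formulas; the strain value used here is enormously exaggerated (real astrophysical strains are closer to 10^-21):

```python
import numpy as np

# A ring of 8 free test masses in the plane perpendicular to a wave
# travelling along z. To linear order in the strain h, the displacements are
#   dx = (1/2)(h_plus * x + h_cross * y)
#   dy = (1/2)(h_cross * x - h_plus * y)
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
x, y = np.cos(theta), np.sin(theta)

def deform(x, y, h_plus, h_cross):
    """Apply the transverse strain of a passing gravitational wave."""
    dx = 0.5 * (h_plus * x + h_cross * y)
    dy = 0.5 * (h_cross * x - h_plus * y)
    return x + dx, y + dy

# h = 0.2 is purely for visibility, ~20 orders of magnitude too large.
xp, yp = deform(x, y, h_plus=0.2, h_cross=0.0)   # '+' polarization
xc, yc = deform(x, y, h_plus=0.0, h_cross=0.2)   # 'x' polarization
print(xp[0], yp[0])  # the mass at (1, 0) is pushed outward to (1.1, 0.0)
print(xc[0], yc[0])  # under 'x', the same mass shears to (1.0, 0.1)
```

    Plotting the two deformed rings recovers the familiar picture: “plus” stretches the ring along the axes, “cross” along the diagonals, and a half-period later each squeeze and stretch swaps.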

    When a wave enters a detector, any two perpendicular directions will be compelled to contract and expand, alternately and in-phase, relative to one another. The amount that they contract or expand is related to the amplitude of the wave. The period of the expansion and contraction is determined by the frequency of the wave, which a detector of a specific arm length (or effective arm length, where there are multiple reflections down the arms, as in the case of LIGO) will be sensitive to.
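    Plugging in representative numbers shows why the effective arm length matters so much. A hedged back-of-the-envelope sketch, using typical round values rather than official LIGO specifications:

```python
import numpy as np

# Representative (not official) numbers for a LIGO-like detector:
h = 1e-21             # dimensionless strain of a passing wave
L = 4e3               # physical arm length in meters (LIGO arms are 4 km)
n_bounce = 300        # rough effective number of reflections in the arm cavities
wavelength = 1064e-9  # infrared laser wavelength in meters

# Arm-length change induced by the wave: about 4e-18 m, far smaller
# than the ~1e-15 m size of a proton.
dL = h * L

# The reflections multiply the accumulated path difference, boosting the
# phase shift between the recombined beams.
dphi = 2 * np.pi * (2 * n_bounce * dL) / wavelength
print(f"dL = {dL:.1e} m, phase shift = {dphi:.1e} rad")
```

    Even with hundreds of bounces, the phase shift is of order 10^-8 radians, which is why every other source of arm-length noise must be suppressed so aggressively.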

    With multiple such detectors in a variety of orientations to one another in three-dimensional space, the location, orientation, and even polarization of the original source can be reconstructed. By using the predictive power of Einstein’s General Relativity and the effects of gravitational waves on the matter-and-energy occupying the space they pass through, we can learn about events happening all across the Universe.

    LIGO and Virgo have discovered a new population of black holes with masses that are larger than what had been seen before with X-ray studies alone (purple). This plot shows the masses of all ten confident binary black hole mergers detected by LIGO/Virgo (blue), along with the one neutron star-neutron star merger seen (orange). LIGO/Virgo, with the upgrade in sensitivity, should detect multiple mergers every week. (LIGO/VIRGO/NORTHWESTERN UNIV./FRANK ELAVSKY)

    But it’s only due to the extraordinary technical achievement of these interferometers that we can actually make these measurements. In a terrestrial, LIGO-like detector, the distances of the two perpendicular arms are fixed. Laser light, even if reflected back-and-forth along the arms thousands of times, will eventually see the two beams come back together and construct a very specific interference pattern.

    If the noise can be minimized below a certain level, the pattern will hold absolutely steady, so long as no gravitational waves are present.

    If, then, a gravitational wave passes through, and one arm contracts while the other expands, the pattern will shift.

    When the two arms are of exactly equal length and there is no gravitational wave passing through, the signal is null and the interference pattern is constant. As the arm lengths change, the signal is real and oscillatory, and the interference pattern changes with time in a predictable fashion. (NASA’S SPACE PLACE)

    By measuring the amplitude and frequency at which the pattern shifts, the properties of a gravitational wave can be reconstructed. By measuring a coincident signal in multiple such gravitational wave detectors, the source properties and location can be reconstructed as well. The more detectors with differing orientations and locations are present, the better-constrained the properties of the gravitational wave source will be.

    This is why adding the Virgo detector to the twin LIGO detectors in Livingston and Hanford enabled a far superior reconstruction of the location of gravitational wave sources. In the future, additional LIGO-like detectors in Japan and India will allow scientists to pinpoint gravitational waves in an even superior fashion.

    But there’s a limit to what we can do with detectors like this. Seismic noise from being located on the Earth itself limits how sensitive a ground-based detector can be. Signals below a certain amplitude can never be detected. Additionally, when light signals are reflected between mirrors, the noise generated by the Earth accumulates with every reflection.

    The fact that the Earth itself exists in the Solar System, even if there were no plate tectonics, ensures that the most common types of gravitational wave events — binary stars, supermassive black holes, and other low-frequency sources (taking 100 seconds or more to oscillate) — cannot be seen from the ground. Earth’s gravitational field, human activity, and natural geological processes mean that these low-frequency signals cannot practically be seen from Earth. For that, we need to go to space.

    And that’s where LISA comes in.

    The sensitivities of a variety of gravitational wave detectors, old, new, and proposed. Note, in particular, Advanced LIGO (in orange), LISA (in dark blue), and BBO (in light blue). LIGO can only detect low-mass and short-period events; longer-baseline, lower-noise observatories are needed for more massive black holes. (MINGLEI TONG, CLASS.QUANT.GRAV. 29 (2012) 155006)

    LISA is the Laser Interferometer Space Antenna. In its current design, it consists of three dual-purpose spacecraft, separated in an equilateral triangle configuration by roughly 5,000,000 kilometers along each laser arm.
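    The arm length is what sets the frequency band an interferometer hears best. As a crude scaling estimate (not the full LISA response calculation), a detector responds well up to roughly the frequency whose half-wavelength matches its arm:

```python
# Why million-kilometer arms target millihertz gravitational waves.
# Rough scaling: f ~ c / (2 L), above which the wave oscillates several
# times while light is still in transit along the arm, washing out response.
c = 299_792_458   # speed of light, m/s
L = 5.0e9         # arm length in meters (5 million km, per the design above)

f_transfer = c / (2 * L)
print(f"characteristic frequency ~ {f_transfer * 1e3:.0f} mHz")
```

    That ~30 mHz scale is consistent with LISA’s target band of long-period sources (oscillation periods of tens of seconds to hours), exactly the regime that ground-based detectors with kilometer-scale arms cannot reach.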

    Inside each spacecraft, there are two free-floating cubes that are shielded by the spacecraft itself from the effects of interplanetary space. They will remain at a constant temperature and pressure, and will be unaffected by the solar wind, radiation pressure, or the bombardment of micrometeorites.

    By carefully measuring the distances between pairs of cubes on different spacecraft, using the same laser interferometry technique, scientists can do everything that multiple LIGO detectors do, but for the long-period gravitational waves that only LISA is sensitive to. Without the Earth to create noise, it seems like an ideal setup.

    The primary scientific goal of the Laser Interferometer Space Antenna (LISA) mission is to detect and observe gravitational waves from massive black holes and galactic binaries with periods in the range of tens of seconds to a few hours. This low-frequency range is inaccessible to ground-based interferometers because of the unshieldable background of local gravitational noise arising from atmospheric effects and seismic activity. (ESA-C. VIJOUX)

    But even without the terrestrial effects of human activity, seismic noise, and being deep within Earth’s gravitational field, there are still sources of noise that LISA must contend with. The solar wind will strike the detectors, and the LISA spacecraft must be able to compensate for that. The gravitational influence of other planets and solar radiation pressure will induce tiny orbital changes relative to one another. Quite simply, there is no way to hold the spacecraft at a fixed, constant distance of exactly 5 million km relative to one another in space. No amount of rocket fuel or electric thrusters will be able to maintain that exactly.

    Remember: the goal is to detect gravitational waves — themselves a tiny, minuscule signal — over and above the background of all this noise.

    The three LISA spacecraft will be placed in orbits that form a triangular formation with center 20° behind the Earth and side length 5 million km. This figure is not to scale. (NASA)

    So how does LISA plan to do it?

    The secret is in these gold-platinum alloy cubes. In the center of each optical system, a solid cube that’s 4 centimeters (about 1.6″) on each side floats freely in the weightless conditions of space. While external sensors monitor the solar wind and solar radiation pressure, with electronic sensors compensating for those extraneous forces, the gravitational forces from all the known bodies in the Solar System can be calculated and anticipated.

    As the spacecraft and the cubes move relative to one another, the lasers adjust in a predictable, well-known fashion. So long as they continue to reflect off of the cubes, the distances between them can be measured.

    The gold-platinum alloy cubes, of central importance to the upcoming LISA mission, have already been built and tested in the proof-of-concept LISA Pathfinder mission.

    ESA/LISA Pathfinder

    It’s not a matter of keeping the distances fixed and measuring a tiny change due to a passing wave; it’s a matter of understanding exactly how the distances will behave over time, accounting for them, and then looking for the periodic departures from those measurements to a high-enough precision. LISA won’t hold the three spacecraft in a fixed position, but will allow them to adjust freely as Einstein’s laws dictate. It’s only because gravity is so well-understood that the additional signal of the gravitational waves, assuming the wind and radiation from the Sun are sufficiently compensated for, can be teased out.
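    That subtract-the-predicted-motion strategy can be caricatured in a few lines. The toy model below is entirely invented for illustration: a slow, perfectly modeled arm-length drift plus an exaggerated sinusoidal "wave" buried in noise. Removing the modeled drift leaves a residual whose spectrum peaks at the wave's frequency:

```python
import numpy as np

# Toy model: arm length is NOT fixed, but its drift is predictable.
rng = np.random.default_rng(0)
t = np.linspace(0, 1e5, 10_000)                # time samples, seconds

drift = 5e9 + 0.5 * t                          # slow, well-modeled orbital drift (m)
gw = 1e-3 * np.sin(2 * np.pi * 3e-3 * t)       # 3 mHz "wave", amplitude exaggerated
noise = 1e-4 * rng.standard_normal(t.size)     # measurement noise

measured = drift + gw + noise                  # what the interferometry records
residual = measured - drift                    # subtract the predicted motion

# The periodic departure now dominates the residual's spectrum.
spectrum = np.abs(np.fft.rfft(residual))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print(f"peak at {freqs[spectrum.argmax()] * 1e3:.1f} mHz")
```

    The real problem is vastly harder, of course: the drift model must be good to extraordinary precision, and the wave is tens of orders of magnitude smaller, but the logic of modeling, subtracting, and searching for periodic residuals is the same.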

    The proposed ‘Big Bang Observer’ would take the design of LISA, the Laser Interferometer Space Antenna, and create a large equilateral triangle around Earth’s orbit to get the longest-baseline gravitational wave observatory ever. (GREGORY HARRY, MIT, FROM THE LIGO WORKSHOP OF 2009, LIGO-G0900426)

    If we want to go even farther, we have dreams of putting three LISA-like detectors in an equilateral triangle around different points in Earth’s orbit: a proposed mission called the Big Bang Observer (BBO). While LISA can detect binary systems with periods ranging from minutes to hours, BBO will be able to detect the grandest behemoths of all: supermassive binary black holes anywhere in the Universe, with periods of years.

    If we’re willing to invest in it, space-based gravitational wave observatories could allow us to map out all of the most massive, densest objects located throughout the entire Universe. The key isn’t holding your laser arms fixed, but simply in knowing exactly how, in the absence of gravitational waves, they’d move relative to one another. The rest is simply a matter of extracting the signal of each gravitational wave out. Without the Earth’s noise to slow us down, the entire cosmos is within our reach.

    See the full article here.



  • richardmitnick 11:42 am on January 5, 2019 Permalink | Reply
    Tags: Ask Ethan: How Close Could Two Alien Civilizations Get To One Another?, Ethan Siegel, There are lots of steps that have to happen to make life but the ingredients for it are literally everywhere.   

    From Ethan Siegel: “Ask Ethan: How Close Could Two Alien Civilizations Get To One Another?” 

    From Ethan Siegel
    May 12, 2018

    Here on Earth, the closest world to us is our barren, uninhabited moon. But in many imaginable cases, there could be another inhabited world close by our own, maybe even within our Solar System. How close could one be? (flickr user Kevin Gill)

    Here on Earth, all the right conditions occurred for intelligent life to come about, but the nearest aliens, if they’re on another world, are light years away. But it doesn’t have to be that way at all!

    Here on planet Earth, in orbit around the Sun, we’re the only intelligent-life game in town. There might be possibilities for either past life or microbial life elsewhere in the Solar System, but as far as intelligent, complex, differentiated and multicellular life goes, what’s on our world is far more advanced than anything else we could hope to find. Intelligent aliens, if they’re out there inhabiting another world, are at least four light years away. But must that be the case for aliens anywhere in the galaxy? That’s what our Patreon supporter Jason McCampbell wants to know:

    What’s [the] closest two, independent intelligent civilizations could be, ignoring interstellar travel and assuming they develop in different star systems and follow roughly what we know as ‘life’? Globular clusters can have a high density of stars, but does too high a density rule out habitability? An astrophysicist in a dense cluster would have a much different view of the universe and the search for exoplanets.

    There are lots of steps that have to happen to make life, but the ingredients for it are literally everywhere. Even if you’re restricting yourself to looking for life that looks (chemically) like us, the Universe is full of possibilities.

    Atoms can link up to form molecules, including organic molecules and biological processes, in interstellar space as well as on planets. Is it possible that life began not only prior to Earth, but not on a planet at all? (Jenny Mottar)

    You need to form enough heavy elements so that you can have rocky planets, organic molecules, and the building blocks of life. The Universe isn’t born with these! In the aftermath of the Big Bang, the Universe is 99.999999% hydrogen and helium, with no carbon, no oxygen, no nitrogen, no phosphorus, no calcium, no iron, nor any of the other complex elements necessary for life. In order to get there, we have to have multiple generations of stars live, burn through their fuel, die in supernova explosions, and recycle those newly-created heavy elements into the next generation of stars. We need neutron star-neutron star mergers to build up the heaviest elements, many of which are necessary for life processes here on Earth and in our bodies, in copious amounts. This requires a lot of astrophysics to make it so.

    The Omega nebula, known also as Messier 17, is an intense and active region of star formation, viewed edge-on, which explains its dusty and beam-like appearance. Stars that form at different times in the Universe’s history have different abundances of heavy elements. (ESO / VST survey)

    ESO VST interior

    ESO VST telescope, at ESO’s Cerro Paranal Observatory, with an elevation of 2,635 metres (8,645 ft) above sea level

    Even though Earth formed over 9 billion years after the Big Bang, the Universe didn’t have to wait so long. We classify stars into three populations:

    Population I: stars like the Sun, with 1–2% of the elements making them up being heavier than hydrogen and helium. This material is very processed and leads to solar systems with a mix of gas giants and rocky planets capable of housing life.
    Population II: these are mostly older, more pristine stars. They may only have 0.001–0.1% of the heavy elements the Sun has, and most of their worlds are diffuse, gassy worlds. These may be too primitive and too low in heavy elements for life.
    Population III: the first stars in the Universe, that must be entirely unpolluted by heavy elements. These haven’t yet been discovered, but are theoretically the first stars of all.

    When we look at the earliest galaxies, they’re made up almost entirely of Population II stars. But nearby, we have a mix of young-and-old, metal-rich and metal-poor stars.

    The distances between the Sun and many of the nearest stars shown here are accurate, but each star — even the largest ones here — would be less than one one-millionth of a pixel in diameter if this were to scale. (Andrew Z. Colvin / Wikimedia Commons, CC BY-SA 3.0)

    One of the most important lessons came from the Kepler mission, and specifically the system Kepler-444. This is a Population I star (with planets around it), but it’s much, much older than Earth. While our world is about 4.5 billion years old, Kepler-444 is 11.2 billion years old, meaning that the Universe could’ve formed a world like Earth very early on, at least ~7 billion years earlier than Earth formed. Given that possibility, and the fact that areas like the center of our galaxy got even more metal-rich than our region did very, very quickly, it’s possible that there are locations in the Universe (and perhaps even in the Milky Way) that are even more conducive to bringing about intelligent life than the Sun-Earth system is.

    Sugar molecules in the gas surrounding a young, Sun-like star. The raw ingredients for life may exist everywhere, but not every planet that contains them will develop life. (ALMA (ESO/NAOJ/NRAO)/L. Calçada (ESO) & NASA/JPL-Caltech/WISE Team)

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    NASA Wise Telescope

    So given all that we know about where the stars that are good candidates for life can be, what’s the closest two alien civilizations could be to one another? Where would be the places to look? And what would the answers be under different circumstances? Let’s look at five major possibilities.

    This artist’s impression displays TRAPPIST-1 and its planets reflected in a surface. The potential for water on each of the worlds is also represented by the frost, water pools, and steam surrounding the scene. However, it is unknown whether any of these worlds actually still possess atmospheres, or if they’ve been blown away by their parent star. One thing is certain, however: the potentially habitable worlds are close to each other: separated by only ~1 million km each. (NASA/R. Hurt/T. Pyle)

    1.) The same solar system. This is the real dream. In the early days of our Solar System, it’s plausible that Venus, Earth, and Mars (and potentially even Theia, the hypothetical planet that collided with Earth to create the Moon) all had the same life-friendly conditions. They likely had a crust and atmosphere full of the ingredients for life, along with a past history of liquid water on their surface. Venus and Mars each, at closest approach to Earth, come within a few tens of millions of kilometers: 38 million for Venus and 54 million for Mars. But around an M-class (red dwarf) star, planetary separations are much smaller: only about 1 million km between potentially habitable worlds in the TRAPPIST-1 system. Twin moons around a giant world, or a binary planet, could be even closer. If life succeeds once given certain conditions, why not twice in almost exactly the same place?

    The globular cluster Terzan 5 as seen by the ESO’s Very Large Telescope, with other data as well. The densities in the center of a globular cluster are higher, while still being stable, than anyplace else. (ESO-VLT, F.R. Ferraro et al., HST-NICMOS, ESA/Hubble & NASA)

    ESO VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft): ANTU (UT1; “The Sun”), KUEYEN (UT2; “The Moon”), MELIPAL (UT3; “The Southern Cross”), and YEPUN (UT4; “Venus”, as evening star). Credit: J.L. Dauvergne & G. Hüdepohl, atacama photo

    NASA/ESA Hubble Telescope

    2.) Within a globular cluster. Globular clusters are massive collections of hundreds of thousands of stars, contained within a sphere of perhaps a few dozen light years in radius. In the outer regions, stars are typically separated by a light year, but in the innermost regions of the densest clusters, star separations may be as small as the distance from the Sun to the Kuiper belt. The orbits of planets within those star systems should be stable even in these dense environments, and given that we know of globular clusters far younger than the 11.2 billion years of Kepler-444, there should be good candidates for life and habitability among them. A few hundred astronomical units, although this distance will change over time as stars move, could be a fascinatingly close encounter between two civilizations.
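    To put those cluster-core separations in context, a quick unit conversion (all figures are round, approximate numbers) compares them with our own neighborhood:

```python
# Stellar separations in different environments, in astronomical units (AU).
AU_PER_LIGHT_YEAR = 63_241               # approximate AU per light year

outskirts = 1.0 * AU_PER_LIGHT_YEAR      # ~1 ly between stars in cluster outskirts
core = 50                                # densest cores: roughly Sun-to-Kuiper-belt scale
nearest_star = 4.24 * AU_PER_LIGHT_YEAR  # Sun to Proxima Centauri, for comparison

print(f"cluster outskirts: ~{outskirts:,.0f} AU between stars")
print(f"densest cores:     ~{core} AU between stars")
print(f"our neighborhood:  ~{nearest_star:,.0f} AU to the nearest star")
```

    A civilization in a dense cluster core could have a neighboring star system thousands of times closer than Proxima Centauri is to us.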

    High resolution near-infrared imaging has led to the discovery of three stellar superclusters at the Galactic Center. Since near-infrared wavelengths cut through the dense dust between Earth and the Galactic Center, we are able to see these superclusters. They include the Central Parsec, Quintuplet, and Arches clusters. But all the stars found there, and in the galactic center in general, are quite young. (Gemini Observatory)

    3.) Near the galactic center. The closer you get to the center of the galaxy, the denser the stars get. Within the central few light years, we have extremely high densities of stars, rivaling what we see in the cores of globular clusters. In some ways, the galactic center is an even denser environment, with large black holes, extremely massive stars, and new star-forming clusters, all things that globular clusters don’t have. But the problem with the stars that we see in the Milky Way’s core is that they’re all relatively young. Perhaps due to the volatility of the environment there, stars rarely make it to even a billion years of age. Despite the increased density, these stars are unlikely to have advanced civilizations. They just don’t live long enough.

    Stars form in a wide variety of sizes, colors and masses, including many bright, blue ones that are tens or even hundreds of times as massive as the Sun. This is demonstrated here in the open star cluster NGC 3766, in the constellation of Centaurus. (ESO)

    4.) In a dense star cluster or spiral arm. Okay, so what about the star clusters that form in the galactic plane? Spiral arms are denser than typical regions of a galaxy, and that’s where new stars are likely to form. The star clusters that remain from those epochs often contain thousands of stars located in a region just a few light years wide. But again, stars don’t remain in these environments for very long. The typical open star cluster dissociates after a few hundred million years, with only a small fraction lasting billions of years. Stars move in-and-out of spiral arms all the time, including the Sun. Overall, even though stars inside may have typical distances between them of between 0.1 and 1 light year, they’re unlikely to be good candidates for life.

    A logarithmic chart of distances, showing the Voyager spacecraft, our Solar System and our nearest star, for comparison. (NASA / JPL-Caltech)

    5.) Distributed throughout interstellar space. Otherwise, we come back to what we see in our own neighborhood: distances that are typically a few light years. As you get closer to the center of a galaxy, you can decrease that to the same distance you see in an open cluster: between 0.1–1 light years. But if you try to get closer than that, you run into the problem we’ve seen too close to the galactic center: mergers, interactions, and other catastrophes are likely to ruin your stable environment. You can get closer, but typical interstellar space isn’t the way to go. If you insist on it, your best bet is to wait for another star to pass close by, something that happens about once every million years for a typical star.

    A plot of how frequently a star within the Milky Way is likely to pass within a certain distance of our Sun. This is a log-log plot, with distance on the y-axis and how long you typically need to wait for such an event to happen on the x-axis. (E. Siegel)
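    That once-per-million-years figure can be roughly reproduced with a simple rate estimate, n·π·d²·v, treating nearby stars as a uniform stream. The stellar density and relative speed below are assumed typical solar-neighborhood values, not numbers from the article:

```python
import math

# Rate of stellar passes within distance d of the Sun: n * pi * d^2 * v,
# treating passing stars as a uniform flux through a disk of radius d.
# Assumed illustrative values (not from the article):
n = 0.1 / 3.26**3             # ~0.1 stars per cubic parsec, in stars per cubic light year
v = 30e3 * 3.156e7 / 9.46e15  # ~30 km/s relative speed, in light years per year
d = 1.0                       # encounter distance of interest, light years

rate_per_year = n * math.pi * d**2 * v   # expected encounters per year
wait_years = 1.0 / rate_per_year         # average wait between such passes

print(f"A star passes within {d:.0f} ly roughly every {wait_years:.1e} years")
```

With these round numbers the wait works out to roughly a million years, consistent with the figure quoted above.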

    While we don’t expect intelligent alien life to be ubiquitous and plentiful throughout the Universe in the same way that planets and stars are, every such world that meets the right conditions is a chance. And every time you get a chance, that’s an opportunity, with finite odds, for success. Each one of these possibilities could be real! They may not be likely, but until we go out and find what is (and isn’t) out there, it’s vital to keep an open mind about what the Universe could bring to us as far as alien intelligence is concerned. The truth is no doubt out there, but it’s important to recognize that if we had gotten a lot luckier, it could be closer than we dare to imagine today.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 10:59 am on December 26, 2018 Permalink | Reply
    Tags: Aliens? Or Alien Impostors? Finding Oxygen Might Not Mean Life After All, , , , , , Ethan Siegel, ,   

    From Ethan Siegel: “Aliens? Or Alien Impostors? Finding Oxygen Might Not Mean Life, After All” 

    From Ethan Siegel
    Dec 25, 2018

    Both reflected sunlight on a planet and absorbed sunlight filtered through an atmosphere are two techniques humanity is presently developing to measure the atmospheric content and surface properties of distant worlds. In the future, this could include the search for organic signatures as well. (MELMAK / PIXABAY)

    The most surefire, easily-seen signature of life on Earth might be a cosmic red herring around other worlds.

    In our quest for life beyond the Solar System, it makes sense to look for a world like our own. We’ve long hoped to find an Earth-sized world around a Sun-like star at the right distance for liquid water as our first step, and with thousands of planets in our coffers already, we’re extremely close. But not every world with the right physical properties is going to have life; we need additional information to know whether a potentially habitable world is actually inhabited.

    The follow-up would be to analyze the planet’s atmosphere for Earth-like signatures: potential signs of life. Earth’s combination of atmospheric gases — nitrogen, oxygen, water vapor, carbon dioxide and more — has been assumed to be a dead giveaway for a planet with life on it. But a new study by planetary scientist Dr. Sarah Hörst’s team throws that into doubt [see paper below]. Even worlds rich in oxygen might not harbor aliens, but an impostor process that could fool us all.

    Most of the planets we know of that are comparable to Earth in size have been found around cooler, smaller stars than the Sun. This makes sense with the limits of our instruments; these systems have larger planet-to-star size ratios than our Earth does with respect to the Sun. (NASA / AMES / JPL-CALTECH)

    The scientific story of how to even reach that point is fascinating, and closer to becoming a reality than ever before. We can understand how this happens by imagining we were aliens, looking at our Sun from a large distance away, trying to determine if it possessed an inhabited world.

    By measuring the slight variations in the frequency of the Sun’s light over long periods of time, we’d be able to deduce the gravitational influence of the planets on it. This detection method is known as either the radial velocity method or the stellar wobble method, and it can tell us a planet’s mass and orbital period. Most of the early (pre-Kepler) exoplanets were discovered with this technique, and it’s still the best method we have both for determining planetary masses and for confirming the existence of candidate exoplanets.
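    As a rough illustration of the wobble an alien astronomer would need to measure, momentum conservation for a circular orbit gives the Sun's reflex speed due to Jupiter. The masses and orbital radius below are standard textbook values, not figures from the article:

```python
import math

# The Sun's 'wobble' induced by Jupiter, via momentum conservation
# for a circular orbit: M_star * v_star = M_planet * v_planet.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
M_jup = 1.898e27       # Jupiter's mass, kg
a = 7.785e11           # Jupiter's orbital radius, m (~5.2 AU)

v_jup = math.sqrt(G * M_sun / a)   # Jupiter's orbital speed, m/s
v_sun = v_jup * M_jup / M_sun      # Sun's reflex (wobble) speed, m/s

print(f"Jupiter orbits at ~{v_jup/1000:.1f} km/s; "
      f"the Sun wobbles at ~{v_sun:.1f} m/s")
```

The answer, about a dozen metres per second, is why detecting a true Jupiter analogue via radial velocity demands precision spectroscopy; a true Earth analogue induces a wobble hundreds of times smaller still.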

    Radial Velocity Method-Las Cumbres Observatory

    Radial velocity. Image via SuperWasp: http://www.superwasp.org/exoplanets.htm

    Veloce Rosso, Australia’s next premier astronomical instrument: a precision radial velocity spectrograph on the Anglo-Australian Telescope (AAT), capable of detecting Earth-like planets

    AAO Anglo Australian Telescope near Siding Spring, New South Wales, Australia, Altitude 1,100 m (3,600 ft)

    Today, we know of over 3,500 confirmed exoplanets, with more than 2,500 of those found in the Kepler data. These planets range in size from larger than Jupiter to smaller than Earth. Yet because of the limitations on the size of Kepler and the duration of the mission, there have been zero Earth-sized planets found around Sun-like stars that fall into Earth-like orbits. (NASA/AMES RESEARCH CENTER/JESSIE DOTSON AND WENDY STENZEL; MISSING EARTH-LIKE WORLDS BY E. SIEGEL)

    We also need to know the size of the planet. With the stellar wobble alone, we’ll only know what the mass of the world is relative to the angle-of-inclination of its orbit. A world that’s the mass of Earth could be well-suited to life if it’s got an Earth-like atmosphere, but it could be disastrous for life if it’s an iron-like world with no atmosphere at all, or a low-density, puffy world with a large gaseous envelope.

    The transit method, where a planet passes in front of its parent star, is our most prolific method for measuring a planet’s radius.

    Planet transit. NASA/Ames

    By calculating how much of the parent star’s light it blocks when it crosses our line-of-sight, we can determine its size. An alien civilization whose line-of-sight was properly aligned with Earth’s orbit around the Sun would be able to detect our planet with technology only about 20% more sensitive than Kepler was.
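    The size measurement works because the fractional dip in brightness is simply the ratio of the projected areas, (R_planet/R_star)². A quick sketch with textbook radii (my values, not the article's) shows why an Earth is so much harder to catch than a Jupiter:

```python
# Transit depth: fraction of starlight blocked = (R_planet / R_star)^2
R_sun = 696_000.0    # solar radius, km
R_earth = 6_371.0    # Earth's radius, km
R_jup = 69_911.0     # Jupiter's radius, km

depth_earth = (R_earth / R_sun) ** 2   # Earth transiting the Sun
depth_jup = (R_jup / R_sun) ** 2       # Jupiter transiting the Sun

print(f"Earth: {depth_earth:.2e} (~{depth_earth * 1e6:.0f} ppm)")
print(f"Jupiter: {depth_jup:.2%}")
```

A Jupiter-sized world blocks about 1% of a Sun-like star's light, matching the 'up to 1%' figure quoted below, while an Earth-sized one blocks less than a hundredth of that.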

    Kepler was designed to look for planetary transits, where a large planet orbiting a star could block a tiny fraction of its light, reducing its brightness by ‘up to’ 1%. The smaller a world is relative to its parent star, the more transits you need to build up a robust signal, and the longer its orbital period, the longer you need to observe to get a detection signal that rises above the noise. (MATT OF THE ZOONIVERSE/PLANET HUNTERS TEAM)

    This is roughly where we are today. We’ve found hundreds of worlds that we suspect are rocky orbiting their stars, many of them right around Earth-sized. For a large fraction of them, we’ve measured their mass, radius, and orbital period, with a small percentage being at the right orbital distance to have Earth-like temperatures.

    Most of them orbit red dwarf stars — the most common class of star in the Universe — which means tidal forces should lock them: the same side should always face the star. These stars flare often, posing a danger to any potential atmospheres on these worlds.

    But a significant fraction will orbit K, G, or F-class stars, where they can rotate on their axes, maintain an atmosphere, and have the potential for Earth-like life. That’s where we want to look.

    When a planet transits in front of its parent star, some of the light is not only blocked, but if an atmosphere is present, filters through it, creating absorption or emission lines that a sophisticated-enough observatory could detect. If there are organic molecules or large amounts of molecular oxygen, we might be able to find that, too. (ESA / DAVID SING)

    And that’s where future technology is hoping to take us. If a larger Kepler-like telescope were equipped with the right instruments, we could break up the light passing through an exoplanet’s atmosphere during a transit, and determine its atomic and molecular contents. If we were looking at Earth, we could determine that it was composed of nitrogen, oxygen, argon, water vapor, and carbon dioxide, along with other trace signatures.

    Even without an ideal alignment, direct imaging will still be possible.

    Direct imaging-This false-color composite image traces the motion of the planet Fomalhaut b, a world captured by direct imaging. Credit: NASA, ESA, and P. Kalas (University of California, Berkeley and SETI Institute)

    Potential NASA flagship missions, such as HabEx or LUVOIR (with either a starshade or a coronagraph), could block the light of the parent star and detect the light from an orbiting planet directly. This light could again be broken up into its individual wavelengths, determining its molecular content.

    NASA Habitable Exoplanet Imaging Mission (HabEx) The Planet Hunter

    NASA Large UV Optical Infrared Surveyor (LUVOIR)

    Whether from absorption (transit) or emission (direct imaging), we could learn what a potential Earth-twin’s atmosphere is composed of.

    The Starshade concept could enable direct exoplanet imaging as early as the 2020s. This concept drawing illustrates a telescope using a star shade, enabling us to image the planets that orbit a star while blocking the star’s light to better than one part in 10 billion. (NASA AND NORTHROP GRUMMAN)

    So what if we find an oxygen-rich world? No other planet, dwarf planet, moon, or object that we know of has an atmosphere with even 1% oxygen. Earth’s atmosphere transformed over nearly 2 billion years before its oxygen content was comparable to today’s, and it was photosynthetic life processes that created our modern atmosphere that’s rich in molecular oxygen. Because of how easily oxygen is destroyed by ultraviolet light and how difficult it is to produce in large quantities via inorganic, chemical processes, oxygen has long been taken as the one biosignature we could rely on to indicate a living world.

    If organic molecules were found there as well, it would seem like a surefire indicator that life, indeed, must have taken hold on such a planet.

    And that’s where the Hörst lab’s new findings come into play. In a paper just published in ACS Earth and Space Chemistry, a specially-designed chamber to mimic the environment of a hazy exoplanet atmosphere showed that molecular oxygen (O2) could be created in a number of environmental conditions likely to occur naturally, with no life necessary to create it.

    The ingenious method was to create a gas mixture that would be consistent with what we expect an Earth-like or super-Earth-like environment might hold. That mixture was then inserted into a specially-designed chamber and subjected to a variety of temperature, pressure, and energy-injection conditions that would likely mimic the activity that could occur on actual exoplanets.

    Chao He explaining how the study’s PHAZER setup works, where PHAZER is the specially-designed Planetary HAZE chamber found in the Hörst lab at Johns Hopkins University. (CHANAPA TANTIBANCHACHAI / JOHNS HOPKINS UNIVERSITY)

    A total of nine different gas mixtures were used at temperatures ranging from 27 °C (80 °F) up to approximately 370 °C (700 °F), representing the temperature range expected to naturally occur. The energy injection came in two different forms: from ultraviolet light and from plasma discharges, which represent natural conditions likely to be caused by sunlight or lightning-like activity.

    The results? There were multiple scenarios that resulted in the production of both organic molecules (like sugar and amino acid precursors) and oxygen, yet didn’t require any life at all to get them. According to first author Chao He,

    People used to suggest that oxygen and organics being present together indicates life, but we produced them abiotically in multiple simulations. This suggests that even the co-presence of commonly accepted biosignatures could be a false positive for life.

    By heating atmospheric gases thought to mimic exoplanet atmospheres to various temperatures and subjecting them to ultraviolet and plasma-based energy injections, organic molecules and oxygen can be produced. We must be careful that we don’t mistake an abiotic signature of coincident oxygen and organics for life. (C. HE ET AL., ‘GAS PHASE CHEMISTRY OF COOL EXOPLANET ATMOSPHERES: INSIGHT FROM LABORATORY SIMULATIONS,’ ACS EARTH SPACE CHEM. (2018))

    The experiment wasn’t some cherry-picked design to attempt to produce this false-positive result, either. The gases inside the chamber were designed to mimic the contents of known exoplanetary atmospheres, with the ultraviolet energy injection designed to simulate sunlight. The experiments simulated a variety of atmospheric (hydrogen-rich, water-rich, and carbon dioxide-rich) environments, and all of them created haze particles and yielded organic molecules such as hydrogen cyanide, acetylene, and methanimine.

    Multiple environments generated organic molecules, prebiotic precursor molecules, and oxygen all at once, at Earth-like temperatures and much hotter temperatures as well. The paper itself states the main conclusion very succinctly:

    Our laboratory results indicate that complex atmospheric photochemistry can happen in diverse exoplanet atmospheres and lead to the formation of new gas products and haze particles, including compounds (O2 and organics) that could be falsely identified as biosignatures.

    The amount of molecular oxygen produced in these experiments was relatively small by some metrics; Hörst herself wouldn’t call the atmospheres created in the lab “oxygen-rich.” But it’s nevertheless possible that these processes would translate into an oxygen-rich atmosphere on an exoplanet, given the right conditions and enough time. At this point, it appears possible that finding the presence of both organics and molecular oxygen could be due to abiotic, non-life processes exclusively.

    Signatures of organic, life-giving molecules are found all over the cosmos, including in the largest, nearby star-forming region: the Orion Nebula. Someday soon, we may be able to look for biosignatures in the atmospheres of Earth-sized worlds around other stars, or we may detect simple life directly on another world in our Solar System. (ESA, HEXOS AND THE HIFI CONSORTIUM; E. BERGIN)

    This doesn’t mean that finding an Earth-like world with an oxygen-rich atmosphere won’t be incredibly interesting; it absolutely will be. It doesn’t mean that finding organic molecules coincident with the oxygen won’t be compelling; it will be a finding worth getting excited over. It doesn’t even mean that it won’t be indicative of life; a world with oxygen and organic molecules may well be overflowing with living organisms. But it does mean that we have to be careful.

    Historically, when we’ve looked to the skies for evidence of life beyond Earth, we’ve been biased by hope and what we know on Earth. Theories of dinosaurs on Venus or canals on Mars still linger in our memories, and we must be careful that extraterrestrial oxygen signatures don’t lead us to falsely optimistic conclusions. We now know that both abiotic processes and life-dependent ones can create an oxygen-rich atmosphere.

    The hard problem, then, will be disentangling the potential causes when we actually find our first oxygen-rich, Earth-like exoplanet. Our reward, if we’re successful, will be the knowledge of whether or not we’ve actually found life around another star.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 12:27 pm on December 24, 2018 Permalink | Reply
    Tags: 20 Incredible New Images Show How Planets First Form Around Stars, , , , , Ethan Siegel   

    From Ethan Siegel: “20 Incredible New Images Show How Planets First Form Around Stars” 

    From Ethan Siegel
    Dec 24, 2018

    20 new protoplanetary disks, as imaged by the Disk Substructures at High Angular Resolution Project (DSHARP) collaboration, showcasing what newly-forming planetary systems look like. (S. M. ANDREWS ET AL. AND THE DSHARP COLLABORATION, ARXIV:1812.04040)

    For generations, planet formation was only a theory. As 2018 comes to an end, here’s the evidence of what’s going on.

    The theory of planet formation has been around for a long time, but lacked validation.

    Artist’s impression of a young star surrounded by a protoplanetary disk. There are many unknown properties about protoplanetary disks around Sun-like stars, but they all exhibit infrared radiation. Tabby’s star has none. (ESO/L. CALÇADA)

    The very young protostar M17-SO1, as imaged way back in 2005 with the ground-based Subaru telescope, shows features of a protoplanetary disk around a newly-forming star, but internal features were unable to be resolved with instrumentation of that time. (SUBARU / NAOJ)

    NAOJ/Subaru Telescope at Mauna Kea, Hawaii, USA, 4,207 m (13,802 ft) above sea level

    As protostars grow, they heat up, while their disks race to form planets before the volatile material evaporates.

    30 protoplanetary disks, or proplyds, as imaged by Hubble in the Orion Nebula. Hubble is a brilliant resource for identifying these disk signatures in the optical, but has little power to probe the internal features of these disks, even from its location in space. (NASA/ESA AND L. RICCI (ESO))

    With observatories like Hubble, we’ve found and identified many disks, but couldn’t measure their internal properties.

    In theory, those disks ought to display gaps where massive, early planets have begun their formation.

    At the Very Large Telescope, the SPHERE instrument successfully imaged a number of protoplanetary disks directly.

    ESO VLT at Cerro Paranal in the Atacama Desert: ANTU (UT1; the Sun), KUEYEN (UT2; the Moon), MELIPAL (UT3; the Southern Cross), and YEPUN (UT4; Venus, as evening star). Elevation 2,635 m (8,645 ft). Credit: J.L. Dauvergne & G. Hüdepohl, atacama photo

    ESO SPHERE, the extreme adaptive optics system and coronagraphic facility on the VLT (MELIPAL, UT3), Cerro Paranal, Chile, at an elevation of 2,635 metres (8,645 ft) above sea level

    The observational structure of the young star MWC 758, at right, compared with a simulation involving a large outer planet, at left. This Herbig star is much more massive than our Sun ever was, but is also not yet a true star. (NASA, ESA, ESO, M. BENISTY ET AL. (UNIVERSITY OF GRENOBLE), R. DONG (LAWRENCE BERKELEY NATIONAL LABORATORY), AND Z. ZHU (PRINCETON UNIVERSITY))

    Some displayed spirals due to massive outer planets, while others possessed symmetric rings caused by lower-mass worlds.

    Eight young T Tauri stars, as imaged by SPHERE, show disks, rings, and symmetric, unperturbed structures. These 8 disks range in age from 1 to 15 million years, and are all around stars of 2 solar masses or less. (H. AVENHAUS ET AL. (2018), ARXIV.ORG/ABS/1803.10882)

    The best portraits of protoplanetary disks, however, arise from ALMA.

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    ALMA’s crisp images are striking.

    The distance from the young, central star determines the type of material that’s present. Heat and energy flux changes everything in these systems. The gaps in the rings and disk indicate the likely presence of planets, which are details that ALMA can reveal. (K. ZHANG IN G. A. BLAKE’S RESEARCH GROUP, FROM GEOFFREY A. BLAKE & EDWIN A. BERGIN, NATURE 520, 161–162 (09 APRIL 2015))

    Its Disk Substructures at High Angular Resolution Project (DSHARP) has just released its first results, revealing 20 nearby protoplanetary disks.

    These 20 protoplanetary disks, as they appear in the most recent ApJ letters paper (in press), showcase the diversity and intricate details found in both face-on and tilted protoplanetary disks imaged by the DSHARP team. (S. M. ANDREWS ET AL. AND THE DSHARP COLLABORATION, ARXIV:1812.04040)

    Most have gaps, rings, and easily-identifiable locations where candidate planets may lie.

    HD 163296 is representative of a typical protoplanetary disk viewed by the DSHARP collaboration. It has a central protoplanetary disk, outer emission rings, and gaps between them. There ought to be multiple planets in this system, and one can identify an odd artifact interior to the 2nd-from-the-outermost ring that may be a telltale sign of a perturbing planet. The scale bar at lower right is 10 AU, and appears in all DSHARP images shown here. (S. M. ANDREWS ET AL. AND THE DSHARP COLLABORATION, ARXIV:1812.04040)

    The most common features are the concentric emission rings and dust-depleted gaps.

    Understanding planetary evolution, from nebulae to protoplanets to full-blown solar systems, is finally within reach.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 3:34 pm on December 11, 2018 Permalink | Reply
    Tags: , , , , Ethan Siegel, Five Surprising Truths About Black Holes From LIGO, ,   

    From Ethan Siegel: “Five Surprising Truths About Black Holes From LIGO” 

    From Ethan Siegel
    Dec 11, 2018

    A still image of a visualization of the merging black holes that LIGO and Virgo have observed so far. As the horizons of the black holes spiral together and merge, the emitted gravitational waves become louder (larger amplitude) and higher pitched (higher in frequency). The black holes that merge range from 7.6 solar masses up to 50.6 solar masses, with about 5% of the total mass lost during each merger. (TERESITA RAMIREZ/GEOFFREY LOVELACE/SXS COLLABORATION/LIGO-VIRGO COLLABORATION)

    With a total of 10 black holes detected, what we’ve learned about the Universe is truly amazing.

    On September 14th, 2015, just days after LIGO first turned on at its new-and-improved sensitivity, a gravitational wave passed through Earth. Like the billions of similar waves that had passed through Earth over the course of its history, this one was generated by an inspiral, merger, and collision of two massive, ultra-distant objects from far beyond our own galaxy. From over a billion light years away, two massive black holes had coalesced, and the signal — moving at the speed of light — finally reached Earth.

    But this time, we were ready. The twin LIGO detectors saw their arms expand-and-contract by a subatomic amount, but that was enough for the laser light to shift and produce a telltale change in an interference pattern. For the first time, we had detected a gravitational wave. Three years later, we’ve detected 11 of them, with 10 coming from black holes. Here’s what we’ve learned.

    The 30-ish solar mass binary black holes first observed by LIGO are very difficult to form without direct collapse. Now that it’s been observed twice, these black hole pairs are thought to be quite common. But the question of whether black hole mergers emit electromagnetic emission is not yet settled. (LIGO, NSF, A. SIMONNET (SSU))

    There have been two “runs” of LIGO data: a first one from September 12, 2015 to January 19, 2016, and then a second one, at somewhat improved sensitivity, from November 30, 2016 to August 25, 2017. That latter run was joined, partway through, by the VIRGO detector in Italy, which not only added a third detector but significantly improved our ability to pinpoint where these gravitational waves originated. LIGO is currently shut down for upgrades that will make it even more sensitive, in preparation for a new data-taking observing run beginning in the spring of 2019.

    On November 30th, the LIGO scientific collaboration released the results of their improved analysis, which is sensitive to the final stages of mergers of objects between about 1 and 100 solar masses.

    The 11 gravitational wave events detected by LIGO and Virgo, with their names, mass parameters, and other essential information encoded in Table form. Note how many events came in the last month of the second run: when LIGO and Virgo were operating simultaneously. (THE LIGO SCIENTIFIC COLLABORATION, THE VIRGO COLLABORATION; ARXIV:1811.12907)

    The 11 detections that have been made so far are shown above, with 10 of them representing black hole-black hole mergers and only GW170817 representing a neutron star-neutron star merger. That neutron star merger was the closest event, at a mere 130–140 million light years away. The most massive merger seen — GW170729 — comes to us from a location that, with the expansion of the Universe, is now 9 billion light years away.

    These two detections are also the lightest and heaviest gravitational wave mergers ever detected, with GW170817 colliding a 1.46 and a 1.27 solar mass neutron star, and GW170729 colliding a 50.6 and a 34.3 solar mass black hole together.

    Here are the five surprising truths that we’ve learned from all of these detections combined.

    LIGO, as designed, should be sensitive to black holes of a particular mass range that inspiral and merge: from 1 up to a few hundred solar masses. The fact that what we observe appears to be capped at 50 solar masses places severe constraints on black hole merger rates above that figure. (NASA / DANA BERRY (SKYWORKS DIGITAL))

    1.) The largest merging black holes are the easiest to see, and they don’t appear to get larger than about 50 solar masses. One of the best things about looking for gravitational waves is that it’s easier to see them from farther away than it is for a light source. Stars appear dimmer in proportion to their distance squared: a star 10 times the distance is just one-hundredth as bright. But gravitational waves are dimmer in direct proportion to distance: merging black holes 10 times as far away produce 10% the signal.

    As a result, we can see very massive objects to very great distances, and yet we don’t see black holes merging with 75, 100, 150, or 200+ solar masses. Mergers of 20-to-50 solar mass black holes are common, but we haven’t seen anything above that yet. Perhaps the black holes arising from ultra-massive stars truly are rare.
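    That 1/d versus 1/d² scaling has a dramatic consequence for how much of the Universe a detector can survey, since the searchable volume grows as the cube of the maximum reach. A quick sketch (the factor of 10 is an arbitrary illustrative sensitivity gain, not a number from the article):

```python
# Electromagnetic flux falls as 1/d^2, so a 10x-more-sensitive telescope
# reaches sqrt(10) ~ 3.16x farther; gravitational-wave strain falls as 1/d,
# so a 10x-more-sensitive detector reaches a full 10x farther.
# Detectable volume then scales as (maximum distance)^3.
sensitivity_gain = 10.0

d_em = sensitivity_gain ** 0.5   # reach improvement for a light telescope
d_gw = sensitivity_gain          # reach improvement for a GW detector

print(f"EM: {d_em:.2f}x farther, {d_em ** 3:.1f}x the volume")
print(f"GW: {d_gw:.0f}x farther, {d_gw ** 3:.0f}x the volume")
```

A tenfold sensitivity improvement buys a gravitational-wave observatory a thousand times the volume, versus only about thirty times for a telescope: this is why the loudest, most massive mergers dominate the catalog.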

    Aerial view of the Virgo gravitational-wave detector, situated at Cascina, near Pisa (Italy). Virgo is a giant Michelson laser interferometer with arms that are 3 km long, and complements the twin 4 km LIGO detectors. (NICOLA BALDOCCHI / VIRGO COLLABORATION)

    2.) Adding in a third detector both improves our ability to pinpoint their positions and increases the detection rate significantly. LIGO ran for about 4 months during its first run and 9 months during its second. Yet, fully half of their detections came in the final month: when VIRGO was running alongside it, too. In 2017, gravitational wave events were detected on:

    July 29th (50.6 and 34.3 solar mass black holes),
    August 9th (35.2 and 23.8 solar mass black holes),
    August 14th (30.7 and 25.3 solar mass black holes),
    August 17th (1.46 and 1.27 solar mass neutron stars),
    August 18th (35.5 and 26.8 solar mass black holes), and
    August 23rd (39.6 and 29.4 solar mass black holes).

    During this final month of observing, we were detecting more than one event per week. It’s possible that, as we become sensitive to greater distances and to smaller-amplitude, lower-mass signals, we may begin seeing as many as one event per day in 2019.

    Cataclysmic events occur throughout the galaxy and across the Universe, from supernovae to active black holes to merging neutron stars and more. When two black holes merge, their peak brightness is enough, for a few short milliseconds, to outshine all the stars in the observable Universe combined. (J. WISE/GEORGIA INSTITUTE OF TECHNOLOGY AND J. REGAN/DUBLIN CITY UNIVERSITY)

    3.) When the black holes we’ve detected collide, they release more energy at their peak than all the stars in the Universe combined. Our Sun is the standard by which we came to understand all other stars. It shines so brightly that its total energy output — 4 × 10²⁶ W — is equivalent to converting four million tons of matter into pure energy with every second that goes by.

    With an estimated ~10²³ stars in the observable Universe, the total power output of all the stars shining throughout the sky is greater than 10⁴⁹ W at any given time: a tremendous amount of energy spread out over all of space. But for a brief few milliseconds during the peak of a binary black hole merger, every one of the ten observed events outshone, in terms of energy, all the stars in the Universe combined (although some only by a relatively small amount). Unsurprisingly, the most massive merger tops the charts.
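The Sun's figure can be checked directly with E = mc²: dividing the luminosity by c² gives the mass converted to energy each second. A minimal sketch with approximate constants:

```python
C = 2.998e8     # speed of light, m/s
L_SUN = 4e26    # approximate solar luminosity, W

# E = m*c^2, so the mass converted per second is L / c^2
mass_rate_kg = L_SUN / C ** 2
mass_rate_tons = mass_rate_kg / 1000.0   # metric tons per second

print(f"{mass_rate_tons:.1e} tons of matter converted per second")
```
The result is a few million metric tons per second, consistent with the "four million tons" figure quoted above.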

    Even though black holes should have accretion disks, there aren’t any significant electromagnetic signals expected to be generated by a black hole-black hole merger. Their energy instead gets converted into gravitational radiation: ripples in the fabric of space itself. We see this radiation, and it’s the most energetic event to occur in the Universe when it happens. (AEI POTSDAM-GOLM)

    4.) About 5% of the total mass of both black holes gets converted into pure energy, via Einstein’s E = mc², during these mergers. The ripples in space that these black hole mergers produce need to get their energy from somewhere, and realistically, that has to come out of the mass of the merging black holes themselves. On average, based on the magnitude of the gravitational wave signals we’ve seen and the reconstructed distances to them, black holes lose about 5% of their total mass — having it converted into gravitational wave energy — when they merge.

    GW170608, the lowest mass black hole merger (of 10.9 and 7.6 solar masses), converted 0.9 solar masses into energy.
    GW150914, the first black hole merger (of 35.6 and 30.6 solar masses), converted 3.1 solar masses into energy.
    And GW170729, the most massive black hole merger (at 50.6 and 34.3 solar masses), converted 4.8 solar masses into energy.

    These events, creating ripples in spacetime, are the most energetic events we know of since the Big Bang. They produce more energy than any neutron star merger, gamma-ray burst, or supernova ever created.
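Those three data points can be checked with simple arithmetic: the radiated fraction is the lost mass divided by the total initial mass, and the energy follows from E = mc². A sketch using the masses quoted above and approximate physical constants:

```python
M_SUN = 1.989e30   # solar mass, kg
C = 2.998e8        # speed of light, m/s

# (m1, m2, mass radiated away), all in solar masses, as quoted above
events = {
    "GW170608": (10.9, 7.6, 0.9),
    "GW150914": (35.6, 30.6, 3.1),
    "GW170729": (50.6, 34.3, 4.8),
}

for name, (m1, m2, radiated) in events.items():
    fraction = radiated / (m1 + m2)      # comes out near 5% in every case
    energy = radiated * M_SUN * C ** 2   # joules, via E = mc^2
    print(f"{name}: {fraction:.1%} of the total mass, {energy:.2e} J")
```
For GW150914, for instance, 3.1 of 66.2 solar masses is about 4.7%, corresponding to roughly 5 × 10⁴⁷ joules radiated in milliseconds.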

    Illustrated here is the range of Advanced LIGO and its capability of detecting merging black holes. Merging neutron stars may have only one-tenth the range and 0.1% the volume, but we caught one, last year, just 130 million light years away. Additional black holes are likely present and merging, and perhaps run III of LIGO will find them. (LIGO COLLABORATION / AMBER STUVER / RICHARD POWELL / ATLAS OF THE UNIVERSE)

    5.) With everything we’ve seen so far, we fully expect there are lower-mass, more frequent black hole mergers just waiting to be seen. The most massive black hole mergers produce the largest-amplitude signals, and so are the easiest to spot. But with the way volume and distance are related, going twice as distant means encompassing eight times the volume. As LIGO gets more sensitive, it’s easier to spot massive objects at greater distances than low-mass objects that are close by.
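The volume-distance relationship invoked here is just the cube law: the volume a detector surveys grows as the cube of its detection range. A one-liner makes the point:

```python
def volume_ratio(range_ratio):
    """Surveyed volume grows as the cube of the detection range."""
    return range_ratio ** 3

print(volume_ratio(2))   # 8: doubling the range encompasses 8x the volume
print(volume_ratio(10))  # 1000
```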

    We know there are black holes of 7, 10, 15, and 20 solar masses out there, but it’s easier for LIGO to spot a more massive one farther away. We also expect black hole binaries with mismatched masses, where one component is much more massive than the other. As our sensitivities improve, more of these should come into reach, but the most massive systems will remain the easiest to find. We expect them to dominate the early searches, just as “hot Jupiters” dominated early exoplanet searches. As we get better at finding mergers in general, expect ever greater numbers of lower-mass black holes to turn up.

    LIGO and Virgo have discovered a new population of black holes with masses that are larger than what had been seen before with X-ray studies alone (purple). This plot shows the masses of all ten confident binary black hole mergers detected by LIGO/Virgo (blue). Also shown are neutron stars with known masses (yellow), and the component masses of the binary neutron star merger GW170817 (orange). (LIGO/VIRGO/NORTHWESTERN UNIV./FRANK ELAVSKY)

    When the first gravitational wave detection was announced, it was heralded as the birth of gravitational wave astronomy. People likened it to when Galileo first pointed his telescope at the skies, but it was so much more than that. It was as though our view of the gravitational wave sky had always been shrouded in clouds, and for the first time, we had developed a device to see through them if we got a bright enough gravitational source: merging black holes or neutron stars. The future of gravitational wave astronomy promises to revolutionize our Universe by letting us see it in a whole new way. And that future has already arrived; we are seeing the first fruits of our labor.

    This visualization shows the coalescence of two orbiting neutron stars. The right panel contains a visualization of the matter of the neutron stars. The left panel shows how space-time is distorted near the collisions. For black holes, there is no matter-generated signal expected, but thanks to LIGO and Virgo, we can still see the gravitational waves. (KARAN JANI/GEORGIA TECH)

    As our technology improves, we gain an ever-improved ability to see through those clouds: to see fainter, lower-mass, and more distant gravitational sources. When LIGO starts taking data again in 2019, we fully expect greater rates of ~30 solar mass black holes merging, but we hope to finally know what the lower-mass black holes are doing. We hope to see neutron star-black hole mergers. And we hope to go even farther out into the distant reaches of the Universe.

    Now that we’ve made it into the double digits for the number of detected events, it’s time to go even farther. With LIGO and VIRGO fully operational, and at better sensitivities than ever, we’re ready to go one step deeper in our exploration of the gravitational wave Universe. These merging, massive stellar remnants were just the start. It’s time to visit the stellar graveyard, and find out what the skeletons are truly like.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 2:59 pm on December 1, 2018 Permalink | Reply
    Tags: Ask Ethan: Are The Smallest Particles Of All Truly Fundamental?, Ethan Siegel

    From Ethan Siegel: “Ask Ethan: Are The Smallest Particles Of All Truly Fundamental?” 

    From Ethan Siegel
    Dec 1, 2018

    Going to smaller and smaller distance scales reveals more fundamental views of nature, which means if we can understand and describe the smallest scales, we can build our way to an understanding of the largest ones. (PERIMETER INSTITUTE)

    We can go to deeper and deeper levels, finding more fundamental quantities as we do. But is there a truly fundamental quantity?

    What is the Universe, at a fundamental level, truly made out of? Is there a smallest possible building block, or set of building blocks, out of which we can construct everything in our entire Universe, and which can never be divided into anything smaller? It’s a question that science can say a lot about, but it doesn’t necessarily give us the final, ultimate answer. It’s also the question that Paul Riggs wants us to look at for this edition of Ask Ethan:

    Is there theoretical or experimental evidence which unambiguously establishes the existence of fundamental particles?

    There is always room for uncertainty in physics, especially when it comes to speculating about what we’ll find in the future. But whether that ambiguity is reasonable or not is up to us to decide.

    If you wanted to know what the Universe was made of, how would you approach the problem? Thousands of years ago, imaginative ideation and the application of logic were the best tools we had. We knew about matter, but we had no way of knowing what composed it. It was hypothesized that there were a few fundamental ingredients that could be combined together — in various ways and under different conditions — to create everything that exists today.

    We could experimentally demonstrate that matter, whether solid, liquid, or gas, occupied space. We could show that it possessed mass. We could combine it into larger quantities or break it down into smaller ones. It’s only this last idea, however, of breaking the matter we can access down into smaller components, that led to the idea of what “fundamental” truly might be.

    From macroscopic scales down to subatomic ones, the sizes of the fundamental particles play only a small role in determining the sizes of composite structures. Whether the building blocks are truly fundamental and/or point-like particles is still not known. (MAGDALENA KOWALSKA / CERN / ISOLDE TEAM)

    Some thought matter might be made of different elements, such as fire, earth, air, and water. Others, such as the monists, thought that there was just one fundamental component of reality from which all others could be derived and assembled. Still others, such as the Pythagoreans, opined that there must be a geometric mathematical structure that set out the rules for reality to obey, and that the assembly of these structures led to the Universe we perceive today.

    The five Platonic solids are the only five perfectly regular, convex solids in three dimensions, with faces made of identical, regular 2D polygons. Many early scientists equated these five solids to the five fundamental elements. It’s a nice idea, but it doesn’t come close to the standards of modern science. (ENGLISH WIKIPEDIA PAGE FOR PLATONIC SOLIDS)

    The idea that there was a truly fundamental particle, though, goes back to Democritus of Abdera, some 2400 years ago. Although it was merely an idea, Democritus held that all of matter was made of indivisible particles that he referred to as atoms (ἄτομος), meaning “uncuttable,” that combined together amidst a backdrop of otherwise empty space. Although his ideas contained many other irrelevant and bizarre details, the notion of fundamental particles persisted.

    Individual protons and neutrons may be colorless entities, but there is still a residual strong force between them. All the known matter in the Universe can be divided into atoms, which can be divided into nuclei and electrons, where nuclei can be divided even farther. We may not have even yet reached the limit of division, or the ability to cut a particle into multiple components. (WIKIMEDIA COMMONS USER MANISHEARTH)

    Take whatever piece of matter you want and try cutting it. Try breaking it up into a smaller and smaller component. Every time you succeed, try cutting it again, until you have to go beyond even the idea of cutting to arrive at the next layer. Macroscopic objects become microscopic ones; complex compounds become simple molecules; molecules become atoms; atoms become electrons and atomic nuclei; atomic nuclei become protons and neutrons, which themselves divide into quarks and gluons.

    At the smallest level imaginable, we can reduce everything we know of into fundamental, indivisible, particle-like entities: the quarks, leptons, and bosons of the Standard Model.

    The particles and antiparticles of the Standard Model have now all been directly detected, with the last holdout, the Higgs Boson, falling at the LHC earlier this decade. All of these particles can be created at LHC energies, and the masses of the particles lead to fundamental constants that are absolutely necessary to describe them fully. These particles can be well-described by the physics of the quantum field theories underlying the Standard Model, but whether they are fundamental is not yet known. (E. SIEGEL / BEYOND THE GALAXY)

    As far as physical sizes go, we have the rules of quantum physics to guide us. Every quantum in the Universe — any structure with a non-zero energy to it — carries a certain amount of energy. Because everything that exists can be described as both particle-like and wave-like in nature, you can place limits and constraints on a physical size for any such quantum.

    While molecules might be good descriptors of reality at the nanometer (10^-9 meter) scale, and atoms are good at Angstrom (10^-10 meter) scales, atomic nuclei are even smaller, with individual protons and neutrons getting down to femtometer (10^-15 meter) scales. The Standard Model particles are smaller still: at the energies we’ve probed, we can safely say that all the known particles are point-like and structure-free down to 10^-19 meter scales.
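The quoted length scales follow from the quantum relationship between probe energy and wavelength, λ ≈ ħc/E: higher energies resolve smaller distances. A rough sketch (the energies plugged in are illustrative round numbers, not the exact partonic energies any experiment reached):

```python
HBAR_C = 197.327e6 * 1e-15   # hbar*c ~ 197.327 MeV*fm, expressed in eV*m

def probe_length_m(energy_ev):
    """Approximate smallest length scale resolvable at a given probe energy."""
    return HBAR_C / energy_ev

print(probe_length_m(1e12))   # ~2e-19 m at TeV-scale energies
print(probe_length_m(1e20))   # ~2e-27 m at extreme cosmic-ray energies
```
This is why the text can jump from 10^-19 meters (collider constraints) toward 10^-26 meters (the most energetic cosmic rays): each factor of 10 in energy buys a factor of 10 in resolution.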

    A candidate Higgs event in the ATLAS detector. Note how even with the clear signatures and transverse tracks, there is a shower of other particles; this is due to the fact that protons are composite particles. This is only the case because the Higgs gives mass to the fundamental constituents that compose these particles. At high enough energies, the currently most-fundamental particles known may yet split apart themselves. (THE ATLAS COLLABORATION / CERN)

    To the best of our experimental knowledge, these are what we equate to being truly fundamental in nature. The particles and antiparticles and bosons of the Standard Model appear to be fundamental, from both an experimental and theoretical perspective. As we go to higher and higher particle energies, we can probe the structure of reality to even greater levels.

    The Large Hadron Collider offers the best constraints to date, but future colliders or extremely sensitive cosmic ray experiments could take us many orders of magnitude farther: to scales of 10^-21 meters for the most energetic terrestrial colliders and potentially all the way down to 10^-26 meters for the most extreme-energy cosmic rays.

    The objects we’ve interacted with in the Universe range from very large, cosmic scales down to about 10^-19 meters, with the newest record set by the LHC. There’s a long, long way down (in size) and up (in energy) to the scales that the hot Big Bang achieves, which is only about a factor of ~1000 lower than the Planck energy. If the Standard Model particles are composite in nature, higher energy probes may reveal that. (UNIVERSITY OF NEW SOUTH WALES / SCHOOL OF PHYSICS)

    Even at that, though, these ideas only impose limits on what we know and can say. They tell us that if we collide a particle (or antiparticle, or photon) with a certain amount of energy to it with another particle at rest, the particle that gets struck will behave in a fundamentally point-like fashion to within the limits of our experiments, detectors, and attainable energies. These experiments set an empirical limit on how large a presently thought-to-be fundamental particle can be, and are collectively known as deep inelastic scattering experiments.

    When you collide any two particles together, you probe the internal structure of the particles colliding. If one of them isn’t fundamental, but is rather a composite particle, these experiments can reveal its internal structure. Here, an experiment is designed to measure the dark matter/nucleon scattering signal. However, there are many mundane, background contributions that could give a similar result. This particular signal will show up in Germanium, liquid XENON and liquid ARGON detectors. (DARK MATTER OVERVIEW: COLLIDER, DIRECT AND INDIRECT DETECTION SEARCHES — QUEIROZ, FARINALDO S. ARXIV:1605.08788)

    But does this mean that these particles are truly fundamental? Not at all. They could be:

    further divisible, meaning that they could be broken up into smaller sub-components,
    or they could be resonances of one another, where the heavier “cousins” of the lightest particles are either excited states or composite versions of the lighter ones,
    or these particles could all be not “particles” at all, but rather apparent particles with a deeper, underlying structure.

    These ideas abound in scenarios like technicolor (which is constrained since the discovery of the Higgs boson, but not ruled out), but are most prominently represented by String Theory.

    Feynman diagrams (top) are based on point particles and their interactions. Converting them into their string theory analogues (bottom) gives rise to surfaces which can have non-trivial curvature. In string theory, all particles are simply different vibrating modes of an underlying, more fundamental structure: strings. (PHYS. TODAY 68, 11, 38 (2015))

    There is no immutable law requiring that everything be made out of particles at all. Particle-based reality is a theoretical idea that is supported by and is consistent with experiments, but our experiments are limited in energy and the kind of information they can tell us about fundamental reality. In a scenario like String Theory, everything that we call a “fundamental particle” today might be nothing more than a string, vibrating or rotating at a certain frequency, with either an open nature (where the two ends are unattached) or a closed nature (where the two ends are attached to one another). Strings can snap, creating two quanta where one existed previously, or combine, creating a single quantum from two pre-existing ones.

    At a fundamental level, there is no requirement that the components of our Universe be zero-dimensional, point-like particles.

    Quantum gravity tries to combine Einstein’s general theory of relativity with quantum mechanics. Quantum corrections to classical gravity are visualized as loop diagrams, as the one shown here in white. Whether space (or time) itself is discrete or continuous is not yet decided, as is the question of whether gravity is quantized at all, or particles, as we know them today, are fundamental or not. (SLAC NATIONAL ACCELERATOR LAB)

    There are many scenarios where the undiscovered mysteries of our Universe, such as dark matter and dark energy, aren’t made of particles at all, but rather are either some type of fluid or property of space. The nature of space and time themselves is not yet known; they could be fundamentally quantum or non-quantum in nature; they could be discrete (capable of being broken up into chunks) or continuous.

    The particles we know of today, that we assume are fundamental today, could either have a finite, non-zero size in one or more dimensions, or they could be truly point-like, potentially all the way down to the Planck length or even, conceivably, smaller.

    Instead of an empty, blank, 3D grid, putting a mass down causes what would have been ‘straight’ lines to instead become curved by a specific amount. In General Relativity, we treat space and time as continuous, and masses/particles as discrete and fundamental. Neither one of these is necessarily the case. (CHRISTOPHER VITALE OF NETWORKOLOGIES AND THE PRATT INSTITUTE)

    The most important thing you should take away from this question — of whether truly fundamental particles exist or not — is that everything we know in science is only provisional. There is nothing that we know so well or so solidly that it is immutable. All of our scientific knowledge is merely the best approximation of reality that we’ve been able to construct at present. The theories that best describe our Universe might explain all the phenomena we can observe, they might make new, powerful, testable predictions, and they might even be unchallenged by any alternatives we know of at present.

    But that does not mean they are correct in any absolute sense. Science is always seeking to collect more data, explore new territory and scenarios, and to revise itself if ever a conflict arises. The particles we know of look fundamental today, but that’s no guarantee that nature will continue to indicate the existence of fundamental particles the deeper we learn to look.

    See the full article here.



  • richardmitnick 12:47 pm on November 8, 2018 Permalink | Reply
    Tags: Ethan Siegel, This Is Why There Are No Alternatives To The Big Bang

    From Ethan Siegel: “This Is Why There Are No Alternatives To The Big Bang” 

    From Ethan Siegel
    Sep 11, 2018

    A visual history of the expanding Universe includes the hot, dense state known as the Big Bang and the growth and formation of structure subsequently. The full suite of data, including the observations of the light elements and the Cosmic Microwave Background, leaves only the Big Bang as a valid explanation for all we see. (NASA / CXC / M. WEISS)

    Not everyone is satisfied with the Big Bang. But every alternative is a disastrous failure.

    It’s treated as though it’s an unassailable scientific truth: 13.8 billion years ago, the Universe as we know it emerged from a hot, dense state known as the Big Bang. While there were a number of serious alternatives considered for decades, throughout the 20th century, a scientific consensus emerged more than 50 years ago with the discovery of the Cosmic Microwave Background [CMB].

    CMB per ESA/Planck

    Despite many attempts to revive a variety of the discredited ideas, as well as attempts to formulate new possibilities, all have fallen away under the burden of the full suite of astronomical data. The Big Bang reigns supreme as the only valid theory of our cosmic origins.

    Here’s how we discovered our Universe started with a bang.

    The expanding Universe, full of galaxies and the complex structure we observe today, arose from a smaller, hotter, denser, more uniform state. It took thousands of scientists working for hundreds of years for us to arrive at this picture, and yet the lack of viable alternatives isn’t a flaw, but a feature of how successful the Big Bang truly is. (C. FAUCHER-GIGUÈRE, A. LIDZ, AND L. HERNQUIST, SCIENCE 319, 5859 (47))

    A suite of new discoveries in the early 20th century revolutionized our view of the Universe. In 1923, Edwin Hubble identified individual stars in spiral nebulae, measuring their variable periods and their observed brightnesses.

    Edwin Hubble at Caltech Palomar Samuel Oschin 48 inch Telescope, (credit: Emilio Segre Visual Archives/AIP/SPL)

    Thanks to the work of Henrietta Leavitt in formulating Leavitt’s law, which related such a star’s variable period to its intrinsic brightness, we obtained distance measurements to the galaxies that housed them. These galaxies were well outside our own Milky Way, with most residing millions of light years away.

    Henrietta Swan Leavitt discovered a relationship between the period of a star’s brightness cycle and its absolute magnitude. The discovery made it possible to calculate these stars’ distances from Earth.

    Hubble’s discovery of a Cepheid variable in the Andromeda galaxy, M31, opened up the Universe to us, giving us the observational evidence we needed for galaxies beyond the Milky Way and leading to the expanding Universe. (E. HUBBLE, NASA, ESA, R. GENDLER, Z. LEVAY AND THE HUBBLE HERITAGE TEAM)
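The chain from Leavitt's law to a distance runs through the distance modulus, m − M = 5 log₁₀(d/10 pc): the period gives the absolute magnitude M, the telescope gives the apparent magnitude m, and the difference gives the distance. A sketch with a hypothetical Cepheid (the magnitudes here are invented for illustration):

```python
def distance_pc(apparent_mag, absolute_mag):
    """Invert the distance modulus: m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical Cepheid: suppose Leavitt's law gives absolute magnitude -4
# from its period, and we observe it at apparent magnitude 20.
d = distance_pc(20.0, -4.0)
print(d / 1e6)   # distance in megaparsecs: well outside the Milky Way
```
For these illustrative numbers the distance comes out around 0.6 Mpc, roughly two million light years, which is the kind of figure that put the spiral nebulae decisively beyond our galaxy.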

    Combined with redshift measurements, we were able to discover an important relationship: the farther away a galaxy appeared to be from us, the greater its redshift was measured to be. A number of possible explanations were advanced: perhaps the light from these objects lost energy as it traveled through space, or perhaps the more distant galaxies were moving away faster than the nearer ones, as though they all originated from an explosion.

    However, one explanation emerged as the most compelling: the Universe was expanding. This explanation was consistent with the predictions of General Relativity, as well as with the large-scale smoothness observed in all directions and locations. As more galaxies at greater distances were discovered, this picture was validated further. The Universe was expanding.

    The farther a galaxy is, the faster it expands away from us, and the more its light appears redshifted. A galaxy moving with the expanding Universe will be a greater number of light years away, today, than the number of years (multiplied by the speed of light) that it took the light emitted from it to reach us. (LARRY MCNISH OF RASC CALGARY CENTER)
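The distance-redshift relationship described here is captured, for nearby galaxies, by Hubble's law, v = H₀d: recession speed grows linearly with distance. A minimal sketch (the value of H₀ is approximate, and the linear law only holds at modest distances):

```python
H0 = 70.0   # Hubble constant, km/s per Mpc (approximate modern value)

def recession_speed_km_s(distance_mpc):
    """Hubble's law for nearby galaxies: v = H0 * d."""
    return H0 * distance_mpc

print(recession_speed_km_s(10))    # 700 km/s
print(recession_speed_km_s(100))   # 7000 km/s: 10x the distance, 10x the speed
```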

    Again, multiple valid explanations, even in the context of General Relativity, emerged. Sure, if the Universe were expanding in all directions, then we’d see distant objects moving away from us, with the more distant objects appearing to recede more rapidly. But this could be:

    because the objects also had large, unmeasurable transverse motions, as though the Universe were rotating as well,
    or because the Universe was oscillating, and if we looked far enough, we would see the expansion reverse,
    or because the expansion caused the slow creation of new matter, resulting in a Universe that appeared unchanging in time,
    or because the Universe originated from a hot, dense state.

    Only this last option represents the hot Big Bang.

    As far back as humanity has ever seen in the Universe, just a few hundred million years after the Big Bang, we still know that the very first stars and galaxies should have existed before even that. Our picture of the Big Bang, General Relativity, the seeds of structure formation, and much more, all forms a consistent picture that tells us we are not quite yet at the beginning. (NASA, ESA, AND A. FEILD (STSCI))

    But if the idea of the Big Bang were correct, there would be a slew of new predictions that should arise. The expanding Universe, in the context of General Relativity, was the first, but there were three other, major ones that would lead to different observable consequences from the alternatives.

    Galaxies comparable to the present-day Milky Way are numerous, but younger galaxies that are Milky Way-like are inherently smaller, bluer, more chaotic, and richer in gas in general than the galaxies we see today. For the first galaxies of all, this ought to be taken to the extreme. (NASA AND ESA)

    The first is that if the Universe originated from an arbitrarily hot, dense, and more uniform state to expand-and-cool to what we see today, then as we look farther away, we’re looking back in time, and we should see the Universe as it was when it was younger. We should see, therefore, galaxies that were smaller, less massive, and made up of younger, bluer stars at great distances, before arriving at a time where there were no stars or galaxies at all.

    A Universe where electrons and protons are free and collide with photons transitions to a neutral one that’s transparent to photons as the Universe expands and cools. Shown here is the ionized plasma (L) before the CMB is emitted, followed by the transition to a neutral Universe (R) that’s transparent to photons. It’s the spectacular two-photon transition in a hydrogen atom which enables the Universe to become neutral exactly as we observe it. (AMANDA YOHO)

    The second, extrapolating even farther back, would be that there should be a time when the Universe was so hot and energetic that not even neutral atoms could form. At some very early stage, therefore, the Universe transitioned from an ionized plasma to one filled with neutral atoms. Any radiation that was around at that early stage should simply stream to our eyes, affected only by the expansion of the Universe.

    Arno Penzias and Robert Wilson, AT&T, Holmdel, NJ USA, with the Holmdel horn antenna, first caught the faint echo of the Big Bang

    Based on the temperature at which atoms become neutral vs. ionized, we expect this radiation to be just a few degrees above absolute zero, shifting it into the microwave portion of the spectrum today. This is where the term Cosmic Microwave Background comes from. Furthermore, because it had a thermal origin but redshifted with the expanding Universe, we also expect it to exhibit a particular shape to its spectrum: a blackbody spectrum. The radiation background was initially detected at right around 3 K, and has since had measurements refined so that we not only know it to be 2.7255 K, but that its spectrum is definitively blackbody and not consistent with an explanation of reflected starlight. (Which could be accommodated by one of the alternative explanations.)
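That the leftover radiation should peak in the microwave band follows directly from Wien's displacement law, λ_peak = b/T, applied to the measured temperature:

```python
WIEN_B = 2.898e-3   # Wien displacement constant, m*K

def peak_wavelength_m(temperature_k):
    """Wien's law: the wavelength where a blackbody at temperature T peaks."""
    return WIEN_B / temperature_k

print(peak_wavelength_m(2.7255) * 1e3)   # ~1.06 mm: squarely in the microwave band
```
A 2.7255 K blackbody peaks at about a millimeter, which is exactly why the background is "microwave" rather than optical or radio.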

    Long before the data from BOOMERanG came back, the measurement of the spectrum of the CMB, from COBE, demonstrated that the leftover glow from the Big Bang was a perfect blackbody, something that reflected starlight (as the quasi-steady-state model predicted) could not explain. (E. SIEGEL / BEYOND THE GALAXY)


    NASA/COBE 1989 to 1993.

    Finally, there’s a third prediction: that based on the early history of the Universe, elements should have been forged by nuclear fusion in particular ratios. Today, this should mean that before any stars were formed, the Universe should have been about:

    75% hydrogen (by mass),
    25% helium-4,
    0.01% deuterium,
    0.01% helium-3, and
    1-part-in-a-billion lithium-7.

    That’s it; there should have been no elements heavier than that. Hydrogen, helium, a little bit of isotopes of each, and a tiny bit of lithium.
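The ~25% helium figure has a simple origin: by the time nucleosynthesis begins, there is roughly one neutron for every seven protons, and essentially every neutron ends up locked inside a helium-4 nucleus (two protons plus two neutrons). A sketch of that arithmetic:

```python
def helium_mass_fraction(n_to_p):
    """With virtually all neutrons bound into He-4 (2 protons + 2 neutrons),
    the helium mass fraction is Y_p = 2*(n/p) / (1 + n/p)."""
    return 2 * n_to_p / (1 + n_to_p)

print(helium_mass_fraction(1 / 7))   # ~0.25: the predicted 25% helium by mass
```
With a 1:7 neutron-to-proton ratio, out of every 16 nucleons, 2 neutrons pair with 2 protons to make one He-4 (4 of the 16 by mass), and the remaining 12 protons stay as hydrogen: 4/16 = 25% helium, 75% hydrogen.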

    The predicted abundances of helium-4, deuterium, helium-3 and lithium-7 as predicted by Big Bang Nucleosynthesis, with observations shown in the red circles. The Universe is 75–76% hydrogen, 24–25% helium, a little bit of deuterium and helium-3, and a trace amount of lithium by mass. After tritium and beryllium decay away, this is what we’re left with, and this remains unchanged until stars form. (NASA / WMAP SCIENCE TEAM)


    NASA/WMAP 2001 to 2010

    Observationally, this has been confirmed as well. Distant light, either from early galaxies or distant quasars, gets absorbed by intervening clouds of gas, allowing us to probe the contents of that gas. In 2011, we discovered two pristine clouds of gas, detecting hydrogen and helium in the exact, predicted ratios, and discovering (for the first time) a population of gas that had no oxygen or carbon: the first products of newly-formed stars.

    The absorption spectra of different populations of gas (L) allow us to derive the relative abundances of elements and isotopes (center). In 2011, two distant gas clouds containing no heavy elements and a pristine deuterium-to-hydrogen ratio (R) were discovered for the first time. (MICHELE FUMAGALLI, JOHN M. O’MEARA, AND J. XAVIER PROCHASKA, VIA ARXIV.ORG/ABS/1111.2334)

    The only way to arrive at the Cosmic Microwave Background with the uniformity, spectrum, and temperature it possesses is to posit a hot, thermal origin for it in the context of the expanding Universe. This was conjectured back in the 1940s by George Gamow and his collaborators, first observed in the 1960s by Arno Penzias and Bob Wilson, and had its spectrum definitively proven to be blackbody in the 1990s with the COBE satellite.
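    Because the radiation's temperature scales with the expansion as T(z) = T₀ × (1 + z), we can run today's 2.7255 K backwards to the era when atoms became neutral. Using the standard recombination redshift of z ≈ 1089 (a conventional value, not quoted in the article), the sketch below recovers the few-thousand-kelvin temperature at which the CMB was released:

```python
# CMB temperature scales with cosmic expansion: T(z) = T0 * (1 + z).
T0 = 2.7255      # CMB temperature today, in K
z_rec = 1089     # approximate redshift of recombination (standard value)

T_rec = T0 * (1 + z_rec)
print(f"Temperature at recombination: ~{T_rec:.0f} K")  # ~2971 K
```

    A temperature of roughly 3000 K is just cool enough for electrons to bind stably to nuclei, which is why the Universe became transparent at that moment and the thermal glow was set free.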

    The large-scale structure of the Universe has been determined through all-sky surveys and deep field measurements with ground-and-space-based observatories, and has revealed a Universe consistent with the Big Bang and not with the alternatives. And the evolution of the elemental abundances, from metal-free early stages to metal-poor intermediate stages to the late-time, metal-rich stages that we observe today, all demonstrate the validity of the Big Bang.

    There are now many independent observations of pristine gas from shortly after the Big Bang, showcasing the sensitive deuterium quantities relative to hydrogen. The agreement between observation and the theoretical predictions of the Big Bang is another victory for our best model of the Universe’s origin. (S. RIEMER-SØRENSEN AND E. S. JENSSEN, UNIVERSE 2017, 3(2), 44)

    If you can come up with an alternative explanation for these four observations, you will have the start of a viable alternative to the Big Bang. Explain the observed expansion of the Universe, the large-scale structure and the evolution of galaxies, the Cosmic Microwave Background along with its temperature and spectral properties, and the relative abundances and evolution of the elements in the Universe, and you’ll challenge the theory of our cosmic beginnings.

    After the Big Bang, the Universe was almost perfectly uniform, and full of matter, energy and radiation in a rapidly expanding state. As time goes on, the Universe not only forms elements, atoms, and clumps and clusters together that lead to stars and galaxies, but expands and cools the entire time. No alternative can match it. (NASA / GSFC)

    For more than 50 years, no alternative has been able to deliver on all four counts. No alternative can even deliver the Cosmic Microwave Background as we see it today. It isn’t for lack of trying or a lack of good ideas; it’s because this is what the data indicates. Scientists don’t believe in the Big Bang; they conclude it based on the full suite of observations. The last adherents to the ancient, discredited alternatives are at last dying away. The Big Bang is no longer a revolutionary endpoint of the scientific enterprise; it’s the solid foundation we build upon. Its predictive successes have been overwhelming, and no alternative has yet stepped up to the challenge of matching its scientific accuracy in describing the Universe.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan
