Tagged: Ethan Siegel

  • richardmitnick 8:20 am on July 2, 2019
    Tags: "Meet The Largest X-Ray Jet In The Universe", , , , , Ethan Siegel, , The active galaxy Pictor A   

    From Ethan Siegel: “Meet The Largest X-Ray Jet In The Universe” 

    From Ethan Siegel
    July 1, 2019

    Discovered by NASA’s Chandra X-ray observatory, it’s powered by a supermassive black hole.

    2019 marks 20 years of NASA’s Chandra, humanity’s most powerful X-ray observatory.

    Artist’s illustration of the Chandra X-ray Observatory. Chandra is the most sensitive X-ray telescope ever built, and its mission was extended through at least 2024 as the flagship X-ray observatory in the NASA arsenal. (NASA/CXC/NGST TEAM)

    It has viewed everything from pulsars to colliding gas clouds to galaxy clusters and supermassive black holes.

    A map of the 7 million second exposure of the Chandra Deep Field-South. This region shows hundreds of supermassive black holes, each one in a galaxy far beyond our own. The GOODS-South field, a Hubble project, was chosen to be centered on this original image. Its view of supermassive black holes is only one incredible application of NASA’s Chandra X-ray Observatory. (NASA/CXC/B. LUO ET AL., 2017, APJS, 228, 2)

    In 2015, it set its sights on a galaxy some 485 million light-years away: the radio-loud behemoth known as Pictor A.

    The jet of the active galaxy Pictor A, with X-rays in blue and radio lobes in pink. When galaxies merge together, they’re expected to activate similarly to how this one has. (X-RAY: NASA/CXC/UNIV OF HERTFORDSHIRE/M.HARDCASTLE ET AL., RADIO: CSIRO/ATNF/ATCA)

    When Chandra took a look at it with its X-ray eyes, it saw something unprecedented and spectacular: a jet 300,000 light-years long.

    The X-ray (B&W) and radio (red contours) emissions from the galaxy Pictor A. The greyscale image shows all the X-rays emitted with 500 to 5000 eV of energy, more than enough to ionize any atoms or molecules it encounters. The red contours are radio data shown superimposed atop the X-ray data. (M.J. HARDCASTLE ET AL. (2015), FROM ARXIV.ORG/ABS/1510.08392)

    Like all known active galaxies, Pictor A is powered by a supermassive black hole many millions to billions of times our Sun’s mass.

    The galaxy Centaurus A is the closest example of an active galaxy to Earth, with its high-energy jets caused by electromagnetic acceleration around the central black hole. The extent of its jets is far smaller than the jets that Chandra has observed around Pictor A. (NASA/CXC/CFA/R.KRAFT ET AL.)

    Black holes can accelerate and eject infalling matter, leading to intense emissions.

    A black hole more than six billion times the mass of the Sun powers the X-ray jet at the center of M87, which is many thousands of light-years in extent. If this image looks familiar, it might be: M87 is the first galaxy to have its event horizon imaged directly, owing to the incredible collaborative work of scientists working on the Event Horizon Telescope. (NASA/HUBBLE/WIKISKY)

    The light released spans the spectrum from high-energy X-rays to low-energy radio waves.

    Appearing on a scale far greater than the scale of the galaxy itself, the jet emitted from Pictor A can be seen in the data at various points, thanks to the interactions between these high-energy emissions and the gas in the surrounding environment of the galaxy itself. The ‘hot spot’ at the end of the jet can be seen at the far right of the upper view of this image. (M.J. HARDCASTLE ET AL. (2015), FROM ARXIV.ORG/ABS/1510.08392)

    The radio lobes of gas provide a medium for high-energy X-rays to interact with.

    While distant host galaxies for quasars and active galactic nuclei can often be imaged in visible/infrared light, the jets themselves and the surrounding emission are best viewed in both the X-ray and the radio, as illustrated here for the galaxy Hercules A. The gaseous outflows are highlighted in the radio, and if X-ray emissions follow the same path into the gas, they can be responsible for creating hot spots owing to the acceleration of electrons. (NASA, ESA, S. BAUM AND C. O’DEA (RIT), R. PERLEY AND W. COTTON (NRAO/AUI/NSF), AND THE HUBBLE HERITAGE TEAM (STSCI/AURA))

    When these interactions cause electrons to exceed the speed of sound in the gaseous medium, they create intense shock waves.
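    The "exceeds the speed of sound" condition can be made concrete. For an ionized plasma, the adiabatic sound speed is c_s = sqrt(γ·k_B·T / (μ·m_p)). The sketch below uses an illustrative temperature (not a value from the article) and an assumed mean molecular weight to show that even very hot gas has a sound speed of only a few hundred km/s, far below the near-light speeds of jet electrons:

    ```python
    import math

    k_B = 1.381e-23    # Boltzmann constant, J/K
    m_p = 1.673e-27    # proton mass, kg
    gamma = 5.0 / 3.0  # adiabatic index for a monatomic gas
    mu = 0.6           # mean molecular weight for an ionized H/He plasma (assumed)

    def sound_speed_kms(T_kelvin):
        """Adiabatic sound speed of an ideal ionized plasma, in km/s."""
        return math.sqrt(gamma * k_B * T_kelvin / (mu * m_p)) / 1e3

    # Illustrative: gas at ten million kelvin, typical of X-ray-emitting plasma,
    # has a sound speed of only a few hundred km/s. Electrons moving at an
    # appreciable fraction of the speed of light (~3e5 km/s) are wildly supersonic,
    # which is why shocks (and hot spots) form where the jet meets the gas.
    cs = sound_speed_kms(1e7)
    ```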

    An annotated version of the X-ray/radio composite image of Pictor A, showing the counterjet, the Hot Spot, and many other fascinating features. (X-RAY: NASA/CXC/UNIV OF HERTFORDSHIRE/M.HARDCASTLE ET AL., RADIO: CSIRO/ATNF/ATCA)

    The “hot spot” illustrated on the above NASA image is the definitive evidence of the jet-like nature of these X-rays and accelerated electrons.

    Artist’s impression of an active galactic nucleus. The supermassive black hole at the center of the accretion disk sends a narrow high-energy jet of matter into space, perpendicular to the disc. A blazar about 4 billion light years away is the origin of many of the highest-energy cosmic rays and neutrinos, but even the full suite of active galaxies cannot compete with Pictor A in terms of raw size of the X-ray jet. (DESY, SCIENCE COMMUNICATION LAB)

    Alternative explanations involving boosted CMB photons have been ruled out.

    The most distant X-ray jet in the Universe, from quasar GB 1428, located 12.4 billion light years from Earth. This jet’s X-rays arise from relativistic electrons boosting CMB photons to higher energies, but that mechanism is ruled out for Pictor A. (X-RAY: NASA/CXC/NRC/C.CHEUNG ET AL; OPTICAL: NASA/STSCI; RADIO: NSF/NRAO/VLA)

    Pictor A possesses the largest X-ray jet in the known Universe.

    Despite many years of observations, we still don’t know whether the galaxy Pictor A, shown as viewed in optical light (main) and ultraviolet light (inset), is a spiral, elliptical, or irregular galaxy. Superior observations of the galaxy itself have yet to be acquired. (DIGITIZED SKY SURVEY 2 (MAIN); NASA/GALEX (INSET))

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 11:17 am on June 25, 2019
    Tags: "Is This The Most Massive Star In The Universe?", , , , , Ethan Siegel   

    From Ethan Siegel: “Is This The Most Massive Star In The Universe?” 

    From Ethan Siegel

    June 24, 2019

    The largest group of newborn stars in our Local Group of galaxies, cluster R136, contains the most massive stars we’ve ever discovered: over 250 times the mass of our Sun for the largest. The brightest of the stars found here are more than 8,000,000 times as luminous as our Sun. And yet, there are still likely even more massive ones out there. (NASA, ESA, AND F. PARESCE, INAF-IASF, BOLOGNA, R. O’CONNELL, UNIVERSITY OF VIRGINIA, CHARLOTTESVILLE, AND THE WIDE FIELD CAMERA 3 SCIENCE OVERSIGHT COMMITTEE)

    At the core of the largest star-forming region of the Local Group sits the biggest star we know of.

    Mass is the single most important astronomical property in determining the lives of stars.

    The (modern) Morgan–Keenan spectral classification system, with the temperature range of each star class shown above it, in kelvin. Our Sun is a G-class star, producing light with an effective temperature of around 5800 K and a brightness of 1 solar luminosity. Stars can be as low in mass as 8% the mass of our Sun, where they’ll burn with ~0.01% our Sun’s brightness and live for more than 1000 times as long, but they can also rise to hundreds of times our Sun’s mass, with millions of times our Sun’s luminosity. (WIKIMEDIA COMMONS USER LUCASVB, ADDITIONS BY E. SIEGEL)

    Greater masses generally lead to higher temperatures, greater brightnesses, and shorter lifetimes.
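    The mass-luminosity-lifetime connection behind this statement can be sketched numerically. The scalings below (L ∝ M^3.5, lifetime ∝ M^−2.5, normalized to the Sun) are rough textbook approximations, not the detailed stellar models astronomers actually use; the exponents flatten for the most massive stars:

    ```python
    def main_sequence_luminosity(mass_solar):
        """Approximate main-sequence luminosity in solar units, L ~ M**3.5.

        A rough textbook scaling, illustrative only; the exponent
        flattens for the most massive stars."""
        return mass_solar ** 3.5

    def main_sequence_lifetime_gyr(mass_solar):
        """Approximate lifetime: fuel / burn rate ~ M / L ~ M**-2.5,
        normalized to roughly 10 Gyr for the Sun."""
        return 10.0 * mass_solar ** -2.5

    # A 20-solar-mass star shines tens of thousands of times brighter than
    # the Sun, but lives only a few million years -- which is why the most
    # massive stars are only found in regions of active star formation.
    lum_20 = main_sequence_luminosity(20.0)
    life_20 = main_sequence_lifetime_gyr(20.0)
    ```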

    The active star-forming region, NGC 2363, is located in a nearby galaxy just 10 million light-years away. The brightest star visible here is NGC 2363-V1, visible as the isolated, bright star in the dark void at left. Despite being 6,300,000 times as bright as our Sun, it’s only 20 times as massive, having likely brightened recently as the result of an outburst. (LAURENT DRISSEN, JEAN-RENE ROY AND CARMELLE ROBERT (DEPARTMENT DE PHYSIQUE AND OBSERVATOIRE DU MONT MEGANTIC, UNIVERSITE LAVAL) AND NASA)

    Since massive stars burn through their fuel so quickly, the record holders are found in actively star-forming regions.

    The ‘supernova impostor’ of the 19th century precipitated a gigantic eruption, spewing many Suns’ worth of material into the interstellar medium from Eta Carinae. High mass stars like this within metal-rich galaxies, like our own, eject large fractions of mass in a way that stars within smaller, lower-metallicity galaxies do not. Eta Carinae might be over 100 times the mass of our Sun and is found in the Carina Nebula, but it is not among the most massive stars in the Universe. (NATHAN SMITH (UNIVERSITY OF CALIFORNIA, BERKELEY), AND NASA)

    Luminosity isn’t enough, as short-lived outbursts can cause exceptional, temporary brightening in typically massive stars.

    The star cluster NGC 3603 is located a little over 20,000 light-years away in our own Milky Way galaxy. The most massive star inside it is NGC 3603-B, a Wolf-Rayet star located at the centre of the HD 97950 cluster, which is contained within the larger, overall star-forming region. (NASA, ESA AND WOLFGANG BRANDNER (MPIA), BOYKE ROCHAU (MPIA) AND ANDREA STOLTE (UNIVERSITY OF COLOGNE))

    Within our own Milky Way, massive star-forming regions, like NGC 3603, house many stars over 100 times our Sun’s mass.

    The star at the center of the Heart Nebula (IC 1805) is known as HD 15558, which is a massive O-class star that is also a member of a binary system. With a directly-measured mass of 152 solar masses, it is the most massive star we know of whose value is determined directly, rather than through evolutionary inferences. (S58Y / FLICKR)

    As a member of a binary system, HD 15558 A is the most massive star with a definitive value: 152 solar masses.

    The Large Magellanic Cloud, the fourth largest galaxy in our local group, with the giant star-forming region of the Tarantula Nebula (30 Doradus) just to the right and below the main galaxy. It is the largest star-forming region contained within our Local Group. (NASA, FROM WIKIMEDIA COMMONS USER ALFA PYXISDIS)

    However, all current stellar mass records come from the star-forming region 30 Doradus in the Large Magellanic Cloud.

    A large section of the Tarantula Nebula, the largest star-forming region in the Local Group, imaged by the Ciel Austral team. At top, you can see the presence of hydrogen, sulfur, and oxygen, which reveals the rich gas and plasma structure of the LMC, while the lower view shows an RGB color composite, revealing reflection and emission nebulae. (CIEL AUSTRAL: JEAN CLAUDE CANONNE, PHILIPPE BERNHARD, DIDIER CHAPLAIN, NICOLAS OUTTERS AND LAURENT BOURGON)

    Known as the Tarantula Nebula, it has a mass of ~450,000 Suns and contains over 10,000 stars.

    The star-forming region 30 Doradus, in the Tarantula Nebula in one of the Milky Way’s satellite galaxies, contains the largest, highest-mass stars known to humanity. The largest collection of bright, blue stars shown here is the ultra-dense star cluster R136, which contains nearly 100 stars that are approximately 100 solar masses or greater. Many of them have brightnesses that exceed a million solar luminosities. (NASA, ESA, AND E. SABBI (ESA/STSCI); ACKNOWLEDGMENT: R. O’CONNELL (UNIVERSITY OF VIRGINIA) AND THE WIDE FIELD CAMERA 3 SCIENCE OVERSIGHT COMMITTEE)

    The central star cluster, R136, contains 72 stars of the brightest, most massive classes.

    The cluster RMC 136 (R136) in the Tarantula Nebula in the Large Magellanic Cloud is home to the most massive stars known. R136a1, the greatest of them all, is over 250 times the mass of the Sun. While professional telescopes are ideal for teasing out high-resolution details such as these stars in the Tarantula Nebula, wide-field views are better with the types of long-exposure times only available to amateurs. (EUROPEAN SOUTHERN OBSERVATORY/P. CROWTHER/C.J. EVANS)

    The record-holder is R136a1, some 260 times our Sun’s mass and 8,700,000 times as bright.

    An ultraviolet image and a spectrographic pseudo-image of the hottest, bluest stars at the core of R136. In this small component of the Tarantula Nebula alone, nine stars over 100 solar masses and dozens over 50 are identified through these measurements. The most massive star of all in here, R136a1, exceeds 250 solar masses, and is a candidate, later in its life, for photodisintegration. (ESA/HUBBLE, NASA, K.A. BOSTROEM (STSCI/UC DAVIS))

    Stars such as this cannot be individually resolved beyond our Local Group.

    An illustration of the first stars turning on in the Universe. Without metals to cool down the stars, only the largest clumps within a large-mass cloud can become stars. Until enough time has passed for gravity to affect larger scales, only small scales can form structure early on. Without heavy elements to facilitate cooling, stars are expected to routinely exceed the mass thresholds of the most massive ones known today. (NASA)

    With NASA’s upcoming James Webb Space Telescope, we may discover Population III stars, which could reach thousands of solar masses.

    NASA/ESA/CSA Webb Telescope annotated


    See the full article here.



  • richardmitnick 11:26 am on June 17, 2019
    Tags: "How Did This Black Hole Get So Big So Fast?", , , , , Ethan Siegel   

    From Ethan Siegel: “How Did This Black Hole Get So Big So Fast?” 

    From Ethan Siegel
    June 17, 2019

    This image of ULAS J1120+0641, a very distant quasar powered by a black hole with a mass two billion times that of the Sun, was created from images taken from surveys made by both the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. The quasar appears as a faint red dot close to the centre. This quasar was the most distant one known from 2011 until 2017, and is seen as it was just 770 million years after the Big Bang. Its black hole is so massive it poses a challenge to modern cosmological theories of black hole growth and formation. (ESO/UKIDSS/SDSS)

    It’s not impossible according to physics, but we truly don’t know how this object came to exist.

    Out in the extremities of the distant Universe, the earliest quasars can be found.

    HE0435–1223, located in the centre of this wide-field image, is among the five best lensed quasars discovered to date, where the lensing phenomenon magnifies the light from distant objects. This effect enables us to see quasars whose light was emitted when the Universe was less than 10% of its current age. The foreground galaxy creates four almost evenly distributed images of the distant quasar around it. (ESA/HUBBLE, NASA, SUYU ET AL.)

    Supermassive black holes at the centers of young galaxies accelerate matter to tremendous speeds, launching jets of radiation.

    While distant host galaxies for quasars and active galactic nuclei can often be imaged in visible/infrared light, the jets themselves and the surrounding emission are best viewed in both the X-ray and the radio, as illustrated here for the galaxy Hercules A. (NASA, ESA, S. BAUM AND C. O’DEA (RIT), R. PERLEY AND W. COTTON (NRAO/AUI/NSF), AND THE HUBBLE HERITAGE TEAM (STSCI/AURA))

    What we observe enables us to reconstruct the mass of the central black hole, and explore the ultra-distant Universe.

    The farther away we look, the closer in time we’re seeing towards the Big Bang. The current record-holder for quasars comes from a time when the Universe was just 690 million years old. (ROBIN DIENEL/CARNEGIE INSTITUTION FOR SCIENCE)

    Recently, a newly discovered black hole, J1342+0928, was found to shine from 13.1 billion years ago: when the Universe was 690 million years old, just 5% of its current age.

    As viewed with our most powerful telescopes, such as Hubble, advances in camera technology and imaging techniques have enabled us to better probe and understand the physics and properties of distant quasars, including their central black hole’s properties. (NASA AND J. BAHCALL (IAS) (L); NASA, A. MARTEL (JHU), H. FORD (JHU), M. CLAMPIN (STSCI), G. HARTIG (STSCI), G. ILLINGWORTH (UCO/LICK OBSERVATORY), THE ACS SCIENCE TEAM AND ESA (R))

    It has a mass of 800 million Suns, an exceedingly high figure for such early times.

    This artist’s rendering shows a galaxy being cleared of interstellar gas, the building blocks of new stars. Winds driven by a central black hole are responsible for this, and may be at the heart of what’s driving this active ultra-distant galaxy behind this newly discovered quasar. (ESA/ATG MEDIALAB)

    Even if black holes formed from the very first stars, they’d have to accrete matter and grow at the maximum rate possible — the Eddington limit — to reach this size so rapidly.
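    The tension can be checked with a back-of-the-envelope growth calculation. Eddington-limited accretion gives exponential growth with an e-folding (Salpeter) time of t_S = ε·c·σ_T / (4π·G·m_p). The sketch below assumes a radiative efficiency of ε = 0.1 and a hypothetical 100-solar-mass seed from a first-generation star; both are common illustrative choices, not values from the article:

    ```python
    import math

    # Physical constants (SI units).
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    sigma_T = 6.652e-29  # Thomson scattering cross-section, m^2
    m_p = 1.673e-27      # proton mass, kg

    epsilon = 0.1        # assumed radiative efficiency of accretion
    seed_mass = 100.0    # solar masses: a hypothetical first-star remnant

    # Salpeter (e-folding) time for Eddington-limited growth:
    t_salpeter_s = epsilon * c * sigma_T / (4 * math.pi * G * m_p)
    t_salpeter_myr = t_salpeter_s / 3.156e13  # ~45 Myr (1 Myr ~ 3.156e13 s)

    # Maximum possible growth by the time the Universe is ~690 Myr old,
    # assuming continuous accretion at the Eddington limit the entire time:
    growth_factor = math.exp(690.0 / t_salpeter_myr)
    final_mass = seed_mass * growth_factor  # several times 1e8 solar masses
    ```

    The result lands at a few hundred million solar masses: in the right ballpark for J1342+0928's 800 million, but only if accretion never once paused, which is why the object is such a challenge to explain.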

    The active galaxy IRAS F11119+3257 shows, when viewed up close, outflows that may be consistent with a major merger. Supermassive black holes may only be visible when they’re ‘turned on’ by an active feeding mechanism, explaining why we can see these ultra-distant black holes at all. (NASA’S GODDARD SPACE FLIGHT CENTER/SDSS/S. VEILLEUX)

    Fortunately, other methods may also grow a supermassive black hole.

    When new bursts of star formation occur, enormous quantities of massive stars are created.

    The visible/near-IR photos from Hubble show a massive star, about 25 times the mass of the Sun, that has winked out of existence, with no supernova or other explanation. Direct collapse is the only reasonable candidate explanation, demonstrating that not all stars need to go supernova or experience a stellar cataclysm to form a black hole. (NASA/ESA/C. KOCHANEK (OSU))

    These can either directly collapse or go supernova, creating large numbers of massive black holes which then merge and grow.

    Simulations of various gas-rich processes, such as galaxy mergers, indicate that the formation of direct collapse black holes should be possible. A combination of direct collapse, supernovae, and merging stars and stellar remnants could produce a young black hole this massive. Complementarily, present LIGO results indicate that black holes merge every 5 minutes somewhere in the Universe. (L. MAYER ET AL. (2014), VIA ARXIV.ORG/ABS/1411.5683)

    Only ~20 black holes this large should exist so early in the Universe.

    An ultra-distant quasar showing plenty of evidence for a supermassive black hole at its center. How these black holes got so massive so quickly is a topic of contentious scientific debate, but may have an answer that fits within our standard theories. We are uncertain whether that’s true or not at this juncture. (X-RAY: NASA/CXC/UNIV OF MICHIGAN/R.C.REIS ET AL; OPTICAL: NASA/STSCI)

    Is this problematic for cosmology? More data will eventually decide.

    See the full article here.



  • richardmitnick 2:18 pm on June 16, 2019
    Tags: "What’s The Real Story Behind This Dark Matter-Free Galaxy?", , , , , Ethan Siegel, KCWI at Keck   

    From Ethan Siegel: “What’s The Real Story Behind This Dark Matter-Free Galaxy?” 

    From Ethan Siegel
    June 15, 2019

    This large, fuzzy-looking galaxy is so diffuse that astronomers call it a “see-through” galaxy because they can clearly see distant galaxies behind it. The ghostly object, catalogued as NGC 1052-DF2, doesn’t have a noticeable central region, or even spiral arms and a disk, typical features of a spiral galaxy. But it doesn’t look like an elliptical galaxy, either, as its velocity dispersion is all wrong. Even its globular clusters are oddballs: they are twice as large as typical stellar groupings seen in other galaxies. All of these oddities pale in comparison to the weirdest aspect of this galaxy: NGC 1052-DF2 is very controversial because of its apparent lack of dark matter. This could solve an enormous cosmic puzzle. (NASA, ESA, AND P. VAN DOKKUM (YALE UNIVERSITY))

    Has the mystery really been solved? Doubtful. The real science goes much deeper.

    For perhaps the last year or so, a small galaxy located not too far away has captured the attention of astronomers. The galaxy NGC 1052-DF2, a satellite of the larger NGC 1052, appears to be the first galaxy ever discovered that shows no evidence of dark matter. Paradoxically, that has been reported as indisputable evidence that dark matter must exist! Now, a new team has come out with a result that claims this galaxy cannot be devoid of dark matter, and Yann Guidon wants to know what’s really going on, asking:

    I read a study that said the mystery of a galaxy with no dark matter has been solved. But I thought that this anomalous galaxy was previously touted as evidence FOR dark matter? What’s really going on here, Ethan?

    We have to be extremely careful here, and dissect the findings of the different teams with all the implications correctly synthesized. Let’s get started.

    The full Dragonfly field, approximately 11 square degrees, centred on NGC 1052. The zoom-in shows the immediate surroundings of NGC 1052, with NGC1052–DF2 highlighted in the inset. This is Extended Data Figure 1 from the publication announcing the discovery of DF2. (P. VAN DOKKUM ET AL., NATURE VOLUME 555, PAGES 629–632 (29 MARCH 2018))

    U Toronto Dragonfly Telescope Array, housed in New Mexico

    Whenever you have a galaxy in the Universe and you want to know how much mass is inside, you have two ways of approaching the problem. The first way is to rely on astronomy to give you the answer.

    Astronomically, there are a slew of observations we can make to teach us about the matter content of a galaxy. We can look in a myriad of wavelengths of light to determine the total amount of starlight that’s present, and infer the amount of mass that’s present in stars. We can similarly make additional observations of gas, dust, and the absorption and emission of radiation in order to infer the total amount of normal matter that’s present. We’ve done this for enough galaxies for long enough that simply measuring some basic properties can lead us to infer the total baryonic (made of protons, neutrons, and electrons) matter within a galaxy.

    The extended rotation curve of M33, the Triangulum galaxy. These rotation curves of spiral galaxies ushered in the modern astrophysics concept of dark matter to the general field. The dashed curve would correspond to a galaxy without dark matter, which represents less than 1% of galaxies. While initial observations of the velocity dispersion, via globular clusters, indicated that NGC 1052-DF2 was one of them, newer observations throw that conclusion into doubt. (WIKIMEDIA COMMONS USER STEFANIA.DELUCA)

    On the other hand, there are additional gravitational measurements we can make that will teach us about the total amount of mass that’s present within a galaxy, irrespective of the type of matter (normal, baryonic matter or dark matter) that we see. By measuring the motions of the stars inside, either through direct line-broadening at different radii or through the velocity dispersion of the entire galaxy, we can get a specific value for the total mass. In addition, we can look at the velocity dispersion of the globular clusters orbiting a galaxy to obtain a second, complementary, independent measurement of total mass.

    In most galaxies, the two values for the measured/inferred matter content differ by about a factor of 5-to-6, indicating the presence of substantial amounts of dark matter. But some galaxies are special.
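    The comparison between the two mass estimates can be sketched with a virial-style relation, M ≈ C·σ²·R/G. The galaxy numbers below are hypothetical, chosen only to illustrate a typical ~5-to-6 total-to-normal ratio; C is an order-unity structure factor and C = 5 is an assumed choice:

    ```python
    G = 4.301e-6  # gravitational constant in kpc * (km/s)^2 / Msun

    def dynamical_mass(sigma_kms, radius_kpc, C=5.0):
        """Crude virial-style mass estimate, M ~ C * sigma^2 * R / G.

        C is an order-unity structure factor; C = 5 is an assumed,
        illustrative choice, not a fitted value."""
        return C * sigma_kms ** 2 * radius_kpc / G

    # Hypothetical galaxy: velocity dispersion of 30 km/s within a 3 kpc
    # radius, with an assumed 5e8 Msun of stars, gas, and dust.
    m_dyn = dynamical_mass(30.0, 3.0)  # total gravitating mass, in Msun
    m_normal = 5e8                     # normal (baryonic) mass, in Msun (assumed)
    ratio = m_dyn / m_normal           # ~6: substantial dark matter implied
    ```

    When the gravitational estimate comes out several times larger than the baryonic one, the difference is attributed to dark matter; when the two agree, the galaxy appears dark matter-free.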

    According to models and simulations, all galaxies should be embedded in dark matter halos, whose densities peak at the galactic centers. On long enough timescales, of perhaps a billion years, a single dark matter particle from the outskirts of the halo will complete one orbit. The effects of gas, feedback, star formation, supernovae, and radiation all complicate this environment, making it extremely difficult to extract universal dark matter predictions. (NASA, ESA, AND T. BROWN AND J. TUMLINSON (STSCI))

    From a theoretical perspective, we know how galaxies should form. We know that the Universe ought to start out governed by General Relativity, our law of gravity. It should have approximately a 5-to-1 mix of dark matter to normal matter, and should begin almost perfectly uniform, with underdense and overdense regions appearing at about the 1-part-in-30,000 level. Give the Universe time, and let it evolve, and you’ll form structures where the overdense regions were on small, medium and large scales, with vast cosmic voids forming between them, in the initially underdense regions.

    In large galaxies, comparable to the Milky Way’s size or larger, very little is going to be capable of changing that dark matter to normal matter ratio. The total amount of gravity is generally going to be too great for any type of matter to escape, unless the galaxy speeds rapidly through a gas-rich medium capable of stripping its normal matter away.

    A Hubble (visible light) and Chandra (X-ray) composite of galaxy ESO 137–001 as it speeds through the intergalactic medium in a rich galaxy cluster, becoming stripped of stars and gas, while its dark matter remains intact. (NASA, ESA, CXC)

    But for smaller galaxies, there are interesting processes that can occur that are vitally important to this ratio of normal matter (which determines the astronomical properties) to dark matter (which, combined with the normal matter, determines the gravitational properties).

    When most small, low-mass galaxies form, the act of forming stars is an act of violence against all the other matter inside. Ultraviolet radiation, stellar cataclysms (like supernovae), and stellar winds all heat up the normal matter. If the heating is severe enough and the mass of the galaxy is low enough, enormous quantities of normal matter (in the form of gas and plasma) can get ejected from the galaxy. As a result, many low-mass galaxies will exhibit dark matter to normal matter ratios far in excess of 5-to-1, with some of the lowest-mass galaxies achieving ratios of hundreds-to-1.

    Only approximately 1000 stars are present in the entirety of dwarf galaxies Segue 1 and Segue 3, which together have a gravitational mass of 600,000 Suns. The stars making up the dwarf satellite Segue 1 are circled here. If new research is correct, then dark matter will obey a different distribution depending on how star formation, over the galaxy’s history, has heated it. The dark matter-to-normal matter ratio of nearly 1000-to-1 is the greatest ratio ever seen in the dark matter-favoring direction. (MARLA GEHA AND KECK OBSERVATORIES)

    But there’s another process that can arise, on rare occasion, to produce galaxies with either very small or even, in theory, no amounts of dark matter. When larger galaxies merge together, they can produce an extreme phenomenon known as a starburst: where the entire galaxy becomes an enormous star-forming region.

    The merger process, coupled with this star-formation, can impart enormous tidal forces and velocities to some of the normal matter that’s present. In theory, this could be powerful enough to rip substantial quantities of normal matter out of the main, merging galaxies, forming smaller galaxies that will have far less dark matter than the typical 5-to-1 dark matter-to-normal matter ratio. In some extreme cases, this might even create galaxies made of normal matter alone. Around large, dark matter-dominated galaxies, there might be smaller ones that are entirely dark matter-free.

    A decade ago, there were a small number of scientists who claimed that the observed lack of these dark matter-free galaxies was a clear falsification of the dark matter paradigm. The overwhelming majority of scientists countered with claims that these galaxies should be rare, faint, and that it was no surprise we hadn’t observed them yet. With more data, better observations, and superior instrumentation and techniques, small galaxies with either small amounts of dark matter, or even none at all, ought to emerge.

    Last year, a team of Yale researchers announced the discovery of the galaxy NGC 1052-DF2 (DF2 for short), a satellite galaxy of the large galaxy NGC 1052, that appeared to have no dark matter at all. When the scientists looked at the globular clusters orbiting DF2, they found the velocity dispersion was extremely small: at least a factor of 3 below the predicted speeds of ±30 km/s, which would have corresponded to this typical 5-to-1 ratio.

    The KCWI spectrum of the galaxy DF2 (in black), as taken directly from the new paper at arXiv:1901.03711, with the earlier results from a competing team using MUSE superimposed in red. You can clearly see that the MUSE data is lower resolution, smeared out, and artificially inflated compared to the KCWI data. The result is an artificially large velocity dispersion inferred by the prior researchers. (SHANY DANIELI (PRIVATE COMMUNICATION))



    Keck Observatory, operated by Caltech and the University of California, on Maunakea, Hawaii, USA, at 4,207 m (13,802 ft)

    ESO MUSE on the VLT on Yepun (UT4)

    About 8 months later, another team, using a different instrument (rather than the unique Dragonfly instrument used by the Yale team), argued that the stars, rather than the globular clusters, should be used to determine the galaxy’s mass. Using their new data, they found an equivalent velocity dispersion of ±17 km/s, about twice as great as the Yale team had measured.

    Undaunted, the Yale team made an even more precise measurement of the stars in DF2 using the upgraded KCWI instrument, and went back and measured the motions of the globular clusters orbiting it once again.

    KCWI instrument at Keck

    With a superior instrument, they got a result with much smaller error bars, and both techniques agreed. From the stellar velocity dispersion, they got a value of ±8.4 km/s, with the globulars giving ±7.8 km/s. For the first time, it looked like we truly had found a dark matter-free galaxy.
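    Why does a low velocity dispersion mean little dark matter? A hedged back-of-the-envelope sketch, using a generic tracer-mass estimator of the form M ~ C·σ²·R/G (the prefactor C and the ~2.2 kpc radius here are illustrative assumptions, not values from the paper; only the dispersions come from the measurements above):

```python
# Minimal sketch (not the paper's analysis): enclosed dynamical mass scales
# as sigma^2, so the measured ~8 km/s dispersion implies far less mass than
# the ~30 km/s predicted for a dark matter-rich galaxy. C and R are assumed.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec, m

def dynamical_mass(sigma_km_s, radius_kpc, C=5.0):
    """Order-of-magnitude enclosed mass, in solar masses."""
    sigma_m_s = sigma_km_s * 1e3
    return C * sigma_m_s**2 * (radius_kpc * KPC) / G / M_SUN

m_observed = dynamical_mass(8.4, 2.2)    # the measured stellar dispersion
m_expected = dynamical_mass(30.0, 2.2)   # dispersion expected with dark matter
print(f"observed: {m_observed:.1e} Msun; with dark matter: {m_expected:.1e} Msun")
print(f"ratio: {m_expected / m_observed:.1f}x")
```

    Because the mass estimate scales as the square of the dispersion, dropping from ±30 km/s to ±8.4 km/s cuts the inferred mass by more than a factor of ten: roughly what the stars alone would supply, with no dark matter left over.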

    The predictions (vertical bars) for what the velocity dispersions ought to be if the galaxy contained a typical amount of dark matter (right) versus no dark matter at all (left). The Emsellem et al. result was taken with the insufficient MUSE instrument; the latest data from Danieli et al. was taken with the KCWI instrument, and provides the best evidence yet that this really is a galaxy with no dark matter at all. (DANIELI ET AL. (2019), ARXIV:1901.03711)

    But perhaps something was flawed. When scientists are truly engaging in good science, they’ll try to take any hypothesis, novel result, or unexpected find and poke holes in it. They’ll try to knock it down, discredit it, or find a fatal flaw with the result whenever possible. Only the most robust, well-scrutinized results will stand up and become accepted; controversies are at their hottest when a new result threatens to decide the issue once and for all.

    The latest attempt to knock the DF2 results down comes from a group at the Instituto de Astrofísica de Canarias (IAC) led by Ignacio Trujillo. Using a new measurement of DF2, his team claims that the galaxy is actually closer than previously thought: 42 million light-years instead of 64 million. This would mean it isn’t a satellite of NGC 1052 after all, but rather a galaxy some 22 million light-years closer, in the cosmic foreground.

    The ultra-diffuse galaxy [KKS2000] 04 (NGC1052-DF2), towards the constellation of Cetus, was considered to be a galaxy completely devoid of dark matter. The results of Trujillo et al. dispute that, claiming that the galaxy is much closer, and therefore has a different mass-to-luminosity ratio (and a different velocity dispersion) than was previously thought. This is extremely controversial. (TRUJILLO ET AL. (2019))

    This could change the story dramatically. The distance to a galaxy determines the intrinsic brightness you infer, which in turn tells you how much matter must be present in the form of stars. If the galaxy is much closer than previously thought, it’s intrinsically fainter and contains less stellar mass, and the measured velocity dispersion then implies a higher mass-to-light ratio: an indication of the need for dark matter, after all.
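    The distance scalings can be sketched explicitly. We measure flux, so the inferred luminosity scales as d²; a dispersion-based dynamical mass scales only as d (through the physical radius); hence the inferred mass-to-light ratio scales as 1/d. A hedged illustration with the two quoted distances:

```python
# Sketch of how an assumed distance d rescales what we infer about a galaxy.
def rescale_mass_to_light(ml_old, d_old, d_new):
    lum_factor = (d_new / d_old) ** 2   # inferred luminosity scales as d^2
    mass_factor = d_new / d_old         # dispersion-based mass scales as d
    return ml_old * mass_factor / lum_factor

# Moving DF2 from 64 to 42 million light-years raises the inferred
# mass-to-light ratio by about 50% (64/42 ~ 1.52):
print(rescale_mass_to_light(1.0, 64.0, 42.0))
```

    A ~50% shift in mass-to-light is significant, but note it is far smaller than the factor-of-several deficit in dark matter that the dispersion measurements imply.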

    Case closed, right?

    Not even close. First off, DF2 isn’t the only galaxy that exhibits this effect anymore: another satellite of NGC 1052 (known as DF4) exhibits the same dark matter-free nature, so both would have to have their distances mis-estimated. Second, even if they are at the closer distance preferred by the Trujillo et al. team, that still leaves DF2 and DF4 as extremely dark matter-poor galaxies, which still necessitates a mechanism to separate normal matter from dark matter. And third, the Yale team had previously (in August) published a calibration-free distance measurement to the galaxy, based on surface brightness fluctuations, that is inconsistent with Trujillo’s results at the 3.5-sigma level.
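    For scale, that 3.5-sigma tension can be translated into a chance probability, assuming the significance is defined in the standard Gaussian way (a quick sketch, not a statement about either team's full statistical analysis):

```python
import math

# Two-sided Gaussian tail probability for a 3.5-sigma discrepancy: the odds
# that a difference this large between the two distance estimates is a fluke.
sigma = 3.5
p_value = math.erfc(sigma / math.sqrt(2))
print(f"p ~ {p_value:.1e}")  # roughly 5e-4, i.e. about a 1-in-2000 accident
```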

    The galaxy NGC 1052-DF2 was imaged in great detail by the KCWI spectrograph instrument aboard the W.M. Keck telescope on Mauna Kea, enabling scientists to detect the motions of stars and globular clusters inside the galaxy to unprecedented precisions. (DANIELI ET AL. (2019), ARXIV:1901.03711)

    In other words, even if the distance estimates by Trujillo et al. are correct (which they probably aren’t), these galaxies are extremely low in dark matter, with DF4 possibly still being entirely dark matter-free. Neither team has yet observed this galaxy with the Hubble Space Telescope, but that will provide the most unambiguous distance estimate yet. Subsequent observations of DF4 with Hubble are slated for later in 2019, which should help clarify this ambiguity.

    A short distance for these galaxies does not actually resolve the central issue: that they have much less dark matter, no matter how you massage it, than a naive, conventional dark matter-to-normal matter ratio would indicate. Only if dark matter is real, and experiences different physics in star-forming and collisional environments than normal matter, can galaxies like DF2 or DF4 exist at all.

    Many nearby galaxies, including all the galaxies of the local group (mostly clustered at the extreme left), display a relationship between their mass and velocity dispersion that indicates the presence of dark matter. NGC 1052-DF2 is the first known galaxy that appears to be made of normal matter alone, and was later joined by DF4 earlier in 2019. (DANIELI ET AL. (2019), ARXIV:1901.03711)

    The one takeaway, if you learn nothing else, is this: this new result resolves nothing. Stay tuned, because more and better data is coming. These galaxies are likely extremely low in dark matter, and possibly entirely free of dark matter. If the Yale team’s initial results hold up, these galaxies must be fundamentally different in composition from all the other galaxies we’ve ever found.

    If all galaxies follow the same underlying rules, only their compositions can differ. The discovery of a dark matter-free galaxy, if that result holds up, is an extremely strong piece of evidence for a dark matter-rich Universe. Keep your eyes open for more news on DF2 and DF4, because this story is far from over.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 1:45 pm on June 14, 2019 Permalink | Reply
    Tags: Ethan Siegel, Scientists Discover Space’s Largest Intergalactic Bridge, Solving A Huge Dark Matter Puzzle

    From Ethan Siegel: “Scientists Discover Space’s Largest Intergalactic Bridge, Solving A Huge Dark Matter Puzzle” 

    From Ethan Siegel
    Jun 13, 2019

    This image shows a composite of optical, X-ray, Microwave and radio data of the regions between the colliding galaxy clusters Abell 399 and Abell 401. The X-rays are concentrated near where the clusters are, but there’s a clear radio bridge between them (in blue). (M. MURGIA / INAF, BASED ON F. GOVONI ET AL., 2019, SCIENCE)

    Dark matter’s naysayers latched onto one tiny puzzle. This new find may have solved it completely.

    Imagine the largest cosmic smashup you can. Take the largest gravitationally bound structures we know of — enormous galaxy clusters that might contain thousands of Milky Way-sized galaxies — and allow them to attract and merge. With individual galaxies, stars, gas, dust, black holes, dark matter and more inside, there are bound to not only be fireworks, but novel astrophysical phenomena that might not show up elsewhere in the Universe.

    The gas within these clusters can heat up, interact, and develop shocks, causing the emission of spectacularly energetic radiation. Dark matter can pass through everything else, separating its gravitational effects from the majority of the normal matter. And, in theory, charged particles can accelerate tremendously, creating coherent magnetic fields that could span millions of light-years. For the first time, such an intergalactic bridge between two colliding clusters has been discovered, with tremendous implications for our Universe.

    This Chandra image shows a large-scale view of the galaxy cluster MACSJ0717, where the white box shows the field-of-view of an available Chandra/HST composite image. The green line shows the approximate position of the large-scale filament leading into the cluster, suggesting a connection between the great cosmic web and the galaxy clusters that populate our Universe. (NASA/CXC/IFA/C. MA ET AL.)

    In our cosmos, astronomical structures aren’t all created equal. Planets are dwarfed by stars, which themselves are far smaller in scale than Solar Systems. Collections of many hundreds of billions of these systems are required to make up a large galaxy like the Milky Way, while galactic groups and clusters might contain thousands of Milky Way-sized galaxies. On the largest scales of all, these enormous galaxy clusters can collide and merge.

    Back in 2004, two sets of observations came in concerning a pair of galaxy clusters in close proximity: 1E 0657–558, more commonly known as the Bullet Cluster. From an optical image alone, two dense collections of galaxies — the two independent clusters — can clearly be identified.

    The Bullet cluster, the first classic example of two colliding galaxy clusters where the key effect was observed. In the optical, the presence of two nearby clusters (left and right) can be clearly discerned.(NASA/STSCI; MAGELLAN/U.ARIZONA/D.CLOWE ET AL.)

    There are two additional things you can do to tease out more information about what’s going on in this system. One physically interesting measurement is to look at the light from all the galaxies you can see in the image, and identify which ones lie behind the clusters (background galaxies) versus which ones lie in front of them (foreground galaxies).

    When you look at the foreground galaxies, their orientations should be random: circular, elliptical, or disk-like, with no average distortion favoring any particular direction. But if there’s a large mass in front of the light, gravitational lensing effects should distort the background images. The statistical differences in shape between the background and foreground galaxies tell you how much mass is located at various positions in space, at least from our point of view.
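    The statistical idea can be demonstrated with a toy model: intrinsic galaxy shapes are random and average away, but a coherent shear imprinted by a foreground mass survives the average. The shear and scatter values below are invented purely for illustration:

```python
import random

random.seed(42)

# Toy weak-lensing statistic: average the ellipticities of many galaxies.
# Unlensed (foreground) shapes average to ~0; a small coherent shear added
# to the lensed (background) sample shows up as a nonzero mean.
def mean_ellipticity(n_galaxies, shear=0.0):
    total = sum(random.gauss(0.0, 0.3) + shear for _ in range(n_galaxies))
    return total / n_galaxies

foreground = mean_ellipticity(100_000)              # no lensing: mean ~ 0
background = mean_ellipticity(100_000, shear=0.02)  # lensed: mean ~ shear
print(f"foreground mean: {foreground:+.4f}, background mean: {background:+.4f}")
```

    With enough galaxies, even a percent-level shear stands out cleanly against the much larger random "shape noise" of individual galaxies, which is exactly why these surveys need so many background sources.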

    Gravitational Lensing NASA/ESA


    Any configuration of background points of light, whether they be stars, galaxies or galaxy clusters, will be distorted due to the effects of foreground mass via weak gravitational lensing.

    Weak gravitational lensing NASA/ESA Hubble

    Even with random shape noise, the signature is unmistakable. By examining the difference between foreground (undistorted) and background (distorted) galaxies, we can reconstruct the mass distribution of massive extended objects, like galaxy clusters, in our Universe. (WIKIMEDIA COMMONS USER TALLJIMBO)

    The second thing you can do is to observe the exact same region of the sky in X-rays, using an advanced X-ray observatory in space. Observations that were conducted with NASA’s Chandra X-ray observatory were sufficient to do exactly that. What Chandra discovered was fascinating: two enormous clumps of gas were spotted, each one moving along with its home galaxy cluster. As expected, there’s an enormous amount of gas not only associated with each galaxy, but with the overall cluster as a whole.

    But what was unexpected was the finding that the gas, making up about 13–15% of the overall cluster’s mass, was actually separated from the gravitational effects! Somehow, the normal matter and the gravitational effects were separated, as though the overall mass had simply passed straight through. This result was taken as overwhelming astrophysical evidence for the existence of dark matter.

    The gravitational lensing map (blue), overlaid over the optical and X-ray (pink) data of the Bullet cluster. The mismatch of the locations of the X-rays and the inferred mass is undeniable. (X-RAY: NASA/CXC/CFA/M.MARKEVITCH ET AL.; LENSING MAP: NASA/STSCI; ESO WFI; MAGELLAN/U.ARIZONA/D.CLOWE ET AL.; OPTICAL: NASA/STSCI; MAGELLAN/U.ARIZONA/D.CLOWE ET AL.)

    Since that time, more than a dozen other galaxy groups and clusters have been spotted colliding with one another, with each one demonstrating the same effect. Before a collision, if a cluster emits X-rays, those X-rays are associated with the cluster itself, and any gravitational distortion is found coincident with the location of the galaxies and the gas.

    But after a collision, the X-ray emitting gas is offset from the matter, implying that the same physics is at play. When the clusters collide:

    the galaxies take up only a small volume inside each cluster, and pass straight through,
    the intracluster gas interacts and heats up, emitting X-rays and slowing down,
    while the dark matter, expected to occupy an enormous halo surrounding each cluster, passes through as well, affected only by gravitation.

    In every colliding group and cluster we’ve observed, the same separation of X-ray gas and overall matter is seen.

    The X-ray (pink) and overall matter (blue) maps of various colliding galaxy clusters show a clear separation between normal matter and gravitational effects, some of the strongest evidence for dark matter. Although some of the simulations we perform indicate that a few clusters may be moving faster than expected, the simulations include gravitation alone, and other effects may also be important for the gas.(X-RAY: NASA/CXC/ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE, SWITZERLAND/D.HARVEY NASA/CXC/DURHAM UNIV/R.MASSEY; OPTICAL/LENSING MAP: NASA, ESA, D. HARVEY (ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE, SWITZERLAND) AND R. MASSEY (DURHAM UNIVERSITY, UK))

    You might think that this empirical proof of dark matter, seen in so many independent systems, would sway any reasonable skeptic. Alternative theories of gravity were concocted to try to explain the misalignment between the gravitational lensing signal and the presence of matter, postulating a non-local effect that resulted in a gravitational force that was offset from the matter. But any theory that worked for one particular alignment of colliding clusters failed to explain clusters in a pre-collisional state. 15 years later, alternatives still fail to explain both configurations.

    But a Universe with dark matter has a very high burden of proof: it has to explain every single observed property of these clusters. While many of these colliding groups and clusters have speeds that are predicted by a dark matter-rich Universe, the Bullet cluster — the original example — moves extremely quickly.

    The formation of cosmic structure, on both large scales and small scales, is highly dependent on how dark matter and normal matter interact. Despite the indirect evidence for dark matter, we’d love to be able to detect it directly, which is something that can only happen if there’s a non-zero cross-section between normal matter and dark matter. The structures that arise, however, including galaxy clusters and larger-scale filaments, are undisputed. (ILLUSTRIS COLLABORATION / ILLUSTRIS SIMULATION)

    When you know the ingredients of your Universe and the laws of physics that govern what’s in it, you can run simulations to predict what types of large-scale structure emerge. When we include simulations with gravitation alone, the fastest colliding clusters we predict move slower than the Bullet cluster does; the likelihood of having a single example like it in our Universe is less than 1-in-a-million.

    When we buck the cosmic odds like this, we demand an explanation. While it’s always possible that our Universe is simply a lottery-winner in terms of what’s present within it, this observation poses a legitimate problem. Either the observations were wrong, or something else — some physical mechanism — is causing this normal matter to accelerate beyond what the gravitational effects alone would indicate.

    See the full article here.



  • richardmitnick 2:11 pm on June 11, 2019 Permalink | Reply
    Tags: Ethan Siegel, Future Circular Collider (FCC), If we don’t push the frontiers of physics we’ll never learn what lies beyond our current understanding., Lepton collider, New accelerators explored, Proton collider

    From Ethan Siegel: “Does Particle Physics Have A Future On Earth?” 

    From Ethan Siegel
    Jun 11, 2019

    The inside of the LHC, where protons pass each other at 299,792,455 m/s, just 3 m/s shy of the speed of light. As powerful as the LHC is, the cancelled SSC could have been three times as powerful, and may have revealed secrets of nature that are inaccessible at the LHC. (CERN)
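    The caption's speed figure can be cross-checked with special relativity; a quick sketch, where the only input beyond the caption is the proton's rest energy of ~0.938 GeV:

```python
import math

# A proton moving 3 m/s shy of the speed of light: compute its Lorentz factor
# and kinetic energy, and compare with the LHC's per-beam energy.
C = 299_792_458.0                  # speed of light, m/s
v = C - 3.0                        # the proton speed quoted in the caption
gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
energy_tev = gamma * 0.938 / 1000  # proton rest energy ~0.938 GeV
print(f"gamma ~ {gamma:.0f}, beam energy ~ {energy_tev:.1f} TeV")
```

    The result lands near 6.6 TeV, consistent with the LHC's actual per-beam energies of 6.5–7 TeV, so the "3 m/s shy of light speed" figure checks out.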

    If we don’t push the frontiers of physics, we’ll never learn what lies beyond our current understanding.

    At a fundamental level, what is our Universe made of? This question has driven physics forward for centuries. Even with all the advances we’ve made, we still don’t know it all. While the Large Hadron Collider discovered the Higgs boson and completed the Standard Model earlier this decade, the full suite of particles we know of makes up only 5% of the total energy in the Universe.

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event

    Standard Model of Particle Physics

    We don’t know what dark matter is, but the indirect evidence for it is overwhelming.

    Fritz Zwicky discovered dark matter when observing the motions of galaxies in the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel Prize, did much of the foundational work on dark matter.

    Fritz Zwicky from http://palomarskies.blogspot.com

    Coma cluster via NASA/ESA Hubble

    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science)

    Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives AIP SPL)

    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970. https://home.dtm.ciw.edu

    Same deal with dark energy.

    Dark Energy Survey

    Dark Energy Camera [DECam], built at FNAL

    NOAO/CTIO Victor M Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7200 feet

    Timeline of the Inflationary Universe WMAP

    The Dark Energy Survey (DES) is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure that will reveal the nature of the mysterious dark energy that is accelerating the expansion of our Universe. DES began searching the Southern skies on August 31, 2013.

    According to Einstein’s theory of General Relativity, gravity should lead to a slowing of the cosmic expansion. Yet, in 1998, two teams of astronomers studying distant supernovae made the remarkable discovery that the expansion of the universe is speeding up. To explain cosmic acceleration, cosmologists are faced with two possibilities: either 70% of the universe exists in an exotic form, now called dark energy, that exhibits a gravitational force opposite to the attractive gravity of ordinary matter, or General Relativity must be replaced by a new theory of gravity on cosmic scales.

    DES is designed to probe the origin of the accelerating universe and help uncover the nature of dark energy by measuring the 14-billion-year history of cosmic expansion with high precision. More than 400 scientists from over 25 institutions in the United States, Spain, the United Kingdom, Brazil, Germany, Switzerland, and Australia are working on the project. The collaboration built and is using an extremely sensitive 570-Megapixel digital camera, DECam, mounted on the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory, high in the Chilean Andes, to carry out the project.

    Over six years (2013-2019), the DES collaboration used 758 nights of observation to carry out a deep, wide-area survey to record information from 300 million galaxies that are billions of light-years from Earth. The survey imaged 5000 square degrees of the southern sky in five optical filters to obtain detailed information about each galaxy. A fraction of the survey time is used to observe smaller patches of sky roughly once a week to discover and study thousands of supernovae and other astrophysical transients.

    Or questions like why the fundamental particles have the masses they do, or why neutrinos aren’t massless, or why our Universe is made of matter and not antimatter. Our current tools and searches have not answered these great existential puzzles of modern physics. Particle physics now faces an incredible dilemma: try harder, or give up.

    The Standard Model of particle physics accounts for three of the four forces (excepting gravity), the full suite of discovered particles, and all of their interactions. Whether there are additional particles and/or interactions that are discoverable with colliders we can build on Earth is a debatable subject, but one we’ll only know the answer to if we explore past the known energy frontier. (CONTEMPORARY PHYSICS EDUCATION PROJECT / DOE / NSF / LBNL)

    The particles and interactions that we know of are all governed by the Standard Model of particle physics, plus gravity, dark matter, and dark energy. In particle physics experiments, however, it’s the Standard Model alone that matters. The six quarks, charged leptons and neutrinos, gluons, photon, gauge bosons and Higgs boson are all that it predicts, and each particle has been not only discovered, but their properties have been measured.

    As a result, the Standard Model is perhaps a victim of its own success. The masses, spins, lifetimes, interaction strengths, and decay ratios of every particle and antiparticle have all been measured, and they agree with the Standard Model’s predictions at every turn. There are enormous puzzles about our Universe, and particle physics has given us no experimental indications of where or how they might be solved.

    The particles and antiparticles of the Standard Model have now all been directly detected, with the last holdout, the Higgs Boson, falling at the LHC earlier this decade. All of these particles can be created at LHC energies, and the masses of the particles lead to fundamental constants that are absolutely necessary to describe them fully. These particles can be well-described by the physics of the quantum field theories underlying the Standard Model, but they do not describe everything, like dark matter. (E. SIEGEL / BEYOND THE GALAXY)

    It might be tempting, therefore, to presume that building a superior particle collider would be a fruitless endeavor. Indeed, this could be the case. The Standard Model of particle physics has explicit predictions for the couplings that occur between particles. While there are a number of parameters that remain poorly determined at present, it’s conceivable that there are no new particles that a next-generation collider could reveal.

    The heaviest Standard Model particle is the top quark, which takes roughly 180 GeV of energy to create. While the Large Hadron Collider can reach energies of 14 TeV (about 80 times the energy needed to create a top quark), there might not be any new particles present to find unless we reach energies in excess of 1,000,000 times as great. This is the great fear of many: the possible existence of a so-called “energy desert” extending for many orders of magnitude.

    There is certainly new physics beyond the Standard Model, but it might not show up until energies far, far greater than what a terrestrial collider could ever reach. Still, whether this scenario is true or not, the only way we’ll know is to look. In the meantime, properties of the known particles can be better explored with a future collider than any other tool. The LHC has failed to reveal, thus far, anything beyond the known particles of the Standard Model. (UNIVERSE-REVIEW.CA)

    But it’s also possible that there is new physics present at a modest scale beyond where we’ve presently probed. There are many theoretical extensions to the Standard Model that are quite generic, where deviations from the Standard Model’s predictions can be detected by a next-generation collider.

    If we want to know what the truth about our Universe is, we have to look, and that means pushing the present frontiers of particle physics into uncharted territory. Right now, the community is debating between multiple approaches, with each one having its pros and cons. The nightmare scenario, however, isn’t that we’ll look and won’t find anything. It’s that infighting and a lack of unity will doom experimental physics forever, and that we won’t get a next-generation collider at all.

    A hypothetical new accelerator, either a long linear one or one inhabiting a large tunnel beneath the Earth, could dwarf the sensitivity to new particles that prior and current colliders can achieve. Even at that, there’s no guarantee we’ll find anything new, but we’re certain to find nothing new if we fail to try. (ILC COLLABORATION)

    When it comes to deciding what collider to build next, there are two generic approaches: a lepton collider (where electrons and positrons are accelerated and collided), and a proton collider (where protons are accelerated and collided). The lepton colliders have the advantages of:

    the fact that leptons are point particles, rather than composite particles,
    100% of the energy from electrons colliding with positrons can be converted into energy for new particles,
    the signal is clean and much easier to extract,
    and the energy is controllable, meaning we can choose to tune the energy to a specific value and maximize the chance of creating a specific particle.

    Lepton colliders, in general, are great for precision studies, and we haven’t had a cutting-edge one since LEP was operational nearly 20 years ago.

    CERN LEP Collider

    At various center-of-mass energies in electron/positron (lepton) colliders, various Higgs production mechanisms can be reached at explicit energies. While a circular collider can achieve much greater collision rates and production rates of W, Z, H, and t particles, a long-enough linear collider can conceivably reach higher energies, enabling us to probe Higgs production mechanisms that a circular collider cannot reach. This is the main advantage that linear lepton colliders possess; if they are low-energy only (like the proposed ILC), there is no reason not to go circular. (H. ABRAMOWICZ ET AL., EUR. PHYS. J. C 77, 475 (2017))

    It’s very unlikely, unless nature is extremely kind, that a lepton collider will directly discover a new particle, but it may be the best bet for indirectly discovering evidence of particles beyond the Standard Model. We’ve already discovered particles like the W and Z bosons, the Higgs boson, and the top quark, but a lepton collider could both produce them in great abundances and through a variety of channels.

    The more events of interest we create, the more deeply we can probe the Standard Model. The Large Hadron Collider, for example, will be able to tell whether the Higgs behaves consistently with the Standard Model down to about the 1% level. In a wide series of extensions to the Standard Model, ~0.1% deviations are expected, and the right future lepton collider will get you the best physics constraints possible.

    The observed Higgs decay channels vs. the Standard Model agreement, with the latest data from ATLAS and CMS included. The agreement is astounding, and yet frustrating at the same time. By the 2030s, the LHC will have approximately 50 times as much data, but the precisions on many decay channels will still only be known to a few percent. A future collider could increase that precision by multiple orders of magnitude, revealing the existence of potential new particles.(ANDRÉ DAVID, VIA TWITTER)

    These precision studies could be incredibly sensitive to the presence of particles or interactions we haven’t yet discovered. When we create a particle, it has a certain set of branching ratios, or probabilities that it will decay in a variety of ways. The Standard Model makes explicit predictions for those ratios, so if we create a million, or a billion, or a trillion such particles, we can probe those branching ratios to unprecedented precisions.
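    The statistics behind that claim can be sketched simply: for a decay mode with true branching ratio p, measuring it from N produced particles carries a binomial uncertainty of sqrt(p·(1−p)/N), so precision improves as 1/sqrt(N). The p = 2% branching ratio below is an illustrative assumption:

```python
import math

# Statistical precision on a branching-ratio measurement versus the number
# of produced particles: each factor of 1000 in N buys a factor of ~32.
def branching_precision(p, n_particles):
    """Binomial (statistical-only) uncertainty on a branching ratio p."""
    return math.sqrt(p * (1.0 - p) / n_particles)

for n in (1e6, 1e9, 1e12):
    print(f"N = {n:.0e}: uncertainty ~ {branching_precision(0.02, n):.1e}")
```

    A million particles pins a 2% branching ratio to about ±0.014%; a trillion pins it a thousand times tighter, which is the level at which ~0.1% deviations from the Standard Model become visible.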

    If you want better physics constraints, you need more data and better data. It isn’t just the technical considerations that should determine which collider comes next, but also where and how you can get the best personnel, the best infrastructure and support, and where you can build (or take advantage of an already-existing) strong experimental and theoretical physics community.

    The idea of a linear lepton collider has been bandied about in the particle physics community as the ideal machine to explore post-LHC physics for many decades, but that was under the assumption that the LHC would find a new particle other than the Higgs. If we want to do precision testing of Standard Model particles to indirectly search for new physics, a linear collider may be an inferior option to a circular lepton collider. (REY HORI/KEK)

    There are two general classes of proposals for a lepton collider: a circular collider and a linear collider. Linear colliders are simple: accelerate your particles in a straight line and collide them together in the center. With ideal accelerator technology, a linear collider 11 km long could reach energies of 380 GeV: enough to produce the W, Z, Higgs, or top in great abundance. With a 29 km linear collider, you could reach energies of 1.5 TeV, and with a 50 km collider, 3 TeV, although costs rise tremendously to accompany longer lengths.
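    A naive consistency check on those length/energy pairs: divide the energy by the length to get the average accelerating gradient the machine would need if its entire footprint were active accelerator. Real designs reserve substantial length for focusing, damping, and beam delivery, so actual cavity gradients must be higher still; this sketch only shows why the longer, higher-energy options demand more aggressive technology:

```python
# Average gradient implied if the whole quoted length accelerated the beam.
# A deliberately naive figure of merit, not an actual design parameter.
def naive_gradient_mv_per_m(length_km, energy_gev):
    return energy_gev * 1e3 / (length_km * 1e3)  # MV per meter

for length_km, energy_gev in [(11, 380), (29, 1500), (50, 3000)]:
    gradient = naive_gradient_mv_per_m(length_km, energy_gev)
    print(f"{length_km} km -> {energy_gev} GeV: ~{gradient:.0f} MV/m average")
```

    The required average gradient climbs from ~35 MV/m to ~60 MV/m across the three options, one reason costs rise faster than length alone would suggest.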

    Linear colliders are slightly less expensive than circular colliders for the same energy, because you can dig a smaller tunnel to reach the same energies, and they don’t suffer energy losses due to synchrotron radiation, enabling them to reach potentially higher energies. However, the circular colliders offer an enormous advantage: they can produce much greater numbers of particles and collisions.

    Future Circular Collider (FCC): a larger LHC

    The Future Circular Collider is a proposal to build, for the 2030s, a successor to the LHC with a circumference of up to 100 km: nearly four times the size of the present underground tunnels. This will enable, with current magnet technology, the creation of a lepton collider that can produce ~10⁴ times the number of W, Z, H, and t particles that have been produced by prior and current colliders. (CERN / FCC STUDY)

    While a linear collider might be able to produce 10 to 100 times as many collisions as a prior-generation lepton collider like LEP (dependent on energies), a circular version can surpass that easily: producing 10,000 times as many collisions at the energies required to create the Z boson.

    Although circular colliders also have substantially higher event rates than linear colliders at the energies relevant for producing Higgs particles, they begin to lose their advantage at the energies required to produce top quarks, and cannot reach beyond that at all; in that regime, linear colliders become dominant.

    Because all of the decay and production processes that occur in these heavy particles scale as either the number of collisions or the square root of the number of collisions, a circular collider has the potential to probe physics with many times the sensitivity of a linear collider.
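    A quick sketch of that statistical scaling: for a counting measurement, the fractional uncertainty on a rate falls as the square root of the number of collisions, so the factor-of-10,000 collision advantage quoted above translates into roughly a hundredfold gain in precision. (The pure square-root model is an idealization of mine; real analyses also contend with systematic uncertainties.)

```python
import math

# Statistical sensitivity of a counting experiment improves as sqrt(N):
# collecting `collision_ratio` times more collisions shrinks the fractional
# uncertainty on a measured rate by that square-root factor.
def sensitivity_gain(collision_ratio):
    """Relative gain in statistical precision from more collisions."""
    return math.sqrt(collision_ratio)

# The text's numbers: a linear collider with up to 100x LEP's collisions,
# versus a circular collider with 10,000x.
print(sensitivity_gain(100))    # linear option: 10x better precision
print(sensitivity_gain(10_000)) # circular option: 100x better precision
```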

    A comparison of various lepton colliders, with their luminosity (a measure of the collision rate and the number of detections one can make) as a function of center-of-mass collision energy. Note that the red line, a circular collider option, offers many more collisions than the linear version, but its advantage shrinks as energy increases. Beyond about 380 GeV, circular colliders cannot reach, and a linear collider like CLIC is the far superior option. (GRANADA STRATEGY MEETING SUMMARY SLIDES / LUCIE LINSSEN (PRIVATE COMMUNICATION))

    The proposed FCC-ee, or the lepton stage of the Future Circular Collider, would realistically discover indirect evidence for any new particles that coupled to the W, Z, Higgs, or top quark with masses up to 70 TeV: five times the maximum energy of the Large Hadron Collider.

    The flipside to a lepton collider is a proton collider, which — at these high energies — is essentially a gluon-gluon collider. This cannot be linear; it must be circular.

    The scale of the proposed Future Circular Collider (FCC), compared with the LHC presently at CERN and the Tevatron, formerly operational at Fermilab. The Future Circular Collider is perhaps the most ambitious proposal for a next-generation collider to date, including both lepton and proton options as various phases of its proposed scientific programme. (PCHARITO / WIKIMEDIA COMMONS)

    There is really only one suitable site for this: CERN, since it not only needs a new, enormous tunnel, but all the infrastructure of the prior stages, which exists only at CERN. (The machine could be built elsewhere, but the cost would be far higher than at a site where infrastructure like the LHC and earlier accelerators such as the SPS already exists.)

    The Super Proton Synchrotron (SPS), CERN’s second-largest accelerator.

    Just as the LHC is presently occupying the tunnel previously occupied by LEP, a circular lepton collider could be superseded by a next-generation circular proton collider, such as the proposed FCC-pp. However, you cannot run both an exploratory proton collider and a precision lepton collider simultaneously; you must decommission one to finish the other.

    The CMS detector at CERN, one of the two most powerful particle detectors ever assembled. Every 25 nanoseconds, on average, a new particle bunch collides at the center-point of this detector. A next-generation detector, whether for a lepton or proton collider, may be able to record even more data, faster, and with higher-precision than the CMS or ATLAS detectors can at present. (CERN)

    It’s very important to make the right decision, as we do not know what secrets nature holds beyond the already-explored frontiers. Going to higher energies unlocks the potential for new direct discoveries, while going to higher precisions and greater statistics could provide even stronger indirect evidence for the existence of new physics.

    The first-stage linear colliders are going to cost between 5 and 7 billion dollars, including the tunnel, while a proton collider of four times the LHC’s radius, with magnets twice as strong, 10 times the collision rate and next-generation computing and cryogenics might cost a total of up to $22 billion, offering as big a leap over the LHC as the LHC was over the Tevatron. Some money could be saved if we build the circular lepton and proton colliders one after the other in the same tunnel, which would essentially provide a future for experimental particle physics after the LHC is done running at the end of the 2030s.

    The Standard Model particles and their supersymmetric counterparts. Slightly under 50% of these particles have been discovered, and just over 50% have never shown a trace that they exist. Supersymmetry is an idea that hopes to improve on the Standard Model, but it has yet to make successful predictions about the Universe in attempting to supplant the prevailing theory. However, new colliders are not being proposed to find supersymmetry or dark matter, but to perform generic searches. Regardless of what they’ll find, we’ll learn something new about the Universe itself. (CLAIRE DAVID / CERN)

    The most important thing to remember in all of this is that we aren’t simply continuing to look for supersymmetry, dark matter, or any particular extension of the Standard Model. We have a slew of problems and puzzles that indicate that there must be new physics beyond what we currently understand, and our scientific curiosity compels us to look. In choosing what machine to build, it’s vital to choose the most performant machines: the ones with the highest numbers of collisions at the energies we’re interested in probing.

    Regardless of which specific projects the community chooses, there will be trade-offs. A linear lepton collider can always reach higher energies than a circular one, while a circular one can always create more collisions and go to higher precisions. A circular collider can gather just as much data in a tenth the time, and can probe for more subtle effects, at the cost of a lower energy reach.

    Will it be successful? Regardless of what we find, that answer is unequivocally yes. In experimental physics, success does not equate to finding something, as some might erroneously believe. Instead, success means knowing something, post-experiment, that you did not know before you did the experiment. To push beyond the presently known frontiers, we’d ideally want both a lepton and a proton collider, at the highest energies and collision rates we can achieve.

    There is no doubt that new technologies and spinoffs will come from whichever collider or colliders come next, but that’s not why we do it. We are after the deepest secrets of nature, the ones that will remain elusive even after the Large Hadron Collider finishes. We have the technical capabilities, the personnel, and the expertise to build it right at our fingertips. All we need is the political and financial will, as a civilization, to seek the ultimate truths about nature.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 12:55 pm on June 6, 2019 Permalink | Reply
    Tags: Dark Matter search is so far fruitless, Ethan Siegel, This Is Why It’s Meaningless

    From Ethan Siegel: “This Is Why It’s Meaningless That Dark Matter Experiments Haven’t Found Anything” 

    From Ethan Siegel
    Jun 6, 2019

    The XENON1T detector, with its low-background cryostat, is installed in the centre of a large water shield to protect the instrument against cosmic ray backgrounds. This setup enables the scientists working on the XENON1T experiment to greatly reduce their background noise, and more confidently discover the signals from processes they’re attempting to study. (XENON1T COLLABORATION)

    Let’s say you have an idea about how our physical reality might be different from how we currently conceptualize it. Perhaps you think there are additional particles or interactions present, and that this might hold the solution to some of the greatest puzzles facing the natural sciences today. So what do you do? You formulate a hypothesis, you develop it, and then you try and tease out what the observable, measurable consequences would be.

    Some of these consequences will be model-independent, meaning that there will be signatures that appear regardless of whether one specific model is right or not. Others will be extremely model-dependent, creating experimental or observational signatures that show up in some models but not others. Whenever a dark matter experiment comes up empty, it only tests the model-dependent assumptions, not the model-independent ones. Here’s why that doesn’t mean anything for the existence of dark matter.

    When you collide any two particles together, you probe the internal structure of the particles colliding. If one of them isn’t fundamental, but is rather a composite particle, these experiments can reveal its internal structure. Here, an experiment is designed to measure the dark matter/nucleon scattering signal. However, there are many mundane, background contributions that could give a similar result. This particular signal will show up in germanium, liquid xenon, and liquid argon detectors. (DARK MATTER OVERVIEW: COLLIDER, DIRECT AND INDIRECT DETECTION SEARCHES — QUEIROZ, FARINALDO S. ARXIV:1605.08788)

    You can’t get mad at a team for trying the improbable, hoping that nature cooperates. Some of the most famous discoveries of all time have come about thanks to nothing more than mere serendipity, and so if we can test something at low-cost with an insanely high reward, we tend to go for it. Believe it or not, that’s the mindset that’s driving the direct searches for dark matter.

    In order to understand how we might find dark matter, however, you have to first understand the full suite of what else we know. That’s the model-independent evidence we have to guide us towards the possibilities of direct detection. Of course, we haven’t yet directly found dark matter in the form of an interaction with another particle, but that’s okay. The indirect evidence all shows that it must be real.

    The particles and antiparticles of the Standard Model have now all been directly detected, with the last holdout, the Higgs Boson, falling at the LHC earlier this decade. All of these particles can be created at LHC energies, and the masses of the particles lead to fundamental constants that are absolutely necessary to describe them fully. These particles can be well-described by the physics of the quantum field theories underlying the Standard Model, but they do not describe everything, like dark matter. (E. SIEGEL / BEYOND THE GALAXY)

    It all starts with the germ of an idea. We can start with the undisputed basics: the Universe consists of all the protons, neutrons and electrons that make up our bodies, our planet and all the matter we’re familiar with, as well as some photons (light, radiation, etc.) thrown in there for good measure.

    Protons and neutrons can be broken up into even more fundamental particles — the quarks and gluons — and along with the other Standard Model particles, make up all the known matter in the Universe. The big idea of dark matter is that there’s something other than these known particles contributing in a significant way to the total amounts of matter in the Universe. It’s a revolutionary assumption, and one that might seem like an extraordinary leap.

    The very notion of it might compel you to ask, “why would we think such a thing?”

    The motivation comes by looking at the Universe itself. Science has taught us a lot about what’s out there in the distant Universe, and much of it is completely undisputed. We know how stars work, for example, and we have an incredible understanding of how gravity works. If we look at galaxies, clusters of galaxies and go all the way up to the largest-scale structures in the Universe, there are two things we can extrapolate very well.

    1. How much mass there is in these structures at every level. We look at the motions of these objects, we look at the gravitational rules that govern orbiting bodies, whether something is bound or not, how it rotates, how structure forms, etc., and we get a number for how much matter there has to be in there.
    2. How much mass is present in the stars contained within these structures. We know how stars work, so as long as we can measure the starlight coming from these objects, we can know how much mass is there in stars.

    The two bright, large galaxies at the center of the Coma Cluster, NGC 4889 (left) and the slightly smaller NGC 4874 (right), each exceed a million light years in size. But the galaxies on the outskirts, zipping around so rapidly, point to the existence of a large halo of dark matter throughout the entire cluster. The mass of the normal matter alone is insufficient to explain this bound structure. (ADAM BLOCK/MOUNT LEMMON SKYCENTER/UNIVERSITY OF ARIZONA)

    U Arizona Mt Lemon Sky Center, in the Santa Catalina Mountains approximately 28 kilometers (17 mi) northeast of Tucson, Arizona (USA), 9,171 ft (2,795 m)

    These two numbers don’t match, and the mismatch between the values we obtain for them is spectacular in magnitude: they differ by a factor of approximately 50. There must be something more than just stars responsible for the vast majority of mass in the Universe. This is true for the stars within individual galaxies of all sizes, all the way up to the largest clusters of galaxies in the Universe, and beyond that, the entire cosmic web.
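    You can reproduce a factor of roughly 50 with a back-of-the-envelope calculation. The sketch below uses the virial theorem for the dynamical mass and a stellar mass-to-light ratio for the starlight; the Coma-like input numbers (velocity dispersion, radius, luminosity, mass-to-light ratio) are illustrative assumptions of mine, not measurements quoted here.

```python
# Order-of-magnitude version of the mass mismatch, using Coma-like numbers.
# All input values below are illustrative assumptions, not measured data.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # solar mass, kg
MPC = 3.086e22          # one megaparsec, m

# Dynamical mass from the virial theorem: M ~ sigma^2 * R / G
sigma = 1.0e6           # cluster velocity dispersion, ~1000 km/s, in m/s
radius = 3 * MPC        # cluster radius, ~3 Mpc
m_dynamical = sigma**2 * radius / G

# Stellar mass from starlight: total luminosity times a mass-to-light ratio
luminosity = 5e12       # total cluster starlight, in solar luminosities
mass_to_light = 3       # solar masses per solar luminosity, for old stars
m_stars = luminosity * mass_to_light * M_SUN

print(f"dynamical mass: {m_dynamical / M_SUN:.1e} solar masses")
print(f"stellar mass:   {m_stars / M_SUN:.1e} solar masses")
print(f"mismatch factor: ~{m_dynamical / m_stars:.0f}")
```

    Tweaking the assumed inputs within reasonable bounds moves the mismatch factor around, but it stays stubbornly in the tens: no plausible stellar population closes the gap.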

    That’s a big hint that there’s something more than just stars going on, but you might not be convinced that this requires a new type of matter. If that’s all we had to work with, scientists wouldn’t be convinced either! Fortunately, there’s an enormous suite of observations that — when we take it all together — compels us to consider the dark matter hypothesis as extraordinarily difficult to avoid.

    The predicted abundances of helium-4, deuterium, helium-3 and lithium-7 as predicted by Big Bang Nucleosynthesis, with observations shown in the red circles. The Universe is 75–76% hydrogen, 24–25% helium, a little bit of deuterium and helium-3, and a trace amount of lithium by mass. After tritium and beryllium decay away, this is what we’re left with, and this remains unchanged until stars form. Only about 1/6th of the Universe’s matter can be in the form of this normal (baryonic, or atom-like) matter. (NASA, WMAP SCIENCE TEAM AND GARY STEIGMAN)

    When we extrapolate the laws of physics all the way back to the earliest times in the Universe, we find not only a time so early that the Universe was hot enough to prevent neutral atoms from forming, but an even earlier time when nuclei themselves couldn’t form! When nuclei finally can form without immediately being blasted apart, that phase is where the lightest nuclei of all, including different isotopes of hydrogen and helium, originate.

    The formation of the first elements in the Universe after the Big Bang — due to Big Bang Nucleosynthesis — tells us with very, very small errors how much total “normal matter” there is in the Universe. Although there is significantly more than what’s around in stars, it’s only about one-sixth of the total amount of matter we know is there from the gravitational effects. Not only stars, but normal matter in general, isn’t enough.

    The fluctuations in the Cosmic Microwave Background were first measured accurately by COBE in the 1990s, then more accurately by WMAP in the 2000s and Planck (above) in the 2010s. This image encodes a huge amount of information about the early Universe, including its composition, age, and history. The fluctuations are only tens to hundreds of microkelvin in magnitude, but definitively point to the existence of both normal and dark matter in a 1:5 ratio. (ESA AND THE PLANCK COLLABORATION)


    NASA Cosmic Background Explorer (COBE), 1989 to 1993.

    Cosmic Microwave Background, per NASA/WMAP.

    NASA/WMAP, 2001 to 2010.

    CMB per ESA/Planck.

    ESA/Planck, 2009 to 2013.

    Additional evidence for dark matter comes to us from another early signal in the Universe: when neutral atoms form and the Big Bang’s leftover glow can travel, at last, unimpeded through the Universe. It’s very close to a uniform background of radiation that’s just a few degrees above absolute zero. But when we look at the temperatures on ~microkelvin scales, and on small angular (< 1°) scales, we see it’s not uniform at all.

    The fluctuations in the cosmic microwave background are particularly interesting. They tell us what fraction of the Universe is in the form of normal (protons+neutrons+electrons) matter, what fraction is in radiation, and what fraction is in non-normal, or dark matter, among other things. Again, they give us that same ratio: that dark matter is about five-sixths of all the matter in the Universe.

    The observed magnitude of baryon acoustic oscillations on large scales indicates that the Universe is made mostly of dark matter, with only a small percentage of normal matter causing these ‘wiggles’ in the graph above. (MICHAEL KUHLEN, MARK VOGELSBERGER, AND RAUL ANGULO)

    And finally, there’s the incontrovertible evidence found in the great cosmic web. When we look at the Universe on the largest scales, we know that gravitation is responsible, in the context of the Big Bang, for causing matter to clump and cluster together. Based on the initial fluctuations that begin as overdense and underdense regions, gravitation (and the interplay of the different types of matter with one another and radiation) determine what we’ll see throughout our cosmic history.

    This is particularly important, because we can not only see the ratio of normal-to-dark matter in the magnitude of the wiggles in the graph above, but we can tell that the dark matter is cold, or moving below a certain speed even when the Universe is very young. These pieces of knowledge lead to outstanding, precise theoretical predictions.

    According to models and simulations, all galaxies should be embedded in dark matter halos, whose densities peak at the galactic centers. On long enough timescales, of perhaps a billion years, a single dark matter particle from the outskirts of the halo will complete one orbit. The effects of gas, feedback, star formation, supernovae, and radiation all complicate this environment, making it extremely difficult to extract universal dark matter predictions. (NASA, ESA, AND T. BROWN AND J. TUMLINSON (STSCI))

    All together, they tell us that around every galaxy and cluster of galaxies, there should be an extremely large, diffuse halo of dark matter. This dark matter should have practically no collisional interactions with normal matter; upper limits indicate that it would take light-years of solid lead for a dark matter particle to have a 50/50 shot of interacting just once.
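    To see where a claim like “light-years of solid lead” comes from, you can compute a mean free path, λ = 1/(nσ), from the number density of lead nuclei and an assumed scattering cross-section. The cross-section value below is my own illustrative choice, in the neighborhood of modern direct-detection limits, not a figure from the text.

```python
import math

# Mean free path of a dark matter particle in solid lead: lambda = 1/(n*sigma).
# The assumed cross-section is illustrative, near current experimental limits.
RHO_LEAD = 1.134e4        # density of lead, kg/m^3
A_LEAD = 0.207            # molar mass of lead, kg/mol
N_A = 6.022e23            # Avogadro's number, per mol
LIGHT_YEAR = 9.461e15     # meters per light-year

n = RHO_LEAD / A_LEAD * N_A            # lead nuclei per m^3
sigma = 1e-50                          # assumed cross-section, m^2 (1e-46 cm^2)
mean_free_path = 1 / (n * sigma)       # meters

# For exponential attenuation, a 50/50 chance of one scatter needs ln(2)
# mean free paths of material.
half_chance_depth = mean_free_path * math.log(2)
print(f"lead depth for a 50% interaction chance: "
      f"{half_chance_depth / LIGHT_YEAR:.0f} light-years")
```

    With a cross-section at that level, the answer comes out in the hundreds of thousands of light-years of lead, which is why any individual dark matter particle sails through the Earth, and through every detector we build, essentially untouched.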

    However, there should be plenty of dark matter particles passing undetected through Earth, me and you every second. In addition, dark matter should also not collide or interact with itself, the way normal matter does. That makes direct detection difficult, to say the least. But thankfully, there are some indirect ways of detecting dark matter’s presence. The first is to study what’s called gravitational lensing.

    When there are bright, massive galaxies in the background of a cluster, their light will get stretched, magnified and distorted due to the general relativistic effects known as gravitational lensing. (NASA, ESA, AND JOHAN RICHARD (CALTECH, USA) ACKNOWLEDGEMENT: DAVIDE DE MARTIN & JAMES LONG (ESA / HUBBLE)NASA, ESA, AND J. LOTZ AND THE HFF TEAM, STSCI)

    By looking at how the background light gets distorted by the presence of intervening mass (solely from the laws of General Relativity), we can reconstruct how much mass is in that object. Again, it tells us that there must be about six times as much matter as is present in all types of normal (Standard Model-based) matter alone.

    There’s got to be dark matter in there, in quantities that are consistent with all the other observations. But occasionally, the Universe is kind, and gives us two clusters or groups of galaxies that collide with one another. When we examine these colliding clusters of galaxies, we learn something even more profound.

    Four colliding galaxy clusters, showing the separation between X-rays (pink) and gravitation (blue), indicative of dark matter. On large scales, cold dark matter is necessary, and no alternative or substitute will do. (X-RAY: NASA/CXC/UVIC./A.MAHDAVI ET AL. OPTICAL/LENSING: CFHT/UVIC./A. MAHDAVI ET AL. (TOP LEFT); X-RAY: NASA/CXC/UCDAVIS/W.DAWSON ET AL.; OPTICAL: NASA/ STSCI/UCDAVIS/ W.DAWSON ET AL. (TOP RIGHT); ESA/XMM-NEWTON/F. GASTALDELLO (INAF/ IASF, MILANO, ITALY)/CFHTLS (BOTTOM LEFT); X-RAY: NASA, ESA, CXC, M. BRADAC (UNIVERSITY OF CALIFORNIA, SANTA BARBARA), AND S. ALLEN (STANFORD UNIVERSITY) (BOTTOM RIGHT))

    The dark matter halos really do pass right through one another, and account for the vast majority of the mass; the normal matter in the form of gas creates shocks (in X-ray/pink, above), and only accounts for some 15% of the total mass in there. In other words, about five-sixths of that mass is dark matter! By looking at colliding galaxy clusters and monitoring how both the observable matter and the total gravitational mass behave, we can come up with an astrophysical, empirical proof for the existence of dark matter. There is no modification to the law of gravity that can explain why:

    two clusters, pre-collision, will have their mass and gas aligned,
    but post-collision, will have their mass and gas separated.

    Despite all of this model-independent evidence, we’d still like to detect dark matter directly. It’s that step — and only that step — that we’ve failed to achieve.

    The spin-independent WIMP/nucleon cross-section now gets its most stringent limits from the XENON1T experiment, which has improved over all prior experiments, including LUX. While many may be disappointed that XENON1T didn’t robustly find dark matter, we mustn’t forget about the other physical processes that XENON1T is sensitive to. (E. APRILE ET AL., PHYS. REV. LETT. 121, 111302 (2018)).

    Unfortunately, we don’t know what’s beyond the Standard Model. We’ve never discovered a single particle that isn’t part of the Standard Model, and yet we know there must be more than what we’ve presently discovered out there. As far as dark matter goes, we don’t know what the properties of dark matter’s particle (or particles) should be, what it should look like, or how to find it. We don’t even know if it’s all one thing, or if it’s made up of a variety of different particles.

    All we can do is look for interactions down to a certain cross-section, but no lower. We can look for energy recoils down to a certain minimum energy, but no lower. We can look for photon or neutrino conversions, but all these mechanisms have limitations. At some point, background effects — natural radioactivity, cosmic neutrons, solar/cosmic neutrinos, etc. — make it impossible to extract a signal below a certain threshold.

    The cryogenic setup of one of the experiments looking to exploit the hypothetical interactions between dark matter and electromagnetism, focused on a low-mass candidate: the axion. Yet if dark matter doesn’t have the specific properties that current experiments are testing for, none of the ones we’ve even imagined will ever see it directly. (AXION DARK MATTER EXPERIMENT (ADMX) / LLNL’S FLICKR)

    Inside the ADMX experiment hall at the University of Washington. Credit: Mark Stone, U. of Washington.

    To date, the direct detection efforts having to do with dark matter have come up empty. There are no interaction signals we’ve observed that require dark matter to explain them, or that aren’t consistent with Standard Model-only particles in our Universe. Direct detection efforts can disfavor or constrain specific dark matter particles or scenarios, but they do not affect the enormous suite of indirect, astrophysical evidence that leaves dark matter as the only viable explanation.

    Many people are working tirelessly on alternatives, but unless they’re misrepresenting the facts about dark matter (and some do exactly that), they have an enormous suite of evidence they’re required to explain. When it comes to looking for the great cosmic unknowns, we might get lucky, and that’s why we try. But absence of evidence is not evidence of absence. When it comes to dark matter, don’t let yourself be fooled.

    See the full article here.



  • richardmitnick 8:42 am on June 2, 2019 Permalink | Reply
    Tags: 'What Is The Fine Structure Constant And Why Does It Matter?', Ethan Siegel

    From Ethan Siegel: “Ask Ethan: ‘What Is The Fine Structure Constant And Why Does It Matter?'” 

    From Ethan Siegel
    Jun 1, 2019

    Forget the speed of light or the electron’s charge. This is the physical constant that really matters.

    Each s orbital (red), each of the p orbitals (yellow), the d orbitals (blue) and the f orbitals (green) can contain only two electrons apiece: one spin up and one spin down in each one. The effects of spin, of moving close to the speed of light, and of the inherently fluctuating nature of the quantum fields that permeate the Universe are all responsible for the fine structure that matter exhibits. (LIBRETEXTS LIBRARY / NSF / UC DAVIS)

    Why is our Universe the way it is, and not some other way? There are only three things that make it so: the laws of nature themselves, the fundamental constants governing reality, and the initial conditions our Universe was born with. If the fundamental constants had substantially different values, it would be impossible to form even simple structures like atoms, molecules, planets, or stars. Yet, in our Universe, the constants have the explicit values they do, and that specific combination yields the life-friendly cosmos we inhabit. One of those fundamental constants is known as the fine structure constant, and Sandra Rothfork wants to know what that’s all about, asking:

    “Can you please explain the fine structure constant as simply as possible?”

    Let’s start at the beginning: with the simple building blocks of matter that make up the Universe.

    The proton’s structure, modeled along with its attendant fields, shows how even though it’s made out of point-like quarks and gluons, it has a finite, substantial size which arises from the interplay of the quantum forces and fields inside it. The proton, itself, is a composite, not fundamental, quantum particle. The quarks and gluons inside it, though, along with the electrons that orbit atomic nuclei, are believed to be truly fundamental and indivisible. (BROOKHAVEN NATIONAL LABORATORY)

    Our Universe, if we break it down into its smallest constituent parts, is made up of the particles of the Standard Model.

    Standard Model of Particle Physics

    Quarks and gluons, two types of these particles, bind together to form bound states like the proton and neutron, which themselves bind together into atomic nuclei. Electrons, another type of fundamental particle, are the lightest of the charged leptons. When electrons and atomic nuclei bind together, they form atoms: the building blocks of the normal matter that makes up everything in our day-to-day experience.

    Before humans even recognized how atoms were structured, we had determined many of their properties. In the 19th century, we discovered that the electric charge of the nucleus determined an atom’s chemical properties, and found out that every atom had its own unique spectrum of lines that it could emit and absorb. Experimentally, the evidence for a discrete, quantum Universe was known long before theorists put it all together.

    The visible light spectrum of the Sun, which helps us understand not only its temperature and ionization, but the abundances of the elements present. The long, thick lines are hydrogen and helium, but every other line is from a heavy element. Many of the absorption lines shown here are very close to one another, showing evidence of fine structure, which can split two degenerate energy levels into closely-spaced but distinct ones. (NIGEL SHARP, NOAO / NATIONAL SOLAR OBSERVATORY AT KITT PEAK / AURA / NSF)

    National Solar Observatory at Kitt Peak in Arizona, elevation 6,886 ft (2,099 m)

    In 1913, Niels Bohr proposed his now-famous model of the atom, where the electrons orbited around the atomic nucleus like planets orbit the Sun. The big difference between Bohr’s model and our Solar System, though, was that only certain particular states were allowed for the atom, whereas planets could orbit with any combination of speed and radius that led to a stable orbit.

    Bohr recognized that the electron and nucleus were both very small, had opposite charges, and knew that the nucleus had practically all of the mass. His groundbreaking contribution was understanding that electrons can only occupy certain energy levels, which he termed “atomic orbitals.” The electron can orbit the nucleus only with particular properties, leading to the absorption and emission lines characteristic to each individual atom.
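    Bohr’s quantized orbitals translate directly into numbers you can check against a spectrometer. A minimal sketch, using the textbook formula E_n = −13.6 eV/n² (this is the coarse structure only; fine-structure corrections are far smaller):

```python
# Bohr-model energy levels for hydrogen, E_n = -13.6 eV / n^2, and the photon
# emitted when an electron drops between two levels.
RYDBERG_EV = 13.6057      # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84        # Planck's constant times c, in eV*nm

def energy_level(n):
    """Energy of the n-th Bohr orbital in eV (negative means bound)."""
    return -RYDBERG_EV / n**2

def emission_wavelength(n_upper, n_lower):
    """Wavelength (nm) of the photon emitted in the n_upper -> n_lower jump."""
    delta_e = energy_level(n_upper) - energy_level(n_lower)
    return HC_EV_NM / delta_e

# The n=3 -> n=2 transition gives the famous red Balmer line, H-alpha.
print(f"H-alpha: {emission_wavelength(3, 2):.0f} nm")  # ~656 nm
```

    The fact that real hydrogen lines like this one turn out, under close inspection, to be split into closely spaced pairs is exactly the fine structure the rest of this article is about.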

    When free electrons recombine with hydrogen nuclei, the electrons cascade down the energy levels, emitting photons as they go. In order for stable, neutral atoms to form in the early Universe, they have to reach the ground state without producing a potentially ionizing, ultraviolet photon. The Bohr model of the atom provides the coarse (or rough, or gross) structure of the energy levels, but this already was insufficient to describe what had been seen decades prior. (BRIGHTERORANGE & ENOCH LAU/WIKIMEDIA COMMONS)

    This model, as brilliant and clever as it is, immediately failed to reproduce the decades-old experimental results from the 19th century. All the way back in 1887, Michelson and Morley had determined the atomic emission and absorption properties of hydrogen, and they didn’t quite match the predictions of the Bohr atom.

    The same scientists who determined that there was no difference in the speed of light whether it moved with, against, or perpendicular to the motion of the Earth had also measured the spectral lines of hydrogen more precisely than anyone ever before. While the Bohr model came close, Michelson and Morley’s results demonstrated small shifts and extra energy states that departed slightly but significantly from Bohr’s predictions. In particular, there were some energy levels that appeared to split into two, whereas Bohr’s model only predicted one.

    In the Bohr model of the hydrogen atom, only the orbiting angular momentum of the point-like electron contributes to the energy levels. Adding in relativistic effects and spin effects not only causes a shift in these energy levels, but causes degenerate levels to split into multiple states, revealing the fine structure of matter atop the coarse structure predicted by Bohr. (RÉGIS LACHAUME AND PIETER KUIPER / PUBLIC DOMAIN)

    Those additional energy levels, which were very close to one another and also close to Bohr’s predictions, were the first evidence of what we now call the fine structure of atoms. Bohr’s model, which simplistically modeled electrons as charged, spinless particles orbiting the nucleus at speeds much lower than the speed of light, successfully explained the coarse structure of atoms, but not this additional fine structure.

That would require another advance, which came in 1916 when physicist Arnold Sommerfeld had a realization. If you modeled a hydrogen atom as Bohr did, but compared a ground-state electron’s velocity to the speed of light, you’d get a very specific value, which Sommerfeld called α: the fine structure constant. This constant, once folded into Bohr’s equations properly, was able to precisely account for the energy difference between the coarse and fine structure predictions.

A supercooled deuterium source, as shown here, doesn’t simply show discrete levels, but fringes atop the standard constructive/destructive interference pattern. This additional fringe effect is a consequence of the fine structure of matter. (JOHNWALTON / WIKIMEDIA COMMONS)

In terms of the other constants known at the time, α = e²/(4πε_0ħc), where:

    e is the electron’s charge,
    ε_0 is the electromagnetic constant for the permittivity of free space,
ħ is the reduced Planck constant,
    and c is the speed of light.

    Unlike these other constants, which have units associated with them, α is a truly dimensionless constant, which means it is simply a pure number, with no units associated with it at all. While the speed of light might be different if you measure it in meters per second, feet per year, miles per hour, or any other unit, α always has the same value. For this reason, it’s considered to be one of the fundamental constants that describes our Universe.
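Because α is built entirely from measured constants, anyone can check that the units really do cancel. A quick sketch using the CODATA SI values:

```python
import math

# CODATA SI values (units: coulombs, farads/meter, joule-seconds, meters/second)
e = 1.602176634e-19        # elementary charge (exact by definition since 2019)
eps0 = 8.8541878128e-12    # vacuum permittivity
hbar = 1.054571817e-34     # reduced Planck constant
c = 299792458.0            # speed of light (exact by definition)

# alpha = e^2 / (4 pi eps0 hbar c): every unit cancels, leaving a pure number
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)  # ~0.0072974 and ~137.036
```

Run it in any unit system you like (after converting the constants) and the same pure number emerges.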

    The energy levels and electron wavefunctions that correspond to different states within a hydrogen atom, although the configurations are extremely similar for all atoms. The energy levels are quantized in multiples of Planck’s constant, but the sizes of the orbitals and atoms are determined by the ground-state energy and the electron’s mass. Additional effects may be subtle, but shift the energy levels in measurable, quantifiable fashions. (POORLENO OF WIKIMEDIA COMMONS)

    An atom’s energy levels cannot be accounted for properly without including these fine structure effects, a fact which resurfaced a decade after Bohr when the Schrödinger equation came onto the scene. Just as the Bohr model failed to reproduce the hydrogen atom’s energy levels properly, so did the Schrödinger equation. It was quickly discovered that there were three reasons for this.

    The Schrödinger equation is fundamentally non-relativistic, but electrons and other quantum particles can move close to the speed of light, and that effect must be included.
    Electrons don’t simply orbit atoms, but they also have an intrinsic angular momentum inherent to them: spin, with a value of ħ/2, that can either be aligned or anti-aligned with the rest of the atom’s angular momentum.
    Electrons also exhibit an inherent set of quantum fluctuations to their motion, known as zitterbewegung; this also contributes to the fine structure of atoms.

    When you include all of these effects, you can successfully reproduce both the gross and fine structure of matter.

    In the absence of a magnetic field, the energy levels of various states within an atomic orbital are identical (L). If a magnetic field is applied, however (R), the states split according to the Zeeman effect. Here we see the Zeeman splitting of a P-S doublet transition. Other types of splitting occur owing to spin-orbit interactions, relativistic effects, and interactions with the nuclear spin, leading to the fine and hyperfine structure of matter. (EVGENY AT ENGLISH WIKIPEDIA)

The reason these corrections are so small is because the value of the fine structure constant, α, is also very small. According to our best modern measurements, the value of α = 0.007297352569, where only the last digit is uncertain. This is tantalizingly close to the exact fraction 1/137. It was once considered possible that this exact fraction could be accounted for somehow, but better theoretical and experimental research has demonstrated that the relation is inexact, and that α = 1/137.0359991, where again only the last digit is uncertain.

    The 21-centimeter hydrogen line comes about when a hydrogen atom containing a proton/electron combination with aligned spins (top) flips to have anti-aligned spins (bottom), emitting one particular photon of a very characteristic wavelength. The opposite-spin configuration in the n=1 energy level represents the ground state of hydrogen, but its zero-point-energy is a finite, non-zero value. This transition is part of the hyperfine structure of matter, going even beyond the fine structure we more commonly experience. (TILTEC OF WIKIMEDIA COMMONS)

    Even including all of these effects, though, doesn’t get you everything about atoms. Not only is there the coarse structure (from electrons orbiting a nucleus) and fine structure (from relativistic effects, the electron’s spin, and the electron’s quantum fluctuations), but there’s hyperfine structure: the interaction of the electron with the nuclear spin. The spin-flip transition of the hydrogen atom, for example, is the narrowest spectral line known in physics, and it’s due to this hyperfine effect that goes beyond even fine structure.

The light from ultra-distant quasars provides cosmic laboratories for measuring not only the gas clouds it encounters along the way, but also the intergalactic medium that contains warm-and-hot plasmas outside of clusters, galaxies, and filaments. Because the exact properties of the emission or absorption lines are dependent on the fine structure constant, this is one of the top methods for probing the Universe for time or spatial variations in the fine structure constant. (ED JANSSEN, ESO)

ESO VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft). Its four Unit Telescopes are ANTU (UT1; The Sun), KUEYEN (UT2; The Moon), MELIPAL (UT3; The Southern Cross), and YEPUN (UT4; Venus, as evening star). (J.L. DAUVERGNE & G. HÜDEPOHL/ATACAMA PHOTO)

    But the fine structure constant, α, is of tremendous interest to physics. Some have investigated whether it might not be perfectly constant. Various measurements have indicated, at various points in our scientific history, that α might either vary with time or from location to location in the Universe. Measurements of the spectral lines of hydrogen and deuterium, in some cases, have indicated that perhaps α changes by ~0.0001% through space or time.

    These initial results, however, have failed to hold up to independent verification, and are treated as dubious by the greater physics community. If we did ever robustly observe such variation, it would teach us that something that we observe to be unchanging in the Universe — like the electron charge, Planck’s constant, or the speed of light — might actually not be a constant through space or time.

    A Feynman diagram representing electron-electron scattering, which requires summing over all the possible histories of the particle-particle interactions. The idea that a positron is an electron moving backwards in time grew out of the collaboration between Feynman and Wheeler, but the strength of the scattering interaction is energy-dependent and is governed by the fine structure constant describing the electromagnetic interactions. (DMITRI FEDOROV)

    A different type of variation, though, has actually been reproduced: α changes as a function of the energy conditions under which you perform your experiments.

    Let’s think about why this must be so by imagining a different way of looking at the fine structure of the Universe: take two electrons and hold them a specific distance apart from one another. The fine structure constant, α, can be thought of as the ratio between the energy needed to overcome the electrostatic repulsion driving these electrons apart and the energy of a single photon whose wavelength is 2π multiplied by the separation between those electrons.
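That definition can be checked numerically: the separation between the electrons cancels out of the ratio, leaving exactly the same pure number as before. A minimal sketch:

```python
import math

e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J*s
c = 299792458.0            # speed of light, m/s

def alpha_from_ratio(d):
    """Ratio of the Coulomb energy of two electrons a distance d apart
    to the energy of a photon whose wavelength is 2*pi*d."""
    E_coulomb = e**2 / (4 * math.pi * eps0 * d)
    E_photon = hbar * c / d   # E = h*c/lambda with lambda = 2*pi*d
    return E_coulomb / E_photon

# The answer is independent of the separation chosen:
print(alpha_from_ratio(1e-10), alpha_from_ratio(1.0))  # both ~1/137
```

The distance d divides out of both energies, which is why α is a property of the Universe rather than of any particular configuration.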

    In a quantum Universe, though, there are always particle-antiparticle pairs (or quantum fluctuations) that populate even completely empty space. At higher energies, this changes the strength of the electrostatic repulsion between two electrons.

    A visualization of QCD illustrates how particle/antiparticle pairs pop out of the quantum vacuum for very small amounts of time as a consequence of Heisenberg uncertainty. The quantum vacuum is interesting because it demands that empty space itself isn’t so empty, but is filled with all the particles, antiparticles and fields in various states that are demanded by the quantum field theory that describes our Universe. (DEREK B. LEINWEBER)

    The reason why is actually straightforward: the lightest charged particles in the Standard Model are electrons and positrons, and at low energies, the virtual contributions from electron-positron pairs are the only quantum effects that matter in terms of the strength of the electrostatic force. But at higher energies, it not only becomes easier to make electron-positron pairs, giving you a larger contribution, but you start getting additional contributions from heavier particle-antiparticle combinations.

    At the (mundane) low energies we have in our Universe today, α is approximately 1/137. But at the electroweak scale, where you find the heaviest particles like the W, Z, Higgs boson and top quark, α is somewhat greater: more like 1/128. Effectively, owing to these quantum contributions, it’s as though the electron’s charge increases in strength.
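A minimal one-loop sketch of this running recovers the scale of the shift. The quark masses below are rough effective values (the hadronic contribution is not truly perturbative, and the W-boson contribution is neglected), so treat the output as an estimate rather than a precision calculation:

```python
import math

ALPHA_0 = 1 / 137.035999  # fine structure constant at low energies
M_Z = 91.19               # Z-boson mass scale, in GeV

# Charged fermions lighter than M_Z: (mass in GeV, electric charge, colors).
# Quark masses are rough effective values chosen for illustration only.
FERMIONS = [
    (0.000511, -1.0, 1),   # electron
    (0.10566,  -1.0, 1),   # muon
    (1.777,    -1.0, 1),   # tau
    (0.3,   2/3, 3),       # up (effective mass)
    (0.3,  -1/3, 3),       # down (effective mass)
    (0.5,  -1/3, 3),       # strange (effective mass)
    (1.5,   2/3, 3),       # charm
    (4.5,  -1/3, 3),       # bottom
]

def alpha_running(Q):
    """One-loop QED running: virtual fermion pairs screen the charge less
    effectively at short distances, so alpha grows with the energy Q (GeV)."""
    delta = sum(nc * q**2 * math.log(Q**2 / m**2)
                for m, q, nc in FERMIONS if m < Q) * ALPHA_0 / (3 * math.pi)
    return ALPHA_0 / (1 - delta)

print(1 / alpha_running(M_Z))  # roughly 128, versus ~137 at everyday energies
```

Only particles lighter than the probe energy contribute appreciably, which is why heavier particle-antiparticle pairs switch on as you climb in energy.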

    Through a herculean effort on the part of theoretical physicists, the muon magnetic moment has been calculated up to five-loop order. The theoretical uncertainties are now at the level of just one part in two billion. This is a tremendous achievement that can only be made in the context of quantum field theory, and is heavily reliant on the fine structure constant and its applications. (2012 AMERICAN PHYSICAL SOCIETY)

    The fine structure constant, α, also plays a major role in one of the most important experiments going on in modern physics today: the effort to measure the intrinsic magnetic moment of fundamental particles. For a point particle like the electron or muon, there are only a few things that determine its magnetic moment:

1. the electric charge of the particle (which it’s directly proportional to),
2. the spin of the particle (which it’s directly proportional to),
3. the mass of the particle (which it’s inversely proportional to),
4. and a constant, known as g, which is a purely quantum mechanical effect.

    While the first three are exquisitely known, g is only known to a little better than one part per billion. That might sound like a supremely good measurement, but we’re attempting to measure it to an even greater precision for a very good reason.

Back in 1930, we thought that g would be 2, exactly, as derived by Dirac. But that ignores the quantum exchange of particles (or the contribution of loop diagrams), which only begins to show up in quantum field theory. The first-order correction was derived by Julian Schwinger in 1948, who showed that g = 2 + α/π. As of today, we’ve computed all the contributions to 5th order, meaning we know all of the (α/π) terms, plus the (α/π)², (α/π)³, (α/π)⁴, and (α/π)⁵ terms.
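Schwinger’s single one-loop term already does remarkably well. A quick check (the measured electron value quoted in the comment is for comparison only):

```python
import math

ALPHA = 1 / 137.035999  # low-energy fine structure constant

# Dirac (1930): g = 2 exactly.
# Schwinger (1948): the one-loop QED correction adds alpha/pi.
g_schwinger = 2 + ALPHA / math.pi

print(g_schwinger)  # ~2.0023228
# The measured electron g is ~2.0023193, so the single alpha/pi term
# already lands within a few parts in a million.
```

Each successive power of (α/π) is suppressed by another factor of roughly 400, which is why the series converges so quickly in practice.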

We can measure g experimentally and calculate it theoretically, and what we find, very curiously, is that they don’t quite match. The difference between the experimental and theoretical values of g is very, very small: 0.0000000058, with a combined uncertainty of ±0.0000000016: a 3.5-sigma difference. If improved experimental and theoretical results reach the 5-sigma threshold, we just might be on the verge of new, beyond-the-Standard-Model physics.

    The Muon g-2 electromagnet at Fermilab, ready to receive a beam of muon particles. This experiment began in 2017 and will take data for a total of 3 years, reducing the uncertainties significantly. While a total of 5-sigma significance may be reached, the theoretical calculations must account for every effect and interaction of matter that’s possible in order to ensure we’re measuring a robust difference between theory and experiment. (REIDAR HAHN / FERMILAB)

    When we do our best to measure the Universe — to greater precisions, at higher energies, under extraordinary pressures, at lower temperatures, etc. — we often find details that are intricate, rich, and puzzling. It’s not the devil that’s in those details, though, but rather that’s where the deepest secrets of reality lie.

    The particles in our Universe aren’t just points that attract, repel, and bind together with one another; they interact through every subtle means that the laws of nature permit. As we reach greater precisions in our measurements, we start uncovering these subtle effects, including intricacies to the structure of matter that are easy to miss at low precisions. Fine structure is a vital part of that, but learning where even our best predictions of fine structure break down might be where the next great revolution in particle physics comes from. Doing the right experiment is the only way we’ll ever know.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 1:38 pm on May 27, 2019 Permalink | Reply
    Tags: "This Is How Amateur Astronomers Can Image What Professionals Cannot", CIEL AUSTRAL: JEAN CLAUDE CANONNE; PHILIPPE BERNHARD; DIDIER CHAPLAIN; NICOLAS OUTTERS AND LAURENT BOURGON, Ethan Siegel   

    From Ethan Siegel: “This Is How Amateur Astronomers Can Image What Professionals Cannot” 

    From Ethan Siegel
    May 27, 2019

This gorgeous image of the Large Magellanic Cloud is a wide-field view superior to any professional image or mosaic of the same region of sky. With a total of 1060 hours of observational time, the details and extent of the gas it reveals surpass any professional view of the same entire region. (CIEL AUSTRAL: JEAN CLAUDE CANONNE, PHILIPPE BERNHARD, DIDIER CHAPLAIN, NICOLAS OUTTERS AND LAURENT BOURGON)

    A 1060-hour amateur exposure of a nearby galaxy does what even Hubble couldn’t do.

    The Universe is full of astronomical wonders, but it’s up to humanity to observe and analyze them.

    The Large (top right) and Small (lower left) Magellanic Clouds are visible in the southern skies, and helped guide Magellan on his famous voyage some 500 years ago. In reality, the LMC is located some 160–165,000 light-years away, with the SMC slightly farther at 198,000 light-years. (ESO/S. BRUNIER)

The key factors determining what we can reveal are resolution, light-gathering power, and the wavelength filters we choose.

This photo of the Hubble Space Telescope being deployed, on April 25, 1990, was taken by the IMAX Cargo Bay Camera (ICBC) mounted aboard the space shuttle Discovery. It has been operational for 29 years, and has not been serviced since 2009. With a 2.4-meter diameter mirror, it gathers as much light in 1 minute as a 160-mm (6.3″) telescope would require 3 hours and 45 minutes to gather. (NASA/SMITHSONIAN INSTITUTION/LOCKHEED CORPORATION)

    Professionals have larger, more powerful telescopes with superior instruments, but amateurs have the advantage of time.

This view of the Large Magellanic Cloud (LMC) was taken by the Digitized Sky Survey: a professional survey, using a variety of telescopes, that covers the entire sky. The small, high-resolution inset is a view of the stars of a globular cluster that is itself a satellite of the LMC. This professional image has less information and fewer details than the amateur mosaic composed by the Ciel Austral team. (NASA, ESA, A. RIESS (STSCI/JHU), AND PALOMAR DIGITIZED SKY SURVEY)

    Observing an object for four times as long gathers as much light as a telescope twice as large.
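That trade-off is just arithmetic: light gathered scales with mirror area (proportional to diameter squared) times exposure time. The same scaling reproduces the Hubble-versus-160-mm comparison quoted in the caption above:

```python
# Photons collected ~ mirror area x exposure time, and area ~ diameter^2.
d_hubble_mm = 2400   # Hubble's 2.4 m primary mirror
d_amateur_mm = 160   # the Ciel Austral team's 160 mm telescope

area_ratio = (d_hubble_mm / d_amateur_mm) ** 2
print(area_ratio)  # 225.0: minutes of amateur exposure per Hubble minute

# 225 minutes is 3 hours and 45 minutes, matching the quoted figure.
print(int(area_ratio) // 60, int(area_ratio) % 60)  # 3 45
```

Doubling the diameter quadruples the area, so quadrupling the exposure time buys the same photons as a telescope twice as large.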

    The cluster RMC 136 (R136) in the Tarantula Nebula in the Large Magellanic Cloud, is home to the most massive stars known. R136a1, the greatest of them all, is over 250 times the mass of the Sun. While professional telescopes are ideal for teasing out high-resolution details such as these stars in the Tarantula Nebula, wide-field views are better with the types of long-exposure times only available to amateurs.(EUROPEAN SOUTHERN OBSERVATORY/P. CROWTHER/C.J. EVANS)

    This is the Large Magellanic Cloud (LMC): the closest large galaxy to our own.

    The red-green-blue color version of the 1060-hour observation taken by the Ciel Austral team of amateur astronomers. To gather the same amount of light as is contained in this image, the Hubble Space Telescope would require almost 5 hours of observing time, and could never (with its current setup and instrumentation) obtain a wide-field view such as this one. (CIEL AUSTRAL: JEAN CLAUDE CANONNE, PHILIPPE BERNHARD, DIDIER CHAPLAIN, NICOLAS OUTTERS AND LAURENT BOURGON)

It’s the Local Group’s 4th largest galaxy, located just 160,000 light-years away.

    Our Local Group of galaxies is dominated by Andromeda and the Milky Way, but there’s no denying that Andromeda is the biggest, the Milky Way is #2, Triangulum is #3, and the LMC is #4. At just 160,000 light-years away, it’s by far the closest among the top 10+ galaxies to our own. (ANDREW Z. COLVIN)

    It’s huge from our perspective, spanning 5° across: 10 times the full Moon’s diameter.
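Those two numbers, 5° across at 160,000 light-years, pin down the LMC’s physical extent via the small-angle approximation:

```python
import math

# Small-angle approximation: physical size = distance x angle (in radians)
distance_ly = 160_000   # distance to the LMC in light-years
angle_deg = 5.0         # the LMC's angular diameter on the sky

size_ly = distance_ly * math.radians(angle_deg)
print(round(size_ly))  # ~14,000 light-years across
```

For comparison, that makes the LMC roughly an order of magnitude smaller in diameter than the Milky Way's disk.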

    The Large Magellanic Cloud is home to the closest supernova of the last century. The pink regions here are not artificial, but are signals of ionized hydrogen and active star formation, likely triggered by gravitational interactions and tidal forces. Note how much detail is absent from a typical long-exposure amateur image like this, as compared with the level of detail teased out of the Ciel Austral team’s work. (JESÚS PELÁEZ AGUADO)

    Equipped with a 160-mm (6.3″) telescope, a team of amateur astronomers constructed a record 204,000,000 pixel image of the LMC.

This tiny region of the large mosaic constructed by the Ciel Austral team focuses on the central area of the LMC, but encompasses only 0.5% of the entire mosaic. Note how much detail is still visible here, in the red-green-blue color space. (CIEL AUSTRAL: JEAN CLAUDE CANONNE, PHILIPPE BERNHARD, DIDIER CHAPLAIN, NICOLAS OUTTERS AND LAURENT BOURGON)

    With a total of 1060 hours of observation time, 620 GB of data were synthesized in creating this mosaic.

A region overlapping the previous one, this is nearly the same field-of-view (0.5% of the entire mosaic), but with a set of filters that highlights the presence of hydrogen, sulfur, and ionized oxygen. Note that the gas and plasma of the LMC extend far beyond where the visible stars are located. (CIEL AUSTRAL: JEAN CLAUDE CANONNE, PHILIPPE BERNHARD, DIDIER CHAPLAIN, NICOLAS OUTTERS AND LAURENT BOURGON)

    The narrow-wavelength filters allowed the identification of hydrogen, sulfur, and oxygen, plus red/green/blue color.

    A large section of the Tarantula Nebula, the largest star-forming region in the Local Group, imaged by the Ciel Austral team. At top, you can see the presence of hydrogen, sulfur, and oxygen, which reveals the rich gas and plasma structure of the LMC, while the lower view shows an RGB color composite, revealing reflection and emission nebulae amidst the young, newly-formed stars. (CIEL AUSTRAL: JEAN CLAUDE CANONNE, PHILIPPE BERNHARD, DIDIER CHAPLAIN, NICOLAS OUTTERS AND LAURENT BOURGON)

    The mosaic includes the Tarantula Nebula: the largest star-forming region in the entire local group.

    Even far away from the main plane of the galaxy, where the greatest numbers of stars are located, the element filters (top) and the RGB colors (bottom) still reveal gas, dust, reflection and emission features, as well as a variety of elements present. The LMC is one of the most actively star-forming galaxies in the nearby Universe, and regions such as this put that star-formation on display. (CIEL AUSTRAL: JEAN CLAUDE CANONNE, PHILIPPE BERNHARD, DIDIER CHAPLAIN, NICOLAS OUTTERS AND LAURENT BOURGON)

    Ciel Austral now holds the longest-exposure amateur astronomy image record.

    See the full article here .



  • richardmitnick 11:42 am on May 16, 2019 Permalink | Reply
    Tags: , , , , Ethan Siegel,   

    From Ethan Siegel: “We Have Now Reached The Limits Of The Hubble Space Telescope” 

    From Ethan Siegel
    May 16, 2019

The Hubble Space Telescope, as imaged during its final servicing mission. The only way it can point itself is with its internal spinning reaction wheels, which allow it to change its orientation and hold a stable position. But what it can see is determined by its instruments, mirror, and design limitations. It has reached those ultimate limits; to go beyond them, we’ll need a better telescope. (NASA)

    The world’s greatest observatory can go no further with its current instrument set.

    The Hubble Space Telescope has provided humanity with our deepest views of the Universe ever. It has revealed fainter, younger, less-evolved, and more distant stars, galaxies, and galaxy clusters than any other observatory. More than 29 years after its launch, Hubble is still the greatest tool we have for exploring the farthest reaches of the Universe. Wherever astrophysical objects emit starlight, no observatory is better equipped to study them than Hubble.

    But there are limits to what any observatory can see, even Hubble. It’s limited by the size of its mirror, the quality of its instruments, its temperature and wavelength range, and the most universal limiting factor inherent to any astronomical observation: time. Over the past few years, Hubble has released some of the greatest images humanity has ever seen. But it’s unlikely to ever do better; it’s reached its absolute limit. Here’s the story.

    The Hubble Space Telescope (left) is our greatest flagship observatory in astrophysics history, but is much smaller and less powerful than the upcoming James Webb (center). Of the four proposed flagship missions for the 2030s, LUVOIR (right) is by far the most ambitious. By probing the Universe to fainter objects, higher resolution, and across a wider range of wavelengths, we can improve our understanding of the cosmos in unprecedented ways. (MATT MOUNTAIN / AURA)

    NASA/ESA/CSA Webb Telescope annotated

    NASA Large UV Optical Infrared Surveyor (LUVOIR)

    From its location in space, approximately 540 kilometers (336 mi) up, the Hubble Space Telescope has an enormous advantage over ground-based telescopes: it doesn’t have to contend with Earth’s atmosphere. The moving particles making up Earth’s atmosphere provide a turbulent medium that distorts the path of any incoming light, while simultaneously containing molecules that prevent certain wavelengths of light from passing through it entirely.

    While ground-based telescopes at the time could achieve practical resolutions no better than 0.5–1.0 arcseconds, where 1 arcsecond is 1/3600th of a degree, Hubble — once the flaw with its primary mirror was corrected — immediately delivered resolutions down to the theoretical diffraction limit for a telescope of its size: 0.05 arcseconds. Almost instantly, our views of the Universe were sharper than ever before.

This composite image of a region of the distant Universe (upper left) uses optical (upper right) and near-infrared (lower left) data from Hubble, along with far-infrared (lower right) data from Spitzer. The Spitzer Space Telescope’s mirror is more than a third of Hubble’s diameter, but the wavelengths it probes are so much longer that its resolution is far worse. The number of wavelengths that fit across the diameter of the primary mirror is what determines the resolution. (NASA/JPL-CALTECH/ESA)

    Sharpness, or resolution, is one of the most important factors in discovering what’s out there in the distant Universe. But there are three others that are just as essential:

    the amount of light-gathering power you have, needed to view the faintest objects possible,
    the field-of-view of your telescope, enabling you to observe a larger number of objects,
and the wavelength range you’re capable of probing, as the observed light’s wavelength depends on the object’s distance from you.

    Hubble may be great at all of these, but it also possesses fundamental limits for all four.

When you look at a region of the sky with an instrument like the Hubble Space Telescope, you are not simply viewing the light from distant objects as it was when that light was emitted, but also as that light has been affected by all the intervening material and by the expansion of space along its journey. Although Hubble has taken us farther back than any other observatory to date, there are fundamental limits to it, and reasons why it will be incapable of going farther. (NASA, ESA, AND Z. LEVAY, F. SUMMERS (STSCI))

    The resolution of any telescope is determined by the number of wavelengths of light that can fit across its primary mirror. Hubble’s 2.4 meter (7.9 foot) mirror enables it to obtain that diffraction-limited resolution of 0.05 arcseconds. This is so good that only in the past few years have Earth’s most powerful telescopes, often more than four times as large and equipped with state-of-the-art adaptive optics systems, been able to compete.
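That diffraction-limited figure drops straight out of the Rayleigh criterion, θ ≈ 1.22 λ/D. For visible light of roughly 500 nm on Hubble’s 2.4-meter mirror:

```python
import math

wavelength_m = 500e-9   # visible light, ~500 nm
diameter_m = 2.4        # Hubble's primary mirror

# Rayleigh criterion: smallest resolvable angle, in radians
theta_rad = 1.22 * wavelength_m / diameter_m

# Convert to arcseconds (1 degree = 3600 arcseconds)
theta_arcsec = math.degrees(theta_rad) * 3600
print(round(theta_arcsec, 3))  # ~0.052 arcseconds
```

Halving the wavelength or doubling the mirror diameter each halve this angle, which is exactly the pair of options listed below.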

    To improve upon the resolution of Hubble, there are really only two options available:

    1. use shorter wavelengths of light, so that a greater number of wavelengths can fit across a mirror of the same size,
    2. or build a larger telescope, which will also enable a greater number of wavelengths to fit across your mirror.

    Hubble’s optics are designed to view ultraviolet light, visible light, and near-infrared light, with sensitivities ranging from approximately 100 nanometers to 1.8 microns in wavelength. It can do no better with its current instruments, which were installed during the final servicing mission back in 2009.

This image shows Hubble Servicing Mission 4 astronauts practicing on a Hubble model underwater at the Neutral Buoyancy Lab in Houston under the watchful eyes of NASA engineers and safety divers. The final servicing mission to Hubble was successfully completed 10 years ago; Hubble has not had its equipment or instruments upgraded since, and is now running up against its fundamental limitations. (NASA)

    Light-gathering power is simply about collecting more and more light over a greater period of time, and Hubble has been mind-blowing in that regard. Without the atmosphere to contend with or the Earth’s rotation to worry about, Hubble can simply point to an interesting spot in the sky, apply whichever color/wavelength filter is desired, and take an observation. These observations can then be stacked — or added together — to produce a deep, long-exposure image.

    Using this technique, we can see the distant Universe to unprecedented depths and faintnesses. The Hubble Deep Field was the first demonstration of this technique, revealing thousands of galaxies in a region of space where zero were previously known. At present, the eXtreme Deep Field (XDF) is the deepest ultraviolet-visible-infrared composite, revealing some 5,500 galaxies in a region covering just 1/32,000,000th of the full sky.

    The Hubble eXtreme Deep Field (XDF) may have observed a region of sky just 1/32,000,000th of the total, but was able to uncover a whopping 5,500 galaxies within it: an estimated 10% of the total number of galaxies actually contained in this pencil-beam-style slice. The remaining 90% of galaxies are either too faint or too red or too obscured for Hubble to reveal, and observing for longer periods of time won’t improve this issue by very much. Hubble has reached its limits. (HUDF09 AND HXDF12 TEAMS / E. SIEGEL (PROCESSING))

Of course, it took 23 days of total data taking to collect the information contained within the XDF. To reveal objects with half the brightness of the faintest objects seen in the XDF, we’d have to continue observing for a total of 92 days: four times as long. There’s a severe trade-off if we were to do this, as it would tie up the telescope for months and would only teach us marginally more about the distant Universe.
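The quoted factor of four follows from how signal-to-noise builds up on a faint source: it grows only as the square root of exposure time, so every factor-of-two gain in depth costs a factor of four in time:

```python
# S/N on a faint source grows ~ sqrt(exposure time), so reaching objects
# a factor f fainter requires f^2 times the exposure.
t_xdf_days = 23         # total exposure behind the eXtreme Deep Field
faintness_factor = 2    # "half the brightness"

t_required = t_xdf_days * faintness_factor**2
print(t_required)  # 92 days: four times as long
```

This square-law penalty is why simply observing longer quickly stops being a practical way to go deeper.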

    Instead, an alternative strategy for learning more about the distant Universe is to survey a targeted, wide-field area of the sky. Individual galaxies and larger structures like galaxy clusters can be probed with deep but large-area views, revealing a tremendous level of detail about what’s present at the greatest distances of all. Instead of using our observing time to go deeper, we can still go very deep, but cast a much wider net.

    This, too, comes with a tremendous cost. The deepest, widest view of the Universe ever assembled by Hubble took over 250 days of telescope time, and was stitched together from nearly 7,500 individual exposures. While this new Hubble Legacy Field is great for extragalactic astronomy, it still only reveals 265,000 galaxies over a region of sky smaller than that covered by the full Moon.

    Hubble was designed to go deep, but not to go wide. Its field of view is extremely narrow, which makes a larger, more comprehensive survey of the distant Universe all but prohibitive. It’s truly remarkable how far Hubble has taken us in terms of resolution, survey depth, and field-of-view, but Hubble has truly reached its limit on those fronts.

    In the big image at left, the many galaxies of a massive cluster called MACS J1149+2223 dominate the scene. Gravitational lensing by the giant cluster brightened the light from the newfound galaxy, known as MACS 1149-JD, some 15 times. At upper right, a partial zoom-in shows MACS 1149-JD in more detail, and a deeper zoom appears to the lower right. This is correct and consistent with General Relativity, and independent of how we visualize (or whether we visualize) space. (NASA/ESA/STSCI/JHU)

Finally, there are the wavelength limits as well. Stars emit a wide variety of light, from the ultraviolet through the optical and into the infrared. It’s no coincidence that this is what Hubble was designed for: to look for light of the same varieties and wavelengths that we know stars emit.

    But this, too, is fundamentally limiting. You see, as light travels through the Universe, the fabric of space itself is expanding. This causes the light, even if it’s emitted with intrinsically short wavelengths, to have its wavelength stretched by the expansion of space. By the time it arrives at our eyes, it’s redshifted by a particular factor that’s determined by the expansion rate of the Universe and the object’s distance from us.

    Hubble’s wavelength range sets a fundamental limit to how far back we can see: to when the Universe is around 400 million years old, but no earlier.

    The most distant galaxy ever discovered in the known Universe, GN-z11, has its light come to us from 13.4 billion years ago: when the Universe was only 3% of its current age: 407 million years old. But there are even more distant galaxies out there, and we all hope that the James Webb Space Telescope will discover them. (NASA, ESA, AND G. BACON (STSCI))

    The most distant galaxy ever discovered by Hubble, GN-z11, is right at this limit. Discovered in one of the deep-field images, it has everything imaginable going for it.

    It was observed across all the different wavelength ranges Hubble is capable of, with only its ultraviolet-emitted light showing up in the longest-wavelength infrared filters Hubble can measure.
    It was gravitationally lensed by a nearby galaxy, magnifying its brightness to raise it above Hubble’s naturally-limiting faintness threshold.
    It happens to be located along a line-of-sight that experienced a high (and statistically-unlikely) level of star-formation at early times, providing a clear path for the emitted light to travel along without being blocked.

    No other galaxy has been discovered and confirmed at even close to the same distance as this object.

    Only because this distant galaxy, GN-z11, is located in a region where the intergalactic medium is mostly reionized, can Hubble reveal it to us at the present time. To see further, we require a better observatory, optimized for this kind of detection, than Hubble. (NASA, ESA, AND A. FEILD (STSCI))

    Hubble may have reached its limits, but future observatories will take us far beyond what Hubble’s limits are. The James Webb Space Telescope is not only larger — with a primary mirror diameter of 6.5 meters (as opposed to Hubble’s 2.4 meters) — but operates at far cooler temperatures, enabling it to view longer wavelengths.
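    A rough aside on what that mirror difference buys: light-gathering power scales with collecting area, i.e. with the square of the mirror diameter. Using only the diameters quoted above (and ignoring real-world details like the central obstruction and mirror segmentation):

```python
# Light-gathering power scales with collecting area (~ diameter squared).
hubble_diameter_m = 2.4   # Hubble's primary mirror diameter, in meters
webb_diameter_m = 6.5     # James Webb's primary mirror diameter, in meters

area_ratio = (webb_diameter_m / hubble_diameter_m) ** 2
print(f"Webb collects roughly {area_ratio:.1f}x as much light as Hubble")
```

    That works out to roughly a factor of 7 in collecting area, before even accounting for Webb’s colder operating temperature and longer-wavelength sensitivity.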

    At these longer wavelengths, up to 30 microns (as opposed to Hubble’s 1.8 microns), James Webb will be able to see through the light-blocking dust that hampers Hubble’s view of most of the Universe. Additionally, it will be able to see objects with much greater redshifts and earlier lookback times: seeing the Universe when it was a mere 200 million years old. While Hubble might reveal some extremely early galaxies, James Webb might reveal them as they’re in the process of forming for the very first time.

    The viewing area of Hubble (top left) as compared to the area that WFIRST will be able to view, at the same depth, in the same amount of time. The wide-field view of WFIRST will allow us to capture a greater number of distant supernovae than ever before, and will enable us to perform deep, wide surveys of galaxies on cosmic scales never probed before. It will bring a revolution in science, regardless of what it finds, and provide the best constraints on how dark energy evolves over cosmic time. (NASA / GODDARD / WFIRST)


    Other observatories will take us to other frontiers in realms where Hubble is only scratching the surface. NASA’s proposed flagship of the 2020s, WFIRST, will be very similar to Hubble, but will have 50 times the field-of-view, making it ideal for large surveys. Telescopes like the LSST will cover nearly the entire sky, with resolutions comparable to what Hubble achieves, albeit with shorter observing times. And future ground-based observatories like GMT or ELT, which will usher in the era of 30-meter-class telescopes, might finally surpass Hubble in terms of practical resolution.

    At the limits of what Hubble is capable of, it’s still extending our views into the distant Universe, and providing the data that enables astronomers to push the frontiers of what is known. But to truly go farther, we need better tools. If we truly value learning the secrets of the Universe, including what it’s made of, how it came to be the way it is today, and what its fate is, there’s no substitute for the next generation of observatories.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan
