Tagged: Ethan Siegel

  • richardmitnick 10:31 pm on January 7, 2021
    Tags: Ethan Siegel

    From Ethan Siegel: “Not All Particles And Antiparticles Are Either Matter Or Antimatter” 

    From Ethan Siegel
    Jan 5, 2021

    Going to smaller and smaller distance scales reveals more fundamental views of nature, which means if we can understand and describe the smallest scales, we can build our way to an understanding of the largest ones. We do not know whether there is a lower limit to how small ‘chunks of space’ can be. Credit: PERIMETER INSTITUTE.

    If you think ‘particles are matter’ and ‘antiparticles are antimatter,’ think again.

    In this Universe, there are certain rules that have never been observed to be broken. Some of these rules, we expect, truly never have been broken: nothing can move faster than the speed of light [in a vacuum]; when two quanta interact, energy is always conserved; linear and angular momentum can never be created or destroyed; and so on. But some of these rules, even though we’ve never seen them violated, must have been broken at some point in the past.

    One such rule is a particular symmetry between matter and antimatter: every interaction that creates or destroys a matter particle also creates or destroys an equal number of their antimatter counterparts, which we typically think of as antiparticles. Given that our Universe is made up almost entirely of matter with virtually no antimatter — there are no antimatter stars, galaxies, or stable cosmic structures in our Universe — clearly this was violated at some point in the past. But how that occurred is a mystery: the puzzle of the matter/antimatter asymmetry remains one of physics’ greatest open questions.

    Additionally, we commonly say “particles” to mean things that make up matter, and “antiparticles” to mean things that compose antimatter, but that’s not exactly true. Particles aren’t always matter, and antiparticles aren’t always antimatter. Here’s the science behind this counterintuitive truth about our Universe.

    From macroscopic scales down to subatomic ones, the sizes of the fundamental particles play only a small role in determining the sizes of composite structures. Whether the building blocks are truly fundamental and/or point-like particles is still not known, but we do understand the Universe from large, cosmic scales down to tiny, subatomic ones. There are nearly 10²⁸ atoms making up each human body, in total. Credit: MAGDALENA KOWALSKA / CERN / ISOLDE TEAM.

    CERN ISOLDE Looking down into the ISOLDE experimental hall.

    When you think about the material we find here on Earth, you probably think that absolutely 100% of it is made of matter. This is approximately true, as practically our entire planet consists of matter made of protons, neutrons, and electrons, all of which are, in fact, matter particles. Protons and neutrons are composite particles, made of up and down quarks which bind together by exchanging gluons to form atomic nuclei. Those atomic nuclei, in turn, have electrons bound to them so that the total electric charge of each atom is zero, with the electrons remaining bound through the electromagnetic force: an exchange of photons.

    Every once in a while, however, one of the particles inside an atomic nucleus will undergo a radioactive decay. A typical example is beta decay: where one of the neutrons will decay to a proton, also emitting an electron and an anti-electron neutrino. If we look at the properties of the various particles and antiparticles that participate in this decay process, we can learn a lot about how our Universe works.

    Schematic illustration of nuclear beta decay in a massive atomic nucleus. Beta decay is a decay that proceeds through the weak interactions, converting a neutron into a proton, electron, and an anti-electron neutrino. Before the neutrino was known or detected, it appeared that both energy and momentum were not conserved in beta decays. Credit: WIKIMEDIA COMMONS USER INDUCTIVELOAD.

    The neutron, which we started with, has the following properties:

    it’s electrically neutral, with no net electric charge,
    it’s made of three quarks: two down quarks (each with electric charge -⅓) and one up quark (with electric charge +⅔),
    and it contains a total amount of about 939 MeV of energy, all in the form of its rest mass.

    The particles that it decays into, the proton, the electron, and the anti-electron neutrino, also have their own unique particle properties.

    The proton has an electric charge of +1, is made of one down quark and two up quarks, and contains about 938 MeV of energy in its rest mass.
    The electron has an electric charge of -1, is a fundamentally indivisible particle, and contains about 0.5 MeV of energy in its rest mass.
    And the anti-electron neutrino has no electric charge, is fundamentally indivisible, and has an unknown but non-zero rest mass that’s no more than about 0.0000001 MeV worth of energy.

    All our mandatory conservation rules are intact. Energy is conserved, with the little bit of “extra” energy that was in the neutron getting converted to kinetic energy in the product particles. Momentum is conserved, as the sum of the momenta of the product particles always equals the initial momentum of the neutron. But we don’t just want to examine what we start with and what we wind up with; we want to know how it happens.
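    The bookkeeping above is easy to verify numerically. Below is a quick sketch of the energy budget of free-neutron beta decay, using the approximate rest-mass energies quoted in the text; the small surplus is the "extra" energy that becomes kinetic energy of the products.

```python
# Energy budget of free-neutron beta decay: n -> p + e- + anti-nu_e.
# Rest-mass energies in MeV (approximate values, as quoted in the text).
m_neutron = 939.565
m_proton = 938.272
m_electron = 0.511
m_neutrino = 0.0  # negligible: less than ~1e-7 MeV

# The leftover energy, shared as kinetic energy among the products:
q_value = m_neutron - (m_proton + m_electron + m_neutrino)
print(f"Energy released: {q_value:.3f} MeV")  # ~0.782 MeV
```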

    Free neutrons are unstable: with a half-life of about 10.3 minutes, they radioactively decay into protons, electrons, and anti-electron neutrinos. If we swapped a neutron out for an anti-neutron, all of the particles would be swapped for their antiparticle counterparts, which means that matter would be replaced with antimatter, but any antimatter would be replaced with matter. Credit: E. SIEGEL / BEYOND THE GALAXY.

    For a decay to occur in quantum theory, there has to be a particle that mediates it. In the theory that describes it — the quantum theory of the weak interactions — the particle responsible is the W- boson, which acts on one of the neutron’s down quarks. Think about, in detail, what’s happening here to the fundamental particles.

    One of the down quarks in the neutron emits a (virtual) W- boson, causing it to transform into an up quark. The number of quarks is conserved in this part of the interaction.

    The (virtual) W- boson could decay into a lot of different things, but is restricted by the conservation of energy: its end products have to be no more energetic than the difference in the rest mass between the neutron and the proton.

    Because of this, the primary pathway that occurs is a decay into an electron (to carry the negative charge) and an anti-electron neutrino. On rare occasion, you’ll get what’s known as a radiative decay, where an additional photon is produced. You could, in principle, have a W- boson decay into a quark-antiquark combination (like a down quark and an anti-up quark), but that requires too much energy: more energy than is available during a neutron decaying into a proton plus additional products.
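    A couple of numbers make that restriction concrete. The available energy is the neutron-proton rest-mass difference; an electron fits comfortably within it, while a quark-antiquark pair does not (the charged pion is used below as the lightest hadron such a pair could form, since free quarks would hadronize).

```python
# Why the virtual W- from neutron decay can yield an electron but not
# a quark-antiquark pair. All masses are rest-mass energies in MeV.
m_neutron, m_proton = 939.565, 938.272
available = m_neutron - m_proton  # ~1.293 MeV to distribute

m_electron = 0.511  # easily affordable
m_pion = 139.570    # lightest hadron a quark-antiquark pair could form

print(f"Available energy: {available:.3f} MeV")
print(f"Electron allowed?  {m_electron < available}")  # True
print(f"Pion allowed?      {m_pion < available}")      # False
```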

    Under normal, low-energy conditions, a free neutron will decay into a proton by a weak interaction, where time flows in the upward direction, as shown here. At high enough energies, there’s a chance this reaction can run backwards: where a proton and either a positron or a neutrino can interact to produce a neutron, meaning that a proton-proton interaction has a chance to produce a deuteron. This is how that first critical step takes place for fusion inside the Sun. Credit: JOEL HOLDSWORTH.

    Now, let’s flip the script: from matter to antimatter. Instead of a neutron decaying, let’s imagine we’ve got an anti-neutron decaying instead. An anti-neutron has very similar properties to the neutron we mentioned earlier, but with some key differences:

    it’s electrically neutral, with no net electric charge,
    it’s made of three antiquarks: two anti-down quarks (each with electric charge +⅓) and one anti-up quark (with electric charge -⅔),
    and it contains a total amount of about 939 MeV of energy, all in the form of its rest mass.

    All we did, to go from matter to antimatter, was replace all of the particles in play with their antiparticle counterparts. Their masses remained the same, their composition (except for the “anti” part) remained the same, but the electric charge of everything flipped. Even though both the neutron and the anti-neutron are electrically neutral, their individual components flipped sign.

    This is measurable, by the way! Even though it’s neutral, the neutron has what’s known as a magnetic moment: something that requires both spin and electric charge. We’ve been able to measure its magnetic moment to be -1.91 nuclear magnetons, and similarly, the magnetic moment of the anti-neutron is +1.91 nuclear magnetons. The “charged stuff” inside of it, that makes it up, must be the exact opposite for antimatter as it is for matter.

    A better understanding of the internal structure of a nucleon like a proton or neutron, including how the “sea” quarks and gluons are distributed, has been achieved through both experimental improvements and new theoretical developments in tandem. These help explain the majority of a baryon’s mass, and also their non-trivial magnetic moments. Credit:BROOKHAVEN NATIONAL LABORATORY.

    When it decays, an anti-down quark emits a W+ boson, the antimatter counterpart of the W- boson, transforming the anti-down quark into an anti-up quark. Just as before, the W+ boson is virtual — meaning it’s unobservable, as there isn’t enough available mass/energy to create a “real” one — but its decay products are visible: a positron and an electron neutrino. (And yes, you can have radiative effects too, where a small fraction of the time, one or more photons join those decay products.) Everything is flipped from before, where every matter particle is replaced with its antimatter counterpart, and every antimatter particle (like the anti-electron neutrino) is replaced with its matter counterpart.

    When you think about what we have here on Earth, almost everything is made of matter: protons, neutrons, and electrons. A small fraction of those neutrons are decaying, meaning that we also have W- bosons, additional protons and electrons (and photons), and a few anti-electron neutrinos. Everything that we know of is described extremely well by the Standard Model, with nothing more than the particles and antiparticles we know of required to describe them.

    Standard Model of Particle Physics, Quantum Diaries.

    Within the Standard Model, we can identify which particles exist in our reality, and what the antiparticle counterpart of each particle is. Although our Universe is made overwhelmingly of matter with a trace amount of antimatter, not every particle in our Universe is either matter or antimatter; some are neither. Credit: CONTEMPORARY PHYSICS EDUCATION PROJECT / DOE / NSF / LBNL.

    If we swapped out Earth for an imagined antimatter version of ourselves, an “anti-Earth” of sorts, we could just exchange every particle for its antiparticle counterpart. Instead of protons and neutrons (made of quarks and gluons), we’d have antiprotons and antineutrons (made of antiquarks, but still those same 8 gluons). Instead of a neutron decaying through a W- boson, we’d have an antineutron decaying through a W+ boson. Instead of producing an electron and an anti-electron neutrino (and sometimes a photon), you produce a positron and an electron neutrino (and sometimes a photon).

    The particles that make up the normal matter in our Universe are the quarks and leptons: the quarks make up protons and neutrons (and baryons, in general), while the leptons include the electron and its heavier cousins, as well as the three regular neutrinos. On the flipside, there are antiparticles that make up the antimatter that exists in our Universe: the antiquarks and antileptons. Through natural decays that involve a number of pathways that leverage both W- and W+ bosons, there’s a tiny bit of antimatter in the form of positrons and anti-electron neutrinos. This would persist even if we somehow managed to “turn off” the outside Universe, including the Sun, cosmic rays, and any other sources of particles or energy.

    The particles and antiparticles of the Standard Model are predicted to exist as a consequence of the laws of physics. The quarks and leptons are fermions and matter; the anti-quarks and anti-leptons are anti-fermions and antimatter, but the bosons are neither matter nor antimatter. Credit: E. SIEGEL / BEYOND THE GALAXY.

    But what about the other particles and antiparticles? When we talk about matter and antimatter, we’re talking only about the fermions in our Universe: the quarks and leptons. But there are bosons as well:

    the 1 photon, which mediates the electromagnetic force,
    the 8 gluons, which mediate the strong nuclear force,
    the 3 weak bosons, the W+, W-, and Z⁰, which mediate the weak force and weak decays,
    and the Higgs boson, which is entirely unique when compared to the others.

    CERN CMS Higgs Event May 27, 2012.

    CERN ATLAS Higgs Event
    June 12, 2012.

    Some of these particles are their own antiparticles, like the photon, the Z⁰, and the Higgs. The W+ is the antiparticle counterpart of the W-, and you can match up three pairs of gluons as clearly being the antiparticle counterparts of one another. (The gluons are a little complicated when it comes to the fourth pair.)

    If you collide a particle with its antiparticle counterpart, they annihilate away, and can produce anything that’s energetically allowed, so long as all the quantum conservation rules — energy, momentum, angular momentum, electric charge, baryon number, lepton number, lepton family number, etc. — are obeyed. This includes particles that are their own antiparticles, just as equally as particles that have distinct antiparticle counterparts.

    An equally-symmetric collection of matter and antimatter (of X and Y, and anti-X and anti-Y) bosons could, with the right GUT properties, give rise to the matter/antimatter asymmetry we find in our Universe today. Note that even though we classify these X and Y particles as bosons due to their spin, they couple to both quarks and leptons, and carry a net baryon+lepton number. Credit: E. SIEGEL / BEYOND THE GALAXY.

    What’s remarkable about this is where the idea of “matter” versus “antimatter” comes in. If you have a positive baryon or lepton number, you’re matter. If you have a negative baryon or lepton number, you’re antimatter. And if you don’t have either baryon or lepton number… well, you’re neither matter nor antimatter! Even though there are two types of particles — fermions (which include quarks and leptons) and bosons (which include everything else) — it’s only the fermions in our Universe that can be either matter (for the normal fermions) or antimatter (for the anti-fermions).

    (Note that if neutrinos turn out to be Majorana fermions, this will need to be revised, as Majorana fermions can indeed be their own antiparticle.)
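    The rule above can be written down directly: assign each particle its Standard Model baryon number B and lepton number L, and let the sign of B + L decide. This is only an illustrative sketch with a hand-picked particle list, not an exhaustive catalog.

```python
from fractions import Fraction

def classify(baryon_number, lepton_number):
    """Matter if B + L > 0, antimatter if B + L < 0, else neither."""
    total = baryon_number + lepton_number
    if total > 0:
        return "matter"
    if total < 0:
        return "antimatter"
    return "neither"

# name: (baryon number B, lepton number L)
particles = {
    "up quark":      (Fraction(1, 3), 0),
    "anti-up quark": (Fraction(-1, 3), 0),
    "electron":      (0, 1),
    "positron":      (0, -1),
    "photon":        (0, 0),
    "W+ boson":      (0, 0),
    "pi+ meson":     (0, 0),  # u + anti-d: the baryon numbers cancel
}

for name, (b, l) in particles.items():
    print(f"{name}: {classify(b, l)}")
```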

    That means composite particles, like pions or other mesons that are made of quark-antiquark combinations, are neither matter nor antimatter; they’re equal amounts of both. Positronium, which is an electron and a positron bound together, is neither matter nor antimatter. If leptoquarks or the superheavy X or Y bosons that arise in Grand Unified Theories exist, they’d be examples of hypothetical particles with both baryon and lepton numbers; there would be both matter and antimatter versions of them. And it means that, if supersymmetry were correct, we could have fermions like the supersymmetric counterpart of the photon — the photino — that are neither matter nor antimatter. Possibly, we could even have supersymmetric bosons, like squarks, whose particle and antiparticle versions really are matter and antimatter.

    The Standard Model particles and their supersymmetric counterparts. Slightly under 50% of these particles have been discovered, and just over 50% have never shown a trace that they exist. Supersymmetry is an idea that hopes to improve on the Standard Model, but it has yet to make successful predictions about the Universe. Credit: CLAIRE DAVID / CERN.

    It’s such a simple idea to think that there are particles in our Universe, and that’s what matter is, and that the antiparticle counterparts of these particles would make up antimatter. This is partly true: if we chopped up the particles that exist in our Universe, most of them would be made of constituent particles that we consider matter. Similarly, if we swapped all of those particles for their antiparticle counterparts, we’d wind up with what we consider antimatter. This works for every quark (with baryon number +⅓ each), every lepton (with lepton number +1 each), as well as every antiquark (with baryon number -⅓ each) and every antilepton (with lepton number -1 each).

    But everything else in the Universe — all of the bosons, which carry neither lepton nor baryon number, and all of the composite particles with a net baryon and lepton number of zero — lives in a nebulous area where they’re neither matter nor antimatter. It isn’t fair to designate one type as a “particle” and another type as an “antiparticle” in this case. Sure, W+ and W- might annihilate just like all particle-antiparticle pairs do, but neither one has any more of a claim to be “matter” or “antimatter” than any other boson, which is to say, they have no claim to that status. Asking “which one is matter and which one is antimatter” has no meaning; they’re simply one another’s antiparticle, with neither one having properties of matter or antimatter at all.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 7:39 pm on January 3, 2021
    Tags: Does The Expanding Universe Break The Speed Of Light?, Ethan Siegel, Theory of Special Relativity

    From Ethan Siegel: “Ask Ethan: Does The Expanding Universe Break The Speed Of Light?” 

    From Ethan Siegel

    Jan 2, 2021

    In a Universe governed by General Relativity, filled with matter-and-energy, a static solution is not possible. That Universe must either expand or contract, with measurements revealing very quickly and decisively that expansion was correct. Since its discovery in the late 1920s, there have been no serious challenges to this paradigm of the expanding Universe. Credit: NASA / GSFC.

    If there’s one rule that people know about how fast things can move, it’s that there’s a cosmic speed limit: the speed of light in a vacuum. If you have any amount of mass at all — like anything made of atoms — you can’t even reach that limit; you can only approach it. Meanwhile, if you have no mass and you’re traveling through completely empty space, there’s no other speed you’re allowed to move at; you must move at the speed of light. And yet, if you think about how big the observable Universe is, we know it’s grown to 92 billion light-years in diameter in just 13.8 billion years. Moreover, by the time just one second elapsed since the Big Bang, the Universe was already multiple light-years across! How is this possible without breaking the laws of physics? That’s what Roberto Cánovas’s son Lucas wants to know, inquiring:

    “If the Universe grew more than 300,000km in a fraction of a second that means all these things had to travel faster than the speed of light during that tiny amount of time thus breaking the rule that nothing can travel faster than light.”

    If you want to understand what’s going on, you’re going to have to bend your brain a little bit, because both things are simultaneously true: the Universe really does grow in this fashion, and yet nothing can travel faster than light. Let’s unpack how this happens.

    Let’s start with the rule you know: that nothing can travel faster than light. Although this rule is normally attributed to Albert Einstein — it’s a cornerstone of Special Relativity — it was actually known, or at least strongly suspected, to be true for more than a decade before him.

    If you have an object at rest, and you apply a force to it, it’s going to accelerate. That’s Newton’s famous F = ma, which says that force equals mass times acceleration. If you apply a force to any massive object, it’s going to accelerate, which means it’s going to speed up in a particular direction.

    But that can’t be strictly true all the time. Imagine you accelerate something so that it gets faster by 1 kilometer-per-second with each second that goes by. If you start from rest, it would only take 299,793 seconds (about 3½ days) before you reached and then exceeded the speed of light! Instead, there must be different rules at play when you get near that speed, and we figured out those rules in the late 1800s, back when Einstein was still a child.
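    The arithmetic behind that figure is a one-liner: divide the speed of light by the assumed 1 km/s-per-second gain.

```python
c = 299_792.458     # speed of light in km/s
acceleration = 1.0  # km/s gained each second (the example's assumption)

seconds_to_c = c / acceleration
days_to_c = seconds_to_c / 86_400  # 86,400 seconds per day

print(f"{seconds_to_c:,.0f} s = {days_to_c:.2f} days")  # ~3.47 days
```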

    One revolutionary aspect of relativistic motion, put forth by Einstein but previously built up by Lorentz, FitzGerald, and others, is that rapidly moving objects appear to contract in space and dilate in time. The faster you move relative to someone at rest, the greater your lengths appear to be contracted, while the more time appears to dilate for the outside world. This picture, of relativistic mechanics, replaced the old Newtonian view of classical mechanics, but also carries tremendous implications for theories that aren’t relativistically invariant, like Newtonian gravity. Credit: Curt Renshaw.

    People like George FitzGerald and Hendrik Lorentz, working in the 19th century, derived something spectacular: that when you got close to the speed of light, the Universe you observed appeared to play by different rules. Normally, we’re used to a ruler being a good way to measure distances, and clocks being a good way to measure time. If you were to take your ruler and measure a moving object, you’d expect to measure the same value as if the object were stationary, or if someone on board that object used their own ruler. Similarly, if you used your watch to measure how much time elapsed between two events while someone on the moving object used theirs, you’d expect that everyone would get the same results.

    But you don’t get the same results! If you, at rest, measure the length of the moving object, you’d see it was shorter: lengths contract when you move, and they contract by more when you get close to the speed of light.

    Similarly, if you, at rest, measured how fast the person in motion’s clock was going, you’d see their clock running slower compared to yours. We call these two phenomena “length contraction” and “time dilation,” and they were discovered back when Einstein was just a small child.
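    Both effects are governed by the same quantity, the Lorentz factor γ = 1/√(1 − v²/c²): moving rulers shrink to L/γ and moving clocks tick γ times slower. A short sketch shows how gently it grows at everyday speeds and how sharply it diverges near the speed of light.

```python
import math

def gamma(v_over_c):
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for v in (0.1, 0.5, 0.9, 0.99, 0.999):
    g = gamma(v)
    print(f"v = {v}c: a 1 m ruler looks {1 / g:.4f} m long; "
          f"1 s on the moving clock takes {g:.3f} s for you")
```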

    Time dilation (L) and length contraction (R) show how time appears to run slower and distances appear to get smaller the closer you move to the speed of light. As you approach the speed of light, clocks dilate towards time not passing at all, while distances contract down to infinitesimal amounts. WIKIMEDIA COMMONS USERS ZAYANI (L) AND JROBBINS59 (R).

    So what did Einstein do that was so important? His spectacular realization was that, no matter whether you’re stationary or you’re on that moving object, when you look at a beam of light, you’re always going to see it moving at the same speed. Imagine you shine a flashlight pointed away from you. If you’re stationary, light moves at the speed of light, and your clock runs at its normal speed with your ruler reading its normal length. But what happens if you’re in motion, straight ahead, and you shine that flashlight in front of you?

    From a stationary observer’s perspective, light appears to move away from you at a slower speed: whatever your speed is, subtracted from the speed of light. But that observer would also see that you’re compressed in the direction that you’re moving: your distances and your rulers have contracted. Additionally, they’ll see your clocks running slower.

    And these effects combine in such a way that, if you’re the one moving, you’ll see that your rulers appear normal, your clocks appear normal, and light moves away from you at the speed of light. All of these effects exactly cancel out for all observers; everyone in the Universe, regardless of how you’re moving, sees light move at exactly the same speed: the speed of light.

    A light-clock, formed by a photon bouncing between two mirrors, will define time for any observer. Although the two observers may not agree with one another on how much time is passing, they will agree on the laws of physics and on the constants of the Universe, such as the speed of light. A stationary observer will see time pass normally, but an observer moving rapidly through space will have their clock run slower relative to the stationary observer. Credit: John D. Norton.

    This has a terrific consequence: it means that the equation F = ma isn’t right when we talk about relativity! If you were moving at 99% the speed of light, and you applied a force that theoretically would accelerate you that extra 1% of the way there, you wouldn’t reach 100% the speed of light. In fact, you’d find that you’re only going 99.02% the speed of light. Even though you applied a force that should accelerate you by 1% the speed of light, because you’re already moving at 99% the speed of light, it only increases your speed by 0.02% the speed of light instead.
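    Those numbers come from Einstein's velocity-addition formula, which replaces naive addition: two speeds u and v combine to (u + v)/(1 + uv/c²), never exceeding c. In units where c = 1:

```python
def add_velocities(u, v):
    """Relativistically combine two speeds given as fractions of c."""
    return (u + v) / (1.0 + u * v)

result = add_velocities(0.99, 0.01)
print(f"99% c + 1% c = {result * 100:.2f}% c")  # 99.02%, not 100%
```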

    What’s happening is that, instead of going into your speed, that force is changing your momentum and your kinetic energy, not according to Newton’s classic laws, but according to the laws of relativity. Time dilation and length contraction come along for the ride, and it’s why unstable, short-lived particles that live for minuscule amounts of time can travel farther than non-relativistic physics can account for. If you hold out your hand, you’ll find that one unstable cosmic particle — a muon — passes through it each second. Even though these are created by cosmic rays more than 100 kilometers up, and the muon’s lifetime is only 2.2 microseconds, these particles can actually make it all the way down to Earth’s surface, despite the fact that 2.2 microseconds at the speed of light won’t even take you 1 kilometer.
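    The muon numbers check out. The speed below, 0.9997c, is an assumed typical value for a cosmic-ray muon (not a figure from the article); at that speed, time dilation stretches the muon's reach from under a kilometer to tens of kilometers.

```python
import math

c = 299_792_458.0   # speed of light in m/s
lifetime = 2.2e-6   # muon lifetime in its own rest frame, seconds
v_over_c = 0.9997   # assumed typical cosmic-ray muon speed

naive_range = v_over_c * c * lifetime  # ignoring time dilation
gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)
dilated_range = gamma * naive_range    # lifetime stretched by gamma

print(f"Without relativity: {naive_range / 1000:.2f} km")    # ~0.66 km
print(f"With time dilation: {dilated_range / 1000:.1f} km")  # ~27 km
```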

    The V-shaped track in the center of the image arises from a muon decaying to an electron and two neutrinos. The high-energy track with a kink in it is evidence of a mid-air particle decay. By colliding positrons and electrons at a specific, tunable energy, muon-antimuon pairs could be produced at will. However, muons are also produced by cosmic rays in the upper atmosphere, many of which arrive at Earth’s surface despite only having a lifetime of 2.2 microseconds and being created ~100 km up. Credit: The Scottish Science & Technology Roadshow.

    All of this analysis, though, was for Einstein’s Special Relativity. In our Universe, particularly on cosmic scales, we have to use General Relativity.

    What’s the difference?

    They’re both theories of relativity: where your motion through space is relative to your motion through time, and everyone who has a different position and velocity has their own unique frame of reference. But Special Relativity is a “special, specific case” of General Relativity. In Special Relativity, there are no gravitational effects. There are no masses curving space; there are no gravitational waves passing through your location; there is no expansion or contraction of the Universe allowed. Space, for lack of a better term, is flat, rather than curved.

    But in General Relativity, not only is space allowed to be curved, but if you have any masses or any forms of energy in your Universe at all, it must be curved. The presence of matter and energy tells space how to curve, and that curved space tells matter and energy how to move. We’ve detected the effects of this curvature — around the Sun, around Earth, and even in the great cosmic laboratory of outer space — and it always seems to agree with Einstein’s (and General Relativity’s) predictions.

    Instead of an empty, blank, three-dimensional grid, putting a mass down causes what would have been ‘straight’ lines to instead become curved by a specific amount. The curvature of space due to the gravitational effects of Earth is one visualization of gravitation, and is a fundamental way that General Relativity differs from Special Relativity. Credit: CHRISTOPHER VITALE OF NETWORKOLOGIES AND THE PRATT INSTITUTE.

    In every case, where we were talking about things being limited by the speed of light, we were talking about a special case: about objects moving around and (possibly) accelerating through space, but where space itself wasn’t fundamentally changing. In a Universe where the only type of relativity is Special Relativity, this is fine. But we live in a Universe that’s full of matter and energy, and where gravitation is real. We can’t use Special Relativity except as an approximation: where things like the curvature of space and the expansion of the Universe are negligible. That might be fine here on Earth, but it’s not fine when it comes to the expanding Universe.

    Here’s the difference. Imagine that your Universe is a ball of dough, and that there are raisins located all throughout it. In Special Relativity, the raisins can all move through the dough a little bit: all limited by the speed of light and the laws of relativity (and relative motion) that you’re familiar with. No raisin moves through the dough faster than the speed of light, and any two raisins will calculate and measure their relative speeds to be below the speed of light.

    But now, in General Relativity, there’s one major difference: the dough itself can expand.

    If you view the Universe as a ball of dough with raisins all throughout it, the raisins are like individual objects throughout the Universe, like galaxies, while the dough is like the fabric of space. As the dough expands, individual raisins perceive that more distant raisins are speeding away from them faster and faster, but what’s actually happening is that the raisins are mostly stationary. Only the space between them is expanding. Credit: NASA / WMAP science team.

    The dough isn’t something you can observe, detect, or measure; it’s simply the nothingness of empty space. But even this nothingness has physical properties. It determines what distances are, what trajectories objects will follow, how time flows, and many other properties. All you can see, though, are the individual particles and waves — the quanta of energy — that exist in what we call “spacetime.” Spacetime itself is the dough; the particles in the dough, from atoms to galaxies, are like the raisins.

    Now, this dough is expanding, just like you’d imagine a ball of dough would expand if you left it to leaven in a place with no gravity, like aboard the International Space Station. As the dough expands, any particular raisin can represent you, the observer.

    The raisins that are close by you will appear to expand away from you slowly; the ones that are far away will appear to expand away from you quickly. But in reality, this isn’t because the raisins are moving through space; it’s because space itself is expanding, and the raisins themselves only move through that space slower than light.
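    This raisin picture is Hubble's law, v = H₀d: the apparent recession speed grows linearly with distance, without bound, even though nothing moves through space faster than light. The sketch below assumes a round value of H₀ = 70 km/s/Mpc.

```python
H0 = 70.0            # Hubble constant, km/s/Mpc (assumed round value)
MPC_PER_GLY = 306.6  # megaparsecs in one billion light-years
c = 299_792.458      # speed of light, km/s

def recession_speed(distance_gly):
    """Apparent recession speed in km/s at a distance given in Gly."""
    return H0 * distance_gly * MPC_PER_GLY

for d in (1, 5, 14, 46):
    v = recession_speed(d)
    print(f"{d:>2} Gly away: {v:>9,.0f} km/s = {v / c:.2f} c")
```

    Notice that beyond roughly 14 billion light-years (with these assumed numbers), the apparent speed exceeds c: those raisins are not breaking any rule, because the dough between them is doing the expanding.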

    This simplified animation shows how light redshifts and how distances between unbound objects change over time in the expanding Universe. Note that the objects start off separated by less than the distance light travels between them in the intervening time, that the light redshifts due to the expansion of space, and that the two galaxies wind up much farther apart than the light-travel path taken by the photon exchanged between them. Credit: Rob Knop.

    It also means that it takes a long time for the light coming from those objects to arrive at our eyes; the farther away we look, the earlier in the Universe’s history we see those objects as they were. There’s actually a limit to how far away we can see, because the Big Bang occurred a finite amount of time ago: 13.8 billion years, to be precise. If the Universe hadn’t expanded at all (if we lived in a Special Relativity Universe instead of a General Relativity Universe), we’d only be able to see 13.8 billion light-years in all directions, for a diameter of ~27.6 billion light-years.

    But our Universe is expanding, and has been expanding for all that time. It actually expanded faster in the past, because there was more matter-and-energy in a given region of space before the Universe expanded by such a great amount. With the combination we have of matter, radiation, and dark energy in our Universe, the light that’s arriving today comes to us after a 13.8 billion year journey, but those objects are now 46 billion light-years away. The Universe didn’t expand faster than light, though; every object in the Universe always moved at or below the speed of light. It’s just that the fabric of space itself — what you might consider “nothing” to be — expands between the numerous galaxies.
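    As a sketch of where the ~46-billion-light-year figure comes from, the code below numerically integrates the comoving distance to the particle horizon, D = (c/H₀) ∫₀¹ da / √(Ω_r + Ω_m·a + Ω_Λ·a⁴), which follows from D = c∫dt/a(t) plus the Friedmann equation. The density parameters and Hubble distance are assumed, Planck-like values, not the article’s exact inputs.

    ```python
    # Comoving particle horizon for a flat Universe with matter, radiation,
    # and dark energy. Parameter values are illustrative assumptions.
    import math

    HUBBLE_DISTANCE_GLY = 14.4                        # c/H0 for H0 ~ 67.7 km/s/Mpc, in Gly
    OMEGA_M, OMEGA_R, OMEGA_L = 0.31, 9.0e-5, 0.69    # matter, radiation, dark energy

    def integrand(u):
        # substitute a = u*u so the integrand stays smooth as a -> 0
        a = u * u
        return 2.0 * u / math.sqrt(OMEGA_R + OMEGA_M * a + OMEGA_L * a**4)

    # composite Simpson's rule on [0, 1]
    n = 100_000                                       # number of subintervals (even)
    h = 1.0 / n
    total = integrand(0.0) + integrand(1.0)
    for i in range(1, n):
        total += integrand(i * h) * (4 if i % 2 else 2)
    integral = total * h / 3.0

    d_gly = HUBBLE_DISTANCE_GLY * integral
    print(f"comoving horizon ~ {d_gly:.1f} billion light-years")
    ```

    With these assumed parameters the result comes out to roughly 46 billion light-years, consistent with the distance quoted above.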

    A graph of the size/scale of the observable Universe vs. the passage of cosmic time. This is displayed on a log-log scale, with a few major size/time milestones identified. Note the early radiation-dominated era, the recent matter-dominated era, and the current-and-future exponentially-expanding era. Credit: E. Siegel.

    It’s very hard to think about a Universe where space itself is changing over time. Conventionally, we look out at an object in the Universe and measure it with the tools and techniques we have at our disposal here. We’re used to interpreting certain measurements in a specific way. Measure how faint something looks or how small it appears, and based on its actual brightness or known size, you can say, “it must be this distance away.” Measure how its light has shifted from when it was emitted to when we observe it, and you can say, “this is how fast it’s receding from us.” And if you look at objects at different distances, you’ll notice that any object more than ~18 billion light-years away will never have the light it’s emitting right now reach us; the Universe’s expansion will carry that light out of our reach, even though it travels at the speed of light.

    Our first instinct is to say that nothing can travel faster than light, meaning that no object can move through space faster than the speed that light moves through a vacuum. But it’s also correct to say that “nothing” can travel faster than light: the fabric of empty space, nothingness itself, possesses neither a limit to the rate of its expansion nor a limit to the distances over which that expansion applies. The Universe grew to be about 50 light-years in size by the time it was just 1 second old, and yet not a single particle in that Universe traveled through space faster than light. The nothingness of space simply expanded, and that’s the simplest and most consistent explanation for what we observe.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan.

  • richardmitnick 4:16 pm on February 22, 2020 Permalink | Reply
    Tags: Although these ripples in spacetime carry more energy than any other cataclysmic event the interactions are so weak that they barely affect us., , , , , Ethan Siegel, From Ethan Siegel: "Ask Ethan: Could Gravitational Waves Ever Cause Damage On Earth?", From even the distance of the nearest star gravitational waves would pass through us almost completely unnoticed., Perhaps the most remarkable fact of all is that we’ve actually learned how to successfully detect them.   

    From Ethan Siegel: “Ask Ethan: Could Gravitational Waves Ever Cause Damage On Earth?” 

    From Ethan Siegel
    Feb 22, 2020

    Illustration of two black holes merging, of comparable mass to what LIGO first saw. At the centers of some galaxies, supermassive binary black holes may exist, creating a signal far stronger than this illustration shows, but with a frequency that LIGO is not sensitive to. If the black holes were close enough, they could in principle impart enough energy on Earth to cause noticeable effects. (SXS, the Simulating eXtreme Spacetimes (SXS) project (http://www.black-holes.org))

    Black hole mergers are some of the most energetic events in the Universe. Could the gravitational waves they produce ever harm us?

    The Universe is not a static, stable place. Out of a vast collection of simple atoms, gas clouds collapse to form stars and planets, which then undergo their own individual life cycles. The most massive stars will die in cataclysmic events such as supernovae, producing stellar remnants such as neutron stars and black holes. Many of these neutron stars and black holes will then inspiral and merge, releasing a tremendous amount of energy in the form of gravitational waves. The light and particles produced in this way are capable of causing damage here on Earth, but what about the gravitational waves themselves? That’s Brian Brettschneider’s question, as he asks:

    The gravitational waves detected on Earth by LIGO traveled great distances and were quite weak per unit volume of space by the time they arrived. If they originated much closer to Earth, they would be more energetic from our perspective. What would the effect of energetic gravitational waves created locally be on nearby objects? I’m thinking of binary ~30 solar mass black holes merging. Would the gravitational waves be noticeable? Could they cause damage?

    It’s a great question that’s stymied even some of history’s greatest minds.

    An animated look at how spacetime responds as a mass moves through it helps showcase exactly how, qualitatively, it isn’t merely a sheet of fabric but all of 3D space itself gets curved by the presence and properties of the matter and energy within the Universe. Multiple masses in orbit around one another will cause the emission of gravitational waves. (LUCASVB)

    General Relativity, our current theory of gravity, was first put forth by Albert Einstein in 1915. The very next year, 1916, Einstein himself derived an unexpected property of his theory: it allowed the propagation of a new type of radiation that was purely gravitational in nature. This radiation, today known as gravitational waves, had some properties that were easy to extract: they had no mass and traveled at the speed of gravity, which ought to equal the speed of light.

    But what wasn’t apparent, at least not right away, was whether these waves were real, physical, energy-carrying phenomena, or whether they were a pure mathematical artifact that didn’t have any physical meaning. In 1936, Einstein and Nathan Rosen (of Einstein-Rosen bridge and EPR paradox fame) wrote a paper called, “Do gravitational waves exist?” In the paper, submitted to the journal Physical Review, they argued that no, they do not.

    When a gravitational wave passes through a location in space, it causes an expansion and a compression at alternate times in alternate directions, causing laser arm-lengths to change in mutually perpendicular orientations. Exploiting this physical change is how we developed successful gravitational wave detectors such as LIGO and Virgo. (ESA–C.CARREAU)

    They contended that these gravitational waves were mathematical and didn’t physically exist, the same way that the “0” we infer to be on the end of a ruler doesn’t physically exist. Fortunately, the paper was rejected on the recommendation of the anonymous referee, who turned out to be the physicist Howard Robertson, whom cosmology fans might recognize as the “R” in the Friedmann-Lemaître-Robertson-Walker metric.

    Robertson, also based at Princeton, surreptitiously pointed out to Einstein the correct way to handle the error he had made, which flipped the conclusion. The gravitational waves that appeared in the resubmitted version, which was accepted in 1937 with a different title in a different journal [Journal of the Franklin Institute], predicted physically real waves. Just as electromagnetism had light, a massless form of radiation that carried real energy, gravitation has a completely analogous phenomenon: gravitational waves.

    When you have two gravitational sources (i.e., masses) inspiraling and eventually merging, this motion causes the emission of gravitational waves. Although it might not be intuitive, a gravitational wave detector will be sensitive to these waves as a function of 1/r, not as 1/r², and will see those waves in all directions, regardless of whether they’re face-on or edge-on, or anywhere in between. (NASA, ESA, AND A. FEILD (STSCI))

    If these waves exist, are physically real, and also carry energy, then the important question becomes whether they can transfer that energy into matter, and if so, by what process. In 1957, the first American conference on General Relativity, now known as GR1, took place in Chapel Hill, North Carolina. In attendance were some titanic figures in the world of physics, including Bryce DeWitt, John Archibald Wheeler, Joseph Weber, Hermann Bondi, Cécile DeWitt-Morette, and Richard Feynman.

    Although Bondi would quickly popularize a particular argument that arose from the conference, it was Feynman who came up with the line of reasoning we now call the sticky bead argument. If you imagine that you have a thin rod with two beads on it, where one is fixed but one can slide, the distance between the beads will change if a gravitational wave passes through it perpendicular to the rod’s direction.

    The argument by Feynman was that gravitational waves would move masses along a rod, just as electromagnetic waves moved charges along an antenna. This motion would cause heating due to friction, demonstrating that gravitational waves carry energy. The principle of the sticky-bead argument would later form the basis of the design of LIGO. (P. HALPERN)

    Caltech/MIT Advanced LIGO detector installation, Livingston, LA, USA

    So long as the bead-and-rod are frictionless, there’s no heat produced, and the end state of the rod-and-beads system is no different than before the gravitational wave passed through. But if there is friction between the rod and the bead that’s free to slide along it, the bead’s motion generates heat, which is a form of energy. Not only does Feynman’s argument demonstrate that gravitational waves do carry energy, but it shows how to extract that energy from the waves and put it into a real, physical system.

    When a gravitational wave passes through the Earth, the same effects that it had on the bead-rod system would be at play. As the wave passed through the Earth, it would cause the directions perpendicular to the wave’s propagation to stretch-and-compress, alternately and in an oscillatory fashion, at 90 degree angles to one another.

    Anything on the Earth that would be energetically affected by this motion of the space it occupies would absorb the relevant amount of energy from the waves themselves, transforming that energy into real, physical energy present on our world.

    If we consider the first gravitational wave ever seen by LIGO — observed on September 14, 2015 but announced almost exactly 4 years ago today (on February 11, 2016) — it consisted of two black holes of 36 and 29 solar masses, respectively, that merged to produce a black hole of 62 solar masses. If you do the math, you’ll notice that 36 + 29 does not equal 62. In order to balance that equation, the remaining three solar masses, corresponding to approximately 10% of the mass of the smaller black hole, needed to get converted into pure energy, via Einstein’s E = mc². That energy travels through space in the form of gravitational waves.
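    The arithmetic is quick to check with E = mc², using the standard values of the solar mass and the speed of light:

    ```python
    # The three solar masses "lost" in the GW150914 merger correspond to the
    # energy radiated away as gravitational waves, via E = m * c^2.
    M_SUN = 1.989e30          # solar mass, in kg
    C = 2.998e8               # speed of light, in m/s

    delta_m = (36 + 29 - 62) * M_SUN      # 3 solar masses converted to energy
    energy_joules = delta_m * C**2
    print(f"{energy_joules:.2e} J")       # ~5.4e47 J, close to the quoted 5.3e47 J
    ```

    The small difference from the quoted 5.3 × 10⁴⁷ J reflects rounding in the black hole masses, which are only known to a couple of significant figures.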

    When the two arms are of exactly equal length and there is no gravitational wave passing through, the signal is null and the interference pattern is constant. As the arm lengths change, the signal is real and oscillatory, and the interference pattern changes with time in a predictable fashion. (NASA’S SPACE PLACE)

    After a journey of about 1.3 billion light-years, the signal from those merging black holes arrived at Earth and passed through our planet. A tiny, tiny fraction of that energy was deposited into the twin LIGO detectors at Hanford, WA, and Livingston, LA, causing the arms that house the mirrors and laser cavities to alternately increase and decrease in length. That tiny bit of energy, extracted by an apparatus that humans built, was enough for us to detect our first gravitational waves.

    There is an enormous amount of energy emitted when two black holes of masses comparable to these merge; converting three solar masses worth of material into pure energy over a timescale of just 200 milliseconds is more energy than all the stars in the Universe give off, combined, over that same amount of time. All told, that first gravitational wave contained 5.3 × 10⁴⁷ J of energy, with a peak emission, in the final milliseconds, of 3.6 × 10⁴⁹ W.

    The inspiral and merger of the first pair of black holes ever directly observed. The total signal, along with the noise (top) clearly matches the gravitational wave template from merging and inspiraling black holes of a particular mass (middle). Note how the strength of the signal reaches a maximum in the final few orbits before the exact moment of the merger. (B. P. ABBOTT ET AL. (LIGO SCIENTIFIC COLLABORATION AND VIRGO COLLABORATION))

    But from over a billion light-years away, we saw only a tiny, minuscule fraction of that energy. Even if we consider all of the energy received by the entire planet Earth from this gravitational wave, it only comes out to 36 billion J, the same as the amount of energy released by:

    burning through six barrels (about 1000 L) of crude oil,
    sunlight shining on the island of Manhattan for a duration of 0.7 seconds,
    10,000 kWh of electricity, the average annual electricity consumption of an American household.

    The energy emitted from a source in space always spreads out like the surface of a sphere, meaning that if you were to halve the distance between yourself and these merging black holes, the energy you’d receive would quadruple.
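    A rough inverse-square estimate reproduces the 36-billion-joule figure: Earth intercepts a fraction πR²/(4πd²) of the energy radiated into a sphere of radius d. The input values below are approximate, taken from the numbers quoted in the article.

    ```python
    # Fraction of GW150914's radiated energy intercepted by Earth at 1.3 Gly.
    import math

    LY = 9.461e15             # metres per light-year
    E_TOTAL = 5.3e47          # total energy radiated by GW150914, in J
    R_EARTH = 6.371e6         # Earth's radius, in m
    d = 1.3e9 * LY            # ~1.3 billion light-years, in m

    fraction = (math.pi * R_EARTH**2) / (4.0 * math.pi * d**2)
    e_earth = E_TOTAL * fraction
    print(f"{e_earth:.1e} J intercepted by Earth")    # ~3.6e10 J, i.e. ~36 billion J

    # halving the distance quadruples the received energy (1/d^2 scaling)
    e_half = E_TOTAL * (math.pi * R_EARTH**2) / (4.0 * math.pi * (d / 2)**2)
    print(f"ratio at half the distance: {e_half / e_earth:.1f}")   # 4.0
    ```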

    The brightness distance relationship, and how the flux from a light source falls off as one over the distance squared. Gravitational waves emitted from a point spread out the same way in terms of energy, but their amplitude falls off only linearly with the distance, rather than as the distance squared like energy does. (E. SIEGEL / BEYOND THE GALAXY)

    If instead of 1.3 billion light-years, these black holes merged just 1 light-year away, the strength of these gravitational waves that hit Earth would equate to about 70 octillion (7 × 10²⁸) joules of energy: as much energy as the Sun produces every three minutes.

    But there’s one important way that gravitational waves and electromagnetic radiation (like sunlight) differ. Light is easily absorbed by normal matter, and imparts energy into it based on the interactions of its quanta (photons) with the quanta we’re made out of (protons, neutrons and electrons). But gravitational waves mostly pass right through normal matter. Yes, they cause it to alternately expand-and-contract in mutually perpendicular directions, but the wave largely passes through the Earth unaffected. Only a small amount of energy gets deposited, and there’s a subtle reason why.

    Ripples in spacetime are what gravitational waves are, and they travel through space at the speed of light in all directions. Although the energy from a gravitational wave spreads out like a sphere, the same way that electromagnetic energy spreads out, the amplitude of a gravitational wave only drops in direct proportion to the distance. (EUROPEAN GRAVITATIONAL OBSERVATORY, LIONEL BRET/EUROLIOS)

    European Gravitational Observatory

    When a gravitational wave gets emitted, its energy spreads out over an area that grows as the distance squared. But the amplitude of a gravitational wave — the thing that determines by how much matter will expand-and-contract — only falls off linearly with the distance. When the gravitational waves from the first black hole-black hole merger we ever detected passed through Earth, our planet contracted-and-expanded by about the width of a dozen protons, all lined up together.

    If those same black holes had merged at a distance of 1 light-year, Earth would have stretched-and-compressed by about 20 microns. If they had merged at the same distance Earth is from the Sun, the entire planet would have stretched-and-compressed by about 1 meter (3 feet). For comparison, that’s about the same amount of stretching-and-compressing that happens every day due to the tidal forces created by the Moon. The biggest difference is that it would happen much faster: with stretching-and-compressing on the timescale of milliseconds, rather than ~12 hours.
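    Those closer-in figures follow from the linear 1/r amplitude scaling. Starting from the article’s rough “dozen protons” stretch (about 10⁻¹⁴ m) at 1.3 billion light-years:

    ```python
    # Gravitational-wave amplitude falls off as 1/r, so scale the observed
    # stretch of Earth linearly to closer distances. The 1e-14 m starting
    # value is the article's approximate figure, not a precise measurement.
    LY = 9.461e15            # metres per light-year
    AU = 1.496e11            # metres per astronomical unit

    observed_stretch = 1.0e-14       # metres, at GW150914's actual distance
    d0 = 1.3e9 * LY                  # that distance, in metres

    stretch_1ly = observed_stretch * (d0 / LY)   # amplitude scales as 1/r
    stretch_1au = observed_stretch * (d0 / AU)

    print(f"at 1 light-year: {stretch_1ly:.1e} m")   # ~1e-5 m, i.e. ~10 microns
    print(f"at 1 AU:         {stretch_1au:.1e} m")   # just under a metre
    ```

    These land in the same ballpark as the ~20 microns and ~1 meter quoted above; the spread simply reflects the rounded starting figure.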

    The Moon exerts a tidal force on the Earth, which not only causes our tides, but causes braking of the Earth’s rotation, and a subsequent lengthening of the day. In order for a gravitational wave to have the same amplitude on the planet as the Moon’s tidal forces do, a black hole-black hole merger would need to occur at approximately the same distance that the Sun is from Earth. (WIKIMEDIA COMMONS USER WIKIKLAAS AND E. SIEGEL)

    There are some ways that a large-enough amplitude gravitational wave could meaningfully impart energy to Earth. Crystals packed in intricate lattices would heat up all throughout the Earth’s interior, potentially cracking or shattering if the gravitational wave is strong enough. Earthquakes would ripple throughout our planet, cascading and overlapping, causing worldwide damage on our surface. Geysers would erupt spectacularly and irregularly, and it’s possible that volcanic eruptions would be triggered. Even the oceans would produce global tsunamis, disproportionately affecting coastal areas.

    But a black hole-black hole merger would need to take place within our Solar System for that to happen. From even the distance of the nearest star, gravitational waves would pass through us almost completely unnoticed. Although these ripples in spacetime carry more energy than any other cataclysmic event, the interactions are so weak that they barely affect us. Perhaps the most remarkable fact of all is that we’ve actually learned how to successfully detect them.

    See the full article here.



  • richardmitnick 7:29 pm on February 11, 2020 Permalink | Reply
    Tags: "Our Home Supercluster Laniakea Is Dissolving Before Our Eyes", , , , , Ethan Siegel   

    From Ethan Siegel: “Our Home Supercluster, Laniakea, Is Dissolving Before Our Eyes” 

    From Ethan Siegel
    Feb 11, 2020

    If galaxies are cities in the Universe, how unfortunate that our ‘cosmic country’ is dissolving.

    Laniakea supercluster. From Nature: “The Laniakea supercluster of galaxies,” R. Brent Tully, Hélène Courtois, Yehuda Hoffman & Daniel Pomarède, http://www.nature.com/nature/journal/v513/n7516/full/nature13674.html. The Milky Way is the red dot.

    This visualization of the Laniakea supercluster, which represents a collection of more than 100,000 estimated galaxies spanning a volume of over 100 million light-years, shows the distribution of dark matter (shadowy purple) and individual galaxies (bright orange/yellow) together. Despite the relatively recent identification of Laniakea as the supercluster which contains the Milky Way and much more, it’s not a gravitationally bound structure and will not hold together as the Universe continues to expand. (TSAGHKYAN / WIKIMEDIA COMMONS)

    On the largest cosmic scales of all, planet Earth appears to be anything but special. Like hundreds of billions of other planets in our galaxy, we orbit our parent star; like hundreds of billions of solar systems, we revolve around the galaxy; like the majority of galaxies in the Universe, we’re bound together in either a group or cluster of galaxies. And, like most galactic groups and clusters, we’re a small part of a larger structure containing over 100,000 galaxies: a supercluster. Ours is named Laniakea: the Hawaiian word for “immense heaven.”

    Superclusters have been found and charted throughout our observable Universe, where they’re more than ten times as rich as the largest known clusters of galaxies. Unfortunately, owing to the presence of dark energy in the Universe, these superclusters ⁠ — including our own ⁠ — are only apparent structures. In reality, they’re mere phantasms, in the process of dissolving before our very eyes.

    Dark Energy Survey

    Dark Energy Camera [DECam], built at FNAL

    NOAO/CTIO Victor M. Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

    Timeline of the Inflationary Universe WMAP

    The Dark Energy Survey (DES) is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure that will reveal the nature of the mysterious dark energy that is accelerating the expansion of our Universe. DES began searching the Southern skies on August 31, 2013.

    According to Einstein’s theory of General Relativity, gravity should lead to a slowing of the cosmic expansion. Yet, in 1998, two teams of astronomers studying distant supernovae made the remarkable discovery that the expansion of the universe is speeding up. To explain cosmic acceleration, cosmologists are faced with two possibilities: either 70% of the universe exists in an exotic form, now called dark energy, that exhibits a gravitational force opposite to the attractive gravity of ordinary matter, or General Relativity must be replaced by a new theory of gravity on cosmic scales.

    DES is designed to probe the origin of the accelerating universe and help uncover the nature of dark energy by measuring the 14-billion-year history of cosmic expansion with high precision. More than 400 scientists from over 25 institutions in the United States, Spain, the United Kingdom, Brazil, Germany, Switzerland, and Australia are working on the project. The collaboration built and is using an extremely sensitive 570-Megapixel digital camera, DECam, mounted on the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory, high in the Chilean Andes, to carry out the project.

    Over six years (2013–2019), the DES collaboration used 758 nights of observation to carry out a deep, wide-area survey to record information from 300 million galaxies that are billions of light-years from Earth. The survey imaged 5000 square degrees of the southern sky in five optical filters to obtain detailed information about each galaxy. A fraction of the survey time was used to observe smaller patches of sky roughly once a week to discover and study thousands of supernovae and other astrophysical transients.

    The cosmic web is driven by dark matter, which could arise from particles created in the early stage of the Universe that do not decay away, but rather remain stable until the present day.

    Fritz Zwicky discovered dark matter while observing the motion of the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel Prize, did much of the subsequent work on dark matter.

    Fritz Zwicky, from http://palomarskies.blogspot.com

    Coma cluster via NASA/ESA Hubble

    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science)

    Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives AIP SPL)

    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970. https://home.dtm.ciw.edu

    The LSST, or Large Synoptic Survey Telescope, is to be named the Vera C. Rubin Observatory by an act of the U.S. Congress.

    LSST telescope (the Vera C. Rubin Survey Telescope), currently under construction on the El Peñón peak at Cerro Pachón, a 2,682-meter-high mountain in Coquimbo Region, northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    LSST Data Journey, Illustration by Sandbox Studio, Chicago with Ana Kova

    Dark Matter Research

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    Scientists studying the cosmic microwave background [CMB] hope to learn about more than just how the universe grew; it could also offer insight into dark matter, dark energy and the mass of the neutrino.

    CMB, per ESA/Planck

    Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al.

    Dark Matter Particle Explorer China

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB deep in Sudbury’s Creighton Mine

    LBNL LZ Dark Matter project at SURF, Lead, SD, USA

    Inside the ADMX experiment hall at the University of Washington Credit Mark Stone U. of Washington. Axion Dark Matter Experiment

    The smallest scales collapse first, while larger scales require longer cosmic times to become overdense enough to form structure. The voids in between the interconnected filaments seen here still contain matter: normal matter, dark matter and neutrinos, all of which gravitate. (RALF KAEHLER, OLIVER HAHN AND TOM ABEL (KIPAC))

    The Universe as we know it began some 13.8 billion years ago with the Big Bang. It was filled with matter, antimatter, radiation, etc.; all the particles and fields that we know of today, and possibly even more. From the earliest instants of the hot Big Bang, however, it wasn’t simply a uniform sea of these energetic quanta. Instead, there were tiny imperfections ⁠ — at about the 0.003% level ⁠ — on all scales, where some regions had slightly more or slightly less matter-and-energy than average.

    In each one of these regions, a great cosmic race ensued. The race was between two competing phenomena:

    the expanding Universe, on one hand, which works to drive all the matter and energy apart,
    and gravitation, which works to pull all forms of energy together, causing massive material to clump and cluster together.

    The growth of the cosmic web and the large-scale structure in the Universe, shown here with the expansion itself scaled out, results in the Universe becoming more clustered and clumpier as time goes on. Initially small density fluctuations will grow to form a cosmic web with great voids separating them, but what appear to be the largest wall-like and supercluster-like structures may not be true, bound structures after all. (VOLKER SPRINGEL)

    With both normal matter and dark matter populating our Universe ⁠ — but not in sufficient quantities to cause the entire Universe to recollapse ⁠ — our Universe first forms stars and star clusters, with the first ones appearing when less than 200 million years have passed since the Big Bang. Over the next few hundred million years, structure begins to appear on larger scales, with the first galaxies forming, star clusters merging together, and even galaxies growing to attract matter from the lower-density regions nearby.

    As time continues to pass, and we cross from hundreds of millions of years to billions of years in our measurement of time since the Big Bang, galaxies gravitate together to form the Universe’s first galaxy clusters. With up to thousands of Milky Way-sized galaxies in them, massive mergers form giant elliptical behemoths at the cores of these clusters. At the modern extremes, galaxies like IC 1101 can grow to quadrillions of solar masses.

    The giant galaxy cluster Abell 2029 houses galaxy IC 1101 at its core. At 5.5 million light-years across, with over 100 trillion stars and the mass of nearly a quadrillion suns, it’s the largest known galaxy of all. As massive and impressive as this galaxy cluster is, it’s unfortunately difficult for the Universe to make something significantly larger. (DIGITIZED SKY SURVEY 2, NASA)

    On even larger spatial scales and even longer timescales, the cosmic web begins to take shape, with filaments of dark matter tracing out a series of interconnecting lines. The dark matter drives the gravitational growth of the Universe, while the normal matter interacts through forces other than gravity as well, leading to the formation of gas clumps, new stars, and even new galaxies on long enough timescales.

    Meanwhile, the space between the filaments, the underdense region of the Universe, gives up its matter to the surrounding structures, leaving behind great cosmic voids. Galaxies dot the filaments and fall into the larger cosmic structures where multiple filaments intersect. On long enough timescales, the most spectacular nexuses of matter even begin attracting one another, causing galaxy groups and clusters to form even larger structures: galactic superclusters.

    Our local supercluster, Laniakea, contains the Milky Way, our local group, the Virgo cluster, and many smaller groups and clusters on the outskirts. However, each group and cluster is bound only to itself, and will be driven apart from the others due to dark energy and our expanding Universe. After 100 billion years, even the nearest galaxy beyond our own local group will be approximately a billion light years away, making it many thousands, and potentially millions of times fainter than the nearest galaxies appear today. (ANDREW Z. COLVIN / WIKIMEDIA COMMONS)

    Superclusters are collections of:

    individual, isolated galaxies,
    galactic groups,
    and large galaxy clusters,

    all connected by great cosmic filaments that trace out the cosmic web. Their mutual gravitation attracts these components toward a common center of mass; these large structures span hundreds of millions of light-years and contain upwards of 100,000 galaxies apiece.

    If all that we had in the Universe were dark matter, normal matter, black holes, neutrinos and radiation ⁠ — where the combined gravitational effects of these components fought against the Universe’s expansion ⁠ — superclusters would eventually reign supreme. Given enough time, these enormous structures would mutually attract to the point where they all merged together, creating one enormous, bound cosmic structure of unparalleled proportions.

    The flows of nearby galaxies and galaxy clusters (as shown by the ‘lines’ of flows) are mapped out with the mass field nearby. The greatest overdensities (in red) and underdensities (in black) came about from very small gravitational differences in the early Universe. (HELENE M. COURTOIS, DANIEL POMAREDE, R. BRENT TULLY, YEHUDA HOFFMAN, DENIS COURTOIS, FROM “COSMOGRAPHY OF THE LOCAL UNIVERSE” (2013))

    In our own local corner of the Universe, the Milky Way can be found in a small neighborhood we call our local group. Andromeda is our local group’s largest galaxy, followed by the Milky Way at #2, the Triangulum galaxy at #3, and perhaps 60 significantly smaller dwarf galaxies strewn out over a volume spanning a few million light-years in three dimensions. Our local group is one of many small-ish groups in our vicinity, along with the M81 group, the Sculptor group, and the Maffei group.

    Larger groups ⁠ — like the Leo I group or the Canes II group ⁠ — are also abundant in our nearby surroundings, containing around a dozen large galaxies apiece. But the most dominant nearby structure is the Virgo Cluster of galaxies, containing more than a thousand galaxies comparable in size/mass to the Milky Way, and located just 50–60 million light-years away. The Virgo Cluster is the largest concentration of mass in our nearby Universe.

    The Laniakea supercluster [see first image], containing the Milky Way (red dot), is home to our Local Group and so much more. Our location lies on the outskirts of the Virgo Cluster (large white collection near the Milky Way). Despite the deceptive looks of the image, this isn’t a real structure, as dark energy will drive most of these clumps apart, fragmenting them as time goes on. (TULLY, R. B., COURTOIS, H., HOFFMAN, Y & POMARÈDE, D. NATURE 513, 71–73 (2014))

    But the Virgo cluster itself is just one of a large number of galaxy clusters, themselves collections of hundreds to thousands of large galaxies, that have been mapped out in the nearby Universe. The Centaurus cluster, the Perseus-Pisces cluster, the Norma cluster and the Antlia cluster represent some of the densest and largest concentrations of mass close to the Milky Way.

    They conform very well to this idea of the cosmic web, where “strings” of galaxies and groups exist along the filaments connecting these large clusters, and with giant voids in space separating these mass-containing regions from one another. These voids are tremendously underdense, while the nexuses of these filaments are excessively overdense; it’s very clear that on cosmic timescales, the underdense regions have given up the majority of their matter to the denser, galaxy-rich clusters.

    The relative attractive and repulsive effects of overdense and underdense regions on the Milky Way are mapped out here on distance scales of hundreds of millions of light-years. Overdense and underdense regions both pull and push on matter, giving it speeds of hundreds or even thousands of kilometers in excess of what we’d expect from redshift measurements and the Hubble flow alone. These giant collections of galaxies can be divided up into superclusters, but the structures themselves are not gravitationally stable. (YEHUDA HOFFMAN, DANIEL POMARÈDE, R. BRENT TULLY, AND

    In our larger galactic neighborhood, extending out to around one or two hundred million light-years, all of these clusters (excepting Perseus-Pisces, which lies on the other side of a nearby void) appear to be joined by filaments containing galaxies and galactic groups. Together, they appear to make up a much larger structure, and if you sum up every galaxy in it ⁠ — large and small alike ⁠ — we fully anticipate that the total number should exceed 100,000.

    This is the collection of matter that we refer to as Laniakea: our local supercluster. It links up our own massive cluster, the Virgo cluster, with the Centaurus cluster, the Great Attractor, the Norma Cluster and many others. It’s a beautiful idea that represents structures on scales larger than a visual inspection would reveal. But there’s a problem with the idea of Laniakea in particular and with superclusters in general: these are not real, bound structures, but only apparent structures that are currently in the process of dissolving away entirely.

    In between the great clusters and filaments of the Universe are great cosmic voids, some of which can span hundreds of millions of light-years in diameter. The long-standing question of whether the ultra-large superclusters spanning many hundreds of millions of light-years are truly held together has now been settled: these enormous web-like features are destined to be torn apart by the Universe’s expansion. (ANDREW Z. COLVIN (CROPPED BY ZERYPHEX) / WIKIMEDIA COMMONS)

    Our Universe isn’t just a race between an initial expansion and the counteracting gravitational force caused by matter and radiation. In addition, there’s also a positive form of energy that’s inherent to space itself: dark energy. It causes the recession of distant galaxies to speed up as time goes on. And ⁠ — perhaps most importantly ⁠ — it gets more important on larger scales and at later times, which is particularly relevant for the existence of superclusters.

    If there were no dark energy, Laniakea would most certainly be real. Over time, its galaxies and clusters would all mutually attract, leading to an enormous grouping of 100,000+ galaxies, the likes of which our Universe has never seen. Unfortunately, dark energy became the dominant factor in our Universe’s evolution approximately 6 billion years ago, and the various components of the Laniakea supercluster are already accelerating away from one another. No component of Laniakea, including every independent group and cluster mentioned in this article, is gravitationally bound to any other.

    The impressively huge galaxy cluster MACS J1149.5+223, whose light took over 5 billion years to reach us, is among the largest bound structures in all the Universe. On larger scales, nearby galaxies, groups, and clusters may appear to be associated with it, but are being driven apart from this cluster due to dark energy; superclusters are only apparent structures. (NASA, ESA, S. RODNEY (JOHNS HOPKINS UNIVERSITY, USA) AND THE FRONTIERSN TEAM; T. TREU (UNIVERSITY OF CALIFORNIA LOS ANGELES, USA), P. KELLY (UNIVERSITY OF CALIFORNIA BERKELEY, USA) AND THE GLASS TEAM; J. LOTZ (STSCI) AND THE FRONTIER FIELDS TEAM; M. POSTMAN (STSCI) AND THE CLASH TEAM; AND Z. LEVAY (STSCI))

    Every supercluster that we’ve ever identified is not only gravitationally unbound from the others, but is itself not a gravitationally bound structure. The individual groups and clusters within a supercluster are unbound, meaning that as time goes on, each structure presently identified as a supercluster will eventually dissociate. For our own corner of the Universe, the Local Group will never merge with the Virgo cluster, the Leo I group, or any structure larger than our own.

    On the largest cosmic scales, enormous collections of galaxies spanning vast volumes of space appear to be real ⁠ — the Universe’s superclusters ⁠ — but these apparent structures are ephemeral and transient. They are not bound together, and they will never become so. In fact, if a structure had not already accumulated enough mass 6 billion years ago to become bound, when dark energy first dominated the Universe’s expansion, it never will. Billions of years from now, the individual supercluster components will be torn apart by the Universe’s expansion, forever adrift as lonesome islands in the great cosmic ocean.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 3:37 pm on February 10, 2020 Permalink | Reply
    Tags: "This 9-Gigapixel Zoomable Image Is Humanity’s Best All-Time View Of The Galactic Center", ESO’s VISTA telescope at Cerro Paranal-the world’s most powerful wide-field and high-resolution infrared astronomy telescope., Ethan Siegel

    From Ethan Siegel: “This 9-Gigapixel Zoomable Image Is Humanity’s Best All-Time View Of The Galactic Center” 

    From Ethan Siegel
    Feb 10, 2020

    This image, focusing on the central few degrees of the Milky Way in infrared light, is the culmination of years of observations taken with the ESO’s VISTA instrument, the world’s most powerful wide-field and high-resolution infrared astronomy telescope. (ESO/VVV SURVEY/D. MINNITI; ACKNOWLEDGEMENT: IGNACIO TOLEDO, MARTIN KORNMESSER)

    Part of ESO’s Paranal Observatory, at an elevation of 2,635 metres (8,645 ft) above sea level in the Atacama Desert of Chile, the VISTA telescope observes brilliantly clear skies. It is the largest survey telescope in the world dedicated to infrared light.

    The center of the galaxy is mostly obscured in visible light. But thanks to the world’s most powerful infrared telescope, we can see inside.

    Throughout history, the sight of the Milky Way has fascinated and mystified skywatchers worldwide.

    This image is a single projection of Gaia’s all-sky view of our Milky Way Galaxy and neighboring galaxies, based on measurements of nearly 1.7 billion stars. The map shows the total brightness and color of stars observed by the ESA satellite in each portion of the sky between July 2014 and May 2016. However, even with Gaia, the galactic center remains largely obscured, as it cannot penetrate the dust lanes of our galaxy in optical wavelengths. (ESA/GAIA/DPAC)

    ESA/GAIA satellite

    In visible light, the dark dust lanes redden and obscure billions of stars lurking behind them.

    The all-sky infrared map of the sky from NASA’s WISE spacecraft. As spectacular as this image is, it cannot achieve the resolutions or exposure times or cover as many independent wavelengths as the ground-based VISTA observatory can. (NASA / JPL-CALTECH / UCLA, FOR THE WISE COLLABORATION)


    Space-based observatories, like NASA’s WISE and Spitzer, have seen through the dust, revealing hidden stars and gas.

    NASA/Spitzer Infrared Telescope. No longer in service.

    This infrared view of the plane of the Milky Way, taken from space by NASA’s Spitzer as part of the GLIMPSE galactic survey, is one of the most ambitious observing projects ever undertaken, taking a decade to complete. At longer wavelengths than are visible from the ground, the gas of different temperatures from our galaxy is highlighted as never before. (NASA/JPL-CALTECH/UNIVERSITY OF WISCONSIN)


    NASA’s Spitzer, in particular, constructed the most comprehensive map of the galactic plane ever seen.

    This four-panel view shows the Milky Way’s central region in four different wavelengths of light, with the longer (submillimeter) wavelengths at top, going through the far-and-near infrared (2nd and 3rd) and ending in a visible-light view of the Milky Way. Note that the dust lanes and foreground stars obscure the center in visible light, but not so much in the infrared. (ESO/ATLASGAL CONSORTIUM/NASA/GLIMPSE CONSORTIUM/VVV SURVEY/ESA/PLANCK/D. MINNITI/S. GUISARD ACKNOWLEDGEMENT: IGNACIO TOLEDO, MARTIN KORNMESSER)

    But the most spectacular mosaic of the galactic center itself comes courtesy of the ground-based VISTA telescope.

    This wide-field image of the VISTA telescope, taken just one year prior to its ‘first light’ on the night sky, shows the infrared camera equipped and ready for action. The VISTA telescope, the most powerful infrared survey telescope in history, was built by a consortium of UK institutions as part of the ESO, which has assured the world that the UK’s membership will not be affected by Brexit. (ESO)

    VISTA, the ESO’s Visible and Infrared Survey Telescope for Astronomy, assembled a whopping 9-gigapixel image of our galaxy’s innermost few degrees.

    This infrared view of the central part of the Milky Way from the VVV VISTA survey has been labelled to show a selection of the many nebulae and clusters in this part of the sky. Messier 8 (the Lagoon Nebula), Messier 20 (the Trifid Nebula), NGC 6357 (the War and Peace Nebula) and NGC 6334 (the Cat’s Paw Nebula) are all easily seen at low-resolution, while the others can be found by zooming in to the full 9-gigapixel mosaic. (ESO/VVV SURVEY/D. MINNITI; ACKNOWLEDGEMENT: IGNACIO TOLEDO, MARTIN KORNMESSER)

    Dusty star-forming regions, like the Lagoon Nebula, are only faintly identifiable in the infrared.

    The Lagoon Nebula, part of a larger molecular cloud complex that extends across the frame of the image (but is concentrated in the upper-right), is a large star-forming region in the galactic plane of the Milky Way. In the infrared, it looks enormously different from its bright red visible light appearance. (ESO/VVV SURVEY/D. MINNITI; ACKNOWLEDGEMENT: IGNACIO TOLEDO, MARTIN KORNMESSER)

    The great dark cloud known as Barnard 78 appears as barely a wisp.

    Sometimes known as the ‘Bowl of the Pipe’ portion of an even larger structure known as the Pipe Nebula, the dark nebula Barnard 78 is a molecular cloud that reduces the brightness of stars behind it by approximately 5 astronomical magnitudes. In the infrared, however, it barely appears as a series of wisps. (ESO/VVV SURVEY/D. MINNITI; ACKNOWLEDGEMENT: IGNACIO TOLEDO, MARTIN KORNMESSER)
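
    For context on what ‘5 astronomical magnitudes’ means: the magnitude scale is logarithmic, with every 5 magnitudes corresponding to a factor of 100 in brightness. A minimal sketch of that conversion (the function name is mine, for illustration):

```python
# The astronomical magnitude scale is logarithmic: a difference of
# delta_m magnitudes corresponds to a flux ratio of 100**(delta_m / 5).
def dimming_factor(delta_m):
    """Factor by which a dust cloud reduces background-star brightness."""
    return 100 ** (delta_m / 5)

# Barnard 78 dims the stars behind it by ~5 magnitudes in visible light:
print(dimming_factor(5))  # → 100.0, i.e. background stars appear 100x fainter
```

    So in the optical, the stars behind Barnard 78 appear roughly 100 times fainter, while infrared light passes through far more freely.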

    The Trifid Nebula, famously two-toned in visible light, shows a dusty, blue tinge on the actively star-forming side only.

    The Trifid Nebula, which normally appears with a blue tone on the left (a reflection nebula) and a red tone on the right (an emission nebula), shows only bright stars on the left side (typically due to red giants or supergiants in the infrared) and a bluish tone on the right, perhaps indicative of either younger stars or large amounts of infrared (heat) radiation coming from the gas in that region. (ESO/VVV SURVEY/D. MINNITI; ACKNOWLEDGEMENT: IGNACIO TOLEDO, MARTIN KORNMESSER)

    Molecular clouds and ionized, shocked regions look wildly unfamiliar in the infrared.

    Shown here, the ‘War and Peace’ Nebula (left, NGC 6357) and the ‘Cat’s Paw’ Nebula (right, NGC 6334), two regions of active star formation near the galactic center, take on wildly different appearances from how they look in the optical. (ESO/VVV SURVEY/D. MINNITI; ACKNOWLEDGEMENT: IGNACIO TOLEDO, MARTIN KORNMESSER)

    The exact center itself, meanwhile, reveals millions of stars that are completely invisible in the optical.

    The inner galactic center, as viewed in infrared light, shows what appears to be an interwoven web of dust surrounding a yellowish core. In the galactic center, the stars are not necessarily intrinsically yellow, but rather are reddened preferentially by the foreground matter that scatters away the bluer light, similar to how sunsets appear red as our atmosphere scatters the blue light away. (ESO/VVV SURVEY/D. MINNITI; ACKNOWLEDGEMENT: IGNACIO TOLEDO, MARTIN KORNMESSER)

    Where the dust is thickest and densest, even infrared light cannot penetrate.

    A zoom into the innermost region of the galactic center reveals an enormously dense plethora of stars, just a few of the nearly 100 million contained in the entire mosaic, but also rich lanes of dust that even the long-wavelength infrared light cannot fully penetrate. (ESO/VVV SURVEY/D. MINNITI; ACKNOWLEDGEMENT: IGNACIO TOLEDO, MARTIN KORNMESSER)

    See the full article here .



  • richardmitnick 10:49 am on February 3, 2020 Permalink | Reply
    Tags: "How Can We See 46.1 Billion Light-Years Away In A 13.8 Billion Year Old Universe?", Ethan Siegel

    From Ethan Siegel: “Ask Ethan: How Can We See 46.1 Billion Light-Years Away In A 13.8 Billion Year Old Universe?” 

    From Ethan Siegel
    Feb 1, 2020

    In General Relativity, the fabric of space doesn’t remain static over time. Everything else depends on the details we measure.

    After the Big Bang, the Universe was almost perfectly uniform, and full of matter, energy and radiation in a rapidly expanding state. As time goes on, the Universe not only forms elements, atoms, and clumps and clusters together that lead to stars and galaxies, but expands and cools the entire time. The Universe continues to expand even today, growing at a rate of 6.5 light-years per year in all directions at its most distant frontier. (NASA / GSFC)

    If there’s one thing we’ve experimentally determined to be a constant in the Universe, it’s the speed of light in a vacuum, c. No matter where, when, or in which direction light travels, it moves at 299,792,458 meters-per-second, traveling a distance of 1 light-year (about 9 trillion km) every year. It’s been 13.8 billion years since the Big Bang, which might lead you to expect that the farthest objects we can possibly see are 13.8 billion light-years away. But not only isn’t that true, the farthest distance we can see is more than three times as remote: 46.1 billion light-years. How can we see so far away? That’s what Anton Scheepers and Jere Singleton want to know, asking:

    If the age of the universe is 13.8 billion years, how can we detect any signal that is more than 13.8 billion light-years away?

    Timeline of the Inflationary Universe WMAP

    It’s a good question, and one that you need a little bit of physics to answer.
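
    As a warm-up, the ‘1 light-year ≈ 9 trillion km’ figure quoted above is just the defined speed of light multiplied by the number of seconds in a year. A quick sketch (using a Julian year of 365.25 days):

```python
# Distance light travels in one year, from the exactly-defined speed of light.
c_km_per_s = 299_792.458               # speed of light in km/s
seconds_per_year = 365.25 * 24 * 3600  # one Julian year in seconds

light_year_km = c_km_per_s * seconds_per_year
print(f"1 light-year ≈ {light_year_km:.3e} km")  # ≈ 9.461e+12 km, ~9.5 trillion km
```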

    We often visualize space as a 3D grid, even though this is a frame-dependent oversimplification when we consider the concept of spacetime. In reality, spacetime is curved by the presence of matter-and-energy, and distances are not fixed but rather can evolve as the Universe expands or contracts. (REUNMEDIA / STORYBLOCKS)

    We can start by imagining a Universe where the most distant objects we could see really were 13.8 billion light-years away. For that to be the case, you’d have to have a Universe where:

    objects remained at the same, fixed distance from one another over time,
    where the fabric of space remained static and neither expanded nor contracted over time,
    and where light propagated through the Universe in a straight line between any two points, never being diverted or affected by the effects of matter, energy, spatial curvature, or anything else.

    If you imagine your Universe to be a three-dimensional grid — with an x, y, and z axis — where space itself is fixed and unchanging, this would actually be possible. Objects would emit light in the distant past, that light would travel through the Universe until it arrived at our eyes, and we’d receive it the same number of “years” later as the number of “light-years” the light traveled.

    In a static, unchanging Universe, all objects would emit light in all directions, and that light would propagate through the Universe at the speed of light. After a time of 13.8 billion years had passed, the maximum amount of distance that the light could have traveled would be 13.8 billion light-years. (ANDREW Z. COLVIN OF WIKIMEDIA COMMONS)

    Unfortunately for us, all three of those assumptions are incorrect. For starters, objects don’t remain at a constant, fixed distance from one another, but rather are free to move through the space that they occupy. The mutual gravitational effects of all the massive and energy-containing objects in the Universe cause them to move around and accelerate, clumping masses together into structures like galaxies and clusters of galaxies, while other regions become devoid of matter.

    These forces can get extremely complex, kicking stars and gas out of galaxies, creating ultra-fast hypervelocity objects, and creating all sorts of accelerations. The light that we perceive will be redshifted or blueshifted dependent on our relative velocity to the object we’re observing, and the light-travel time won’t necessarily be the same as the actual present-day distance between any two objects.

    A light-emitting object moving relative to an observer will have the light that it emits appear shifted dependent on the location of an observer. Someone on the left will see the source moving away from it, and hence the light will be redshifted; someone to the right of the source will see it blueshifted, or shifted to higher frequencies, as the source moves towards it. (WIKIMEDIA COMMONS USER TXALIEN)

    This last point is very important, because even in a Universe where space is static, fixed, and unchanging, objects could still move through it. We can even imagine an extreme case: an object that was located 13.8 billion light-years away some 13.8 billion years ago, but was moving away from us at a velocity very close to the speed of light.

    That light will still propagate towards us at the speed of light, traversing 13.8 billion light-years in a timespan of 13.8 billion years. But when that light arrives at the present day, the object can be up to twice as far away: up to 27.6 billion light-years away if it moved away from us arbitrarily close to the speed of light. Even if the fabric of space didn’t change over time, there are plenty of objects we can see today that could be farther away than 13.8 billion light-years.

    The only catch is that the light itself can have traveled for at most 13.8 billion years, covering at most 13.8 billion light-years; how the objects move after emitting that light is irrelevant.
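
    The arithmetic behind that extreme case is simple: in a static space, the present-day distance is the emission distance plus however far the object recedes during the light’s travel time. A minimal sketch (the 99%-of-c speed is an illustrative choice, not a figure from the article):

```python
# In a static (non-expanding) space: an object emits light from d_emit_gly
# (billions of light-years) away, then recedes at v_over_c times the speed
# of light for the t_gyr (billions of years) the light spends in transit.
def present_distance(d_emit_gly, v_over_c, t_gyr):
    return d_emit_gly + v_over_c * t_gyr  # distance now = then + speed * time

print(present_distance(13.8, 0.99, 13.8))  # ≈ 27.46 Gly, just shy of the 27.6 Gly limit
```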

    Light, in a vacuum, always appears to move at the same speed, the speed of light, regardless of the observer’s velocity. If a distant object emitted light and then moved quickly away from us, it could be just about as far away today as double the light-travel distance. (PIXABAY USER MELMAK)

    But the fabric of space isn’t constant, either. This was the big revelation of Einstein that led him to formulate the General theory of Relativity: that neither space nor time were static or fixed, but instead formed a fabric known as spacetime, whose properties were dependent on the matter and energy present within the Universe.

    If you were to take a Universe that was, on average, filled relatively evenly with some form of matter or energy — irrespective of whether it were normal matter, dark matter, photons, neutrinos, gravitational waves, black holes, dark energy, cosmic strings, or any combination thereof — you would find that the fabric of space itself is unstable: it cannot remain static and unchanging. Instead, it must either expand or contract; the great cosmic distances between objects must change over time.

    First noted by Vesto Slipher back in 1917, some of the objects we observe show the spectral signatures of absorption or emission of particular atoms, ions, or molecules, but with a systematic shift towards either the red or blue end of the light spectrum. When combined with the distance measurements of Hubble, this data gave rise to the initial idea of the expanding Universe: the farther away a galaxy is, the greater its light is redshifted. (VESTO SLIPHER, (1917): PROC. AMER. PHIL. SOC., 56, 403)

    Beginning in the 1910s and 1920s, observations began to confirm this picture. We discovered that the spiral and elliptical nebulae in the sky were galaxies beyond our own; we measured the distance to them; we discovered that the farther away they were, the greater their light was redshifted.

    In the context of Einstein’s General Relativity, this led to a surefire conclusion: the Universe was expanding.

    This is even more profound than people typically realize. The fabric of space itself does not remain constant over time, but rather expands, pushing objects that aren’t gravitationally bound together apart from one another. It’s as if individual galaxies and groups/clusters of galaxies were raisins embedded in a sea of invisible (space-like) dough, and that as the dough leavened, the raisins were pushed apart. The space between these objects expands, and that causes individual objects to appear to recede from one another.

    The ‘raisin bread’ model of the expanding Universe, where relative distances increase as the space (dough) expands. The farther away any two raisins are from one another, the greater the observed redshift will be by the time the light is received. The redshift-distance relation predicted by the expanding Universe is borne out in observations, and has been consistent with what’s been known all the way back since the 1920s. (NASA / WMAP SCIENCE TEAM)

    This has enormous implications for the meaning behind our observations. When we observe a distant object, we don’t just see the light that it emitted, nor do we merely see the light shifted by the relative velocity of the source and the observer. Instead, we see how the expanding Universe has affected that light from the cumulative effects of the expanding space that occurred at every point along its journey.

    If we want to probe the absolute limits of how far back we’re able to see, we’d look for light that was emitted as close to 13.8 billion years ago as possible, that was just arriving at our eyes today. We’d calculate, based on the light we see now:

    how much time the light has been traveling for,
    how the Universe has expanded between then and now,
    what all the different forms of energy present in the Universe must be to account for it,
    and how far away the object must be today, given everything we know about the expanding Universe.

    This simplified animation shows how light redshifts and how distances between unbound objects change over time in the expanding Universe. Note that the objects start off closer than the amount of time it takes light to travel between them, the light redshifts due to the expansion of space, and the two galaxies wind up much farther apart than the light-travel path taken by the photon exchanged between them. (ROB KNOP)

    We haven’t just done this for a handful of objects at this point, but for literally millions of them, ranging in distance from our own cosmic backyard out to objects more than 30 billion light-years away.

    How can the objects be more than 30 billion light-years away, you ask?

    It’s because the space between any two points — like us and the object we’re observing — expands with time. The farthest object we’ve ever seen has had its light travel towards us for 13.4 billion years; we’re seeing it as it was just 407 million years after the Big Bang, or 3% of the Universe’s present age. The light we observe is redshifted by about a factor of 12: the observed wavelength is roughly 12.1 times (1210% of) the wavelength at which it was emitted. And after that 13.4 billion year journey, that object is now some 32.1 billion light-years away, consistent with an expanding Universe.

    The most distant galaxy ever discovered in the known Universe, GN-z11, has its light come to us from 13.4 billion years ago: when the Universe was only 3% its current age: 407 million years old. The distance from this galaxy to us, taking the expanding Universe into account, is an incredible 32.1 billion light-years. (NASA, ESA, AND G. BACON (STSCI))
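
    We can sanity-check those GN-z11 numbers ourselves. In a flat, expanding universe, the present-day (comoving) distance to a source at redshift z comes from integrating c/H(z) along the light’s path. The sketch below does this numerically with Planck-like parameters (H0 = 67.7 km/s/Mpc, 31% matter, 69% dark energy, a trace of radiation); these exact values are my assumption, chosen to approximate the best-fit composition the article describes:

```python
# Comoving distance in a flat LambdaCDM universe:
#   D = (c / H0) * integral_0^z dz' / E(z'),
#   E(z) = sqrt(Om*(1+z)^3 + Or*(1+z)^4 + OL).
# Parameter values below are Planck-like assumptions, not exact article inputs.
H0 = 67.7                # Hubble constant, km/s/Mpc
OM, OR = 0.31, 9e-5      # matter and radiation density fractions today
OL = 1.0 - OM - OR       # dark-energy fraction (from flatness)
C = 299_792.458          # speed of light, km/s
MPC_TO_GLY = 3.2616e-3   # 1 megaparsec ≈ 3.2616 million light-years

def E(z):
    return (OM * (1 + z)**3 + OR * (1 + z)**4 + OL) ** 0.5

def comoving_distance_gly(z, steps=100_000):
    # midpoint-rule integration of dz'/E(z') from 0 to z
    dz = z / steps
    integral = sum(dz / E((i + 0.5) * dz) for i in range(steps))
    return (C / H0) * integral * MPC_TO_GLY

print(comoving_distance_gly(11.09))  # ~32 billion light-years for GN-z11
```

    With these inputs the integral lands within a couple of percent of the article’s 32.1 billion light-years; the small residual comes down to the exact parameter choices.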

    Based on the full suite of observations we’ve taken — measuring not just redshifts and distances of objects but also the leftover glow from the Big Bang (the cosmic microwave background), the clustering of galaxies and features in the large-scale structure of the Universe, gravitational lenses, colliding clusters of galaxies, the abundances of the light elements created before any stars were formed, etc. — we can determine what the Universe is made of, and in what ratios.

    The distance/redshift relation, including the most distant objects of all, seen from their type Ia supernovae. The data strongly favors an accelerating Universe. Note how these lines are all different from one another, as they correspond to Universes made of different ingredients. (NED WRIGHT, BASED ON THE LATEST DATA FROM BETOULE ET AL.)

    Today, our best estimates are that we live in a Universe made up of:

    0.01% radiation in the form of photons,
    0.1% neutrinos, which have a small but non-zero mass,
    4.9% normal matter, made of protons, neutrons and electrons,
    27% dark matter,
    and 68% dark energy.

    This fits all the data we have, and leads to a unique expansion history dating from the moment of the Big Bang. From this, we can extract one unique value for the size of the visible Universe: 46.1 billion light-years in all directions.
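
    That 46.1-billion-light-year figure follows from integrating c/H(z) from today all the way back toward the Big Bang, i.e. out to arbitrarily high redshift. A self-contained numerical sketch, again with Planck-like parameter values that are my assumption rather than the article’s exact fit:

```python
import math

# Particle horizon (radius of the observable Universe today) in flat LambdaCDM:
#   D = (c / H0) * integral_0^inf dz / E(z),
#   E(z) = sqrt(Om*(1+z)^3 + Or*(1+z)^4 + OL).
H0 = 67.7                # km/s/Mpc
OM, OR = 0.31, 9e-5      # matter and radiation; radiation dominates at high z
OL = 1.0 - OM - OR       # dark-energy fraction (from flatness)
C = 299_792.458          # speed of light, km/s
MPC_TO_GLY = 3.2616e-3   # 1 megaparsec ≈ 3.2616 million light-years

def E(z):
    return (OM * (1 + z)**3 + OR * (1 + z)**4 + OL) ** 0.5

def horizon_gly(z_max=1e8, steps=200_000):
    # integrate on a logarithmic grid u = ln(1+z), so dz = (1+z) du;
    # this samples the early, high-redshift Universe properly
    du = math.log(1 + z_max) / steps
    total = 0.0
    for i in range(steps):
        z = math.exp((i + 0.5) * du) - 1
        total += du * (1 + z) / E(z)
    return (C / H0) * total * MPC_TO_GLY

print(horizon_gly())  # ~46 billion light-years, in line with the article's 46.1
```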

    The size of our visible Universe (yellow), along with the amount we can reach (magenta). The limit of the visible Universe is 46.1 billion light-years, as that’s the limit of how far away an object that emitted light that would just be reaching us today would be after expanding away from us for 13.8 billion years. (E. SIEGEL, BASED ON WORK BY WIKIMEDIA COMMONS USERS AZCOLVIN 429 AND FRÉDÉRIC MICHEL)

    If the limit of what we could see in a 13.8 billion year old Universe were truly 13.8 billion light-years, it would be extraordinary evidence that General Relativity was wrong and that objects could not move from one location to a more distant location in the Universe over time. The observational evidence overwhelmingly indicates that objects do move, that General Relativity is correct, and that the Universe is expanding and dominated by a mix of dark matter and dark energy.

    When you take the full suite of what’s known into account, we discover a Universe that began with a hot Big Bang some 13.8 billion years ago, has been expanding ever since, and whose most distant light can come to us from an object presently located 46.1 billion light-years away. The space between ourselves and the distant, unbound objects we observe continues to expand at a rate of 6.5 light-years per year at the most distant cosmic frontier. As time goes on, the distant reaches of the Universe will further recede from our grasp.

    See the full article here .



  • richardmitnick 9:29 am on February 3, 2020 Permalink | Reply
Tags: Ethan Siegel, SN 1000+0216

    From Ethan Siegel: “The Brightest Supernovae Of All Have A Suspiciously Common Explanation” 

    From Ethan Siegel
    Jan 31, 2020

    All supernovae are not created equal. After a 14 year investigation, the brightest ones have a surprising explanation.

This illustration shows superluminous supernova SN 1000+0216, the most distant supernova ever observed. At a redshift of z = 3.90, its light comes from when the Universe was just 1.6 billion years old, making it the current record-holder among individual supernovae. (ADRIAN MALEC AND MARIE MARTIG (SWINBURNE UNIVERSITY))
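As a quick illustration of what z = 3.90 means observationally: cosmological redshift stretches every emitted wavelength by a factor of (1 + z). Hydrogen's ultraviolet Lyman-alpha line (121.6 nm, a standard value assumed here, not a figure from the article) would arrive in the visible part of the spectrum:

```python
# Cosmological redshift stretches every wavelength by a factor of (1 + z).
def observed_wavelength_nm(emitted_nm, z):
    return emitted_nm * (1 + z)

# Hydrogen's Lyman-alpha line (121.6 nm, deep ultraviolet) emitted at z = 3.90
shifted = observed_wavelength_nm(121.6, 3.90)
print(f"arrives at ≈ {shifted:.0f} nm")  # ≈ 596 nm: stretched into visible light
```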

In 2006, astronomers witnessed a supernova that defied conventional explanation. Typically, supernovae arise either from the collapse of a massive star's core (type II) or from a white dwarf that's accumulated too much mass (type Ia); in either case, they can reach a peak brightness some 10 billion times as luminous as our own Sun. But this one, known as SN 2006gy, was superluminous, radiating 100 times more energy than normal.
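To put those luminosities in astronomers' terms, they can be converted into absolute magnitudes using the Sun's standard absolute magnitude of 4.83 (a textbook value, not from the article). A minimal sketch:

```python
import math

SUN_ABS_MAG = 4.83  # standard absolute (visual) magnitude of the Sun

def absolute_magnitude(luminosity_solar):
    """Absolute magnitude for a source of the given luminosity, in solar units."""
    return SUN_ABS_MAG - 2.5 * math.log10(luminosity_solar)

typical_sn = absolute_magnitude(1e10)        # ~10 billion Suns: a normal supernova peak
superluminous_sn = absolute_magnitude(1e12)  # 100x brighter still, like SN 2006gy

print(f"typical supernova peak: M ≈ {typical_sn:.1f}")        # about -20.2
print(f"superluminous (100x):   M ≈ {superluminous_sn:.1f}")  # about -25.2
```

Because magnitudes are logarithmic, that factor of 100 in luminosity corresponds to exactly 5 magnitudes of extra brightness.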

For more than a decade, the leading explanation was thought to be the pair-instability mechanism, where energies inside the star rise so high that matter-antimatter pairs are spontaneously produced. But in a new detailed analysis, published in the January 24, 2020 issue of Science magazine, scientists reached a shocking conclusion: this was probably a fairly typical type Ia supernova, simply occurring under odd conditions. Here's how they got there.

    Many strange transient events, such as AT2018cow, involve a combination of some type of supernova interacting with a spherical cloud of matter previously blown off by the star or otherwise existing in the surrounding material around a central explosion. (BILL SAXTON, NRAO/AUI/NSF)

    Although stars might seem like they’re incredibly complicated objects, with gravity, nuclear fusion, complex fluid flow, energy transport, and magnetized plasmas all playing a role, their life cycles and fates typically boil down to just one major factor: the mass they’re born with. When a cloud of gas that’s collapsed under its own gravity becomes dense, hot, and massive enough, it ignites nuclear fusion in its core, beginning with a chain reaction that fuses hydrogen into helium.

    The more massive a star is, the larger and hotter the region of the core where fusion occurs will be. It’s no surprise, then, that the coolest, lowest-mass stars in the Universe, including red dwarfs like Proxima Centauri, emit less than 0.2% the light of our Sun and can take trillions of years to burn through their fuel. On the other end of the spectrum, the most massive known stars, hundreds of times as massive as our Sun, can be millions of times as luminous and will burn through their core’s hydrogen in just 1 or 2 million years.
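This steep dependence of brightness and lifetime on mass can be sketched with the classic textbook scaling L ∝ M^3.5 (the exponent is a standard approximation assumed here, not a figure from the article, and it flattens at both extremes of the mass range):

```python
def luminosity_solar(mass_solar, exponent=3.5):
    """Main-sequence luminosity (solar units) from the textbook L ∝ M^3.5 scaling."""
    return mass_solar ** exponent

def lifetime_gyr(mass_solar):
    """Fuel supply over burn rate: lifetime ∝ M / L, normalized to ~10 Gyr for the Sun."""
    return 10.0 * mass_solar / luminosity_solar(mass_solar)

# A dim red dwarf, the Sun, and a very massive star
for mass in (0.12, 1.0, 60.0):
    print(f"M = {mass:>5.2f} M_sun: L ≈ {luminosity_solar(mass):.3g} L_sun, "
          f"lifetime ≈ {lifetime_gyr(mass):.3g} Gyr")
```

The red dwarf comes out thousands of times dimmer than the Sun with a lifetime in the trillions of years, while the 60-solar-mass star is over a million times more luminous and exhausts its core hydrogen in well under 10 million years, in line with the trend described above.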

    The (modern) Morgan–Keenan spectral classification system, with the temperature range of each star class shown above it, in kelvin. Our Sun is a G-class star, producing light with an effective temperature of around 5800 K and a brightness of 1 solar luminosity. Stars can be as low in mass as 8% the mass of our Sun, where they’ll burn with ~0.01% our Sun’s brightness and live for more than 1000 times as long, but they can also rise to hundreds of times our Sun’s mass, with millions of times our Sun’s luminosity and lifetimes of just a few million years. The first generation of stars should consist of O-type and B-type stars almost exclusively. (WIKIMEDIA COMMONS USER LUCASVB, ADDITIONS BY E. SIEGEL)

When the core of a star runs out of hydrogen, the radiation pressure produced by fusion begins to drop. This is bad news for the star, as all that radiation was necessary to hold it up against gravitational collapse. Depending on how quickly the star contracts for its mass, and how slowly heat escapes through the outer layers, contraction makes the core heat up, and if it crosses a particular threshold, new elements can begin fusing.

Red dwarf stars never get hot enough to fuse anything beyond hydrogen, but Sun-like stars, a category spanning roughly 40% to 800% of our Sun's mass, will heat up to fuse helium in their cores while their outer layers are pushed outward, turning each star into a red giant. When these stars run out of helium fuel, their cores contract down into white dwarfs made largely of carbon and oxygen, while their outer layers get blown off into the interstellar medium.

    The planetary nebula NGC 6369’s blue-green ring marks the location where energetic ultraviolet light has stripped electrons from oxygen atoms in the gas. Our Sun, being a single star that rotates on the slow end of stars, is very likely going to wind up looking akin to this nebula after perhaps another 7 billion years. (NASA AND THE HUBBLE HERITAGE TEAM (STSCI/AURA))

    NASA/ESA Hubble Telescope

    Meanwhile, the most massive stars will have their cores contract down to such high temperatures that carbon — the end result of helium fusion — can begin fusing into heavier elements still. In a sequence, carbon fusion will give way to stars fusing neon, oxygen, and eventually silicon and sulfur, leading to a core that’s rich in iron, nickel, and cobalt. Those elements are the end of the line, and when silicon and sulfur fusion end, the core collapses and a type II supernova occurs.

On the other hand, stars that end their lives as white dwarfs can get a second chance: if they either accrete enough mass or merge with another object, they can cross a critical threshold that leads to a different class of supernova, known as a type Ia supernova. All supernovae are thought to arise from one of these two mechanisms, with the only differences depending on which elements are present, absent, or were once present but were later stripped from the star.

    Two different ways to make a Type Ia supernova: the accretion scenario (L) and the merger scenario (R). Without a binary companion, our Sun could never go supernova by accreting matter, but we could potentially merge with another white dwarf in the galaxy, which could lead us to revitalize in a Type Ia supernova explosion after all. When a white dwarf crosses a critical (1.4 solar mass) threshold, nuclear fusion will spontaneously occur between adjacent atomic nuclei in the core. (NASA / CXC / M. WEISS)

When it comes to the specific case of superluminous supernovae such as SN 2006gy, many scenarios have been envisioned to explain them. Initially touted as the brightest stellar explosion ever seen, SN 2006gy has since been rivaled or even exceeded by numerous others observed this century, but it was still classified as a type II supernova due to the hydrogen spectral lines observed in its light. At just 238 million light-years away, it is the closest superluminous supernova ever seen.

    Prior ideas all involved a very massive star that had already experienced eruptive events that created a large amount of material around the star, similar to what’s occurring in our own galaxy with Eta Carinae. A luminous blue variable could have ejected such material, as could a star that pulses due to an intrinsic variation. But traditionally, the most conventional explanation for a cataclysm like this has been the pair-instability mechanism.

    This diagram illustrates the pair production process that astronomers once thought triggered the hypernova event known as SN 2006gy. When high-enough-energy photons are produced, they will create electron/positron pairs, causing a pressure drop and a runaway reaction that destroys the star. This event is known as a pair-instability supernova. Peak luminosities of a hypernova, also known as a superluminous supernova, are many times greater than that of any other, ‘normal’ supernova. (NASA/CXC/M. WEISS)

The idea of the pair-instability mechanism is that the energies inside the core of a star rise so high that individual photons, and collisions between particles, carry enough energy, E, for new particle-antiparticle pairs of electrons and positrons (of combined mass m) to be produced through Einstein's famous mass-energy equivalence relation: E = mc².

    When particle-antiparticle pairs get produced, the radiation pressure drops, causing the core to contract and heat up further, which in turn causes more particle-antiparticle pairs to get produced, which drops the pressure further, etc. In short order, a runaway fusion reaction occurs, and the entire star is torn apart in an enormous explosion.
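The energy threshold this mechanism hinges on can be made concrete with Einstein's relation. Using standard values for the electron mass and the speed of light (textbook constants assumed here, not figures from the article), the minimum energy for creating an electron/positron pair works out to about 1.02 MeV:

```python
# Threshold for pair production via E = mc^2, using standard physical constants.
ELECTRON_MASS_KG = 9.109e-31   # rest mass of the electron (and of the positron)
SPEED_OF_LIGHT = 2.998e8       # m/s
JOULES_PER_MEV = 1.602e-13

rest_energy_mev = ELECTRON_MASS_KG * SPEED_OF_LIGHT**2 / JOULES_PER_MEV
pair_threshold_mev = 2 * rest_energy_mev  # must create the electron AND the positron

print(f"electron rest energy: {rest_energy_mev:.3f} MeV")    # ≈ 0.511 MeV
print(f"pair threshold:       {pair_threshold_mev:.3f} MeV")  # ≈ 1.022 MeV
```

Photons in a stellar core only reach ~MeV typical energies at temperatures of billions of kelvin, which is why pair production only destabilizes the very most massive stellar cores.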

    Until this year, the pair-instability mechanism was the leading idea for explaining superluminous supernovae. But in a new paper [link is above], Anders Jerkstrand, Keiichi Maeda, and Koji S. Kawabata showed that the pair instability mechanism would have led to a light-curve that failed to match the actual observations.

    The various pair-instability models for a ~90 solar mass core made mostly of helium undergoing a pair-instability collapse (solid lines), as compared with the actual light-curve of superluminous supernova SN 2006gy. Under no circumstances does this model fit the data. (ANDERS JERKSTRAND, KEIICHI MAEDA, AND KOJI KAWABATA (2020), SUPPLEMENTARY MATERIALS)

    What the authors noted, though, was remarkable: a little more than a year after the initial explosion, when the light had dimmed to be just a fraction of the brightness of one of the more typical supernovae, about half a solar mass’s worth of radioactive nickel had decayed into iron, and that enormous amount of iron was showing up in the spectral light of the supernova remnant at around 800 nanometers in wavelength.
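That roughly one-year timescale for nickel turning into iron matches the well-known ⁵⁶Ni → ⁵⁶Co → ⁵⁶Fe decay chain, with half-lives of roughly 6.1 and 77 days (standard values assumed here, not figures from the article). A minimal sketch of the sequential decay, via the Bateman equations:

```python
import math

# Sequential decay 56Ni -> 56Co -> 56Fe via the Bateman equations, assuming the
# standard half-lives (~6.1 days for Ni-56, ~77 days for Co-56).
HALF_LIFE_NI = 6.1    # days
HALF_LIFE_CO = 77.3   # days
LAMBDA_NI = math.log(2) / HALF_LIFE_NI
LAMBDA_CO = math.log(2) / HALF_LIFE_CO

def decay_fractions(t_days):
    """Return the (Ni, Co, Fe) fractions at time t, starting from pure Ni-56."""
    ni = math.exp(-LAMBDA_NI * t_days)
    co = (LAMBDA_NI / (LAMBDA_CO - LAMBDA_NI)) * (
        math.exp(-LAMBDA_NI * t_days) - math.exp(-LAMBDA_CO * t_days))
    fe = 1.0 - ni - co
    return ni, co, fe

ni, co, fe = decay_fractions(400)  # roughly 13 months after the explosion
print(f"after 400 days: Ni {ni:.2%}, Co {co:.2%}, Fe {fe:.2%}")
```

After ~400 days, essentially all of the original nickel has passed through cobalt into iron, consistent with iron dominating the late-time spectrum.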

Such an emission feature had never been seen before, and certainly wasn't anticipated. A detailed breakdown of the spectrum revealed not only iron, but also the heavy elements sulfur and calcium, indicating that a large amount of mass must have existed in the region of space surrounding the star before it went supernova. Something must have ejected a large amount of these heavy elements in their un-ionized state, which seems to fit the idea of an earlier, recent phase of silicon-burning.

    The combined effects of a type Ia supernova and a halo of circumstellar material consisting of large portions of iron seems to be what’s required for reproducing the spectral properties of this superluminous supernova more than a year after the cataclysm first occurred. (ANDERS JERKSTRAND, KEIICHI MAEDA, AND KOJI KAWABATA (2020), SCIENCE, 367, 6476, P. 416)

The fact that there's no neutral oxygen, coupled with the failure of a pair-instability solution to match the light-curve, leaves only one viable possibility: a type Ia supernova, ignited by a white dwarf star, could have exploded and broken through a shroud of enriched circumstellar material.

    Although these spectral features, on their own, could be explained either by an exploding white dwarf or a pair-instability supernova surrounded by a large amount of circumstellar material, the combination of this data with the observed light curve in its earlier phases rules out the pair-instability scenario, leaving only a detonating white dwarf as the culprit.

As the authors note, the idea that a type Ia supernova could have detonated and been responsible for SN 2006gy is a very old one, but it simply fell out of fashion as most analyses chose to focus on ultra-massive progenitor stars.

    The ultra-massive star Wolf-Rayet 124, shown with its surrounding nebula, is one of thousands of Milky Way stars that could be our galaxy’s next supernova. Note the extraordinary amount of ejecta around it, which could provide a similar environment to the one that the type Ia supernova at the heart of SN 2006gy collided with. (HUBBLE LEGACY ARCHIVE / A. MOFFAT / JUDY SCHMIDT)

If the authors' conclusion is correct, it means that the material surrounding this superluminous supernova was ejected between one decade and two centuries before the explosion, and that the very massive star at the core of this system, likely a giant or supergiant, must have had a white dwarf companion. That companion could only have formed if it entered the giant phase first and had its outer material stripped away by its massive partner.

    What still isn’t understood is how the two cores of the two separate stars merge and explode. As the authors note:

    “…These steps are rarely explored in inspiral simulations, because of computational difficulties, although some results have shown that less-evolved giants merge more easily. Material may also form a disk around the two cores that could drive the final stages of merging….”

    Whatever cataclysm occurred at the center of this massive ejecta of circumstellar material, it must produce enough energy, match the observed spectrum, and reproduce the light-curve of superluminous supernovae to be responsible for what we’ve seen. So far, only a merger scenario involving a white dwarf core fits the bill. (ISTOCK)

    Either way, this represents a new step forward towards understanding the most energetic stellar cataclysms in the Universe: superluminous supernovae. Even though hydrogen was present in narrow lines, leading to an initial classification as a type IIn supernova, the full suite of data is better fit by a white dwarf core merging with a giant or supergiant’s core, with the supernova’s ejecta crashing into a large amount of circumstellar material that had been previously ejected.

While there's a whole lot we've learned from SN 2006gy, the closest superluminous supernova, many others with similar properties have been seen, but none were close enough to detect iron lines so long after the initial explosion took place. Is a white dwarf merging with a giant or supergiant core the way all superluminous supernovae are created? Is SN 2006gy rare, or do we perhaps have it wrong after all? Whatever the case, we're one step closer to understanding what causes the most energetic stellar cataclysms ever seen in the Universe.

See the full article here.



  • richardmitnick 4:29 pm on January 6, 2020 Permalink | Reply
Tags: "This Gorgeous Nebula In Space Reveals How The Stars Came To Be", Ethan Siegel, Pillars of Creation part of the Eagle Nebula

    From Ethan Siegel: “This Gorgeous Nebula In Space Reveals How The Stars Came To Be” 

    From Ethan Siegel
Jan 6, 2020

    The 2015 view of the Pillars of Creation showcases a combination of visible and infrared data, a wide field-of-view, spectral lines that indicate the presence of a variety of heavy elements, and that showcase subtle changes over time from the earlier, 1995 image. The Pillars of Creation represent just one small part, albeit the most famous part, of a larger star-forming region: the Eagle Nebula. (NASA, ESA/HUBBLE AND THE HUBBLE HERITAGE TEAM; ACKNOWLEDGEMENT: P. SCOWEN (ARIZONA STATE UNIVERSITY, USA) AND J. HESTER (FORMERLY OF ARIZONA STATE UNIVERSITY, USA))

    Eagle Nebula NASA/ESA Hubble Public Domain

    NASA/ESA Hubble Telescope

    The Eagle Nebula, complete with the Pillars of Creation, tells a mini-version of the story of how all the Universe’s stars formed.

    This color-composite image of the Eagle Nebula displays a number of iconic features, including the ‘Eagle head/wings’ at top, the Pillars of Creation at the center, the enormous star cluster to the upper right, and the ‘fairy’ at left. The entire nebula is 70 light-years by 55 light-years, and is located approximately 7,000 light-years away. (ESO / LA SILLA OBSERVATORY)

ESO/La Silla Observatory, 600 km north of Santiago de Chile, at an altitude of 2,400 metres.

An enormous molecular gas cloud, spanning 70 light-years, provides the raw material for star formation.
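Given the dimensions quoted in the caption above (about 70 light-years across at roughly 7,000 light-years away), a small-angle estimate shows how large the nebula appears on our sky. This calculation is an illustration, not from the article:

```python
import math

# Small-angle estimate of the Eagle Nebula's apparent size on the sky,
# from its ~70 light-year extent and ~7,000 light-year distance.
size_ly = 70.0
distance_ly = 7000.0

angle_deg = math.degrees(2 * math.atan((size_ly / 2) / distance_ly))
print(f"apparent size ≈ {angle_deg:.2f} degrees")  # comparable to the full Moon's ~0.5°
```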

    Deep inside, gravitational collapse causes different regions to collapse at different rates.

    The open star cluster NGC 6611, found in the Eagle Nebula, consists largely of hot, young, blue stars that will go supernova in the next few million years. All told, approximately 8,100 new stars can be found in the Eagle Nebula, no more than 1–2 million years of age. (ESA/HUBBLE AND NASA)

    The first stars to form inside did so 1–2 million years ago, creating a cluster of about 8,000 new stars.

X-ray astronomers discovered that 20% of those young stars contain protoplanetary disks, but found zero supernova remnants.

    The Herschel Space Observatory captured this image of the Eagle nebula, with its intensely cold gas and dust. The “Pillars of Creation,” made famous by NASA’S Hubble Space Telescope in 1995, are seen inside the circle. The different colors represent gas that’s extremely cool: between 10 and 40 K. (ESA/HERSCHEL/PACS/SPIRE/HILL, MOTTE, HOBYS KEY PROGRAMME CONSORTIUM)

    ESA/Herschel spacecraft active from 2009 to 2013

    The ultraviolet light from new stars carves gaps in the nebula, but the persisting clumps continue to form stars.

    Although the Pillars of Creation are a prominent feature of Messier 16, they are relatively small compared to the entire nebula. This video begins with a ground-based image of the sky near Serpens and zooms into Hubble’s iconic image.
    Credits: NASA, ESA, G. Bacon (STScI); Acknowledgment: NASA, ESA, Hubble Heritage Team (STScI/AURA), Digitized Sky Survey ((DSS), STScI/AURA, Palomar/Caltech, UKSTU/AAO), T.A. Rector (NRAO/AUI/NSF, NOAO/AURA/NSF), B.A. Wolpa (NOAO/AURA/NSF), A. Fujii

This haunting spire, captured by Hubble in visible and infrared light, is composed of cold gas and dust within Messier 16. Stretching 9.5 light-years, this tower spans more than twice the distance from our Sun to its nearest star. Radiation from the hot young stars in the top half of the image is illuminating and eroding the structure, commonly known as the ‘fairy.’ (NASA, ESA AND THE HUBBLE HERITAGE TEAM (STSCI/AURA))

    The largest dust structure is known as the “fairy,” spanning 9.5 light-years in extent but evaporating rapidly.

This image compares two views of the Eagle Nebula’s Pillars of Creation taken with Hubble 20 years apart. The newer image, on the left, captures almost exactly the same region as the 1995 image, on the right. However, the newer image uses Hubble’s Wide Field Camera 3, installed in 2009, to capture light from glowing oxygen, hydrogen, and sulphur with greater clarity. [The view on the right is from Wide Field Planetary Camera 2 (WFPC2).]

    NASA/Hubble WFPC2. No longer in service.

    NASA/ESA Hubble WFC3

    Having both images allows astronomers to study how the structure of the pillars is changing over time, and showcases one of the finest examples of what we can learn by doing astronomy in space. (WFC3: NASA, ESA/HUBBLE AND THE HUBBLE HERITAGE TEAM WFPC2: NASA, ESA/HUBBLE, STSCI, J. HESTER AND P. SCOWEN (ARIZONA STATE UNIVERSITY))

    The Pillars illustrate an ongoing race: between evaporative radiation and gravitational collapse.

    The rate of evaporation can be measured and is slow: it will take 100,000+ years for the pillars to evaporate.

    In the meantime, star-formation continues, resulting in large numbers of red dwarfs and even failed stars.

    Stars (in blue), ionized hydrogen (in red), and neutral, light-blocking gas (in black) all abound throughout the Eagle Nebula, providing a wide-field view of one of the Milky Way’s hotbeds of new star formation. Some 4.56 billion years ago, our Sun formed in a similar region, while the stars forming here will get spread throughout the galaxy as the next, post-solar generation of stars. (GÖRAN NILSSON & THE LIVERPOOL TELESCOPE)

2-metre Liverpool Telescope at the Observatorio del Roque de los Muchachos, La Palma, Canary Islands; altitude 2,363 m (7,753 ft)


    This nebula and cluster will soon dissipate, seeding the galaxy with the next generation of stars.

See the full article here.



  • richardmitnick 1:28 pm on January 4, 2020 Permalink | Reply
Tags: "Ask Ethan: Did God Create The Universe?", Ethan Siegel

From Ethan Siegel: “Ask Ethan: Did God Create The Universe?”

    From Ethan Siegel
Jan 4, 2020

It’s not a question we know enough to answer, but to dismiss the possibility is scientifically baseless.

    A composite image in X-ray light, with data taken from NASA’s Chandra and NuSTAR observatories, shows the pulsar wind nebula PSR B1509–58, colloquially known as the ‘Hand of God.’ However, no divine intervention is required to explain this object, which can be understood with physical explanations alone. (NASA/JPL-CALTECH/MCGILL)

    NASA/Chandra X-ray Telescope

    NASA/DTU/ASI NuSTAR X-ray telescope


There’s one question that most of us ask at some point in our lives whose answer still eludes humanity: where did all this come from? Any component of reality that we ask that question of — where it comes from — always has an answer that refers to some earlier, pre-existing form of reality. We might know that we, as individuals, came from other humans, but then we can ask where the first humans came from. If the answer is another pre-existing life form, then we can ask how life began. And we can continue this line of questioning as far back as we want, to even before the Big Bang, until science has nothing left to say and all we have is the grand abyss of the unknown. It’s there that this week’s question, from Mya Alexander, comes in:

    “I am very interested in space and with who made us and what made us… what do you have to say about people who say that ‘God’ made us?”

    I’m interested in those questions too, Mya, and as you might have suspected, I have a lot to say about it.

    The Mercury-bound MESSENGER spacecraft captured several stunning images of Earth during a gravity assist swingby of its home planet on Aug. 2, 2005. Several hundred images, taken with the wide-angle camera in MESSENGER’s Mercury Dual Imaging System (MDIS), were sequenced into a movie documenting the view from MESSENGER as it departed Earth. Earth rotates roughly once every 24 hours on its axis and moves through space in an elliptical orbit around our Sun. (NASA / MESSENGER MISSION)

NASA’s MESSENGER spacecraft ended its mission in 2015 with a dramatic, but planned, event: crashing into the surface of the planet it had been studying for over four years.

    For every question that we can conceive of asking, there are a few possibilities as to what the ultimate outcome will be. For the questions where our scientific footing is the most sturdy, we can state that not only is it a question that has a scientific answer, but that we’ve gathered sufficient evidence about the Universe to determine exactly what the answer is, and that we’ve ruled out every other potentially viable alternative.

    These are questions like, “what is the shape of the Earth,” “have human beings ever walked on the Moon,” and “is planet Earth steadily warming since the dawn of the industrial revolution?” We know the answers to questions like these extremely well, and with extremely small uncertainties. We might make superior measurements and refine these answers to even better degrees in the future, but not only are the answers knowable, but they are known.

    But perhaps we don’t know the answer to the question we’re asking. Perhaps we’re asking a question like one of the following:

    When and how did the first human beings arise on our planet?
    When and how did life begin on Earth?
    When and how did the Milky Way come to be?
    When and where did the very first star in the observable Universe form?
    Or where did all the matter (as opposed to antimatter) that enabled our Universe to form as-is come from?

    There are a lot of pieces of information that we scientifically know surrounding these questions, but the exact, definitive answers to them remain elusive. We fully expect that the answers to these (and similar) questions are knowable, and one of the goals of modern science is to uncover these answers. However, we do not have them yet.

    An equally-symmetric collection of matter and antimatter (of X and Y, and anti-X and anti-Y) bosons could, with the right GUT properties, give rise to the matter/antimatter asymmetry we find in our Universe today. However, we assume that there is a physical, rather than a divine, explanation for the matter-antimatter asymmetry we observe today, but we do not yet know for certain. (E. SIEGEL / BEYOND THE GALAXY)

    And finally, there are questions that we can ask or ponder whose answers may never be revealed to us. As vast and enormous and old as our Universe is, the part of it that we can access and gain information from is most definitely finite.

    We cannot observe any signals from more than 46.1 billion light-years away, as that’s the farthest extent of the observable Universe from our perspective.

    We cannot measure any information from more than 13.8 billion years ago, since everything that exists is limited by both the speed of light and the time that’s passed since the Big Bang.

    And even though the number of particles present in the Universe is mind-boggling, as there are approximately 10⁹⁰ of them (including neutrinos and photons), that’s still a finite, quantifiable number.
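That ~10⁹⁰ figure can be sanity-checked with a back-of-envelope calculation. Assuming the standard cosmic microwave background photon density of about 411 photons per cubic centimeter (a textbook value, not from the article), photons alone account for roughly 10⁸⁹ particles within the 46.1-billion-light-year horizon; neutrinos contribute a comparable number, bringing the total toward the quoted figure:

```python
import math

# Back-of-envelope check on the ~10^90 particle count: photons of the cosmic
# microwave background alone, at the standard density of ~411 per cubic cm,
# fill the sphere out to the 46.1-billion-light-year horizon.
CM_PER_LIGHT_YEAR = 9.461e17
radius_cm = 46.1e9 * CM_PER_LIGHT_YEAR
volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm**3

cmb_photons = 411 * volume_cm3
print(f"CMB photons alone ≈ 10^{math.log10(cmb_photons):.1f}")  # ≈ 10^89
```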

On a logarithmic scale, the Universe nearby has the solar system and our Milky Way galaxy. But far beyond are all the other galaxies in the Universe, the large-scale cosmic web, and eventually the moments immediately following the Big Bang itself. Although we cannot observe farther than this cosmic horizon, which is presently a distance of 46.1 billion light-years away and within which approximately 10⁹⁰ total particles exist, there will be more Universe to reveal itself to us in the future. Still, the total amount of information available will always be finite and limited. (WIKIPEDIA USER PABLO CARLOS BUDASSI)

    In other words, there are questions we can ask whose answers — even if we consider the full suite of information available to an observer that exists in our physical Universe — may be scientifically impossible to know. We might be able to state what it was like when the Big Bang first began.

    Inflationary Universe. NASA/WMAP

    We might even be able to tease out some information about cosmic inflation, the state that preceded and set up the Big Bang.


    Alan Guth, from Highland Park High School and M.I.T., who first proposed cosmic inflation

    HPHS Owls

Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe). Date: 2010. Credit: Alex Mittelmann, Coldcreation

    Alan Guth’s notes:

    Alan Guth’s original notes on inflation

    But if we want to know where cosmic inflation came from, how long it went on for, or what its properties were prior to that final fraction-of-a-second where its imprints actually affect our observable Universe, there doesn’t appear to be any way to test those ideas. Similarly, we cannot observe other Universes and thereby test the idea of a multiverse, or concoct a test that would enable us to probe the many-worlds idea of quantum mechanics.

Inflation set up the hot Big Bang and gave rise to the observable Universe we have access to, but we can only measure the last tiny fraction of a second of inflation’s impact on our Universe. This is enough, however, to give us a large slew of predictions to go out and look for, many of which have already been observationally confirmed. E. Siegel, with images derived from ESA/Planck and the DoE/NASA/NSF interagency task force on CMB research

    ESA/Planck 2009 to 2013

    CMB per ESA/Planck

    It’s important to recognize that within this Universe, these three classes of questions should be dealt with in fundamentally different ways.

    You can ask a question whose answer is not only knowable, but already known.
    You can ask a question whose answer seems to be knowable if we had enough information, and that information exists in our Universe, even if we don’t have it yet.
    You can ask a question whose answer is not knowable, even if we were to obtain every quantum bit of information available in the entire Universe.

    If you are interested in questions like how we came to be — where “we” can mean you and me, human beings, our conscious minds, life, particles, the Universe, space and time, or the laws of physics itself — your question will fall into one of these three categories.

    What I would say to someone who says that “God made us,” then, depends on which category their assertion falls into. If you’re asking a question whose answer is both knowable and very well known from a scientific perspective, that’s absolutely the worst intellectual place to argue for the existence of a deity who actively intervenes in our Universe. That’s, unfortunately, where many religions go awry, using dogma where scientific investigation is necessary.

    Given the laws of nature and our overarching scientific theories that explain our physical Universe, the only way to argue for a God on those grounds is to find an event that defied those rules, and instead required some sort of divine intervention to explain. Every time such an assertion has ever been made and put to the test, the results have always been 100% consistent with explanations that rely on the physical alone. Faith is not a good substitute for situations where scientific knowledge is both necessary and available.

    In scenarios where the answer should be scientifically knowable in principle, but we do not yet have adequate information to provide that answer, invoking a deity is only a slightly less bad idea than in the previous instance. This is what is infamously known as a God of the gaps argument: appealing to divine intervention to explain a physical phenomenon in this Universe that might be explicable by purely physical rules alone.

    Throughout the past few millennia, many phenomena that once fell into this category — including phenomena that people once ascribed to the acts of a divine being — have since had their nature revealed, and are explicable without an appeal to the divine at all. It may just be my opinion, but if your God is such a small God that you are invoking their name to explain a mundane phenomenon that could have a scientific explanation, you’re very likely to be disappointed when the decisive measurements or observations are finally made.

    However, there are questions that we are very much capable of asking that we can be quite confident fall outside the realm of science. When we ask questions about how we should live, how to treat one another, why we exist, or anything to do with our cosmic purpose, science appears to be ill-equipped to provide comprehensive, unambiguous answers. We can ask questions that science has no answer for. As I wrote back at the start of 2018,

    “Religion is for anyone who wants it in their life, and science is as well. They are neither fundamentally incompatible, nor are they mutually exclusive. Knowledge, education, self-improvement, and the bettering of our shared world are endeavors that are open to everyone.”

    Did God, in some form, create the entire Universe? Not only don’t I know, but I daresay that no one does.

    Science cannot prove the existence of God, but it cannot disprove God either; it can only disprove the notion of a specific, poorly conceived God. If you claim that your God lives in the clouds, you can disprove that God by simply observing the clouds. If you claim that God lives in our Universe, you can disprove that God by observing the entire Universe. But if your God exists in an extra dimension, before cosmic inflation, or outside of space and time altogether, neither proof nor disproof is possible.

    In a fundamental way, it is purely a matter of what your faith is. All we can control, at the end of the day, is how we treat one another. Do we welcome those who believe different things than we do into our hearts, communities, and lives? Or do we shun, exclude, and “other” them?

    Regardless of what you believe, I have the same advice for you: choose kindness. It costs nothing, while benefitting the giver, the recipient, and those who simply witness it. Whether you say that God made us or not, I would say the same thing: the wonders and joys of science and the Universe are for you, exactly as you are, too.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 2:56 pm on January 2, 2020 Permalink | Reply
    Tags: "Antimatter Mystery Likely Due To Pulsars Not Dark Matter", Cosmic ray astronomy, Dark Matter - Fritz Zwicky and Vera Rubin, Ethan Siegel, Geminga, Positrons - the antimatter counterpart of electrons, Pulsars - Dame Susan Jocelyn Bell Burnell

    From Ethan Siegel: “Antimatter Mystery Likely Due To Pulsars, Not Dark Matter” 

    From Ethan Siegel
    Jan 2, 2020

    NASA’s Fermi Satellite has constructed the highest resolution, high-energy map of the Universe ever created. Without space-based observatories such as this one, we could never learn all that we have about the Universe, nor could we even accurately measure the gamma-ray sky. (NASA/DOE/FERMI LAT COLLABORATION)

    NASA/Fermi LAT

    NASA/Fermi Gamma Ray Space Telescope

    For years, astronomers have been puzzled by an excess of antimatter particles. Unfortunately, dark matter is probably not the solution.

    When you look out at the Universe, what you see is only a tiny portion of what’s actually out there. If you examine the Universe solely with what’s perceptible to your eyes, you’ll miss out on a whole slew of information that exists in wavelengths of light that are invisible to us. From the highest-energy gamma rays to the lowest-energy radio waves, the electromagnetic spectrum is enormous, with visible light representing just a tiny sliver of what’s out there.

    However, there’s an entirely different method to measure the Universe: to collect actual particles and antiparticles, a science known as cosmic ray astronomy. For more than a decade, astronomers have seen a signal of cosmic ray positrons — the antimatter counterpart of the electron — that they’ve struggled to explain. Could it be humanity’s best clue towards solving the dark matter mystery? A new study says no, it’s probably just pulsars. [Physical Review D]

    Women in STEM – Dame Susan Jocelyn Bell Burnell

    Dame Susan Jocelyn Bell Burnell discovered pulsars with radio astronomy. Jocelyn Bell at the Mullard Radio Astronomy Observatory, Cambridge University, photographed for the Daily Herald newspaper in 1968. She was denied the Nobel Prize.

    Dame Susan Jocelyn Bell Burnell at work on the first pulsar chart, pictured working at the Four Acre Array in 1967. Image courtesy of Mullard Radio Astronomy Observatory.

    Dame Susan Jocelyn Bell Burnell 2009

    Dame Susan Jocelyn Bell Burnell (1943 – ), still working. From http://www.famousirishscientists.weebly.com

    Here’s why.

    Cosmic rays produced by high-energy astrophysics sources can reach any object in the Solar System, and appear to permeate our local region of space omnidirectionally. When they collide with Earth, they strike atoms in the atmosphere, creating particle and radiation showers at the surface, while direct detectors in space, above the atmosphere, can measure the original particles directly. (ASPERA COLLABORATION / ASTROPARTICLE ERANET)

    There are a great many things in the Universe that are known to create positrons, the antimatter counterpart of electrons. Whenever you have a high-enough energy collision between two particles, there’s a certain amount of energy that will be available with the potential to create new particle-antiparticle pairs. If that available energy is greater than the equivalent mass of the new particle(s) you want to create, as defined by Einstein’s E = mc2, there’s a finite probability of generating those new particles.
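    That pair-creation threshold is easy to make concrete. The following is a minimal Python sketch, not code from the article: the constant is the CODATA electron rest energy, and the function simply doubles it, since a pair requires the rest energy of both the particle and its antiparticle.

```python
# Minimal sketch of the E = mc^2 pair-production threshold.
# ELECTRON_REST_MEV is the CODATA electron rest energy in MeV.
ELECTRON_REST_MEV = 0.51099895

def pair_threshold_mev(rest_energy_mev: float) -> float:
    """Minimum available energy to create one particle-antiparticle pair:
    twice the rest energy of the particle being created."""
    return 2.0 * rest_energy_mev

if __name__ == "__main__":
    # An electron-positron pair needs at least ~1.022 MeV of available energy.
    print(f"{pair_threshold_mev(ELECTRON_REST_MEV):.4f} MeV")
```

    Any collision whose available center-of-mass energy exceeds this value has a finite probability of producing an electron-positron pair; below it, none.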

    There are all sorts of high-energy processes that can lead to this type of energy becoming available, including particles accelerated by black holes, high-energy protons colliding with the galactic disk, or particles accelerated in the vicinity of neutron stars. Based on the known physics and astrophysics of the Universe, we know that a certain number of positrons must be generated irrespective of any new physics.

    Two bubbles of high-energy signatures are evidence that electron/positron annihilation is occurring, likely powered by processes at the galactic center. Here on Earth, more positrons than can be explained by conventional physics are seen via direct cosmic ray experiments, putting forth the exciting possibility that dark matter might be the cause of both that excess and the galactic center gamma rays. (NASA’S GODDARD SPACE FLIGHT CENTER)

    However, we also expect that there is some new physics out there, because of the overwhelming astrophysical evidence for dark matter. While the true nature of dark matter will remain a mystery until the particle (or at least one of the particles) responsible for it is detected directly, many dark matter scenarios exist where not only is dark matter its own antiparticle, but dark matter annihilations also produce electron-positron pairs.

    Fritz Zwicky inferred the existence of dark matter while observing the motions of the Coma Cluster. Vera Rubin, a woman in STEM denied the Nobel, did much of the pioneering work on dark matter.

    Fritz Zwicky. From http://palomarskies.blogspot.com

    Coma cluster via NASA/ESA Hubble

    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science)

    Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives AIP SPL)

    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970. https://home.dtm.ciw.edu

    The LSST, or Large Synoptic Survey Telescope, is to be named the Vera C. Rubin Observatory by an act of the U.S. Congress.

    The LSST telescope, now the Vera Rubin Survey Telescope, currently under construction on the El Peñón peak of Cerro Pachón, a 2,682-meter-high mountain in Coquimbo Region in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    Dark Matter Research

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    Scientists studying the cosmic microwave background [CMB] hope to learn about more than just how the universe grew: it could also offer insight into dark matter, dark energy and the mass of the neutrino.

    CMB per ESA/Planck

    Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al.

    Dark Matter Particle Explorer China

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB deep in Sudbury’s Creighton Mine

    LBNL LZ Dark Matter project at SURF, Lead, SD, USA

    Inside the ADMX experiment hall at the University of Washington Credit Mark Stone U. of Washington. Axion Dark Matter Experiment

    Whenever you have multiple possible physical explanations for what could cause an observable phenomenon, the key to telling which one matches reality is to tease out differences between the explanations. In particular, positrons due to dark matter should experience a cutoff at specific energies (corresponding to the mass of the dark matter particles), while positrons generated by conventional astrophysics should fall off more gradually.

    NASA/AMS02 device on the ISS

    In 2011, the Alpha Magnetic Spectrometer experiment (AMS-02) was launched with the goal of further investigating this mystery. After arriving at the International Space Station aboard the final mission of the Space Shuttle Endeavour, it was quickly set up and began sending data back to Earth within three days. During its operational phase, it collected and measured more than ten billion cosmic ray particles per year.

    What’s remarkable about AMS-02 is that it didn’t just measure cosmic ray particles, but was able to sort them both by type and by energy, providing us with an unprecedented set of data to evaluate whether the positrons appeared to be due to dark matter or not. At low energies, the data matched the predictions of cosmic rays colliding with the interstellar medium, but at higher energies, something else was clearly at play.

    If the AMS-02 experiment had not experienced any failures or required any repairs, it would have collected sufficient data to distinguish between pulsars (blue) or annihilating dark matter (red) as the source of the excess positrons. Either way, collisions of cosmic rays with the interstellar medium can only explain the low-energy signature, with another explanation required for the high-energy signatures. (AMS COLLABORATION)

    However, that’s not a slam dunk for dark matter by any means. At higher energies, it’s also possible that pulsars, which accelerate matter particles to incredible energies through a combination of their gravitational and electromagnetic forces, could produce a peaked excess of positrons at high energies.

    Although AMS-02 sees evidence (at 4-sigma, or 99.99% confidence) that there’s a peak and then a falloff in the observed energies of positrons, its sensitivity and event rate peters out at exactly the types of energies that would enable us to differentiate between a positron signal arising from pulsars versus one arising from annihilating dark matter. With spacewalks currently ongoing to attempt to repair AMS-02 and bring it back online to continue its observations, it may eventually collect enough data to discern, on its own, whether pulsars or dark matter provide the best fit to the data.
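    For readers unfamiliar with converting "sigma" levels into confidence percentages, the quoted 4-sigma figure is the fraction of a Gaussian distribution lying within ±4 standard deviations of the mean. This is a generic statistics sketch, not code from the AMS collaboration, using only Python's standard library:

```python
import math

def two_sided_confidence(n_sigma: float) -> float:
    """Fraction of a normal distribution within +/- n_sigma of the mean."""
    return math.erf(n_sigma / math.sqrt(2.0))

if __name__ == "__main__":
    # 4 sigma corresponds to roughly 99.994% two-sided confidence,
    # consistent with the ~99.99% figure quoted in the text.
    print(f"{two_sided_confidence(4.0):.6f}")
```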

    The Vela pulsar, like all pulsars, is an example of a neutron star corpse. The gas and matter surrounding it is quite common, and is capable of providing fuel for the pulsing behavior of these neutron stars. Matter-antimatter pairs, as well as high-energy particles, are produced in copious amounts by neutron stars, offering up the possibility that they, and not dark matter, are responsible for the excess signals observed by AMS-02. (NASA/CXC/PSU/G.PAVLOV ET AL.)

    However, there’s more than one way to tell these two scenarios apart, as positrons produced by pulsars should also generate an additional signal that falls well outside the measurements that AMS-02 or any cosmic ray experiment could detect: gamma rays.

    If pulsars truly generate the positrons that could be responsible for the signal that cosmic ray experiments are seeing, then a significant fraction of those positrons will have the misfortune of colliding with electrons in the interstellar medium long before they arrive at our cosmic ray detectors. When positrons collide with electrons, they annihilate, with each reaction producing two gamma rays with a very specific energy signature: 511 keV of energy, the rest-energy equivalent of an electron’s (or positron’s) mass, also obtained from Einstein’s E = mc2.
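    The 511 keV figure follows directly from the electron mass via E = mc². As a quick check (a sketch with CODATA constants hardcoded, not the article's own calculation):

```python
# Rest energy of the electron from E = mc^2, expressed in keV.
M_ELECTRON_KG = 9.1093837015e-31   # electron mass (CODATA)
C_M_PER_S = 299792458.0            # speed of light in vacuum, m/s
J_PER_EV = 1.602176634e-19         # joules per electronvolt

rest_energy_kev = M_ELECTRON_KG * C_M_PER_S**2 / J_PER_EV / 1e3
# Each photon from an e+/e- annihilation at rest carries this much energy.
print(f"{rest_energy_kev:.1f} keV")  # ~511.0 keV
```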

    The production of matter/antimatter pairs (left) from pure energy is a completely reversible reaction (right), with matter/antimatter annihilating back to pure energy. When a photon is created and then destroyed, it experiences those events simultaneously, while being incapable of experiencing anything else at all. If you operate in the center-of-momentum (or center-of-mass) rest frame, particle/antiparticle pairs (including two photons) will zip off at 180 degree angles to one another, with energies equal to the rest-mass equivalent of each of the particles, as defined by Einstein’s E = mc². (DMITRI POGOSYAN / UNIVERSITY OF ALBERTA)

    However, pulsars should theoretically be able to accelerate these electrons and positrons up to extraordinarily high energies: energies that even the world’s most powerful terrestrial particle accelerator, the Large Hadron Collider, struggles to reach. When photons — even normal-energy starlight — interact with these ultra-relativistic (near light-speed) particles, they can get boosted to extraordinary energies through a process known as inverse Compton scattering.
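    The scale of that boost can be estimated with the standard Thomson-regime result, in which the average upscattered photon energy is roughly (4/3)γ² times the seed photon energy, where γ is the electron's Lorentz factor. The sketch below is a rough illustration only: it ignores Klein-Nishina suppression, which reduces the boost for the most energetic electrons.

```python
# Thomson-regime estimate of inverse Compton scattering:
# average upscattered photon energy ~ (4/3) * gamma^2 * seed energy.
# Ignores Klein-Nishina corrections, which matter for TeV electrons.
ELECTRON_REST_EV = 0.51099895e6  # electron rest energy in eV

def avg_upscattered_ev(seed_ev: float, electron_energy_ev: float) -> float:
    gamma = electron_energy_ev / ELECTRON_REST_EV  # Lorentz factor
    return (4.0 / 3.0) * gamma**2 * seed_ev

if __name__ == "__main__":
    # A ~1 eV starlight photon scattering off a 1 TeV electron:
    boosted = avg_upscattered_ev(1.0, 1.0e12)
    print(f"{boosted:.2e} eV")  # a multi-TeV gamma ray
```

    Even ordinary starlight, upscattered by a TeV-scale electron or positron, emerges at TeV gamma-ray energies: this is why pulsar-accelerated pairs should leave a gamma-ray halo.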

    Based on physical parameters like the properties of the pulsar, the matter in the pulsar’s vicinity, the electrons and positrons generated, and the amount of starlight present nearby, a specific energy spectrum will be created for the photons generated from this process. Sum them all up for all of the nearby, relevant pulsars, and your gamma ray signature might indicate that pulsars, and not dark matter, cause this positron excess.

    Particles traveling near light speed can interact with starlight and boost it to gamma-ray energies. This animation shows the process, known as inverse Compton scattering. When light ranging from microwave to ultraviolet wavelengths collides with a fast-moving particle, the interaction boosts it to gamma rays, the most energetic form of light. (NASA / GSFC)

    About 800 light-years away, incredibly close by astronomical standards, one of the brightest gamma-ray pulsars in the entire sky can be found: Geminga. It was only discovered in 1972, and had its nature revealed in 1991, when the ROSAT mission measured evidence for a neutron star spinning at a rate of 4.2 revolutions-per-second.

    Geminga. Patrizia Caraveo (INAF/IASF), Milan

    ROSAT X-ray satellite built by DLR, with instruments built by West Germany, the United Kingdom and the United States

    Fast-forward to the present day, where NASA’s Fermi Large Area Telescope — with enormously improved spatial and energy resolution — is now the world’s most sophisticated gamma ray observatory. By subtracting out the gamma ray signal arising from cosmic rays colliding with interstellar gas clouds, the remnant signal from starlight interacting with accelerated electrons and positrons could be revealed.

    When a team of researchers led by Mattia di Mauro analyzed the Fermi data [Physical Review D], what they saw was spectacular: an energy-dependent signal that, at its largest, spanned some 20 degrees in the sky at the exact energies that AMS-02 was most sensitive to.

    This model of Geminga’s gamma-ray halo shows how the emission changes at different energies, a result of two effects. The first is the pulsar’s rapid motion through space over the decade Fermi’s Large Area Telescope has observed it. Second, lower-energy particles travel much farther from the pulsar before they interact with starlight and boost it to gamma-ray energies. This is why the gamma-ray emission covers a larger area at lower energies. (NASA’S GODDARD SPACE FLIGHT CENTER/M. DI MAURO)

    This glow, which shrinks as Fermi looks at progressively higher energies, fit the models perfectly through a combination of inverse Compton scattering and the pulsar’s motion through interstellar space. According to Fiorenza Donato, coauthor on the recent Fermi study that measured gamma rays from Geminga [Physical Review D, above]:

    “Lower-energy particles travel much farther from the pulsar before they run into starlight, transfer part of their energy to it, and boost the light to gamma rays. This is why the gamma-ray emission covers a larger area at lower energies. Also, Geminga’s halo is elongated partly because of the pulsar’s motion through space.”

    The measurement of the gamma rays from Geminga alone suggests that this one pulsar could be responsible for as much as 20% of the high-energy positrons seen by the AMS-02 experiment.

    This animation shows a region of the sky centered on the pulsar Geminga. The first image shows the total number of gamma rays detected by Fermi’s Large Area Telescope at energies from 8 to 1,000 billion electron volts (GeV) — billions of times the energy of visible light — over the past decade. By removing all bright sources, astronomers discovered the pulsar’s faint, extended gamma-ray halo, concluding that this one pulsar could be responsible for up to 20% of the positrons detected by the AMS-02 experiment. (NASA/DOE/FERMI LAT COLLABORATION)

    Whenever there’s an unexplained phenomenon that we’ve measured or observed, it presents a tantalizing possibility to scientists: that perhaps there’s something new at play beyond what’s presently known. We know there are mysteries about our Universe that require new physics at some level — mysteries like dark matter, dark energy, or the cosmic matter-antimatter asymmetry — whose ultimate solution has yet to be discovered.

    However, we cannot claim evidence for a new discovery until everything that represents what’s already known is quantified and accounted for. By factoring in the effect of pulsars, the positron excess observed by the Alpha Magnetic Spectrometer collaboration may turn out to be explicable entirely by conventional high-energy astrophysics, with no need for dark matter. Right now, it appears that pulsars may be responsible for 100% of the observed excess, requiring scientists to go back to the drawing board for a direct signal that reveals our Universe’s elusive dark matter.

    See the full article here.


