Tagged: Ethan Siegel

  • richardmitnick 11:57 am on October 14, 2019
    Tags: "This One Award Was The Biggest Injustice In Nobel Prize History", Ethan Siegel

    From Ethan Siegel: “This One Award Was The Biggest Injustice In Nobel Prize History” 

    From Ethan Siegel
    Oct 14, 2019

    Many deserving potential awardees were snubbed by the Nobel committee. But this takes the cake.

    Every October, the Nobel foundation awards prizes celebrating the greatest advances in numerous scientific fields.
    With a maximum of three winners per prize, many of history’s most deserving candidates [read “women”] have gone unrewarded [After Ethan’s offerings, I add two he did not include].

    Lise Meitner, one of the scientists whose fundamental work led to the development of nuclear fission, was never awarded a Nobel Prize for her work. In perhaps a great injustice, Nazi scientist Otto Hahn was solely awarded a Nobel Prize in 1944 for his discovery of nuclear fission, despite the fact that Lise Meitner, a Jewish scientist, had actually made the critical discovery by herself. A onetime collaborator of Hahn’s, she not only never won a Nobel, but was forced to leave Germany due to her Jewish heritage. (ARCHIVES OF THE MAX PLANCK SOCIETY)

    However, the greatest injustices occurred when the scientists behind the most worthy contributions were snubbed.

    Physics Professor Dr. Chien-Shiung Wu in a laboratory at Columbia University, in a photo dating back to 1958. Dr. Wu became the first woman to win the Research Corporation Award after providing the first experimental proof, along with scientists from the National Bureau of Standards, that the principle of parity conservation does not hold in weak subatomic interactions. Wu won many awards, but was snubbed for science’s most prestigious accolade in perhaps the greatest injustice in Nobel Prize history. (GETTY)

    Theoretical developments hold immense scientific importance, but only measured observables can confirm, validate, or refute a theory.

    Unstable particles, like the big red particle illustrated above, will decay through either the strong, electromagnetic, or weak interactions, producing ‘daughter’ particles when they do. If the process that occurs in our Universe occurs at a different rate or with different properties if you look at the mirror-image decay process, that violates Parity, or P-symmetry. If the mirrored process is the same in all ways, then P-symmetry is conserved. (CERN)

    By the 1950s, physicists were probing the fundamental properties of the particles composing our Universe.

    There are many letters of the alphabet that exhibit particular symmetries. Note that the capital letters shown here have one and only one line of symmetry; letters like “I” or “O” have more than one. This ‘mirror’ symmetry, known as Parity (or P-symmetry), has been verified to hold for all strong, electromagnetic, and gravitational interactions wherever tested. However, the weak interactions offered a possibility of Parity violation. The discovery and confirmation of this was worth the 1957 Nobel Prize in Physics. (MATH-ONLY-MATH.COM)

    Many expected that three symmetries:

    C-symmetry (swapping particles for antiparticles),
    P-symmetry (mirror-reflecting your system), and
    T-symmetry (time-reversing your system),

    would always be conserved.

    Nature is not symmetric between particles/antiparticles or between mirror images of particles, or both, combined. Prior to the detection of neutrinos, which clearly violate mirror-symmetries, weakly decaying particles offered the only potential path for identifying P-symmetry violations. (E. SIEGEL / BEYOND THE GALAXY)

    But two theorists — Tsung-Dao Lee and Chen Ning Yang — suspected that mirror symmetry might be violated by the weak interactions.

    Schematic illustration of nuclear beta decay in a massive atomic nucleus. Beta decay is a decay that proceeds through the weak interactions, converting a neutron into a proton, electron, and an anti-electron neutrino. An atomic nucleus has an intrinsic angular momentum (or spin) to it, meaning it has a spin-axis that you can point your thumb in, and then either the fingers of your left or right hand will describe the direction of the particle’s angular momentum. If one of the ‘daughter’ particles of the decay, like the electron, exhibits a preference for decaying with or against the spin axis, then Parity symmetry would be violated. If there’s no preference at all, then Parity would be conserved. (WIKIMEDIA COMMONS USER INDUCTIVELOAD)

    In 1956, scientist Chien-Shiung Wu put that idea to the experimental test.

    Chien-Shiung Wu, at left, had a remarkable and distinguished career as an experimental physicist, making many important discoveries that confirmed (or refuted) a variety of important theoretical predictions. Yet she was never awarded a Nobel Prize, even as others who did less of the work were nominated and chosen ahead of her. (ACC. 90–105 — SCIENCE SERVICE, RECORDS, 1920S-1970S, SMITHSONIAN INSTITUTION ARCHIVES)

    By observing the radioactive decay (beta decay, a weak interaction), she showed that this process was intrinsically chiral.

    Parity, or mirror-symmetry, is one of the three fundamental symmetries in the Universe, along with time-reversal and charge-conjugation symmetry. If particles spin in one direction and decay along a particular axis, then flipping them in the mirror should mean they can spin in the opposite direction and decay along the same axis. This property of ‘handedness,’ or ‘chirality,’ is extraordinarily important in understanding particle physics processes. This was observed not to be the case for the weak decays, the first indication that particles could have an intrinsic ‘handedness,’ and this was discovered by Madame Chien-Shiung Wu. (E. SIEGEL / BEYOND THE GALAXY)
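
    A toy numerical illustration of the asymmetry test described above, in Python (the counts are invented for illustration and are not Wu's actual data): tally decay electrons emitted along versus against the nuclear spin axis; any nonzero asymmetry means parity is violated.

    # Toy illustration of the parity test described in the caption above.
    # The counts are hypothetical, chosen only to show the logic of the measurement.

    def asymmetry(n_with_spin, n_against_spin):
        """Forward-backward asymmetry: zero if parity is conserved, nonzero if violated."""
        return (n_with_spin - n_against_spin) / (n_with_spin + n_against_spin)

    # Hypothetical electron counts from polarized cobalt-60 nuclei:
    print(asymmetry(4000, 6000))   # -0.2 -> a preferred direction: parity violated
    print(asymmetry(5000, 5000))   #  0.0 -> no preference: parity conserved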

    In 1957, Lee and Yang were awarded the physics Nobel; Wu was omitted entirely.

    Even today, only three women — Marie Curie (1903), Maria Goeppert-Mayer (1963), and Donna Strickland (2018) — have ever won the Nobel Prize in Physics.

    Donna Strickland, a graduate student in optics and a member of the Picosecond Research Group, is shown aligning an optical fiber. The fiber is used to frequency chirp and stretch an optical pulse that can later be amplified and compressed in order to achieve high-peak-power pulses. This work, captured on camera in 1985, was an essential part of what garnered her the 2018 physics Nobel, making her just the third woman in history to win the Nobel Prize in physics. (UNIVERSITY OF ROCHESTER; CARLOS & RHONDA STROUD)

    Missing here, but included in Ethan's article "These 5 Women Deserved, And Were Unjustly Denied, A Nobel Prize In Physics" (Oct 11, 2018), are Vera Rubin and Dame Susan Jocelyn Bell Burnell, whom I take the liberty of adding:

    Fritz Zwicky discovered Dark Matter when observing the movement of the Coma Cluster. Vera Rubin, a Woman in STEM denied the Nobel, did most of the work on Dark Matter.

    Fritz Zwicky from http://palomarskies.blogspot.com

    Coma cluster via NASA/ESA Hubble

    Astronomer Vera Rubin at the Lowell Observatory in 1965, worked on Dark Matter (The Carnegie Institution for Science)


    Vera Rubin measuring spectra, worked on Dark Matter (Emilio Segre Visual Archives AIP SPL)


    Vera Rubin, with Department of Terrestrial Magnetism (DTM) image tube spectrograph attached to the Kitt Peak 84-inch telescope, 1970. https://home.dtm.ciw.edu

    The LSST, or Large Synoptic Survey Telescope, is to be named the Vera C. Rubin Observatory by an act of the U.S. Congress.

    The LSST telescope, the Vera Rubin Survey Telescope, currently under construction on the El Peñón peak at Cerro Pachón, a 2,682-meter-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    Women in STEM – Dame Susan Jocelyn Bell Burnell

    Dame Susan Jocelyn Bell Burnell discovered pulsars with radio astronomy. Jocelyn Bell at the Mullard Radio Astronomy Observatory, Cambridge University, taken for the Daily Herald newspaper in 1968. Denied the Nobel.

    Dame Susan Jocelyn Bell Burnell at work on the first pulsar chart, pictured working at the Four Acre Array in 1967. Image courtesy of Mullard Radio Astronomy Observatory.

    Dame Susan Jocelyn Bell Burnell 2009

    Dame Susan Jocelyn Bell Burnell (1943 – ), still working, from http://www.famousirishscientists.weebly.com

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 12:41 pm on October 13, 2019
    Tags: "Ask Ethan: Can Gamma-Ray Jets Really Travel Faster Than The Speed Of Light?", Ethan Siegel, There's an ultimate speed limit in the Universe: the speed of light in a vacuum, c., When you have a massive particle moving through the vacuum of space it must always move at a speed that's slower than c.

    From Ethan Siegel: “Ask Ethan: Can Gamma-Ray Jets Really Travel Faster Than The Speed Of Light?” 

    From Ethan Siegel
    Oct 12, 2019

    A recent headline claimed they could. But if gamma-rays are just a form of light, don’t they have to travel at light-speed?

    Artist’s impression of an active galactic nucleus. The supermassive black hole at the center of the accretion disk sends a narrow, high-energy jet of matter into space, perpendicular to the black hole’s accretion disc. None of the particles or radiation within any physical structure, even one as exotic as this, should ever move faster than light in a vacuum. (DESY, SCIENCE COMMUNICATION LAB)

    There’s an ultimate speed limit in the Universe: the speed of light in a vacuum, c. If you don’t have any mass — whether you’re a light wave (a photon), a gluon, or even a gravitational wave — that’s the speed you must move at when you pass through a vacuum, while if you have mass, you can only move slower than c. So why, then, was there a recent story claiming that gamma-ray jets, where gamma-rays themselves are a high-energy form of light, can travel faster-than-light? That’s what Dr. Jeff Landrum wants to know, asking:

    What gives? Is it really possible for gamma-rays to exceed the speed of light and thereby “reverse” time? Is the time reversal just a theoretical claim that allows these hypothetical super-light speed particles to conform with Relativity or is there empirical evidence of this phenomenon?

    Let’s begin by looking at the basic physics governing the Universe.

    All massless particles travel at the speed of light, but the speed of light changes depending on whether it’s traveling through vacuum or a medium. If you were to race the highest-energy cosmic ray particle ever discovered with a photon to the Andromeda galaxy and back, a journey of ~5 million light-years, the particle would lose the race by approximately 6 seconds. However, if you were to race a long-wavelength radio photon and a short-wavelength gamma-ray photon, as long as they only traveled through vacuum, they’d arrive at the same time. (NASA/SONOMA STATE UNIVERSITY/AURORE SIMONNET)

    Light comes in a wide variety of wavelengths, frequencies, and energies. Although the energy inherent to light is quantized into discrete energy packets (a.k.a., photons), there are some properties shared by all forms of light.

    Light of any wavelength, from picometer-wavelength gamma-rays to radio waves more than a trillion times longer, all move at the speed of light in a vacuum.
    The frequency of any photon is equal to the speed of light divided by the wavelength: the longer the wavelength, the lower the frequency; the shorter the wavelength, the higher the frequency.
    The energy inherent to a photon is directly proportional to frequency: the highest-frequency/shortest-wavelength light is the most energetic, while the lowest-frequency/longest-wavelength light is the least energetic (see the quick numeric check after this list).
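
    As a quick numeric check of the three relations above (standard constants; the wavelengths are just illustrative examples), in Python:

    # Quick check of the relations above: f = c / wavelength and E = h * f.
    # Wavelength values are illustrative examples only.

    h = 6.626e-34   # Planck's constant, J*s
    c = 2.998e8     # speed of light in vacuum, m/s

    for name, wavelength_m in [("gamma-ray", 1e-12), ("visible", 500e-9), ("radio", 1.0)]:
        frequency = c / wavelength_m     # longer wavelength -> lower frequency
        energy = h * frequency           # higher frequency -> more energy per photon
        print(f"{name:9s}  f = {frequency:.3e} Hz   E = {energy:.3e} J")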

    As soon as you leave a vacuum, however, light of different wavelengths will behave very differently.

    Light is nothing more than an electromagnetic wave, with in-phase oscillating electric and magnetic fields perpendicular to the direction of light’s propagation. The shorter the wavelength, the more energetic the photon, but the more susceptible it is to changes in the speed of light through a medium. (AND1MU / WIKIMEDIA COMMONS)

    Light, you have to remember, is an electromagnetic wave. When we talk about the wavelength of light, we’re talking about the distance between every two “nodes” in the wave-like pattern that its in-phase, oscillating electric and magnetic fields create.

    When you pass light through a medium, however, all of a sudden there are charged particles located in every direction: particles that create their own electric (and possibly magnetic) fields. When the light passes through them, its electric and magnetic fields interact with the particles in the medium, and the light is forced to move at a slower speed: the speed of light in that particular medium.

    What you might not expect, though, is that the amount by which light slows down depends on the light's wavelength.

    Schematic animation of a continuous beam of light being dispersed by a prism. If you had ultraviolet and infrared eyes, you’d be able to see that ultraviolet light bends even more than the violet/blue light, while the infrared light would remain less bent than the red light does. (LUCASVB / WIKIMEDIA COMMONS)

    Why does this occur? Why do longer-wavelength (redder) photons bend less (and therefore travel faster) when they travel through a medium compared to shorter-wavelength (bluer) photons, which bend by greater amounts and therefore travel slower?

    Any medium, remember, is made up of atoms, which in turn are made up of nuclei and electrons. When you apply an electric or a magnetic field to a medium, that medium itself will respond to the field: the medium gets polarized. This happens for all wavelengths of light. For longer wavelengths, however, the changes in the medium are slower; there are fewer cycles-per-second of the electromagnetic wave. Because electromagnetism always resists changes to electric and magnetic fields, the fields that change faster (corresponding to photons with shorter wavelengths, higher frequencies and greater energies) will be more effectively resisted by the medium light travels through.
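
    As a rough sketch of how much this matters, Cauchy's empirical approximation n ≈ A + B/wavelength² captures the trend; the coefficients below are approximate values for a typical crown glass, assumed here purely for illustration:

    # Rough sketch of wavelength-dependent slowing in a medium, using Cauchy's
    # empirical approximation n = A + B / wavelength^2. The coefficients are
    # approximate values for a typical crown glass, used only for illustration.

    A, B = 1.505, 0.0042   # B in micrometers^2 (assumed, illustrative)
    c = 2.998e8            # speed of light in vacuum, m/s

    for name, wavelength_um in [("red (700 nm)", 0.700), ("blue (450 nm)", 0.450)]:
        n = A + B / wavelength_um**2   # shorter wavelength -> larger refractive index
        v = c / n                      # larger index -> slower speed in the glass
        print(f"{name:13s}  n = {n:.4f}   v = {v:.3e} m/s")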

    This illustration, of light passing through a dispersive prism and separating into clearly defined colors, is what happens when many medium-to-high energy photons strike a crystal. Note how in a vacuum (outside of the prism) all light travels at the same speed, and does not disperse. However, as bluer light slows down more than redder light, the light that passes through a prism is successfully dispersed. (WIKIMEDIA COMMONS USER SPIGGET)

    This is the only “trick” we know of to cause light to move at a speed slower than the speed of light in a vacuum: to pass it through a medium. When we do that, the shortest-wavelength light — which is the most energetic — slows down by the greatest amount relative to longer-wavelength, lower-energy light. If we shone light of any frequency we chose through any medium at all, the gamma-rays, if any are generated, should travel the most slowly of all the different forms of light.

    Which is why this headline is so puzzling: how could gamma-ray jets move faster than light? If we take a look at the scientific paper itself [The Astrophysical Journal], we can see there’s another component that helps clear the story up: this radiation is not moving faster than c, the speed of light in a vacuum, but v, the speed of light in the particle-filled medium surrounding the source of these gamma rays.

    A gamma-ray burst, like the one depicted here in an artist’s rendition, is thought to originate from a dense region of a host galaxy surrounded by a large shell, sphere, or halo of material. That material will have a speed of light inherent to that medium, and individual particles that travel through it, although always slower than the speed of light in vacuum, might be faster than the speed of light in that medium. (GEMINI OBSERVATORY / AURA / LYNETTE COOK)

    When you have a massive particle moving through the vacuum of space, it must always move at a speed that’s slower than c, the speed of light in a vacuum. However, if that particle then enters a medium where the speed of light is now v, which is less than c, it’s possible that the particle’s speed will suddenly now be greater than the speed of light in that medium.

    When this occurs, the particle, from its interactions with the medium, will produce a special type of radiation: blue/ultraviolet light known as Čerenkov radiation. Particles may be forbidden from traveling faster than the speed of light in a vacuum under all conditions, but nothing prevents them from traveling faster than light in a medium.
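
    As a concrete sketch of that threshold: the speed of light in a medium of refractive index n is c/n, so a charged particle radiates Čerenkov light once its speed exceeds c/n. For water (n ≈ 1.33), that is roughly 0.75c, which for an electron corresponds to a kinetic energy of about 0.26 MeV:

    # Cherenkov threshold sketch: a charged particle emits Cherenkov light once
    # its speed exceeds c/n, the speed of light in the medium.

    n_water = 1.33                   # refractive index of water (approximate)
    beta_threshold = 1.0 / n_water   # threshold speed, as a fraction of c

    gamma = 1.0 / (1.0 - beta_threshold**2) ** 0.5
    electron_rest_energy_mev = 0.511
    kinetic_threshold_mev = (gamma - 1.0) * electron_rest_energy_mev

    print(f"threshold speed    : {beta_threshold:.3f} c")           # ~0.752 c
    print(f"electron KE needed : {kinetic_threshold_mev:.2f} MeV")  # ~0.26 MeV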

    The Advanced Test Reactor core at Idaho National Laboratory isn’t glowing blue because there are any blue lights involved, but rather because this is a nuclear reactor producing relativistic, charged particles that are surrounded by water. When the particles pass through that water, they exceed the speed of light in that medium, causing them to emit Čerenkov radiation, which appears as this glowing blue light. (ARGONNE NATIONAL LABORATORY)

    What the new study is referring to is the fact that we have many different types of high-energy astrophysical phenomena that all appear to have the same general setup: extremely high-energy photons wind up getting emitted from a violent event in space in a matter-rich environment. This applies to long/intermediate gamma-ray bursts, short-period gamma-ray bursts, and X-ray flares as well.

    What the researchers did was introduce a new, simple model that would explain the bizarre properties seen in pulsing gamma-ray bursts. They model the gamma-ray emissions as originating from a jet of fast-moving particles, which is consistent with what we know. But they then introduce a fast-moving impactor wave that runs into this expanding jet, and as the density (and other properties) of the medium changes, that wave then accelerates from moving slower-than-light to moving faster-than-light in that medium.

    In this artistic rendering, a blazar is accelerating protons that produce pions, which produce neutrinos and gamma rays. Photons are also produced. While you might not think much of the difference between particles moving at the speed of light and those moving at 99.99999% the speed of light, the latter case is of extreme interest, as moving into and out of a medium (or between media of different dielectric constants), you can create a shock when the particles begin moving faster than light in a particular medium. (ICECUBE/NASA)

    The thing is, when particles move through a medium, whether faster than light or slower than light, they are going to emit radiation either way. If you move faster than light, you produce both Čerenkov and collisional radiation. If you move slower than light, you produce Compton radiation (electron/photon scattering) or synchrotron shock radiation.

    If you do both, which means you move slower than light through the medium for one part of the journey and faster than light through the medium for another part of the journey, you should see two sets of light-curve features for the gamma-rays that arrive on Earth.

    The slower-than-light radiation should exhibit a time-forward signal: where events that happen earlier arrive earlier, and ones that happened later arrive later. The radiation travels faster than the signal.
    But the faster-than-light radiation should produce a time-reversed signal: where the events that happen later arrive earlier, and the events that happen earlier arrive later. The signal travels faster than the radiation.

    The signals that arrive first will be the last ones to be emitted, and the ones to arrive last were the first ones emitted: exactly the opposite of what our conventional experience is. If it were a fist headed to your face instead of a particle, first you’d feel the impact, and then you’d see the fist right in front of you, rapidly moving away from you. This is only possible in a medium. In a vacuum, the speed of light always wins every race.
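
    A minimal kinematic sketch of why the arrival order flips (toy numbers of my own, not the paper's model): if the emitting front moves at speed u through a medium where its light travels at speed v, then light emitted at time t from position u*t reaches an observer at distance D at t + (D - u*t)/v. The rate of change of arrival time with emission time is 1 - u/v, which goes negative once u > v, so later emissions arrive earlier.

    # Toy sketch of the arrival-order flip (illustrative numbers, not the paper's model).
    # An emitting front moves at speed u through a medium where its light travels at v.
    # Light emitted at time t (from position u*t) reaches an observer at distance D at:
    #     t_arrival = t + (D - u*t) / v
    # Since d(t_arrival)/dt = 1 - u/v, the order of arrival reverses once u > v.

    def arrival_time(t_emit, u, v, D):
        return t_emit + (D - u * t_emit) / v

    D = 1.0e12   # observer distance in meters (arbitrary)
    v = 2.0e8    # speed of light in the medium, m/s (arbitrary, below c)

    for label, u in [("front slower than light in medium", 1.0e8),
                     ("front faster than light in medium", 2.5e8)]:
        arrivals = [arrival_time(t, u, v, D) for t in (0.0, 100.0, 200.0)]
        order = "time-forward" if arrivals == sorted(arrivals) else "time-reversed"
        print(f"{label}: {order}")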

    Figure 1 from the Hakkila/Nemiroff paper that illustrates a received GRB pulse (at left, orange), and the monotonic curve (black curve, left) that fits it best. When you subtract the curve from the actual signal, you get residuals, and part of the signal appears to be the time-reverse of the remainder. This is where the ‘subluminal pulse going superluminal’ idea comes from: from fitting the data so well. (J. HAKKILA AND R. NEMIROFF, APJ 833, 1 (2019))

    Gamma-ray bursts consist of multiple pulses, and look like spikes that rise quickly and then fall off a little more slowly. Those pulses are joined by extra, smaller signals known as residuals, and show a lot of complexity. However, a detailed examination shows that the pulse residuals are not independent, but are linked to one another: some have residuals that are the time-reversed residuals of other pulses.

    This is the big phenomenon that the new model put forth by Jon Hakkila and Robert Nemiroff is attempting to explain. The big deal isn't that anything is going faster than light in a vacuum; it isn't. The big deal is that this observed, otherwise inexplicable phenomenon might have a simple astrophysical cause: a slower-than-light jet (in a medium) going superluminal (in that medium).

    The pulses originating from those two phases have overlapping arrival times, and disentangling that is how we can see this reflection-like behavior in the signal. It might not be the final answer, but it’s the best explanation for this otherwise unexplained phenomenon that humanity’s hit upon so far.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 11:46 am on October 12, 2019
    Tags: "This One Puzzle Brought Physicists From Special To General Relativity", Ethan Siegel

    From Ethan Siegel: “This One Puzzle Brought Physicists From Special To General Relativity” 

    From Ethan Siegel
    Oct 10, 2019

    An illustration of heavily curved spacetime for a point mass, which corresponds to the physical scenario of being located outside the event horizon of a black hole. As you get closer and closer to the mass’s location in spacetime, space becomes more severely curved, eventually leading to a location from within which even light cannot escape: the event horizon. The radius of that location is set by the mass, charge, and angular momentum of the black hole, the speed of light, and the laws of General Relativity alone. (PIXABAY USER JOHNSONMARTIN)

    Even though it was the crowning achievement of Einstein’s career, he was only a small part of the full story.

    If you were a physicist in the early 20th century, there would have been no shortage of mysteries for you to ponder. Newton’s ideas about the Universe — about optics and light, about motion and mechanics, and about gravitation — had been incredibly successful under most circumstances, but were facing doubts and challenges like never before.

    Back in the 1800s, light was demonstrated to have wave-like properties: to interfere and diffract. But it also had particle-like properties, as it could scatter off of and even impart energy to electrons; light couldn’t be the “corpuscle” that Newton had imagined. Newtonian mechanics broke down at high speeds, as Special Relativity caused lengths to contract and time to dilate near the speed of light. Gravitation was the last Newtonian pillar left, and Einstein shattered it in 1915 by putting forth his theory of General Relativity. There was merely one key puzzle that brought us there.

    Instead of an empty, blank, three-dimensional grid, putting a mass down causes what would have been ‘straight’ lines to instead become curved by a specific amount. In General Relativity, we treat space and time as continuous, but all forms of energy, including but not limited to mass, contribute to spacetime curvature. If we were to replace Earth with a denser version, up to and including a singularity, the spacetime deformation shown here would be identical; only inside the Earth itself would a difference be notable. (CHRISTOPHER VITALE OF NETWORKOLOGIES AND THE PRATT INSTITUTE)

    Today, owing to Einstein's theory, we visualize spacetime as a unified entity: a four-dimensional fabric that gets curved due to the presence of matter and energy. That curved background is the stage through which all of the particles, antiparticles, and radiation in the Universe must travel, and the curvature of our spacetime tells that matter how to move.

    This is the big idea of General Relativity, and why it’s such an upgraded idea from Special Relativity. Yes, space and time are still stitched together into a unified entity: spacetime. Yes, all massless particles travel at the speed of light relative to all observers, and all massive particles can never attain that speed. Instead, they move through the Universe seeing lengths contract, times dilating, and — in an upgrade from Special to General Relativity — seeing novel gravitational phenomena that wouldn’t appear otherwise.

    Gravitational waves propagate in one direction, alternately expanding and compressing space in mutually perpendicular directions, defined by the gravitational wave’s polarization. Gravitational waves themselves, in a quantum theory of gravity, should be made of individual quanta of the gravitational field: gravitons. While gravitational waves might spread out evenly over space, the amplitude (which goes as 1/r) is the key quantity for detectors, not the energy (which goes as 1/r²). (M. PÖSSEL/EINSTEIN ONLINE)

    These relativistic effects, over roughly the past century, have shown up in a number of spectacular places. Light redshifts or blueshifts as it moves into or out of a gravitational field, as first detected by the Pound-Rebka experiment. Gravitational waves are emitted whenever two masses move relative to one another, an effect predicted 100 years ago but only detected over the past 4 years by LIGO/Virgo.

    Starlight bends when it passes close by a massive gravitational source: an effect seen in our Solar System just as robustly as it appears for distant galaxies and galaxy clusters. And, perhaps most spectacularly, the framework of General Relativity predicts that space will be curved in such a way that distant events can be seen in multiple locations at multiple different times. We’ve used this prediction to see a supernova explode multiple times in the same galaxy, a spectacular demonstration of General Relativity’s non-intuitive power.

    The image to the left shows a part of the deep field observation of the galaxy cluster MACS J1149.5+2223 from Hubble’s Frontier Fields programme. The circle indicates the predicted position of the newest appearance of the supernova. To the lower right the Einstein cross event from late 2014 is visible. The image on the top right shows observations by Hubble from October 2015, taken at the beginning of the observation programme to detect the newest appearance of the supernova. The image on the lower right shows the discovery of the Refsdal Supernova on 11 December 2015, as predicted by several different models. No one thought Hubble would be doing something like this when it was first proposed; this showcases the ongoing power of a flagship-class observatory. (NASA & ESA AND P. KELLY (UNIVERSITY OF CALIFORNIA, BERKELEY))

    NASA/ESA Hubble Telescope

    The tests mentioned above are only some of the very thorough ways General Relativity has been probed, and are far from exhaustive. But most of the observable consequences that arise in General Relativity were only worked out well after the theory itself took shape. They could not be used to motivate the formulation of General Relativity itself, but something clearly did.

    If you had been a physicist in the early 20th century, you might have had an opportunity to beat Einstein to the punch. In the mid-1800s, it became clear that something was wrong with Mercury’s orbit: it wasn’t following the path that Newtonian gravity predicted. A similar problem with Uranus led to the discovery of Neptune, so many hoped that Mercury’s orbit not matching Newton’s predictions meant that a new planet must be present: one interior to Mercury’s orbit. The idea was so compelling that the planet was already pre-named: Vulcan.

    After discovering Neptune by examining the orbital anomalies of Uranus, scientist Urbain Le Verrier turned his attention to the orbital anomalies of Mercury. He proposed an interior planet, Vulcan, as an explanation. Although Vulcan did not exist, it was Le Verrier’s calculations that helped lead Einstein to the eventual solution: General Relativity. (WIKIMEDIA COMMONS USER REYK)

    But Vulcan does not exist, as exhaustive searches quickly determined. If Newtonian gravity were perfect — i.e., if we idealize the Universe — and the Sun and Mercury were the only objects in the Solar System, then Mercury would make a perfect, closed ellipse in its orbit around the Sun.

    Of course, the Universe isn't ideal. We view the Sun-Mercury system from Earth, which itself moves in an ellipse, rotates on its axis, and sees that spin-axis precess over time. Calculate that effect, and you'll find that the shape of Mercury's orbital path isn't a closed ellipse any longer, but one whose aphelion and perihelion precess at 5025 arc-seconds (where 3600 arc-seconds is 1 degree) per century. There are also many other planets in the Solar System that tug on the Sun-Mercury system. If you calculate all of their contributions, they add an additional 532 arc-seconds per century of precession.

    According to two different gravitational theories, when the effects of other planets and the Earth’s motion are subtracted, Newton’s predictions are for a red (closed) ellipse, running counter to Einstein’s predictions of a blue (precessing) ellipse for Mercury’s orbit. (WIKIMEDIA COMMONS USER KSMRQ)

    All told, that leads to a theoretical prediction, in Newtonian gravity, of Mercury’s perihelion precessing by 5557 arc-seconds per century. But our very good observations showed us that figure was slightly off, as we saw a precession of 5600 arc-seconds per century. That extra 43 arc-seconds per century was a nagging mystery, and the failure of searches to turn up a planet interior to Mercury deepened the puzzle even further.
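
    The bookkeeping behind that 43 arc-second anomaly is simple enough to check directly, using the figures quoted above:

    # Mercury's perihelion precession budget, in arc-seconds per century,
    # using the figures quoted in the text above.

    earth_motion = 5025     # from Earth's own motion and precessing spin axis
    other_planets = 532     # tugs from the other planets
    newtonian_total = earth_motion + other_planets   # 5557

    observed = 5600
    anomaly = observed - newtonian_total
    print(f"Newtonian prediction: {newtonian_total} arc-sec/century")
    print(f"Observed:             {observed} arc-sec/century")
    print(f"Unexplained:          {anomaly} arc-sec/century")   # 43, later explained by General Relativity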

    It’s easy, in hindsight, to just wave our hands and claim that General Relativity provides the answer. But it wasn’t the only possible answer. We could have slightly modified Newton’s gravitational law to be slightly different from an inverse square law, and that could be responsible for the extra precession. We could have demanded that the Sun be an oblate spheroid rather than a sphere, and that could have caused the extra precession. Other observational constraints ruled these scenarios out, however, just like they ruled out the Vulcan scenario.

    One revolutionary aspect of relativistic motion, put forth by Einstein but previously built up by Lorentz, FitzGerald, and others, was that rapidly moving objects appeared to contract in space and dilate in time. The faster you move relative to someone at rest, the greater your lengths appear to be contracted, while the more time appears to dilate for the outside world. This picture, of relativistic mechanics, replaced the old Newtonian view of classical mechanics, and can explain phenomena such as the lifetime of a cosmic ray muon. (CURT RENSHAW)

    But sometimes, theoretical progress can lead to even more profound theoretical progress. In 1905, Special Relativity was published, leading to an understanding that — at speeds approaching the speed of light — distances appear to contract along the direction of motion and time appears to dilate for one observer moving relative to another. In 1907/8, Einstein’s former professor, Hermann Minkowski, wrote down the first mathematical framework that unified space (3D) and time (1D) into a four-dimensional spacetime fabric.

    If this was all you knew, but you were thinking about the Mercury problem, you might have a spectacular realization: that Mercury isn’t just the closest planet to the Sun, but is also the fastest-moving planet in the Solar System.

    The speed at which planets revolve around the Sun is dependent on their distance from the Sun. Neptune is the slowest planet in the Solar System, orbiting our Sun at just 5 km/s. Mercury, for comparison, revolves around the Sun at approximately 9 times the speed of Neptune. (NASA / JPL)

    With an average speed of 47.36 km/s, Mercury moves very slowly compared to the speed of light: at 0.0158% the speed of light in a vacuum. However, it moves at this speed relentlessly, every moment of every day of every year of every century. While the effects of Special Relativity might be small on typical experimental timescales, we've been watching the planets move for centuries.
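
    That percentage is nothing more than the ratio of Mercury's orbital speed to the speed of light in a vacuum:

    # Mercury's orbital speed as a fraction of the speed of light in vacuum.
    mercury_speed_km_s = 47.36
    c_km_s = 299792.458
    print(f"{mercury_speed_km_s / c_km_s:.4%}")   # ~0.0158%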

    Einstein never thought about this; he never thought to calculate the Special Relativistic effects of Mercury’s rapid motion around the Sun, and how that might impact the precession of its perihelion. But another contemporary scientist, Henri Poincaré, decided to do the calculation for himself. When he factored in length contraction and time dilation both, he found that it led to approximately another 7-to-10 arc-seconds of orbital precession per century.

    The best way to see Mercury is from a large telescope, as dozens of stacked images (left, 1998, and center, 2007) in the infrared can reconstruct, or to actually go to Mercury and image it directly (right), as the Messenger mission did in 2009. The smallest planet in the Solar System, its proximity to Earth means it always appears larger than both Neptune and Uranus. (R. DANTOWITZ / S. TEARE / M. KOZUBAL)

    This was fascinating for two reasons:

    The contribution to the precession was literally a step in the right direction, accounting for approximately 20% of the discrepancy with an effect that must be present if the Universe obeys Special Relativity.
    But this contribution, on its own, is not sufficient to explain the full discrepancy.

    In other words, doing the Special Relativity calculation was a clue that we’re on the right track, getting closer to the answer. But all the same, it isn’t the full answer; that would require something else. As Einstein correctly surmised, that “something else” would be to concoct a theory of gravitation that also incorporated Special Relativity. It was by thinking along these lines — and following the add-ons that Minkowski and Poincaré contributed — that Einstein was at last able to formulate his equivalence principle, which led to the full-fledged theory of General Relativity.

    If we had never noticed this tiny deviation of Mercury’s expected behavior from its observed behavior, there wouldn’t have been a compelling observational demand to supersede Newton’s gravity. If Poincaré had never done the calculation that demonstrated how Special Relativity applies to this orbital problem, we might never have gotten that critical hint of the solution to this paradox lying in a unification of the physics of objects in motion (relativity) with our theory of gravitation.

    The realization that gravitation was just another form of acceleration was a tremendous boon to physics, but it might not have been possible without the hints that led to Einstein’s great epiphany. It’s a great lesson for us all, even today: when you see a discrepancy in the data from what you expect, it might be a harbinger of a scientific revolution. We must remain open-minded, but only through the interplay of theoretical predictions with experimental and observational results can we ever hope to take the next great leap in our understanding of this Universe.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 5:49 pm on October 9, 2019
    Tags: "Advances Vs. Consequences: What Does The 21st Century Have In Store For Humanity?", Ethan Siegel, Sir Martin Rees of Cambridge: "Surviving the Century", Video lecture 1:18:20

    From Ethan Siegel: “Advances Vs. Consequences: What Does The 21st Century Have In Store For Humanity?” Video Sir Martin Rees of Cambridge 

    From Ethan Siegel
    Oct 9, 2019

    The southern Milky Way as viewed above ALMA is illustrative of one way we search for signals of intelligent aliens: through the radio band. If we found a signal, or if we transmitted a signal that was then found and responded to, it would be one of the greatest achievements in our planet’s history. As with many of humanity’s greatest endeavors, we have not made the critical breakthrough we so desperately seek, but we continue to look, explore, and learn with the best tools we can possibly construct. (ESO/B. TAFRESHI/TWAN)

    METI (Messaging Extraterrestrial Intelligence) International has announced plans to start sending signals into space

    Technology holds the promise of a better future, but our footprint on the planet threatens to undo all our dreams and progress.

    It’s pretty easy to look at the world we live in today and come away feeling either extremely pessimistic or optimistic, depending on which aspects you focus on. Optimistically, you could look at our life expectancy, our technological conveniences, our high standard of living and the scientific breakthroughs we continue to make and pursue. From biotech to space exploration, from robotics to artificial intelligence, the present is incredible and the future looks even brighter.

    Of course, there’s the flipside: a pessimistic point of view. Even a coarse look at the world shows a growing rejection of science in favor of ideology on issues from climate change to vaccinations to dental health to whether the Earth is flat or humans have landed on the Moon. We are rolling back environmental protections and seeing a rise in bigotry, isolationism, and authoritarianism. Our prospects are simultaneously both bright and dim, and what the 21st century holds will depend largely on our collective actions during the next critical decade.

    This 1989 image was taken just one year after a catastrophic wildfire destroyed hundreds of thousands of acres of land and burned down countless lodgepole pine trees. However, the very next year, wildflowers littered the burned forest landscape, one of the first major steps in the regrowth and regeneration of this ecosystem. Humans may wreak havoc on the planet, but nature will recover. The question of how resilient human civilization is has not yet been determined. (JIM PEACO / NPS)

    When you think about your own dreams for the future of humanity, what does it include? Do you think about the existential, large-scale problems the world is facing today, and how we might improve them? Depending on where you live and what issues plague your local corner of the globe, you might see:

    plants and animals struggling to survive in locations where they’ve previously thrived for millennia,
    extinction rates rising,
    food and water insecurities,
    increases in wildfire, hurricane, drought, and other extreme weather frequency and/or severity,
    mass extinctions and deforestation,

    all while we burn more fossil fuels and consume more energy, as a planet, than ever before.

    The Patagonian glaciers of South America are sadly among the fastest melting in the world, but their beauty is undeniable. This photo was taken by the International Space Station, which completes a full orbit around Earth in approximately 90 minutes. Just minutes earlier, the ISS was flying over a tropical rainforest, showcasing how small our planet truly is and how a huge diversity of ecosystems are threatened by the changes humans have wrought upon our planet. (FYODOR YURCHIKHIN / RUSSIAN SPACE AGENCY)

    The history of humanity is a history of survival through endurance, tool-making and tool use, and through outsmarting every other form of nature: animal, plant, fungus, and even non-living threats. We have leveraged our acquired knowledge of the natural world ⁠ — including the laws and rules that govern how it works ⁠ — to rise to prominence and defeat so many of the natural challenges that every other species has been constrained by.

    The development of agriculture, first by farming and later through ranching, revolutionized humanity's relationship with food. Sanitation, through infrastructure projects like granaries, sewers, and (more recently) transit systems, has enabled our population centers to grow from villages to towns to cities to the modern metropolis. And the industrial revolution, coupled with the rise of electricity, has led humanity to conquer a multitude of inconvenient obstacles, including even the darkness of night itself.

    This composite image of the Earth at night shows the effects of artificial lighting on how our planet appears along the portion that isn’t illuminated by sunlight. This image was constructed based on 1994/1995 data, and the intervening 25 years have seen approximately a twofold increase in the amount of light humans create at night on Earth. We have conquered the night, but only at a great environmental cost. (CRAIG MAYHEW AND ROBERT SIMMON, NASA GSFC; DATA FROM MARC IMHOFF/NASA GSFC & CHRISTOPHER ELVIDGE/NOAA NGDC)

    But our dominance over the environment and our technological progress comes with a cost: as we’ve gained the ability to transform our planet, we’ve ended up transforming it in more ways than we imagined. This was true in the 20th century as well, as problems like:

    antibiotic-resistant disease,
    the dust bowl,
    air pollution and skyrocketing asthma and COPD rates,
    unsafe drinking water,
    acid rain,
    and the hole in the ozone layer,

    all plagued our society. Each one of these problems, at the time, seemed like an existential threat to our advanced civilization continuing as we know it.

    The famed ‘dust bowl’ of the 1930s occurred in the United States as a combination of sustained drought, winds, and sub-optimal farming practices led to an agricultural disaster. Elsewhere in the world, these conditions, even with better farming practices, are still at risk of arising. Here, young Australian boy Harry Taylor plays on the dust bowl his family farm became during the 2018 drought. This drought was cited by many as the worst in recorded history, rivaling the catastrophe of 1902 that no one who lived through it is still alive to recount. (Brook Mitchell/Getty Images)

    However, for each of these problems, humanity was able to band together and address these obstacles. Improved sanitation practices and new medical therapies help manage or even cure those afflicted with a myriad of infectious diseases and illnesses. Better farming practices have ended the risk of another dust bowl. Air and water regulations make it safe for us to breathe air and drink water.

    Even the two most recent problems we faced ⁠ — acid rain and the ozone layer ⁠ — were able to be solved. Through worldwide agreements on what can and cannot be produced and sold to consumers, we’ve seen the pH of rain return to normal and the hole in the ozone has not only stopped growing, but has begun to repair itself.

    From 1998 to the present, the mid-latitudes of Earth have seen a rise in ozone levels in the upper stratosphere. However, the lower stratosphere indicates an offset of the same magnitude. This is evidence that even as the hole in the ozone repairs itself, we must remain vigilant in ensuring this problem is as thoroughly ‘solved’ as we think our actions should have rendered it. (W.T. BALL ET AL. (2018), ATMOS. CHEM. PHYS. DISCUSS., DOI.ORG/10.5194/ACP-2017–862)

    Of course, the 21st century poses challenges for humanity that we’ve never faced before. The internet has been a great force for the worldwide spread and access of factual information in places where it had previously never reached, but it’s also a great force for the spread of misinformation. We’ve explored more of our planet than ever, and are realizing that humanity is responsible for a currently ongoing mass extinction that Earth has not witnessed for tens of millions of years.

    The CO2 concentration in the atmosphere is higher than humans have ever experienced. Global average temperatures continue to rise, as do sea levels, now at an accelerated rate, and will continue to do so for decades. The human population continues to grow, and the past 12 months has seen our species add more CO2 to the atmosphere than any other 12 month span in history.

    Although the CO2 emissions produced by the United States this past year are still 13% below the 2005 maximum produced in this country, the world's total emissions have jumped by 23% since that same time. The past few years have seen a continued increase in global greenhouse gas emissions, as energy use has taken off largely due to the rise of new computing practices like Blockchain and Cryptocurrency. (US ENERGY INFORMATION ADMINISTRATION / EIA.GOV)

    We now live in a time where the actions of a small group of people ⁠ — whether through malicious or benign intentions ⁠ — are capable of leading to global catastrophe. It’s not just climate change or the threat of nuclear war that hangs over us; it’s a slew of facts.

    It matters that a mass extinction is occurring right now: we’re destroying this planet’s proverbial “book of life” before we’ve even read it.

    It matters that computers are permeating ever-increasing facets of our life, as humanity’s recently rising electricity use (after a plateau earlier this decade) is almost entirely due to new computational uses, like cryptocurrencies and blockchain.

    It matters that the population is greater than ever before, as managing and distributing the edible food and drinkable water we produce is a greater challenge than ever before.

    This August 2019 photograph shows wheat fields (foreground) and canola fields (towards the horizon) grown for food and oil to be pressed from the plant’s seed, respectively. According to the South African government’s statistical service, “The agriculture, forestry and fishing industry decreased by 13,2% in (2019’s) first quarter. The decrease was mainly because of a drop in the production of field crops and horticultural products”. Drought, climate change, economic downturn, security issues in rural areas, and uncertainty about the future of land reform in South Africa all pose difficulties for the food, water, and even the economic security of the country. (RODGER BOSCH/AFP/Getty Images)

    The big question facing our species now is how we will tackle these problems, and many of the other existential worries facing humanity today. Can we survive our technological infancy? Can we overcome our greed, our bigotries, and our squabbling nature? Can we band together to find and enact solutions that benefit us all: friend and foe alike?

    This Wednesday, October 2, 2019, at 7 PM Eastern Time (4 PM Pacific Time), Sir Martin Rees of Cambridge will deliver a public lecture at Perimeter Institute entitled “Surviving the Century.” Martin’s latest book, On The Future: Prospects For Humanity, was released in 2018, and his lecture will closely follow much of the ground covered in that tome.


    Video: Sir Martin Rees, "Surviving the Century" (1:18:20)
    I’m so pleased to be able to live-blog this talk, which you can follow along with in real-time below, or by reading at any time after the conclusion of the lecture. It’s always wonderful to get a firsthand perspective on science and society from someone who’s concerned with the ever-changing role of a good scientist — who pushes our understanding forward for the betterment of humanity — and their responsibility to society.

    After all, as climate scientist Ben Santer eloquently put it:

    “[I]f you spend your entire career trying to advance understanding, you can’t walk away from that understanding when someone criticizes it or criticizes you. There’s no point in being a scientist if you walk away from everything you devoted your life to.”

    Our understanding of practically everything is more advanced than ever. Maybe, if we listen to that understanding, we can figure out the best way forward.

    (The live-blog will begin, below, just before 7 PM Eastern/4 PM Pacific time. All times are displayed in bold and in Pacific time, and correspond to the actual time the commentary was published.)

    The relationship between distance modulus (y-axis, a measure of distance) and redshift (x-axis), along with the quasar data, in yellow and blue, with supernova data in cyan. The red points are averages of the yellow quasar points binned together. While the supernova and quasar data agree with one another where both are present (up to redshift of 1.5 or so), the quasar data goes much farther, indicating a deviation from the constant (solid line) interpretation. Note how quasar redshifts are not quantized in any way. (G. RISALITI AND E. LUSSO, ARXIV:1811.02590)

    3:55 PM: Welcome! The public lecture is just a few minutes away, and I realize that many of you might not know who Martin Rees is or why he’s such a big deal. Martin is an astrophysicist and cosmologist who’s worked on black holes, quasars, the cosmic microwave background, and understanding how cosmic structure forms.

    To the older people in physics/astronomy, he’s probably best known for using quasar distributions to disprove the idea of the steady-state theory even after the discovery of the Cosmic Microwave Background. To younger people, he’s best known for working on uncovering how the “dark ages” ended (when enough stars had formed that the UV radiation flooding the Universe reionized it), and uncovering the link between black holes and quasars.

    The prediction of the Hoyle State and the discovery of the triple-alpha process is perhaps the most stunningly successful use of anthropic reasoning in scientific history. (WIKIMEDIA COMMONS USER BORB)

    3:59 PM: More recently, Martin Rees has been more interested in the intersection of science, ethics, and policy/politics, but also in anthropic reasoning: the idea that we can say meaningful things about reality just from the fact that we exist, and therefore the Universe must exist in such a way that makes our existence possible.

    If you worry that this treads troublingly close to religion, the short answer is: it can. Let’s see how well Martin Rees toes the line tonight!

    The signs of the zodiac and horoscopes are common, but the motion of the planets does not affect the lives or dispositions or personalities of humans in any discernible scientific fashion. (GETTY)

    4:04 PM: No, Martin Rees will not do your horoscopes. He says that scientists are rotten forecasters, but they’re not as bad as economists.

    Ha ha.

    Let’s hope that’s the end of “punching down” towards less rigorous disciplines than physics and astronomy.

    4:07 PM: Martin Rees is talking about population growth. And yes, it's been fast and enormous recently, but there's not going to be a population explosion that lasts for infinitely long (or increases exponentially indefinitely). Instead, most models predict that population will plateau at around 10–11 billion humans, and that's it. But yes: 9 billion people by mid-century is a big number to feed, and it's coming fast.

    Agricultural sprinklers intermittently watering leek plants. Conventional farming may not be enough to feed a rising population. (GETTY)

    4:09 PM: He’s speaking about the need to feed the planet, and talking about Gandhi’s famous “there’s enough for everyone’s need, but not for everyone’s greed.” He’s talking about population projections far into the future, and the big problem of “how do we feed all these people,” but there are many reasons to hope.

    For one, population levels off as economic prosperity increases. This is happening in Asia already, has already happened in the Americas and Europe, and the biggest uncertainty is when will this happen in Africa. Rees’s prediction that “Nigeria alone will have 900 million people” by the end of the century is the most pessimistic one I’ve heard since Paul Ehrlich’s discredited “population bomb” idea.

    The concentration of carbon dioxide in Earth’s atmosphere can be determined from both ice core measurements, which easily go back hundreds of thousands of years, and by atmospheric monitoring stations, like those atop Mauna Loa. The increase in atmospheric CO2 since the mid-1700s is staggering, and continues unabated. (CIRES & NOAA)

    4:13 PM: And yes, of course, CO2 is increasing, the planet is getting warmer, the sea levels are rising, and all of these problems are getting worse, faster, as time goes on.

    Rees is also being careful about mentioning uncertainties: in population, in CO2, and in the range of uncertainty of climate models and fossil fuel scenarios.

    4:15 PM: This is a good point and one that I normally make in different contexts: world policy is made by the voters (and the incumbents who court those voters for re-election) of first-world countries and what their representatives think will be popular. However, the roadmap to a low-carbon future is challenging, as the benefits will mostly trickle to relatively underdeveloped countries.

    That’s a hard political sell: do something that makes things more expensive for you, in the short-term, to make the quality of life better for others in the long-term.

    A fusion device based on magnetically confined plasma. Hot fusion is scientifically valid, but has not yet been practically achieved to reach the ‘breakeven’ point. (PPPL MANAGEMENT, PRINCETON UNIVERSITY, THE DEPARTMENT OF ENERGY, FROM THE FIRE PROJECT)

    4:18 PM: So, what are the solutions? For energy, Rees mentions solar, wind, conservation, and new nuclear reactors, as well as research into fusion. This should be a no-brainer option: we need to introduce clean and economical systems of energy generation, because energy use is predicted to continue rising; the only way to reduce our carbon footprint under those conditions is to produce more electricity, and to do it in a greener fashion.

    16
    Signs and protesters from the 2013 March Against Monsanto in Vancouver, BC. While there may be legitimate complaints over our modern agricultural system, GMOs are not the evil technology that people make them out to be. (ROSALEE YAGIHARA OF WIKIMEDIA COMMONS)

    4:21 PM: “We should be advocates of scientific advances and new technologies, not luddites.” (Paraphrase.) You would think this would be a non-controversial statement, but there are a large number of green energy advocates who see a rejection of science and technology as the only path towards a sustainable future.

    Well, not if we want to meet the modern challenges that we’re actively facing and creating. More nutrient-dense food, greater food production, better energy usage, and improved resiliency to financial disasters, natural catastrophes, and food/water/political instability are necessities we should all be investing in.

    17
    The Patagonian glaciers of South America are sadly among the fastest melting in the world, but their beauty is undeniable. This photo was taken by the International Space Station, which completes a full orbit around Earth in approximately 90 minutes. Just minutes earlier, the ISS was flying over a tropical rainforest, showcasing how small our planet truly is and how a huge diversity of ecosystems are threatened by the changes humans have wrought upon our planet. (FYODOR YURCHIKHIN / RUSSIAN SPACE AGENCY)

    4:24 PM: This is a problem that Martin Rees is identifying (specific to vaccines, biotech, or particular public health initiatives): how can we regulate the use (and misuse) of these technologies in a responsible way? As Rees put it, “even the global village will have its village idiots.”

    Balancing freedom, privacy, and security is of paramount importance. It’s hard to see how it will be implemented, of course, but my visceral reaction is to look to the locations that got it the most wrong (*cough* Facebook *cough*), and to learn the lessons of opting out/in, the need for curation of factual, truthful information, and responsible actions.

    18
    Ed Fredkin joined contract research firm Bolt Beranek & Newman (BBN) in the early 1960s where he wrote a PDP-1 assembler (FRAP) and participated in early projects using the machine. He went on to become a major contributor in the field of artificial intelligence. (COMPUTER HISTORY MUSEUM, CA. 1960)

    4:28 PM: As AI systems become more intrusive, pervasive; as the cloud begins storing information about all our actions, our locations, our emotions, etc.; as our face gets recognized everywhere we go; we lose our privacy. We lose our connection with technologies whenever computers outpace humans. And we lose our connection with each other (something Rees isn’t touching on) as we layer technological barrier upon technological barrier between our old-style face-to-face interactions.

    Robots can’t learn by watching human beings. Common sense and etiquette cannot (yet) be learned by a robot. And a robot’s agility and dexterity are far below those of a small child. And yes, computers can defeat humans at Go, but only by using about a million times the energy of a human brain.

    4:32 PM: I am not a fan of this current Rees proposal: everyone must work and this should be work that cannot be done by computers. He thinks we can re-employ every unskilled laborer doing this. I just don’t see the demand being there, but this is not a question that has a scientific answer.

    I just don’t see it happening; people are better than a well-programmed machine at only a small number of tasks, and what can be automated out of our error-prone ways should be.

    19
    Graduate students might love their work and the knowledge they gain from doing it, but they can ill-afford to be the highest-taxed Americans. Here, Michael Hopkins, left, and Bryce Lee, both graduate students at Virginia Tech, are shown demonstrating autonomous robots. (JOHN F. WILLIAMS / US NAVY)

    4:35 PM: There are a lot of fears around autonomous robots, and Rees then brings up Kurzweil’s immortality fantasies: AI outpacing humanity, becoming more intelligent, and humans beginning to transcend biology.

    I have thought, for a long time, that people who think along these lines need to understand something: you are not your brain. You are not a computer program; you do not reason the way a computer does, and a computer/brain interface is extraordinarily limited.

    Instead, you are the electrical signals that propagate through your brain and body. That, after all, is the difference between a living and dead human: the electrical activity in your brain. Kill someone and the activity stops. Copying your brain to a computer would not keep that electrical signal the same; it would cease to be you. Kurzweil’s dream, of downloading your intelligence to a computer, is basically doing “copy, paste, and then delete the original.”

    Therefore, you die. Only if we accept that aspect of reality can we effectively do something meaningfully positive with the lives we have. (At least, that’s what I think.)

    20
    The curvature of space, as induced by the planets and Sun in our Solar System, must be taken into account for any observations that a spacecraft or other observatory would make. General Relativity’s effects, even the subtle ones, cannot be ignored in applications ranging from space exploration to GPS satellites to a light signal passing near the Sun. (NASA/JPL-CALTECH, FOR THE CASSINI MISSION)

    4:38 PM: Martin Rees thinks that the Solar System will be filled, in the future, with militarized probes. Yes, Cassini, New Horizons, Juno, Messenger, and other recent planetary/Solar System missions are now outdated and will be superseded by new technologies. We will be better at doing astronomy, science, and understanding our Universe.

    But militarized? I don’t see it. Martin Rees also thinks that the era of crewed spaceflight is over. And sure, if we’re willing to abandon our bodies, of course there’s no point in crewed spaceflight.

    My recommendation would be twofold: accept our physical reality as it is (i.e., as we understand it to be), and then invest in science, technology, R&D, and forward-looking endeavors that better the future of humanity as a whole as much as possible. But this is a tall order, too.

    20
    The very first launch of the Falcon Heavy, on February 6, 2018, was a tremendous success. The rocket reached low-Earth orbit, deployed its payload successfully, and the two side boosters returned to Cape Canaveral, where they landed successfully. The promise of a reusable heavy-lift vehicle is now a reality, and could lower launch costs to ~$1000/pound. Private spaceflight may play a role in our future, but I hope it’s not the only one. (JIM WATSON/AFP/GETTY IMAGES)

    4:42 PM: Now I’m disappointed. After all the talk about banding together as a world for the good of humanity, and what’s difficult to sell to various countries with various national ideologies and values, Martin Rees sees privatized spaceflight as the only prospect for humans traveling to worlds other than Earth.

    Maybe he’s right; maybe I’m the one who’s unrealistic. But I still hope that the civilization-scale adventures and enterprises we dream of can be accomplished by humanity if we band together as a world in cooperation. It’s my hope for the future of space exploration, for the future of our energy needs, for the future of agriculture and food/water production and distribution, and for the future of basic research, particle and low-temperature physics, and so much more.

    I am not looking forward to a post-human era. I am looking forward to a pro-human era.

    21
    This is an aerial view of a solar farm in Ukraine, which is a carbon-free power plant once fully set up and installed. (Maxym Marusenko/NurPhoto via Getty Images)

    4:47 PM: Here’s something you won’t get in Martin Rees’s talk: what a tremendous tipping point will look like. Right now, sunlight is the most important tool for agriculture: it determines what we grow, where, and in what quantities. But someday, technology will reach a point where it’s going to be more efficient to:

    gather sunlight with solar panels,
    grow crops with special lights designed to optimize plant growth,
    and then use the leftover energy to power the world.

    We will someday reach the point where this will be better than growing crops with direct sunlight, outdoors. That’s quite a dream, but when we reach that tech level, it will transform our civilization.
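    As a rough back-of-the-envelope illustration of where that tipping point might lie, here is a sketch comparing the usable light a crop receives outdoors with what a solar-panel-plus-grow-light chain could deliver. Every efficiency figure below is an assumed round value for illustration only; real numbers vary widely by technology and crop.

    # Assumed, illustrative round numbers (not measurements from this article):
    par_fraction_sunlight = 0.45   # fraction of sunlight in the photosynthetically active range
    panel_efficiency      = 0.20   # sunlight -> electricity, typical modern solar panel
    led_efficiency        = 0.50   # electricity -> usable grow-light photons (assumed)

    direct_usable   = par_fraction_sunlight              # usable light per unit sunlight, outdoors
    indirect_usable = panel_efficiency * led_efficiency  # usable light per unit sunlight, via panels + LEDs

    print(f"outdoor crop receives : {direct_usable:.0%} of incident solar energy as usable light")
    print(f"panel+LED route gives : {indirect_usable:.0%} of incident solar energy as usable light")
    print(f"gap to close          : factor of {direct_usable / indirect_usable:.1f}")

    On these assumed numbers the indirect route still loses by a factor of a few, which is exactly why this is a future tipping point: better panels and diodes, plus vertical stacking, year-round growing, and the leftover electricity, are what would flip the comparison.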

    21
    An artist’s rendition of a potentially habitable exoplanet orbiting a sun-like star. When it comes to life beyond Earth, we have yet to discover our first inhabited world, but TESS is bringing us the star systems which will be our most likely, early candidates for discovering it. (NASA AMES / JPL-CALTECH)

    4:50 PM: What about life in the Universe? We have detected thousands of exoplanets, and now have the prospect of detecting life beyond our Solar System, from microbial life to intelligent aliens. This will be a tremendous advance that we can expect (and maybe even hope) to start probing this century.

    21
    Today on Earth, ocean water only boils, typically, when lava or some other superheated material enters it. But in the far future, the Sun’s energy will be enough to do it, and on a global scale. (JENNIFER WILLIAMS / FLICKR)

    4:54 PM: Let’s remember something important: life on Earth is organic, but organic life won’t be possible forever. Rees sees that machine intelligence will outpace human intelligence, and will become the dominant force of “intelligence” not only on our planet, but in the Universe.

    He thinks that any alien signal we find won’t be biological in nature, but electronic.

    I must be crazy to be on the pro-biology side… but I can’t help but be sentimental about our own lives and existences. Somehow, to me, they have value intrinsic to themselves, that electronic beings, even an electronic intelligence, wouldn’t have. There are my biological biases, laid bare for the whole world to see.

    4:58 PM: Was there more than one Big Bang, or just one? If there were many, are there varieties in the physical laws and constants that they obey?

    If we take inflation as we understand it today, the answers are: many, occurring in forever causally disconnected regions, with the same laws and constants everywhere.

    But people sure do love to speculate that there may be more, and if (and that’s a really big if), as Martin Rees contends, they have varying laws and constants, then maybe we can use anthropic reasoning (which is a very unappealing substitute for science) to speculate, and then maybe (which I doubt) it will become a question that falls into the realm of physics, not metaphysics.

    22
    Ermin Omerovic, a 19-year-old man living in the central town of Jajce, is seen eating pizza using his bionic hand. Ermin had a work accident and lost his right arm. After a surgery, carried out for the first time in Bosnia and Herzegovina and the Balkans, he reaches his brain-controlled prosthetic hand. But there is a big difference between a technologically augmented human versus a machine that could be considered alive. (Elman Omic/Anadolu Agency via Getty Images)

    5:02 PM: “Technology needs to be wisely directed, and directed by a value that science alone cannot decide.” Well, at least this should be non-controversial: if we wish to act ethically, we need a code of ethics and morality for humanity, and that code transcends science.

    Will that include machine intelligence? (Is Data from Star Trek alive, and am I arguing the opposite position from Captain Picard?)

    5:06 PM: What about designer babies? Where do we draw the ethical line?

    I have a feeling it will be very much like the early days of any technology: things we’re uncomfortable with today will become commonplace tomorrow. Ethics erode quickly with the acceleration of the enabling technology’s ubiquity.

    5:09 PM: Martin Rees is now fielding a question about the anti-science trend, but instead focuses on the optimistic take: people are actually interested in science. Kids love dinosaurs and kids love space: even things that are divorced from their reality and their experience appeal to people. Extreme and ill-informed opinions get more traction, and Rees believes this is what magnifies the rise of populism.

    This is astute, to me.

    But Rees says it’s important that everyone have a feel for science, as most of the decisions that have to be made by politicians involve science (and economics and ethics), and so to be an informed citizen, you need to have some feel for science and for quantitative reasoning. And it’s a part of our culture, too; it’s the only universal culture that straddles all bounds of faith and nationality. (Man, if the rest of this talk were like the answer to this question, I’d be fawning!)

    5:12 PM: If we wish to succeed as a species, we have to band together as a planet, with multi-national bodies that regulate technology, otherwise the potential for abuses will be too great. This includes a worldwide carbon policy, but not a worldwide energy initiative, as new innovations and technologies will have a monetary payback/payoff, so there’s an incentive to benefit the entire world on this front.

    5:14 PM: The final question is to speculate about alien life, but maybe it’s better to speculate about our own future instead.

    Will we be able to overcome our prejudices against those who are “different” from us on the surface, and take actions that benefit the whole of humanity even if they don’t benefit us personally?
    Will we really abandon our flesh-and-blood bodies for the promise of augmented ones, or of entirely cybernetic ones?
    Can we band together as a global civilization to address the global problems we’ve created as a by-product of our species’ unprecedented success?

    We have this odd combination of intelligence and aggression in our species, but perhaps that aggression is unique to us and will be our demise, whereas aliens may not be subject to it.

    5:16 PM: And that’s a wrap! Thanks for tuning in, and I hope you enjoyed this thought-provoking talk!

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 11:53 am on October 9, 2019 Permalink | Reply
    Tags: "Astronomers Debate: How Many Habitable Planets Does Each Sun-Like Star Have?", , , , , Ethan Siegel   

    From Ethan Siegel: “Astronomers Debate: How Many Habitable Planets Does Each Sun-Like Star Have?” 

    From Ethan Siegel
    Oct 8, 2019

    1
    The ideal ‘Earth 2.0’ will be an Earth-sized, Earth-mass planet at a similar Earth-Sun distance from a star that’s very much like our own. We have yet to find such a world, but are working hard to estimate how many such planets might be out there in our galaxy. With so much data at our disposal, it’s puzzling how varied the different estimates are. (NASA AMES/JPL-CALTECH/T. PYLE)

    We know a lot about what else is out there, but we still don’t know everything.

    In the quest for life in the Universe, it makes sense to look at worlds that are similar to the only success story we know of for certain: our planet Earth. Here at home, we inhabit a rocky planet with a thin atmosphere that orbits our star while rotating rapidly on its axis, and that has had liquid water stably on its surface for billions of years. We have the right temperature and pressure at our surface for continents and liquid oceans, and the right raw ingredients for life to potentially arise.

    We might not yet know how omnipresent or rare life actually is in our galaxy and Universe. Questions concerning the origin of life or the frequency of life evolving into a complex, intelligent or even technologically advanced civilization remain unanswered, as we lack that information. But exoplanet data? We’ve got plenty. That’s why it’s such a puzzle that astronomers can’t agree on how many Earth-like planets each Sun-like star should possess.

    2
    30 protoplanetary disks, or proplyds, as imaged by Hubble in the Orion Nebula. Hubble is a brilliant resource for identifying these disk signatures in the optical, but has little power to probe the internal features of these disks, even from its location in space. Many of these young stars have only recently left the proto-star phase. Star-forming regions like this will frequently give rise to thousands upon thousands of new stars all at once. (NASA/ESA AND L. RICCI (ESO))

    The story begins with the formation of a new star. New stars are practically always formed when a cloud of gas collapses under its own gravity, accumulating mass via gravitational growth before the radiation pressure from newly-formed stars, both inside this particular mass clump and elsewhere throughout the star-forming region, blows off the needed material.

    A small percentage (about 1%) of these stars will be hot, blue, massive, and short-lived: either O-class, B-class, or A-class stars. The lifetimes of these stars are only a tiny percentage of our own Sun’s lifetime, and they don’t live long enough to support the evolution of life as we know it on Earth. Meanwhile, most stars (about 75–80%) are red dwarfs: M-class stars. These stars have Earth-sized planets, many of which are in their star’s habitable zones, but their properties are very different from those of Earth.

    3
    The classification system of stars by color and magnitude is very useful. By surveying our local region of the Universe, we find that only 5% of stars are as massive as (or more massive than) our Sun. The Sun is thousands of times as luminous as the dimmest red dwarf star, but the most massive O-stars are millions of times as luminous as our Sun. About 20% of the total population of stars out there fall into the F, G, or K classes. (KIEFF/LUCASVB OF WIKIMEDIA COMMONS / E. SIEGEL)

    While there are many interesting possibilities concerning life on planets around M-class stars, they face challenges that are extraordinarily different from the challenges of Earth-like worlds. For example:

    Earth-sized planets around M-class stars will become tidally locked, where the same face always faces the star, instead of rotating on their axes with a period different from that of their revolution.
    M-class stars emit high-energy flares very frequently, which poses the danger of stripping any thin atmospheres away on cosmically short timescales.
    M-class stars emit very little ultraviolet and blue light, rendering photosynthesis as we know it impossible.
    And M-class stars emit copious amounts of X-rays, possibly enough to sterilize the surface of any terrestrial planet orbiting it.

    Life may yet exist on worlds such as these, but it’s a controversial proposition.

    4
    All inner planets in a red dwarf system will be tidally locked, with one side always facing the star and one always facing away, with a ring of Earth-like habitability between the night and day sides. But even though these worlds are so different from our own, we have to ask the biggest question of all: could one of them still potentially be habitable? (NASA/JPL-CALTECH)

    On the other hand, it’s tempting to go for the slam dunk in the search for life beyond our Solar System: to look for Earth-sized planets at Earth-like distances with Earth-like conditions around Sun-like (F-class, G-class, or K-class) stars.

    This is a great question to ask, because it’s one that we have lots of data for. We know what fraction of stars fall into these Sun-like classes (around 20% or so), and we’ve observed thousands upon thousands of these stars for a period of approximately three years with NASA’s Kepler satellite during its primary mission.

    The funny thing is this: we’ve had the Kepler data for the better part of the past decade, and as of 2019, estimates range from a low of 0.013 Earth-like planets per Sun-like star to a high of 1.24: a difference of roughly a factor of 100.

    5
    Over the past decade, since the first arrival of data from NASA’s Kepler mission, the estimated number of Earth-like planets around Sun-like (F, G, and K-class) stars has varied from a low of ~1% odds per star to odds greater than 100% (between 1 and 2 Earth-like planets) per star. These uncertainties, like the data, are literally astronomical. (DAVID KIPPING, VIA https://TWITTER.COM/DAVID_KIPPING/STATUS/1177938189903896576)

    This is an extreme rarity in science. Normally, if scientists agree on the physical laws that govern a system, agree on the conditions that describe or categorize a system, and use the same data, they’re all going to get the same result. Everyone is definitely using the full suite of exoplanet data available (mostly Kepler), so there must be a problem with some of the assumptions that go into calculating just how common an Earth-like world around a Sun-like star is.

    The first thing that should be emphasized, however, is that there’s no disagreement over the Kepler data itself! When a planet is fortuitously aligned with its parent star and our line-of-sight, it will transit across the face of the star once per orbit, blocking a fraction of the star’s light for a small amount of time. The more transit events we build up, the stronger the signal gets. Owing to Kepler’s mission, we’ve discovered thousands of stars with exoplanets around them.
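    For a sense of scale, the transit signal is just the ratio of the planet’s and the star’s cross-sectional areas. Here is a minimal sketch using standard published radii for Earth, Jupiter, and the Sun; the function name is just an illustrative label.

    R_SUN_KM     = 695_700
    R_EARTH_KM   = 6_371
    R_JUPITER_KM = 69_911

    def transit_depth(r_planet_km, r_star_km=R_SUN_KM):
        """Fractional dip in starlight when the planet crosses the stellar disk."""
        return (r_planet_km / r_star_km) ** 2

    print(f"Jupiter-Sun analog: {transit_depth(R_JUPITER_KM):.2%} dip (~1%)")
    print(f"Earth-Sun analog  : {transit_depth(R_EARTH_KM) * 1e6:.0f} parts-per-million dip")

    An Earth analog blocks only about 0.008% of its Sun-like star’s light, once per year, which is why building up many transits with exquisite photometric precision matters so much.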

    6
    Kepler was designed to look for planetary transits, where a large planet orbiting a star could block a tiny fraction of its light, reducing its brightness by ‘up to’ 1%. The smaller a world is relative to its parent star, the more transits you need to build up a robust signal, and the longer its orbital period, the longer you need to observe to get a detection signal that rises above the noise. Kepler successfully accomplished this for thousands of planets around stars beyond our own. (MATT OF THE ZOONIVERSE/PLANET HUNTERS TEAM)

    What we can compute without significant uncertainties is the likelihood of having a planet of a particular radius orbiting a star of a particular type at a particular distance. Kepler has enabled us to do population statistics of exoplanets of a wide variety of types, and through that, we can infer a likelihood range of having an Earth-sized planet orbiting a Sun-like star across a range of orbital distances.

    There are some uncertainties that arise when we look at this problem alone, but they’re relatively small. The Kepler mission, owing to its design specifications (the relatively short duration of a 3-year primary mission and a limited sensitivity to relatively small flux dips), meant that the easiest planets to find were relatively large planets orbiting close in to relatively small stars. Earth-sized worlds at Earth-like distances around Sun-like stars were slightly beyond Kepler’s capabilities.

    7
    Today, we know of over 4,000 confirmed exoplanets, with more than 2,500 of those found in the Kepler data. These planets range in size from larger than Jupiter to smaller than Earth. Yet because of the limitations on the size of Kepler and the duration of the mission, the majority of planets are very hot and close to their star, at small angular separations. TESS has the same issue with the first planets it’s discovering: they’re preferentially hot and in close orbits. Only through dedicated, long-period observations (or direct imaging) will we be able to detect planets with longer-period (i.e., multi-year) orbits. (NASA/AMES RESEARCH CENTER/JESSIE DOTSON AND WENDY STENZEL; MISSING EARTH-LIKE WORLDS BY E. SIEGEL)

    So there are uncertainties that must arise because we make inferences about exoplanet population statistics. That’s a reasonable source of uncertainty, and one that we can expect to improve as more powerful planet-finding telescopes and missions come online over the coming decade. But it’s not the primary reason for the big discrepancy in astronomers’ estimates of the number of Earth-like worlds around Sun-like stars.

    A second source of uncertainty (that is much larger) arises from the big question of “where is the habitable zone?” We typically define this as the range of distances an Earth-sized planet with an Earth-like atmosphere could exist from its parent star and still have liquid water on its surface. The answer to this question is much more difficult to obtain.
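    One common (and admittedly simplified) way to frame this is through a planet’s equilibrium temperature, which depends only on the star’s luminosity, the orbital distance, and an assumed albedo; it deliberately leaves out atmospheres and greenhouse warming, which is a big part of why the boundaries are disputed. A minimal sketch, with the single albedo value below chosen as an assumption for all three planets:

    import math

    SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
    L_SUN = 3.828e26     # solar luminosity, W
    AU    = 1.496e11     # astronomical unit, m

    def equilibrium_temperature(luminosity_w, distance_m, albedo=0.3):
        """Blackbody equilibrium temperature of a planet, ignoring any greenhouse effect."""
        flux = luminosity_w / (4 * math.pi * distance_m**2)
        return (flux * (1 - albedo) / (4 * SIGMA)) ** 0.25

    for name, d_au in (("Venus", 0.72), ("Earth", 1.00), ("Mars", 1.52)):
        print(f"{name}: ~{equilibrium_temperature(L_SUN, d_au * AU):.0f} K (no atmosphere assumed)")

    Earth’s bare equilibrium value comes out near 255 K, yet its greenhouse atmosphere keeps the real surface around 288 K, while Venus’s runaway greenhouse pushes it to roughly 735 K: exactly the kind of atmosphere-dependence that makes the habitable zone’s edges so hard to pin down.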

    8
    The habitable zone is the range of distances from a star where liquid water might pool on the surface of an orbiting planet. If a planet is too close to its parent star, it will be too hot and water would have evaporated. If a planet is too far from a star it is too cold and water is frozen. Stars come in a wide variety of sizes, masses and temperatures. Stars that are smaller, cooler and lower mass than the Sun (M-dwarfs) have their habitable zone much closer to the star than the Sun (G-dwarf). Stars that are larger, hotter and more massive than the Sun (A-dwarfs) have their habitable zone much farther out from the star. Scientists do not agree on where the habitable zone should extend to for both its inner and outer boundaries. (NASA/KEPLER MISSION/DANA BERRY)

    You might be tempted to say “well, Venus is too hot, Mars is too cold, and Earth is just right,” and to proceed under those assumptions. But there are many ways we could have altered Venus’ atmosphere to have had the planet beneath it be habitable, just like Earth is, for 4+ billion years. Similarly, were we to replace Mars with a more massive world with a thicker atmosphere, it could remain habitable as well, with liquid water persisting on its surface until the present day.

    What we seem to be learning is that defining the habitable zone for an Earth-sized planet is not as simple as saying, “between this inner distance and that outer distance,” but rather as being co-dependent on factors such as planet mass, the contents and density of a planet’s atmosphere, and stellar evolution factors that link a star’s past and future histories to the habitability of the planet orbiting it.

    9
    This figure shows the real stars in the sky for which a planet in the habitable zone can be observed. The color coding shows the probability of observing an exoEarth candidate if it’s present around that star (green is a high probability, red is a low one). Note how the size of your telescope/observatory in space impacts what you can see, which impacts the type of telescope we’ll need to start truly studying the Earth-like worlds that exist in our relatively nearby neighborhood. (C. STARK AND J. TUMLINSON, STSCI)

    Not knowing exactly where the habitable zone is could cause us to grossly overestimate the number of Earth-like worlds by being too liberal with our assumptions, or it could cause us to exclude potentially Earth-like worlds if we’re too conservative. As with most things, it’s likely that the liberal assumptions will help us encapsulate the corner cases of unlikely outcomes that occasionally occur, while the conservative assumptions might capture the plurality of worlds that are most conducive to Earth-like outcomes.

    However, the largest source of uncertainty might come from failing to adequately estimate which worlds are Earth-like (and potentially habitable) based on their radius alone.

    10
    The small Kepler exoplanets known to exist in the habitable zone of their star. Whether the worlds classified as super-Earths are actually Earth-like or Neptune-like is an open question, but it may not even be important for a world to orbit a Sun-like star or be in this so-called habitable zone in order for life to have the potential of arising. The assumptions we make about these worlds and their properties are directly related to the estimates we make for the fraction of Sun-like stars with Earth-like planets around them. (NASA/AMES/JPL-CALTECH)

    Astronomers agree on neither the lower limit for the size of an Earth-like world nor on the upper limit.

    If a world is too small, the thought is that it will quickly radiate its internal heat away; its core will cease any magnetic activity; the solar wind will strip the atmosphere away; and then the world will have its atmospheric pressure drop below a critical threshold (the triple point of fresh water) and that’s the end for life’s chances. This is what happened to Mars, and many scientists think that this is the fate for all worlds below about 70% of Earth’s radius.

    But if a world is too large (even a little bit larger than Earth), its atmosphere won’t remain thin and breathable, but will become thick and crushing. There’s a critical amount of mass that a planet can have during its formation before a crucial transition occurs: either the planet won’t have enough gravity to keep its primordial hydrogen and helium gases, or it will cross that threshold and have enough.

    11
    The 21 Kepler planets discovered in the habitable zones of their stars, no larger than twice the Earth’s diameter. Most of these worlds orbit red dwarfs, closer to the “bottom” of the graph, and are likely not Earth-like. Meanwhile, the worlds that are 1.5 Earth radii or more in size are almost certainly not Earth-like either. Nailing down the population statistics on the exoplanets in our galaxy will help us tremendously in discovering and measuring the properties of true Earth-like worlds in the future. (NASA AMES/N. BATALHA AND W. STENZEL)

    Below that threshold, you can still have liquid water on your planet’s surface; it can be Earth-like. But above that threshold, you start looking at an atmosphere that’s so thick, the atmospheric pressure becomes crushing: many thousands of times what we experience here on Earth.

    This has been exacerbated by a term astronomers have been using for over a decade, but that needs to go: super-Earth. There is this idea that a planet could be significantly larger and more massive than Earth, but still be rocky with a thin atmosphere. In our Solar System, there are no worlds between the sizes of Venus/Earth and Neptune/Uranus, and so we don’t have firsthand experience with where, in that range, the dividing line between rocky and gas-rich worlds lies. But thanks to the exoplanet data we do have, that answer is already known.

    12
    The classification scheme of planets as either rocky, Neptune-like, Jupiter-like or stellar-like. The border between Earth-like and Neptune-like is murky, occurring at approximately 1.2 Earth radii. Direct imaging of candidate super-Earth worlds, which might be possible with the James Webb Space Telescope, should enable us to determine whether there’s a gas envelope around each planet in question or not. Note that there are four main classifications of ‘world’ here, and that the cutoff between rocky planets and those with a gas envelope occurs well below the sizes of any planet whose atmosphere we’ve measured as of 2019. Note the absence of a ‘super-Earth’ category. (CHEN AND KIPPING, 2016, VIA https://ARXIV.ORG/PDF/1603.08614V2.PDF)

    If you’re more than 2 Earth masses, which translates into more than about 120–125% the radial size of Earth, you are no longer rocky, but possess that dreaded hydrogen and helium envelope. The same one that Neptune and Uranus possess; the same kind that the recently announced habitable zone exoplanet with water on it has.
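    As a quick plausibility check on that mass-to-radius translation: at fixed density, radius grows as the cube root of mass, and empirical fits for rocky worlds use a slightly shallower exponent because more massive rocky planets compress. The exponent below is an assumed representative value for illustration, not taken from any specific paper cited here.

    def radius_ratio(mass_in_earth_masses, exponent):
        """Radius relative to Earth for a rocky planet of the given mass."""
        return mass_in_earth_masses ** exponent

    m = 2.0  # Earth masses
    print(f"constant-density scaling (exponent 1/3): {radius_ratio(m, 1/3):.2f} Earth radii")
    print(f"assumed rocky-world fit   (exponent ~0.28): {radius_ratio(m, 0.28):.2f} Earth radii")

    Both estimates land right around the 120–125% figure quoted above.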

    We know that there are between 200 billion and 400 billion stars in the Milky Way galaxy. About 20% of those stars are Sun-like, giving about 40-to-80 billion Sun-like stars in our galaxy. There are very likely billions of Earth-sized worlds orbiting those stars with the potential for the right conditions to have liquid water on their surfaces and to be otherwise Earth-like, but whether that’s 1 or 2 billion or 50 or 100 billion is still unknown. Future planet-finding and exploring missions will need better answers than we presently have today, and that’s all the more reason to keep looking with every tool in our arsenal.
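    Putting the numbers in this article together, here is the arithmetic behind that enormous range, using the star counts quoted above and the low/high per-star estimates from earlier in the piece:

    stars_in_milky_way = (200e9, 400e9)   # quoted range of stars in the galaxy
    sunlike_fraction   = 0.20             # F, G, and K-class stars
    eta_earth          = (0.013, 1.24)    # quoted low/high Earth-like planets per Sun-like star

    low  = stars_in_milky_way[0] * sunlike_fraction * eta_earth[0]
    high = stars_in_milky_way[1] * sunlike_fraction * eta_earth[1]

    print(f"Sun-like stars   : {stars_in_milky_way[0]*sunlike_fraction/1e9:.0f} to {stars_in_milky_way[1]*sunlike_fraction/1e9:.0f} billion")
    print(f"Earth-like worlds: ~{low/1e9:.1f} billion to ~{high/1e9:.0f} billion")

    The low end of the quoted per-star estimates actually dips a bit below a billion worlds galaxy-wide, which only underscores how much the answer hinges on the assumptions discussed above.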

    See the full article here.


     
  • richardmitnick 1:04 pm on October 7, 2019 Permalink | Reply
    Tags: "One Cosmic Mystery Illuminates Another, As Fast Radio Burst Intercepts A Galactic Halo", , , , , Ethan Siegel   

    From Ethan Siegel: “One Cosmic Mystery Illuminates Another, As Fast Radio Burst Intercepts A Galactic Halo” 

    From Ethan Siegel
    Oct 7, 2019

    1
    This artist’s impression represents the path of the fast radio burst FRB 181112 traveling from a distant host galaxy to reach the Earth. FRB 181112 was pinpointed by the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope. Follow-up observations with ESO’s Very Large Telescope (VLT) revealed that the radio pulses have passed through the halo of a massive galaxy on their way toward Earth. This finding allowed astronomers to analyse the radio signal for clues about the nature of the halo gas. (ESO/M. KORNMESSER)

    Australian Square Kilometre Array Pathfinder (ASKAP) is a radio telescope array located at Murchison Radio-astronomy Observatory (MRO) in the Australian Mid West. ASKAP consists of 36 identical parabolic antennas, each 12 metres in diameter, working together as a single instrument with a total collecting area of approximately 4,000 square metres.

    ESO’s VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft), seen from above. Its four Unit Telescopes are ANTU (UT1; “The Sun”), KUEYEN (UT2; “The Moon”), MELIPAL (UT3; “The Southern Cross”), and YEPUN (UT4; “Venus, as evening star”). Credit: J.L. Dauvergne & G. Hüdepohl, atacama photo.

    There’s so much we don’t know about fast radio bursts and galactic halos. Combined, we get a unique window on the Universe.

    Deep in space, mysterious signals known as Fast Radio Bursts (FRBs) stream towards Earth.

    2
    The host galaxies of fast radio bursts remain mysterious for most of the FRBs we’ve seen, but a few of them have had their host galaxies identified. For FRB 121102, whose repeating bursts were extremely polarized, the host was identified as a dwarf galaxy with an active galactic nucleus. Perhaps interestingly, the stars within it, on average, have far fewer heavy elements (and hence, rocky, potentially habitable planets) than the ones in our Milky Way. (GEMINI OBSERVATORY/AURA/NSF/NRC)

    NOAO Gemini North on Mauna Kea, Hawaii, USA, altitude 4,213 m (13,822 ft)

    These FRBs last milliseconds or less, originate in ultra-distant galaxies, and sometimes repeat.

    3
    Waterfall plot of the fast radio burst FRB 110220 discovered by Dan Thornton (University of Manchester). The image shows the power as a function of time (x axis) for more than 800 radio frequency channels (y axis) and shows the characteristic sweep one expects for sources of galactic and extragalactic origin. FRBs come as either single or multiple discrete bursts lasting from tens of microseconds to a few milliseconds, but no longer. (MATTHEW BAILES / SWINBURNE UNIVERSITY OF TECHNOLOGY / THE CONVERSATION)

    Although scientists have studied them intensely since their discovery, their origins remain mysterious.
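    The “characteristic sweep” in waterfall plots like the one above comes from cold-plasma dispersion: lower frequencies arrive later, with a delay proportional to the dispersion measure (DM), the column density of free electrons along the line of sight. Here is a minimal sketch of the standard delay formula; the DM value used is an assumed, illustrative figure, not the measured DM of any particular burst.

    K_DM_MS = 4.149  # dispersion constant, ms GHz^2 cm^3 pc^-1

    def dispersion_delay_ms(dm_pc_cm3, freq_low_ghz, freq_high_ghz):
        """Extra arrival delay of the low frequency relative to the high frequency."""
        return K_DM_MS * dm_pc_cm3 * (freq_low_ghz**-2 - freq_high_ghz**-2)

    dm = 500.0  # pc cm^-3, assumed illustrative value for an extragalactic burst
    print(f"Delay across 1.2-1.4 GHz: {dispersion_delay_ms(dm, 1.2, 1.4):.0f} ms")

    That frequency-dependent smearing can stretch to hundreds of milliseconds even though the burst itself lasts a millisecond or less, which is what makes the sweep such a clean probe of all the ionized gas the signal has passed through, including an intervening galaxy’s halo.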

    Meanwhile, an estimated 2 trillion galaxies populate our observable Universe.

    4
    For a FRB originating from a galaxy with the magnitude that the observed host galaxy of FRB 181112 possesses, the probabilities of having a random association within one arc-second (1/3600th of a degree) of another galaxy can be computed. Typical odds of such an association range between 0.25% and 0.40%, with the median value of 0.31%: about 1-in-300 odds. We’ve clearly gotten lucky, as humanity has not yet detected anywhere near 300 FRBs in total. (ESO/X. PROCHASKA ET AL.)

    With incredibly large distances for FRBs to traverse, each one risks passing through an intervening galaxy.

    5
    In November of 2018, the fast radio burst FRB 181112 arrived here at Earth, but not before passing through the halo of the brighter foreground galaxy at the upper left. The burst passed through the galactic halo at a distance of approximately 95,000 light-years from the galaxy’s center. (ESO/X. PROCHASKA ET AL.)

    Giving off multiple pulses of under 40 microseconds apiece, FRB 181112 became the first burst to intercept a galactic halo.

    6
    This diagram shows how scientists determined the size of the halo of the Andromeda galaxy: by looking at absorption features from distant quasars, whose light either did or did not pass through the halo surrounding Andromeda. Where the halo is present, its gas absorbs some of the quasar light and darkens it across a very small wavelength range. By measuring the tiny dip in brightness at that specific range, scientists could tell how much gas is between us and each quasar. Doing this for more distant galaxies requires not only alternative techniques, but also serendipitous alignments. (NASA, ESA, AND A. FEILD (STSCI))

    Halos are their own enigmas, populated with cool, enriched gas extending for hundreds of thousands of light-years.

    7
    The galaxy Centaurus A has a dusty disk component in it, but is dominated by an elliptical shape and a halo of satellites: evidence of a highly evolved galaxy that has experienced many mergers in its past. It is the closest active galaxy to us, but accelerates away from our Local Group. Each galaxy ought to be unique in terms of the properties of the normal matter in its halo, but broad categorizations by galaxy type, age, mass, morphology, metallicity, and star formation history should be possible. (CHRISTIAN WOLF & SKYMAPPER TEAM/AUSTRALIAN NATIONAL UNIVERSITY)

    This gas is necessary for fueling future star-formation, but its physical properties remain largely unexplored.

    8
    A distant quasar will have a big bump (at right) coming from the Lyman-series transition in its hydrogen atoms. To the left, a series of lines known as a forest appears. These dips are due to the absorption of intervening gas clouds, and the fact that the dips have the strengths they do place constraints on many properties, such as the temperature of dark matter, which must be cold. However, this can also be used to constrain and/or measure the properties of any intervening galactic halos, including the gas within them. (M. RAUCH, ARAA V. 36, 1, 267 (1998))

    Absorption features previously revealed abundant, cool (~10,000 K), low-density gas in these halos.

    9
    FRB 181112 comes to us from a distance of nearly 6 billion light-years away. However, it passed through the halo of an intervening foreground galaxy perhaps a billion light-years closer: a rare event with just a 0.3% probability of occurring for an FRB this distant. The vertical line at about 1.5 Gpc (~5 billion light-years) represents where the FRB signal passed through the foreground galaxy’s dark matter (and normal matter) halos. (ESO/X. PROCHASKA ET AL.)

    But properties like total halo mass and hot (~1,000,000+ K) gas density are still undetermined.

    10
    The positions of the known fast radio bursts as of 2013, including four with identifiable host galaxies, helped prove the extragalactic origins of these objects. The remaining radio emissions show the locations of galactic sources like gas and dust. The absorption features, polarizations, and pulse-lengthening of the FRBs we receive can tell us information about our own galaxy’s halo, but a serendipitous close pass to a foreground extragalactic object is an even greater probe of the outer galactic halos present in our nearby Universe. (MPIFR/C. NG; SCIENCE/D. THORNTON ET AL.)

    When FRB 181112’s pulses traversed this galaxy’s halo, they were surprisingly unaffected.

    This burst revealed a tranquil halo for this Milky Way-like galaxy, with:

    very low-density gas,
    no turbulence,
    no clumps,
    and negligible magnetization.

    11
    In searching for the free electron density (x-axis) and the magnetic field parallel to the FRB’s propagation direction (y-axis), scientists measured numerous properties of the arriving radiation. Only constraints could be placed: the magnetic field can be no stronger than about one-millionth the field strength generated by planet Earth at its surface, or about one-millionth the strength of a typical refrigerator magnet. (ESO/X. PROCHASKA ET AL.)

    Are these properties universal to all Milky Way-like galaxies?

    12
    Within a dark matter halo, which could extend for millions of light-years, the normal matter collects towards the center. When the densities reach large enough amounts, owing either to gravitational collapse or the funneling of the gas into the disk/core, the gas will trigger the formation of new stars within. Having a foreground signal pass close to another galaxy is a rare, 1-in-300 odds event. (J. TURNER)

    More observations, with additional FRBs, hold the answers.

    13
    Fast Radio Bursts (FRBs) have opened up an entirely new realm of astronomy for the 21st century. This discovery marks the first time a burst has passed through a foreground galaxy, giving us indicators of the properties of the halo gas within it. (DANIELLE FUTSELAAR)

    See the full article here.


     
  • richardmitnick 11:30 am on October 7, 2019 Permalink | Reply
    Tags: "Has Google Actually Achieved ‘Quantum Supremacy’ With Its New Quantum Computer?", , Ethan Siegel   

    From Ethan Siegel: “Has Google Actually Achieved ‘Quantum Supremacy’ With Its New Quantum Computer?” 

    From Ethan Siegel
    Oct 5, 2019

    1
    Shown here is one component of a quantum computer (a dilution refrigerator), in a clean room, in a 2016 photo. Quantum computers would achieve Quantum Supremacy if they could complete any calculation significantly more quickly and efficiently than a classical computer can. That achievement won’t, on its own, however, let us achieve all of the dreams we have of what Quantum Computation could bring to humanity. (GETTY)

    A fully programmable quantum computer that can outperform any classical computer is right at the edge of today’s technology.

    Earlier this month, a new story leaked out: Google, one of the leading companies invested in the endeavor of quantum computing, claims to have just achieved Quantum Supremacy. While our classical computers ⁠ — like laptops, smartphones and even modern supercomputers ⁠ — are extraordinarily powerful, there are many scientific questions whose complexity goes far beyond their brute-force capabilities to calculate or simulate.

    But if we could build a powerful enough quantum computer, ⁠it’s possible that many problems that are impractical to solve with a classical computer would suddenly be solvable with a quantum computer. This idea, that quantum computers could efficiently solve a computation that a classical computer can only solve inefficiently, is known as Quantum Supremacy. Has Google actually done that? Let’s dive into the problem and find out.

    2
    The way solid-state storage devices work today is by the presence or absence of charged particles across a substrate/gate, which inhibits or allows the flows of current, thereby encoding a 0 or a 1. In principle, we can move from bits to qubits by having, instead of a gate with a permanent charge, a quantum bit that encodes either a 0 or 1 when measured, but can exist in a superposition of states otherwise. (E. SIEGEL / TREKNOLOGY)

    The idea of a classical computer is simple, and goes back to Alan Turing and the concept of a Turing machine. With information encoded into bits (i.e., 0s and 1s), you can apply a series of operations (such as AND, OR, NOT, etc.) to those bits to perform any arbitrary computations you like. Some of those computations might be easy; others might be hard; it depends on the problem. But, in theory, if you can design an algorithm to successfully perform a computation, no matter how computationally expensive it is, you can program it into a classical computer.

    However, a quantum computer is a little bit different. Instead of regular bits, which are always either 0 or 1, a quantum computer uses qubits, or the quantum analog of bits. As with most things, going to the quantum world from the classical world means we need to change how we view this particular physical system.

    3
    This ion trap, whose design is largely based on the work of Wolfgang Paul, is one of the early examples of an ion trap being used for a quantum computer. This 2005 photo is from a laboratory in Innsbruck, Austria, and shows the setup of one component of a now-outdated quantum computer. Ion trap computers have much slower computational times than superconducting qubit computers, but they have much longer coherence timescales to compensate. (MNOLF / WIKIMEDIA COMMONS)

    Instead of recording a 0 or 1 permanently as a bit, a qubit is a two-state quantum mechanical system, where the ground state represents 0 and the excited state represents 1. (For example, an electron can be spin up or spin down; a photon can be left-handed or right-handed in its polarization, etc.) When you prepare your system initially, as well as when you read out the final results, you’ll see only 0s and 1s for the values of qubits, just like with a classical computer and classical bits.

    But unlike a classical computer, when you’re actually performing these computational operations, the qubit isn’t in a determinate state, but rather lives in a superposition of 0s and 1s: similar to the simultaneously part-dead and part-alive Schrodinger’s cat. It’s only when the computations are over, and you read out your final results, that you measure what the true end-state is.
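    As a concrete (and heavily simplified) illustration of that “superposition until read out” behavior, here is a minimal state-vector sketch of a single qubit. This is just a toy simulation in NumPy, not how a physical superconducting qubit is actually programmed or controlled.

    import numpy as np

    ket0 = np.array([1.0, 0.0], dtype=complex)           # |0>: the ground state
    hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # a purely quantum operation

    state = hadamard @ ket0            # an equal superposition of |0> and |1>
    probabilities = np.abs(state)**2   # what you'd see across many measurements

    print("amplitudes :", state)          # [0.707, 0.707]
    print("P(0), P(1) :", probabilities)  # [0.5, 0.5]

    rng = np.random.default_rng(0)
    samples = rng.choice([0, 1], size=1000, p=probabilities)
    print("measured 1s:", samples.sum(), "out of 1000 shots")

    Each individual shot comes out as a definite 0 or 1, just like a classical bit; the quantum character only shows up in the statistics across many repetitions.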

    4
    In a traditional Schrodinger’s cat experiment, you do not know whether the outcome of a quantum decay has occurred, leading to the cat’s demise or not. Inside the box, the cat will be either alive or dead, depending on whether a radioactive particle decayed or not. If the cat were a true quantum system, the cat would be neither alive nor dead, but in a superposition of both states until observed. (WIKIMEDIA COMMONS USER DHATFIELD)

    There’s a big difference between classical computers and quantum computers: prediction, determinism and probability. As with all quantum mechanical systems, you cannot simply provide the initial conditions of your system and the algorithm of which operators act on it and then predict what the final state will be. Instead, you can only predict the probability distribution of what the final state will look like, and only by performing the critical experiment over and over again can you hope to reproduce that expected distribution.

    You might think that you need a quantum computer to simulate quantum behavior, but that’s not necessarily true. You can simulate quantum behavior on a quantum computer, but you should also be able to simulate it on a Turing machine: i.e., a classical computer.

    5
    Computer programs with enough computational power behind them can brute-force analyze a candidate Mersenne prime to see if it corresponds to a perfect number or not, using algorithms that run without flaw on a conventional (non-quantum) computer. For small numbers, this can be accomplished easily; for large numbers, this task is extremely difficult and requires ever more computational power. (C++ PROGRAM ORIGINALLY FROM PROGANSWER.COM)

    This is one of the most important ideas in all of computer science: the Church-Turing thesis. It states that anything that can be computed by a realistic computational device can also be computed by a Turing machine, and vice versa. That computational device could be a laptop, smartphone, supercomputer or even a quantum computer; a problem that could be solved by one such device should be solvable on all of them. This is generally accepted, but it tells you nothing about the speed or efficiency of that computation, nor about Quantum Supremacy in general.

    Instead, there’s another step that’s much more controversial: the extended Church-Turing thesis. It states that a Turing machine (like a classical computer) can always efficiently simulate any computational model, even to simulate an inherently quantum computation. If you could provide a counterexample to this — if you could demonstrate even one example where quantum computers were vastly more efficient than a classical computer — that would mean that Quantum Supremacy has been demonstrated.

    6
    IBM’s Four Qubit Square Circuit, a pioneering advance in computations, could someday lead to quantum computers powerful enough to simulate an entire Universe. But the field of quantum computation is still in its infancy, and demonstrating Quantum Supremacy, today, under any circumstances would be a remarkable milestone. (IBM RESEARCH)

    This is the goal of many teams working independently: to design a quantum computer that can out-perform a classical computer by a significant margin under at least one reproducible condition. The key to understanding how this is possible is the following: in a classical computer, you can subject any bit (or combination of bits) of information to a number of classical operations. This includes operations you’re familiar with, such as AND, OR, NOT, etc.

    But if you have a quantum computer, with qubits instead of bits, you’ll have a number of purely quantum operations you can perform in addition to the classical ones. These quantum operations obey particular rules that could be simulated on a classical computer, but only at great computational expense. On the other hand, they can be easily simulated by a quantum computer on one condition: that the time it takes to perform all of your computational operations is short enough compared to the coherence time of the qubits.

    7
    In a quantum computer, qubits that are in an excited state (a “1” state) will decay back to the ground state (a “0” state) on a timescale known as the coherence time. If one of your qubits decays before all of your computations are performed and you read out your answer, that will create an error. (GETTY)

    With all this in mind, the Google team had a paper that was briefly posted to NASA’s website (likely an early draft of what the final paper will be) that was later removed, but not before many scientists had a chance to read and download it. While the implications of their accomplishments have not yet been fully sorted out, here’s how you can imagine what they did.

    Imagine you have 5 bits or qubits of information: 0 or 1. They all start in a 0 state, but you prepare a state where two of these bits/qubits are excited to be in the “1” state. If your bits or qubits are perfectly controlled, you can prepare that state explicitly. For example, you can excite bit/qubit numbers 1 and 3, in which case your system’s physical state will be |10100>. You can then “pulse in” random operations to act on these bits/qubits, and you expect that what you’ll get is a specific probability distribution for the outcome.

    8
    A 9-qubit quantum circuit, shown in a labeled micrograph. Gray regions are aluminum, dark regions are where the aluminum is etched away, and colors have been added to distinguish the various circuit elements. For a computer like this, which uses superconducting qubits, the device must be kept supercooled at millikelvin temperatures to work as a true quantum computer, and operates appropriately only on timescales significantly below ~50 microseconds. (C. NEILL ET AL. (2017), ARXIV:1709.06678V1, QUANT-PH)

    The Google team chose a particular protocol for their experiment attempting to achieve Quantum Supremacy, demanding that the total number of excited bits/qubits (or the number of 1s) must be preserved after the application of an arbitrary number of operations. These operations are completely random, meaning that which bits/qubits are excited (1) or in the ground state (0) is free to vary; you’d need two “1” states and three “0” states for the five-qubit example. Without the purely quantum operations encoded in your computer, you’d expect that all 10 of the possible final states would appear with equal probability.

    (The ten possibilities are |11000>, |10100>, |10010>, |10001>, |01100>, |01010>, |01001>, |00110>, |00101>, and |00011>.)
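    Those ten states are just the ways of choosing which two of the five qubits are excited; a short sketch makes the counting explicit:

    from itertools import combinations

    n_qubits, n_excited = 5, 2
    states = ["".join("1" if i in ones else "0" for i in range(n_qubits))
              for ones in combinations(range(n_qubits), n_excited)]

    print(len(states), "states:", states)   # 10 states, from |11000> to |00011>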

    But if you have a quantum computer that behaves as a true quantum computer, you won’t get a flat distribution. Instead, some states should occur more frequently in a final-state outcome than others, and others should be very infrequent. This is a counterintuitive aspect of reality that only arises from quantum phenomena and the existence of purely quantum gates. We can simulate this phenomenon classically, but only at great computational cost.

    9
    When you perform an experiment on a qubit state that starts off as |10100> and you pass it through 10 coupler pulses (i.e., quantum operations), you won’t get a flat distribution with equal probabilities for each of the 10 possible outcomes. Instead, some outcomes will have abnormally high probabilities and some will have very low ones. Measuring the outcome of a quantum computer can determine whether you are maintaining the expected quantum behavior or losing it in your experiment. (C. NEILL ET AL. (2017), ARXIV:1709.06678V1, QUANT-PH)

    If we only applied the allowable classical gates, even with a quantum computer, we wouldn’t get the quantum effect out. Yet we can clearly see that the probability distribution we actually get isn’t flat: some possible end states are much more likely than the 10% you’d naively expect, and some are far less likely. The existence of these ultra-low and ultra-high probability states is a purely quantum phenomenon, and the odds that you’ll get these low-probability and high-probability outcomes (instead of a flat distribution) are an important signature of quantum behavior.

    In the field of quantum computing, the probabilities of the individual final states should follow a specific probability distribution: the Porter-Thomas distribution. If your quantum computer were perfect, you could perform as many operations as you wanted for as long as you wanted, and then read out the outcomes to see if your computer followed the Porter-Thomas distribution, as expected.
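    To see where that distribution comes from, here is a toy sketch: the outcome probabilities of a random pure state over N basis states approximately follow an exponential, which is the Porter-Thomas form. A random complex vector stands in here for the output of a sufficiently scrambling quantum circuit; this illustrates the statistics, not Google’s actual circuits.

    import numpy as np

    rng = np.random.default_rng(42)
    n_qubits = 9
    N = 2**n_qubits

    # A random pure state: complex Gaussian amplitudes, normalized.
    amps = rng.normal(size=N) + 1j * rng.normal(size=N)
    amps /= np.linalg.norm(amps)
    probs = np.abs(amps)**2        # outcome probabilities, mean = 1/N

    x = N * probs                  # rescale so Porter-Thomas predicts P(x) ~ exp(-x)
    print("mean of N*p          :", x.mean())         # exactly 1
    print("fraction with N*p > 1:", (x > 1).mean())   # ~exp(-1) ~ 0.37
    print("fraction with N*p > 3:", (x > 3).mean())   # ~exp(-3) ~ 0.05

    A decohered, effectively classical device drifts back toward the flat distribution (every rescaled probability near 1), so the presence of these ultra-likely and ultra-unlikely outcomes is exactly the quantum signature being tested.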

    10
    The Porter-Thomas distribution, shown here for 5, 6, 7, 8, and 9 qubits, plots probabilities for achieving certain results in the probability distribution dependent on the number of qubits and possible states. Note the straight line, which indicates the expected quantum results. If the total amount of time it takes to run your quantum circuit is too long, you get a classical result: exemplified by the short green lines, which definitely don’t follow the Porter-Thomas distribution. (C. NEILL ET AL. (2017), ARXIV:1709.06678V1, QUANT-PH)

    Practically, though, quantum computers aren’t perfect. Any quantum system, no matter how it’s prepared (the Google team used superconducting qubits, but other quantum computers, using quantum dots or ion traps, for example, are also possible), will have a coherence time: the amount of time you can expect a qubit prepared in an excited state (i.e., 1) to remain in that state. Beyond that time, it should decay back to the ground state, or 0.

    This is important, because applying a quantum operation to your system takes a finite amount of time, known as the gate time. The gate time must be very short compared to the coherence timescale, otherwise your state might decay and your final state won't give you the desired outcome. Also, the more qubits you have, the greater the complexity of your device and the higher the probability of error-introducing crosstalk between qubits. In order to have an error-free quantum computation, you must apply all of your quantum gates to the full suite of qubits before the system decoheres.

    Superconducting qubits remain stable only for ~50 microseconds. Even with a gate time of ~20 nanoseconds, errors accumulate with every operation, so you can only expect to perform a few dozen sequential operations, at most, before decoherence ruins your experiment and gives you the dreaded flat distribution, destroying the quantum behavior we were after.
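
    A back-of-the-envelope budget makes the point (the ~50 microsecond coherence time and ~20 nanosecond gate time come from the text above; the per-gate error rate is an illustrative assumption, not a measured value):

    ```python
    # Rough circuit-depth budget for a superconducting quantum computer.
    coherence_time_us = 50.0   # ~50 microseconds of qubit coherence (from the text)
    gate_time_ns = 20.0        # ~20 nanoseconds per gate (from the text)
    error_per_gate = 0.005     # assumed ~0.5% error per gate (illustrative only)

    gates_per_window = coherence_time_us * 1e3 / gate_time_ns
    print(f"Gate times that fit in the coherence window: ~{gates_per_window:.0f}")

    # Estimated circuit fidelity after d gates, assuming independent errors:
    for depth in (10, 20, 50, 100):
        fidelity = (1 - error_per_gate) ** depth
        print(f"depth {depth:3d}: estimated fidelity ~ {fidelity:.2f}")
    ```

    Because the errors compound multiplicatively, the useful circuit depth ends up far below the raw ratio of coherence time to gate time.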

    11
    This idealized five qubit setup, where the initial circuit is prepared with qubits 1 and 3 in the initial state, is subject to 10 independent pulses (or quantum gates) before yielding a final-state result. If the total time spent passing through the quantum gates is much shorter than the coherence/decoherence time of the system, we can expect to achieve the desired quantum computational results. If not, we cannot perform the calculation on a current quantum computer. (C. NEILL ET AL. (2017), ARXIV:1709.06678V1, QUANT-PH)

    The problem that the Google scientists solved with their 53-qubit computer was not a useful problem in any regard. In fact, the setup was specifically engineered to be easy for quantum computers and computationally very expensive for classical ones. The way they finessed this was to make a system of n qubits, which requires on the order of 2^n complex amplitudes (an exponentially large amount of memory) to simulate on a classical computer, and to pick operations that are as computationally expensive as possible for a classical computer.
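
    To see why the classical side of that comparison blows up, here is a minimal sketch of the memory needed to store a full n-qubit state vector, assuming 16 bytes per double-precision complex amplitude:

    ```python
    # Memory needed for brute-force state-vector simulation of n qubits.
    BYTES_PER_AMPLITUDE = 16  # assuming double-precision complex numbers

    for n in (30, 40, 50, 53):
        amplitudes = 2 ** n
        size_gb = amplitudes * BYTES_PER_AMPLITUDE / 1e9
        print(f"{n:2d} qubits: {amplitudes:.2e} amplitudes -> {size_gb:,.1f} gigabytes")
    # 53 qubits comes out to roughly 1.4e8 gigabytes: well over a hundred petabytes.
    ```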

    The original algorithm put forth by a collaboration of scientists, including many on the current Google team, required a 72-qubit quantum computer to demonstrate Quantum Supremacy. Because the team couldn't achieve that just yet, they went back to the 53-qubit computer, but replaced an easy-to-simulate quantum gate (CZ) with another quantum gate: the fSim gate (a combination of the CZ and iSWAP gates), which is more computationally expensive for a classical computer to simulate.

    12
    Different types of quantum gates exhibit various fidelities (or the percentage of error-free gates) depending on the type of gate chosen, and also exhibit various computational expenses for classical computers. An older attempt at Quantum Supremacy used CZ gates and required 72 qubits; using more iSWAP-like gates enabled the Google team to achieve Quantum Supremacy with only 53 qubits. (NATURE PHOTONICS, VOLUME 12, PAGES 534–539 (2018))
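
    For concreteness, here is a small NumPy sketch of the gates involved, using one common parameterization of the fSim gate (the exact matrix and sign conventions vary between papers and software libraries, so treat this as an assumption rather than the team's definition): a "swap angle" θ that generates iSWAP-like behavior, plus a conditional phase φ on the |11> state.

    ```python
    import numpy as np

    def fsim(theta, phi):
        """fSim(theta, phi) in one common convention: an iSWAP-like swap angle
        theta plus a conditional phase phi applied to the |11> state."""
        return np.array([
            [1, 0, 0, 0],
            [0, np.cos(theta), -1j * np.sin(theta), 0],
            [0, -1j * np.sin(theta), np.cos(theta), 0],
            [0, 0, 0, np.exp(-1j * phi)],
        ])

    CZ = np.diag([1, 1, 1, -1]).astype(complex)

    # In this convention, fSim(0, pi) reduces to the CZ gate...
    print(np.allclose(fsim(0.0, np.pi), CZ))       # True

    # ...while fSim(pi/2, 0) is an iSWAP-like gate (up to sign conventions).
    print(np.round(fsim(np.pi / 2, 0.0), 3))
    ```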

    There is a long-shot hope for those who want to preserve the extended Church-Turing thesis: perhaps, with a clever enough computational algorithm, we could lower the computational time for this problem on a classical computer. This seems unlikely, but it's the one scenario that could overturn what appears to be the first achievement of Quantum Supremacy.

    For now, though, the Google team appears to have achieved Quantum Supremacy for the first time: by solving this one particular (and probably not practically useful) mathematical problem. They performed this computational task with a quantum computer in a much faster time than even the biggest, most powerful (classical) supercomputer in the country could. But achieving useful Quantum Supremacy would enable us to:

    make high-performance quantum chemistry and quantum physics calculations,
    replace all classical computers with superior quantum computers,
    and run Shor’s algorithm for arbitrarily large numbers.

    Quantum Supremacy may have arrived; useful Quantum Supremacy is still far from being achieved. For example, if you wanted to factor a 20-digit semiprime number, Google’s quantum computer couldn’t solve that problem at all. Your off-the-shelf laptop, however, could do it in milliseconds.
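
    A quick classical sanity check (using SymPy as one convenient choice of tooling; the exact timing depends on your machine, and "milliseconds" may be optimistic, but it is fast either way):

    ```python
    import time
    from sympy import nextprime, factorint

    # Build a roughly 20-digit semiprime from two roughly 10-digit primes.
    p = nextprime(10**10)       # the smallest prime above 10^10
    q = nextprime(3 * 10**9)    # the smallest prime above 3*10^9
    n = p * q

    start = time.perf_counter()
    factors = factorint(n)      # classical factorization
    elapsed = time.perf_counter() - start

    print(f"n = {n} ({len(str(n))} digits)")
    print(f"factors: {factors}")
    print(f"time: {elapsed:.4f} s")
    ```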

    13
    The Sycamore processor, which is a rectangular array of 54 qubits, each connected to its four nearest neighbors with couplers, contains one inoperable qubit, leading to an effective 53-qubit quantum computer. The optical image shown here illustrates the scale and color of the Sycamore chip. (GOOGLE AI QUANTUM AND COLLABORATORS, RETRIEVED FROM NASA)

    Progress in the world of quantum computing is astounding, and despite the claims of its detractors, systems with greater numbers of qubits are undoubtedly on the horizon. When successful quantum error-correction arrives (which will certainly require many more qubits, along with solutions to a number of other outstanding issues), we’ll be able to extend the coherence timescale and perform even more in-depth calculations. As the Google team themselves noted,

    ” Our experiment suggests that a model of computation may now be available that violates [the extended Church-Turing thesis]. We have performed random quantum circuit sampling in polynomial time with a physically realized quantum processor (with sufficiently low error rates), yet no efficient method is known to exist for classical computing machinery.”

    With the creation of the very first programmable quantum computer that can efficiently perform a calculation on qubits that cannot be efficiently carried out on a classical computer, Quantum Supremacy has officially arrived. Later this year, the Google team will surely publish this result and be lauded for their extraordinary accomplishment. But our biggest dreams of quantum computing are still a long way off. It’s more important than ever, if we want to get there, to keep on pushing the frontiers as fast and far as possible.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 2:45 pm on September 30, 2019 Permalink | Reply
    Tags: , , , , , Ethan Siegel, Milky Way mapping   

    From Ethan Siegel: “This Is What The Milky Way’s Magnetic Field Looks Like” 

    From Ethan Siegel
    Sep 30 . 2019

    1
    The dust in the Milky Way, shown in darker and redder colors, traces regions where new star formation is taking place. These dusty regions are correlated with the magnetic fields present in our galaxy, and the background light gets polarized in a measurable way as a result. (ESA/PLANCK COLLABORATION. ACKNOWLEDGMENT: M.-A. MIVILLE-DESCHÊNES, CNRS — INSTITUT D’ASTROPHYSIQUE SPATIALE, UNIVERSITÉ PARIS-XI, ORSAY, FRANCE)

    ESA/Planck 2009 to 2013

    If you thought the Planck satellite just made temperature maps of the cosmic microwave background, this will astound you.

    The Milky Way, to human eyes, appears as simply a mix of stars and light-blocking dust.

    2
    A map of star density in the Milky Way and surrounding sky, clearly showing the Milky Way, the Large and Small Magellanic Clouds (our two largest satellite galaxies), and if you look more closely, NGC 104 to the left of the SMC, NGC 6205 slightly above and to the left of the galactic core, and NGC 7078 slightly below. In visible light, only starlight and the presence of light-blocking dust is revealed, but other wavelengths have the capacity to reveal fascinating and informative structures far beyond what the optical part of the spectrum can. (ESA/GAIA)

    ESA/GAIA satellite

    However, a glimpse in additional wavelengths reveals enormously rich, detailed structures.

    3
    This ultra-detailed view of the Milky Way spans many different wavelengths of light, and as such it can reveal gas, charged particles, many types of dust, and many other signals that appear in the microwave and millimeter wavelength ranges. The Planck satellite provides us with our best all-sky view of the cosmos in this wavelength range. (ESA/NASA/JPL-CALTECH)

    NASA/ESA Hubble Telescope

    Observations show galactic foreground signals combined with cosmic signals originating way back from the Big Bang.

    4
    The Planck satellite constructed all-sky maps of the sky in nine different wavelengths of light, at frequencies spanning from 30 GHz all the way up to 857 GHz: frequencies that can only be observed from space. Although the foreground features in the Milky Way are quite prominent, the main science goal of Planck was to analyze the background light: the cosmic microwave background. (ESA AND THE PLANCK COLLABORATION)

    Leveraging observations across many different wavelengths, Planck scientists identified the cause and source of many galactic foregrounds.

    5
    The signal of the Milky Way galaxy as revealed by the Planck satellite during its first year of data-taking observations. Planck is now 10 years old, and understanding which components of the Planck signal are galactic versus extragalactic is of paramount importance to extracting correct information about our Universe. (ESA/ LFI & HFI CONSORTIA)

    The Milky Way’s gas, dust, stars and more create fascinating, measurable structures.

    6
    The fluctuations in the Cosmic Microwave Background, as seen by Planck. There is no evidence for any repeating structures, and although there is some uncertainty in how accurate and comprehensive our foreground subtraction is, the success of the Planck data in matching and superseding other CMB observations like COBE, Boomerang, WMAP, AFI and others tells us that if we’re not on the perfectly correct track, we’re extremely close. (ESA AND THE PLANCK COLLABORATION)

    Cosmic Microwave Background NASA/WMAP

    NASA/WMAP 2001 to 2010

    COBE/CMB

    NASA/ Cosmic Background Explorer COBE 1989 to 1993.

    Subtracting out all the foregrounds yields the cosmic background signal, which possesses tiny temperature imperfections.

    7
    This map is of the galactic magnetic foreground of the Milky Way. The contour lines show the direction of the magnetic field projected on the plane of the sky, while light/dark regions correspond to fully-unpolarized/fully-polarized regions of emission from the galaxy. (ESA AND THE PLANCK COLLABORATION)

    But the galactic foreground isn’t useless; it’s a map unto itself.

    8
    The all-sky map of the galactic foreground emissions overlaid with polarization and magnetic field data. This is the first accurate, high-resolution, all-sky map of our galaxy’s magnetic field and foreground structures. (ESA AND THE PLANCK COLLABORATION)

    All background light gets polarized by these foregrounds, enabling the reconstruction of our galaxy’s magnetic field.
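
    In practice, maps like these are built from the Stokes parameters I, Q, and U measured in each sky pixel. A minimal sketch of the standard relations follows (toy numbers only, and note that the sign convention for the polarization angle differs between the IAU and HEALPix/Planck conventions): for polarized thermal dust emission, the plane-of-sky magnetic field orientation is conventionally taken to be perpendicular to the measured polarization angle.

    ```python
    import numpy as np

    # Toy Stokes parameters for three sky pixels (made-up, purely illustrative).
    I = np.array([10.0, 12.0,  8.0])   # total intensity
    Q = np.array([ 0.5, -0.3,  0.1])   # Stokes Q
    U = np.array([ 0.2,  0.4, -0.2])   # Stokes U

    # Polarization fraction and polarization angle (standard definitions;
    # the sign convention for the angle varies between IAU and HEALPix usage).
    p_frac = np.sqrt(Q**2 + U**2) / I
    psi = 0.5 * np.arctan2(U, Q)

    # For polarized dust emission, the plane-of-sky B-field orientation
    # is the polarization angle rotated by 90 degrees.
    b_angle_deg = np.degrees(psi + np.pi / 2)

    print("polarization fraction:", np.round(p_frac, 3))
    print("B-field orientation (deg):", np.round(b_angle_deg, 1))
    ```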

    9
    The alignment of neutral hydrogen (white lines) with the polarization data from the CMB (gradients) is an inexplicable surprise, unless there’s an additional galactic foreground. In theory, only ionized hydrogen should align with the polarization data. This surprise is one of the very few observations where the Planck data exhibit tension with other measurements, such as radio pencil-beam data taken at Arecibo. (CLARK ET AL., PHYSICAL REVIEW LETTERS, VOLUME 115, ISSUE 24, ID.241302 (2015))


    NAIC Arecibo Observatory operated by University of Central Florida, Yang Enterprises and UMET, Altitude 497 m (1,631 ft).

    Quite surprisingly, neutral hydrogen appears to be aligned with the CMB’s polarization.

    10
    As seen in yellow, a bridge of hot gas (detected by Planck) connects the galaxy clusters Abell 399 and Abell 401. The Planck data, when combined with X-ray data (in red) and LOFAR radio data (in blue) reveals a bridge of relativistic electrons connecting these two clusters across a distance of 10 million light-years. This is the largest-scale magnetic field ever detected in our Universe, and shows how successful Planck can be for reconstructing magnetic fields. (ESA/PLANCK COLLABORATION / STSCI/DSS (L); M. MURGIA / INAF, BASED ON F. GOVONI ET AL., 2019, SCIENCE (R))

    ASTRON LOFAR European Map


    ASTRON LOFAR Radio Antenna Bank, Netherlands

    However, Planck data on distant galaxies match well with the reconstructed magnetic fields.

    11
    The current models of galactic (and other) foregrounds along with the cosmic microwave background. There is some evidence that free-free emission (from free electrons) has been modeled insufficiently, but other observations indicate that we may be spot on. This is a minor issue, but one that has not been conclusively resolved. (ESA AND THE PLANCK COLLABORATION)

    12
    A close-up view of one of many regions of our galaxy, with the dustiest regions shown in red. The dark red regions are locations where new stars are forming, and the contour lines that show the reconstructed magnetic fields from our galaxy illustrate the interplay of star-forming regions with these fields. (ESA/PLANCK COLLABORATION. ACKNOWLEDGMENT: M.-A. MIVILLE-DESCHÊNES, CNRS — INSTITUT D’ASTROPHYSIQUE SPATIALE, UNIVERSITÉ PARIS-XI, ORSAY, FRANCE)

    What’s certain is that dust grains correlate with these giant magnetic structures.

    13
    A quick look at any zoomed-in region of the galaxy shows that magnetic fields are not coherent and unidirectional on scales of the Milky Way, but rather only on the scales of individual star clusters. Beyond distance scales of a few dozen light-years, magnetic fields flip and switch directions, dominated by local, rather than galaxy-scale, dynamics. (ESA/PLANCK COLLABORATION. ACKNOWLEDGMENT: M.-A. MIVILLE-DESCHÊNES, CNRS — INSTITUT D’ASTROPHYSIQUE SPATIALE, UNIVERSITÉ PARIS-XI, ORSAY, FRANCE)

    The link is through star-formation, which occurs inside these obscured regions.

    14
    Although an image like this might remind you of Van Gogh’s famous ‘Starry Night’ painting, this doesn’t illustrate atmospheric turbulence at all, since 100% of the data used in creating this image was taken from space. These lines represent magnetic fields and polarization instead, which illuminate the Universe in an entirely different way. (ESA/PLANCK COLLABORATION. ACKNOWLEDGMENT: M.-A. MIVILLE-DESCHÊNES, CNRS — INSTITUT D’ASTROPHYSIQUE SPATIALE, UNIVERSITÉ PARIS-XI, ORSAY, FRANCE)

    Extragalactic light is unavoidably affected by our galactic magnetic fields, enabling the construction of these beautiful maps.

    15
    Even in the direction that points directly away from the galactic center, the plane of our Milky Way still contains dusty, star-forming regions, still generates its own magnetic field, and still polarizes any background light that passes through this region of space. In order to understand the Universe, we have to model and account for every single component successfully. (ESA/PLANCK COLLABORATION. ACKNOWLEDGMENT: M.-A. MIVILLE-DESCHÊNES, CNRS — INSTITUT D’ASTROPHYSIQUE SPATIALE, UNIVERSITÉ PARIS-XI, ORSAY, FRANCE)

    See the full article here .


     
  • richardmitnick 12:25 pm on September 29, 2019 Permalink | Reply
    Tags: "Ask Ethan: Why Are There Only Three Generations Of Particles?", , , Ethan Siegel, , , ,   

    From Ethan Siegel: “Ask Ethan: Why Are There Only Three Generations Of Particles?” 

    From Ethan Siegel
    Sep 28, 2019

    1
    The particles of the standard model, with masses (in MeV) in the upper right. The Fermions make up the left three columns (three generations); the bosons populate the right two columns. If a speculative idea like mirror-matter is correct, there may be a mirror-matter counterpart for each of these particles. (WIKIMEDIA COMMONS USER MISSMJ, PBS NOVA, FERMILAB, OFFICE OF SCIENCE, UNITED STATES DEPARTMENT OF ENERGY, PARTICLE DATA GROUP)

    With the discovery of the Higgs boson, the Standard Model is now complete. Can we be sure there isn’t another generation of particles out there?

    The Universe, at a fundamental level, is made up of just a few different types of particles and fields that exist amidst the spacetime fabric that composes otherwise empty space. While there may be a few components of the Universe that we don’t understand, like dark matter and dark energy, the normal matter and radiation are not only well-understood, they’re perfectly well-described by our best theory of particles and their interactions: the Standard Model. There’s an intricate but ordered structure to the Standard Model, with three “generations” of particles. Why three? That’s what Peter Brouwer wants to know, asking:

    Particle families appear as a set of 3, characterised by the electron, muon and tau families. The last 2 being unstable and decaying. So my question is: Is it possible that higher order particles exist? And if so, what energies might such particles be found? If not, how do we know that they don’t exist.

    This is a big question. Let’s dive in.

    2
    The particles and antiparticles of the Standard Model have now all been directly detected, with the last holdout, the Higgs Boson, falling at the LHC earlier this decade. All of these particles can be created at LHC energies, and the masses of the particles lead to fundamental constants that are absolutely necessary to describe them fully. These particles can be well-described by the physics of the quantum field theories underlying the Standard Model, but they do not describe everything, like dark matter. (E. SIEGEL / BEYOND THE GALAXY)

    There are two classes of particles in the Standard Model: the fermions, which have half-integer spins (±½, ±1½, ±2½, etc.) and where every fermion has an antimatter (anti-fermion) counterpart, and the bosons, which have integer spins (0, ±1, ±2, etc.) and are neither matter nor antimatter. The bosons simply are what they are: 1 Higgs boson, 1 boson (photon) for the electromagnetic force, 3 bosons (W+, W- and Z) for the weak force, and 8 gluons for the strong force.

    The bosons are the force-carrying particles that enable the fermions to interact, but the fermions (and anti-fermions) carry fundamental charges that dictate which forces (and bosons) they’re affected by. While the quarks couple to all three forces, the leptons (and anti-leptons) don’t feel the strong force, and the neutrinos (and anti-neutrinos) don’t feel the electromagnetic force, either.

    3
    This diagram displays the structure of the standard model (in a way that displays the key relationships and patterns more completely, and less misleadingly, than in the more familiar image based on a 4×4 square of particles). In particular, this diagram depicts all of the particles in the Standard Model (including their letter names, masses, spins, handedness, charges, and interactions with the gauge bosons: i.e., with the strong and electroweak forces). It also depicts the role of the Higgs boson, and the structure of electroweak symmetry breaking, indicating how the Higgs vacuum expectation value breaks electroweak symmetry, and how the properties of the remaining particles change as a consequence. Note that the Z boson couples to both quarks and leptons, and can decay through neutrino channels. (LATHAM BOYLE AND MARDUS OF WIKIMEDIA COMMONS)

    But what’s perhaps most puzzling about the Standard Model is that unlike the bosons, there are “copies” of the fermions. In addition to the fermionic particles that make up the stable or quasi-stable matter we’re familiar with:

    protons and neutrons (made of bound states of up-and-down quarks along with the gluons),
    atoms (made of atomic nuclei, which are themselves made of protons and neutrons, as well as electrons),
    and electron neutrinos and electron antineutrinos (created in the nuclear reactions that involve building up to or decaying down from pre-existing nuclear combinations),

    there are two additional generations of heavier particles for each of these. In addition to the up-and-down quarks and antiquarks in 3 colors apiece, there are also the charm-and-strange quarks plus the top-and-bottom quarks. In addition to the electron, the electron neutrino and their antimatter counterparts, there are also the muon and muon neutrino, plus the tau and the tau neutrino.

    4
    A four-muon candidate event in the ATLAS detector at the Large Hadron Collider. (Technically, this decay involves two muons and two anti-muons.) The muon/anti-muon tracks are highlighted in red, as the long-lived muons travel farther than any other unstable particle. The energies achieved by the LHC are sufficient for creating Higgs bosons; previous electron-positron colliders could not achieve the necessary energies. (ATLAS COLLABORATION/CERN)

    For some reason, there are three copies, or generations, of fermionic particles that show up in the Standard Model. The heavier versions of these particles don’t spontaneously arise from conventional particle interactions, but will show up at very high energies.

    In particle physics, you can create any particle-antiparticle pair at all so long as you have enough available energy at your disposal. How much energy do you need? Whatever the mass of your particle is, you need enough energy to create both it and its partner antiparticle (which always has the same mass as its particle counterpart). Einstein’s E = mc², which details the conversion between mass and energy, tells you the price: so long as you have enough energy to make them, you can. This is exactly how we create particles of all types from high-energy collisions, like the kind occurring in cosmic rays or at the Large Hadron Collider.
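
    As a simple worked example (rest masses below are standard, rounded values; the thresholds are the minimum energies in the center-of-momentum frame):

    ```python
    # Minimum energy to create a particle-antiparticle pair: E = 2 * m * c^2.
    # Rest-mass energies are standard, rounded values (in MeV).
    rest_mass_mev = {
        "electron": 0.511,
        "muon": 105.7,
        "tau": 1776.9,
        "top quark": 173_000.0,
    }

    for particle, mass in rest_mass_mev.items():
        threshold = 2 * mass  # energy for the particle plus its antiparticle
        print(f"{particle:>10}: at least {threshold:,.1f} MeV ({threshold / 1000:,.2f} GeV)")
    ```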

    Cosmic rays produced by high-energy astrophysics sources (ASPERA collaboration – AStroParticle ERAnet)

    LHC

    CERN map


    CERN LHC Maximilien Brice and Julien Marius Ordan


    CERN LHC particles

    THE FOUR MAJOR PROJECT COLLABORATIONS

    ATLAS

    CERN ATLAS Image Claudia Marcelloni CERN/ATLAS

    ALICE

    CERN/ALICE Detector


    CMS
    CERN CMS New

    LHCb
    CERN LHCb New II

    5
    A decaying B-meson, as shown here, may decay more frequently to one type of lepton pair than the other, contradicting Standard Model expectations. If this is the case, we’ll either have to modify the Standard Model or incorporate a new parameter (or set of parameters) into our understanding of how these particles behave, as we needed to do when we discovered that neutrinos had mass. (KEK / BELLE COLLABORATION)

    KEK-Accelerator Laboratory

    KEK Belle detector, at the High Energy Accelerator Research Organisation (KEK) in Tsukuba, Ibaraki Prefecture, Japan

    By the same token, whenever you create one of these unstable quarks or leptons (leaving neutrinos and antineutrinos aside), there’s always the possibility that they’ll decay to a lighter particle through the weak interactions. Because all the Standard Model fermions couple to the weak force, it’s only a matter of a fraction-of-a-second before any of the following particles — strange, charm, bottom, or top quarks, as well as the muon or tau leptons — decay down to that stable first generation of particles.

    As long as it’s energetically allowed and not forbidden by any of the other quantum rules or symmetries that exist in our Universe, the heavier particles will always decay in this fashion. The big question, though, of why there are three generations, is driven not by theoretical motivations, but by experimental results.

    6
    The first muon ever detected, along with other cosmic ray particles, was determined to be the same charge as the electron, but hundreds of times heavier, due to its speed and radius of curvature. The muon was the first of the heavier generations of particles to be discovered, dating all the way back to the 1930s. (PAUL KUNZE, IN Z. PHYS. 83 (1933))

    The muon is the lightest of the fermions to extend beyond the first generation of particles, and caused the famed physicist I. I. Rabi, when shown the evidence for this particle, to exclaim, “Who ordered that?” As particle accelerators became more ubiquitous and more energetic over the next decades, particles like mesons and baryons, including ones containing strange quarks and later charm quarks, soon surfaced.

    However, it was only with the advent of the Mark I experiment at SLAC in the 1970s (which co-discovered the charm quark) that evidence for a third generation arose: in the form of the tau (and anti-tau) lepton. That 1976 discovery is now 43 years old. In the time since, we’ve directly detected every particle in the Standard Model, including all of the quarks and neutrinos and anti-neutrinos. Not only have we found them, but we’ve measured their particle properties exquisitely.

    7
    The rest masses of the fundamental particles in the Universe determine when and under what conditions they can be created, and also describe how they will curve spacetime in General Relativity. The properties of particles, fields, and spacetime are all required to describe the Universe we inhabit. (FIG. 15–04A FROM UNIVERSE-REVIEW.CA)

    Based on all we now know, we should be able to predict how these particles interact with themselves and one another, how they decay, and how they contribute to things like cross-sections, scattering amplitudes, branching ratios and event rates for any particle we choose to examine.

    The structure of the Standard Model is what enables us to do these calculations, and the particle content of the Standard Model enables us to predict which light particles the heavier ones will decay into. Perhaps the strongest example is the Z-boson, the neutral particle that mediates the weak force. The Z-boson is the third most massive particle known, with a rest mass of 91.187 GeV/c²: nearly 100 times more massive than a proton. Every time we create a Z-boson, we can experimentally measure the probability that it will decay into any particular particle or combinations of particles.

    8
    At LEP, the large electron-positron collider, thousands upon thousands of Z-bosons were created, and the decays of those Z particles were measured to reconstruct what fraction of Z-bosons became various quark and lepton combinations. The results clearly indicate that there are no fourth-generation particles below 45 GeV/c² in energy. (CERN / ALEPH COLLABORATION)

    CERN LEP Collider

    9
    For detecting the direction and momenta of charged particles with extreme accuracy, the ALEPH detector had at its core a time projection chamber, for years the world’s largest. In the foreground from the left, Jacques Lefrancois, Jack Steinberger, Lorenzo Foa and Pierre Lazeyras. ALEPH was an experiment on the LEP accelerator, which studied high-energy collisions between electrons and positrons from 1989 to 2000.

    By examining what fraction of the Z-bosons we create in accelerators decay to:

    electron/positron pairs,
    muon/anti-muon pairs,
    tau/anti-tau pairs,
    and “invisible” channels (i.e., neutrinos),

    we can determine how many generations of particles there are. As it turns out, roughly 1-in-30 Z-bosons decays to each of electron/positron, muon/anti-muon, and tau/anti-tau pairs, while a total of about 1-in-5 Z-boson decays are invisible. According to the Standard Model and our theory of particles and their interactions, that translates to 1-in-15 Z-bosons (~6.66% odds) decaying to each of the three types of neutrinos that exist.
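
    The arithmetic behind that statement is short enough to write down explicitly (using the rough fractions quoted above rather than the precise LEP values):

    ```python
    # Counting neutrino generations from Z-boson decays.
    invisible_fraction = 1 / 5      # ~20% of Z decays are invisible (to neutrinos)
    per_neutrino_fraction = 1 / 15  # Standard Model prediction per neutrino species

    n_generations = invisible_fraction / per_neutrino_fraction
    print(f"Inferred number of light neutrino generations: {n_generations:.1f}")  # -> 3.0
    ```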

    These results tell us that if there is a fourth (or more) generation of particles, every single one of them, including the leptons and neutrinos, has a mass that’s greater than 45 GeV/c²: a threshold that only the Z, W, Higgs, and top particles are known to exceed.

    10
    The final results from many different particle accelerator experiments have definitively showed that the Z-boson decays to charged leptons about 10% of the time, neutral leptons about 20%, and hadrons (quark-containing particles) about 70% of the time. This is consistent with 3 generations of particles and no other number. (CERN / LEP COLLABORATION)

    Now, there’s nothing forbidding a fourth generation from existing and being much, much heavier than any of the particles we’ve observed so far; theoretically, it’s very much allowed. But experimentally, these collider results aren’t the only thing constraining the number of generational species in the Universe; there’s another constraint: the abundance of the light elements that were created in the early stages of the Big Bang.

    When the Universe was approximately one second old, it contained only protons, neutrons, electrons (and positrons), photons, and neutrinos and anti-neutrinos among the Standard Model particles. Over those first few minutes, protons and neutrons fused to form deuterium, helium-3, helium-4, and lithium-7.

    11
    The predicted abundances of helium-4, deuterium, helium-3 and lithium-7 as predicted by Big Bang Nucleosynthesis, with observations shown in the red circles. Note the key point here: a good scientific theory (Big Bang Nucleosynthesis) makes robust, quantitative predictions for what should exist and be measurable, and the measurements (in red) line up extraordinarily well with the theory’s predictions, validating it and constraining the alternatives. The curves and the red line are for 3 neutrino species; more or fewer lead to results that conflict with the data severely, particularly for deuterium and helium-3. (NASA / WMAP SCIENCE TEAM)

    But how much of each will form? That depends on just a few parameters; the baryon-to-photon ratio is commonly treated as the only parameter we vary when predicting these abundances.

    But we can vary any number of parameters we typically assume are fixed, such as the number of neutrino generations. From Big Bang Nucleosynthesis, as well as from the imprint of neutrinos on the leftover radiation glow from the Big Bang (the cosmic microwave background), we can conclude that there are three — not two or fewer and not four or more — generations of particles in the Universe.

    12
    The fit of the number of neutrino species required to match the CMB fluctuation data. Since we know there are three neutrino species, we can use this information to infer the temperature-equivalent of massless neutrinos at these early times, and arrive at a number: 1.96 K, with an uncertainty of just 0.02 K. (BRENT FOLLIN, LLOYD KNOX, MARIUS MILLEA, AND ZHEN PAN (2015) PHYS. REV. LETT. 115, 091301)

    It is eminently possible that there are more particles out there than the Standard Model, as we know it, presently predicts. In fact, given all the components of the Universe that aren’t accounted for in the Standard Model, from dark matter to dark energy to inflation to the origin of the matter-antimatter asymmetry, it’s practically unreasonable to conclude that there aren’t additional particles.

    But if the additional particles fit into the structure of the Standard Model as an additional generation, there are tremendous constraints. They could not have been created in great abundance during the early Universe. None of them can be less massive than 45.6 GeV/c². And they could not imprint an observable signature on the cosmic microwave background or in the abundance of the light elements.

    Experimental results are the way we learn about the Universe, but the way those results fit into our most successful theoretical frameworks is how we conclude what else does and doesn’t exist in our Universe. Unless a future accelerator result surprises us tremendously, three generations is all we get: no more, no less, and nobody knows why.

    See the full article here .


     