Tagged: Ethan Siegel

  • richardmitnick 11:42 am on May 16, 2019 Permalink | Reply
    Tags: Ethan Siegel

    From Ethan Siegel: “We Have Now Reached The Limits Of The Hubble Space Telescope” 

    From Ethan Siegel
    May 16, 2019

    The Hubble Space Telescope, as imaged during its final servicing mission. The only way it can point itself is via the internal spinning devices (its reaction wheels) that allow it to change its orientation and hold a stable position. But what it can see is determined by its instruments, mirror, and design limitations. It has reached those ultimate limits; to go beyond them, we’ll need a better telescope. (NASA)

    The world’s greatest observatory can go no further with its current instrument set.

    The Hubble Space Telescope has provided humanity with our deepest views of the Universe ever. It has revealed fainter, younger, less-evolved, and more distant stars, galaxies, and galaxy clusters than any other observatory. More than 29 years after its launch, Hubble is still the greatest tool we have for exploring the farthest reaches of the Universe. Wherever astrophysical objects emit starlight, no observatory is better equipped to study them than Hubble.

    But there are limits to what any observatory can see, even Hubble. It’s limited by the size of its mirror, the quality of its instruments, its temperature and wavelength range, and the most universal limiting factor inherent to any astronomical observation: time. Over the past few years, Hubble has released some of the greatest images humanity has ever seen. But it’s unlikely to ever do better; it’s reached its absolute limit. Here’s the story.

    The Hubble Space Telescope (left) is our greatest flagship observatory in astrophysics history, but is much smaller and less powerful than the upcoming James Webb (center). Of the four proposed flagship missions for the 2030s, LUVOIR (right) is by far the most ambitious. By probing the Universe to fainter objects, higher resolution, and across a wider range of wavelengths, we can improve our understanding of the cosmos in unprecedented ways. (MATT MOUNTAIN / AURA)

    NASA/ESA/CSA Webb Telescope annotated

    NASA Large UV Optical Infrared Surveyor (LUVOIR)

    From its location in space, approximately 540 kilometers (336 mi) up, the Hubble Space Telescope has an enormous advantage over ground-based telescopes: it doesn’t have to contend with Earth’s atmosphere. The moving particles making up Earth’s atmosphere provide a turbulent medium that distorts the path of any incoming light, while simultaneously containing molecules that prevent certain wavelengths of light from passing through it entirely.

    While ground-based telescopes at the time could achieve practical resolutions no better than 0.5–1.0 arcseconds, where 1 arcsecond is 1/3600th of a degree, Hubble — once the flaw with its primary mirror was corrected — immediately delivered resolutions down to the theoretical diffraction limit for a telescope of its size: 0.05 arcseconds. Almost instantly, our views of the Universe were sharper than ever before.
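That diffraction limit follows directly from Hubble’s mirror size. Here is a minimal Python sketch of the Rayleigh criterion, assuming a representative visible wavelength of 550 nanometers (an assumed value, chosen for illustration):

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0  # about 206,265 arcseconds per radian

def diffraction_limit_arcsec(wavelength_m: float, diameter_m: float) -> float:
    """Rayleigh-criterion angular resolution: theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / diameter_m * RAD_TO_ARCSEC

# Hubble's 2.4 m mirror at an assumed visible wavelength of 550 nm:
hubble_limit = diffraction_limit_arcsec(550e-9, 2.4)
print(f"{hubble_limit:.3f} arcseconds")  # ~0.058, consistent with the quoted 0.05
```

The exact number shifts with the wavelength chosen, but any visible-light choice lands near the 0.05-arcsecond figure quoted above.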

    This composite image of a region of the distant Universe (upper left) uses optical (upper right) and near-infrared (lower left) data from Hubble, along with far-infrared (lower right) data from Spitzer. The Spitzer Space Telescope is nowhere near as large as Hubble: a bit more than a third of its diameter, and the wavelengths it probes are so much longer that its resolution is far worse. The number of wavelengths that fit across the diameter of the primary mirror is what determines the resolution. (NASA/JPL-CALTECH/ESA)

    Sharpness, or resolution, is one of the most important factors in discovering what’s out there in the distant Universe. But there are three others that are just as essential:

    the amount of light-gathering power you have, needed to view the faintest objects possible,
    the field-of-view of your telescope, enabling you to observe a larger number of objects,
    and the wavelength range you’re capable of probing, as the observed light’s wavelength depends on the object’s distance from you.

    Hubble may be great at all of these, but it also possesses fundamental limits for all four.

    When you look at a region of the sky with an instrument like the Hubble Space Telescope, you are not simply viewing the light from distant objects as it was when that light was emitted, but also as that light is affected by all the intervening material, and by the expansion of space it experiences along its journey. Although Hubble has taken us farther back than any other observatory to date, there are fundamental limits to it, and reasons why it will be incapable of going farther. (NASA, ESA, AND Z. LEVAY, F. SUMMERS (STSCI))

    The resolution of any telescope is determined by the number of wavelengths of light that can fit across its primary mirror. Hubble’s 2.4 meter (7.9 foot) mirror enables it to obtain that diffraction-limited resolution of 0.05 arcseconds. This is so good that only in the past few years have Earth’s most powerful telescopes, often more than four times as large and equipped with state-of-the-art adaptive optics systems, been able to compete.

    To improve upon the resolution of Hubble, there are really only two options available:

    1. use shorter wavelengths of light, so that a greater number of wavelengths can fit across a mirror of the same size,
    2. or build a larger telescope, which will also enable a greater number of wavelengths to fit across your mirror.

    Hubble’s optics are designed to view ultraviolet light, visible light, and near-infrared light, with sensitivities ranging from approximately 100 nanometers to 1.8 microns in wavelength. It can do no better with its current instruments, which were installed during the final servicing mission back in 2009.

    This image shows Hubble Servicing Mission 4 astronauts practicing on a Hubble model underwater at the Neutral Buoyancy Lab in Houston, under the watchful eyes of NASA engineers and safety divers. The final servicing mission on Hubble was successfully completed 10 years ago; Hubble has not had its equipment or instruments upgraded since, and is now running up against its fundamental limitations. (NASA)

    Light-gathering power is simply about collecting more and more light over a greater period of time, and Hubble has been mind-blowing in that regard. Without the atmosphere to contend with or the Earth’s rotation to worry about, Hubble can simply point to an interesting spot in the sky, apply whichever color/wavelength filter is desired, and take an observation. These observations can then be stacked — or added together — to produce a deep, long-exposure image.

    Using this technique, we can see the distant Universe to unprecedented depths, revealing objects fainter than any seen before. The Hubble Deep Field was the first demonstration of this technique, revealing thousands of galaxies in a region of space where zero were previously known. At present, the eXtreme Deep Field (XDF) is the deepest ultraviolet-visible-infrared composite, revealing some 5,500 galaxies in a region covering just 1/32,000,000th of the full sky.

    The Hubble eXtreme Deep Field (XDF) may have observed a region of sky just 1/32,000,000th of the total, but was able to uncover a whopping 5,500 galaxies within it: an estimated 10% of the total number of galaxies actually contained in this pencil-beam-style slice. The remaining 90% of galaxies are either too faint or too red or too obscured for Hubble to reveal, and observing for longer periods of time won’t improve this issue by very much. Hubble has reached its limits. (HUDF09 AND HXDF12 TEAMS / E. SIEGEL (PROCESSING))

    Of course, it took 23 days of total data taking to collect the information contained within the XDF. To reveal objects with half the brightness of the faintest objects seen in the XDF, we’d have to continue observing for a total of 92 days: four times as long. There’s a severe trade-off if we were to do this, as it would tie up the telescope for months and would only teach us marginally more about the distant Universe.
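The trade-off above is just the statistics of photon counting. A one-line sketch, assuming background-limited imaging where the limiting flux improves as one over the square root of exposure time:

```python
def required_exposure_days(current_days: float, flux_ratio: float) -> float:
    """In background-limited imaging, the limiting flux scales as
    1/sqrt(exposure time), so reaching sources flux_ratio times as
    bright requires current_days / flux_ratio**2 of observing."""
    return current_days / flux_ratio**2

# The XDF took ~23 days; objects half as bright need four times as long:
print(required_exposure_days(23, 0.5))  # 92.0
```

Going another factor of two fainter would cost 368 days, which is why simply staring longer hits diminishing returns so quickly.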

    Instead, an alternative strategy for learning more about the distant Universe is to survey a targeted, wide-field area of the sky. Individual galaxies and larger structures like galaxy clusters can be probed with deep but large-area views, revealing a tremendous level of detail about what’s present at the greatest distances of all. Instead of using our observing time to go deeper, we can still go very deep, but cast a much wider net.

    This, too, comes with a tremendous cost. The deepest, widest view of the Universe ever assembled by Hubble took over 250 days of telescope time, and was stitched together from nearly 7,500 individual exposures. While this new Hubble Legacy Field is great for extragalactic astronomy, it still only reveals 265,000 galaxies over a region of sky smaller than that covered by the full Moon.

    Hubble was designed to go deep, but not to go wide. Its field of view is extremely narrow, which makes a larger, more comprehensive survey of the distant Universe all but prohibitive. It’s truly remarkable how far Hubble has taken us in terms of resolution, survey depth, and field-of-view, but Hubble has truly reached its limit on those fronts.

    In the big image at left, the many galaxies of a massive cluster called MACS J1149+2223 dominate the scene. Gravitational lensing by the giant cluster brightened the light from the newfound galaxy, known as MACS 1149-JD, some 15 times. At upper right, a partial zoom-in shows MACS 1149-JD in more detail, and a deeper zoom appears to the lower right. This is correct and consistent with General Relativity, and independent of how we visualize (or whether we visualize) space. (NASA/ESA/STSCI/JHU)

    Finally, there are the wavelength limits as well. Stars emit a wide variety of light, from the ultraviolet through the optical and into the infrared. It’s no coincidence that this is what Hubble was designed for: to look for light of the same variety and wavelengths that we know stars emit.

    But this, too, is fundamentally limiting. You see, as light travels through the Universe, the fabric of space itself is expanding. This causes the light, even if it’s emitted with intrinsically short wavelengths, to have its wavelength stretched by the expansion of space. By the time it arrives at our eyes, it’s redshifted by a particular factor that’s determined by the expansion rate of the Universe and the object’s distance from us.

    Hubble’s wavelength range sets a fundamental limit to how far back we can see: to when the Universe was around 400 million years old, but no earlier.
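The arithmetic behind that limit is simple: cosmological redshift stretches every emitted wavelength by a factor of (1 + z). A minimal Python sketch, using the rest-frame Lyman-alpha line of hydrogen and the published redshift of GN-z11, z ≈ 11.1:

```python
LYMAN_ALPHA_NM = 121.6  # rest-frame wavelength of hydrogen's Lyman-alpha line

def observed_wavelength_nm(rest_nm: float, z: float) -> float:
    """Cosmological redshift stretches every wavelength by (1 + z)."""
    return rest_nm * (1.0 + z)

# GN-z11 sits at z ~ 11.1, so its ultraviolet light arrives in the near-infrared:
obs_nm = observed_wavelength_nm(LYMAN_ALPHA_NM, 11.1)
print(f"{obs_nm / 1000:.2f} microns")  # ~1.47 microns, inside Hubble's 1.8-micron cutoff
```

Push the redshift much higher and even this intrinsically ultraviolet light slides past 1.8 microns, out of Hubble’s reach entirely.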

    The most distant galaxy ever discovered in the known Universe, GN-z11, sends us its light from 13.4 billion years ago: when the Universe was only 3% of its current age, just 407 million years old. But there are even more distant galaxies out there, and we all hope that the James Webb Space Telescope will discover them. (NASA, ESA, AND G. BACON (STSCI))

    The most distant galaxy ever discovered by Hubble, GN-z11, is right at this limit. Discovered in one of the deep-field images, it has everything imaginable going for it.

    It was observed across all the different wavelength ranges Hubble is capable of, with only its ultraviolet-emitted light showing up in the longest-wavelength infrared filters Hubble can measure.
    It was gravitationally lensed by a foreground galaxy, magnifying its brightness to raise it above Hubble’s naturally-limiting faintness threshold.
    It happens to be located along a line-of-sight that experienced a high (and statistically-unlikely) level of star-formation at early times, providing a clear path for the emitted light to travel along without being blocked.

    No other galaxy has been discovered and confirmed at even close to the same distance as this object.

    Only because this distant galaxy, GN-z11, is located in a region where the intergalactic medium is mostly reionized, can Hubble reveal it to us at the present time. To see further, we require a better observatory, optimized for these kinds of detection, than Hubble. (NASA, ESA, AND A. FEILD (STSCI))

    Hubble may have reached its limits, but future observatories will take us far beyond what Hubble’s limits are. The James Webb Space Telescope is not only larger — with a primary mirror diameter of 6.5 meters (as opposed to Hubble’s 2.4 meters) — but operates at far cooler temperatures, enabling it to view longer wavelengths.

    At these longer wavelengths, up to 30 microns (as opposed to Hubble’s 1.8), James Webb will be able to see through the light-blocking dust that hampers Hubble’s view of most of the Universe. Additionally, it will be able to see objects with much greater redshifts and earlier lookback times: seeing the Universe when it was a mere 200 million years old. While Hubble might reveal some extremely early galaxies, James Webb might reveal them as they’re in the process of forming for the very first time.
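As a rough illustration of why longer wavelengths translate into earlier lookback times, here is a hedged sketch of the highest redshift whose Lyman-alpha light still lands inside an instrument’s cutoff. The ~5 micron figure is an assumed near-infrared imaging cutoff for James Webb’s cameras, and real surveys hit practical limits (source faintness, when the first galaxies actually formed) well before these pure-wavelength ceilings:

```python
LYMAN_ALPHA_NM = 121.6  # rest-frame Lyman-alpha wavelength of hydrogen

def max_lyman_alpha_redshift(cutoff_nm: float) -> float:
    """Highest redshift at which Lyman-alpha still lands below a cutoff:
    solve cutoff = rest * (1 + z) for z."""
    return cutoff_nm / LYMAN_ALPHA_NM - 1.0

print(f"Hubble, 1.8 micron cutoff:      z < {max_lyman_alpha_redshift(1800):.1f}")
print(f"Webb, assumed ~5 micron cutoff: z < {max_lyman_alpha_redshift(5000):.1f}")
```

In practice neutral gas, dust, and the sheer faintness of the first galaxies bite long before the wavelength ceiling does, which is why Hubble tops out near z ≈ 11 rather than its theoretical maximum.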

    The viewing area of Hubble (top left) as compared to the area that WFIRST will be able to view, at the same depth, in the same amount of time. The wide-field view of WFIRST will allow us to capture a greater number of distant supernovae than ever before, and will enable us to perform deep, wide surveys of galaxies on cosmic scales never probed before. It will bring a revolution in science, regardless of what it finds, and provide the best constraints on how dark energy evolves over cosmic time. (NASA / GODDARD / WFIRST)


    Other observatories will take us to other frontiers in realms where Hubble is only scratching the surface. NASA’s proposed flagship of the 2020s, WFIRST, will be very similar to Hubble, but will have 50 times the field-of-view, making it ideal for large surveys. Telescopes like the LSST will cover nearly the entire sky, with resolutions comparable to what Hubble achieves, albeit with shorter observing times. And future ground-based observatories like GMT or ELT, which will usher in the era of 30-meter-class telescopes, might finally surpass Hubble in terms of practical resolution.

    At the limits of what Hubble is capable of, it’s still extending our views into the distant Universe, and providing the data that enables astronomers to push the frontiers of what is known. But to truly go farther, we need better tools. If we truly value learning the secrets of the Universe, including what it’s made of, how it came to be the way it is today, and what its fate is, there’s no substitute for the next generation of observatories.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 4:16 pm on May 11, 2019 Permalink | Reply
    Tags: Ethan Siegel

    From Ethan Siegel: “How Does The Event Horizon Telescope Act Like One Giant Mirror?” 

    From Ethan Siegel
    May 11, 2019

    The Allen Telescope Array is potentially capable of detecting a strong radio signal from Proxima b, or any other star system with strong enough radio transmissions. It has successfully worked in concert with other radio telescopes across extremely long baselines to resolve the event horizon of a black hole: arguably its crowning achievement. (WIKIMEDIA COMMONS / COLBY GUTIERREZ-KRAYBILL)

    If you want to observe the Universe more deeply and at higher resolution than ever before, there’s one tactic that everyone agrees is ideal: build as big a telescope as possible. But the highest resolution image we’ve ever constructed in astronomy doesn’t come from the biggest telescope, but rather from an enormous array of modestly-sized telescopes: the Event Horizon Telescope. How is that possible? That’s what our Ask Ethan questioner for this week, Dieter, wants to know, stating:

    “I’m having difficulty understanding why the EHT array is considered as ONE telescope (which has the diameter of the earth).
    When you consider the EHT as ONE radio telescope, I do understand that the angular resolution is very high due to the wavelength of the incoming signal and earth’s diameter. I also understand that time syncing is critical.
    But it would help very much to explain why the diameter of the EHT is considered as ONE telescope, considering there are about 10 individual telescopes in the array.”

    It’s made up of scores of telescopes at many different sites across the world. But it acts like one giant telescope. Here’s how.

    Event Horizon Telescope Array

    Arizona Radio Observatory/Submillimeter-wave Astronomy (ARO/SMT)

    Atacama Pathfinder EXperiment

    Combined Array for Research in Millimeter-wave Astronomy (CARMA), no longer in service

    Atacama Submillimeter Telescope Experiment (ASTE)

    Caltech Submillimeter Observatory (CSO)

    IRAM 30m Radio Telescope, on Pico Veleta in the Spanish Sierra Nevada, altitude 2,850 m (9,350 ft)

    Institut de Radioastronomie Millimetrique (IRAM) 30m

    James Clerk Maxwell Telescope interior, Mauna Kea, Hawaii, USA

    Large Millimeter Telescope Alfonso Serrano

    CfA Submillimeter Array (SMA), Mauna Kea, Hawaii, USA, altitude 4,080 m (13,390 ft)

    ESO/NRAO/NAOJ ALMA Array, Chile

    South Pole Telescope SPTPOL

    Future Array/Telescopes

    IRAM NOEMA interferometer, located in the French Alps on the wide and isolated Plateau de Bure at an elevation of 2,550 meters; the telescope currently consists of ten antennas, each 15 meters in diameter.

    NSF CfA Greenland Telescope

    ARO 12m Radio Telescope, Kitt Peak National Observatory, Arizona, USA, Altitude 1,914 m (6,280 ft)


    Constructing an image of the black hole at the center of Messier 87 is one of the most remarkable achievements we’ve ever made. Here’s what made it possible.

    The brightness-distance relationship: how the flux from a light source falls off as one over the distance squared. The Earth has the temperature that it does because of its distance from the Sun, which determines how much energy-per-unit-area is incident on our planet. Distant stars or galaxies have the apparent brightness they do because of this relationship, which is demanded by energy conservation. Note that the light also spreads out in area as it leaves the source. (E. SIEGEL / BEYOND THE GALAXY)

    The first thing you need to understand is how light works. When you have any light-emitting object in the Universe, the light it emits will spread out in a sphere upon leaving the source. If all you had was a photo-detector that was a single point, you could still detect that distant, light-emitting object.

    But you wouldn’t be able to resolve it.

    When light (i.e., a photon) strikes your point-like detector, you can register that the light arrived; you can measure the light’s energy and wavelength; you can know what direction the light came from. But you wouldn’t be able to know anything about that object’s physical properties. You wouldn’t know its size, shape, physical extent, or whether different parts were different colors or brightnesses. This is because you’re only receiving information at a single point.
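The spreading described above is just the inverse-square law from the caption earlier. A minimal sketch, using round illustrative numbers for the Sun’s luminosity and the Earth-Sun distance:

```python
import math

def flux_w_per_m2(luminosity_w: float, distance_m: float) -> float:
    """Inverse-square law: the emitted light spreads over a sphere of
    area 4 * pi * d^2, so the received flux is L / (4 * pi * d^2)."""
    return luminosity_w / (4.0 * math.pi * distance_m**2)

# The Sun (L ~ 3.8e26 W) seen from Earth, 1 AU ~ 1.5e11 m away:
sun_flux = flux_w_per_m2(3.8e26, 1.5e11)
print(f"{sun_flux:.0f} W/m^2")  # ~1344, close to the measured solar constant
```

A point-like detector samples exactly one spot on that ever-growing sphere, which is why it can register the light but can never reconstruct the source’s shape.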

    Nebula NGC 246 is better known as the Skull Nebula, for the presence of its two glowing eyes. The central eye is actually a pair of binary stars, and the smaller, fainter one is responsible for the nebula itself, as it blows off its outer layers. It’s only 1,600 light-years away, in the constellation of Cetus. Seeing this as more than a single object requires the ability to resolve these features, dependent on the size of the telescope and the number of wavelengths of light that fit across its primary mirror. (GEMINI SOUTH GMOS, TRAVIS RECTOR (UNIV. ALASKA))

    Gemini Observatory GMOS on Gemini South

    Gemini/South telescope, Cerro Tololo Inter-American Observatory (CTIO) campus near La Serena, Chile, at an altitude of 7200 feet

    What would it take to know whether you were looking at a single point of light, such as a star like our Sun, or multiple points of light, like you’d find in a binary star system? For that, you’d need to receive light at multiple points. Instead of a point-like detector, you could have a dish-like detector, like the primary mirror on a reflecting telescope.

    When the light comes in, it’s not striking a point anymore, but rather an area. The light that had spread out in a sphere now gets reflected off of the mirror and focused to a point. And light that comes from two different sources, even if they’re close together, will be focused to two different locations.

    Any reflecting telescope is based on the principle of reflecting incoming light rays via a large primary mirror which focuses that light to a point, where it’s then either broken down into data and recorded or used to construct an image. This specific diagram illustrates the light-paths for a Herschel-Lomonosov telescope system. Note that two distinct sources will have their light focused to two distinct locations (blue and green paths), but only if the telescope has sufficient capabilities. (WIKIMEDIA COMMONS USER EUDJINNIUS)

    If your telescope mirror is large enough compared to the separation of the two objects, and your optics are good enough, you’ll be able to resolve them. If you build your apparatus right, you’ll be able to tell that there are multiple objects. The two sources of light will appear to be distinct from one another. Technically, there’s a relationship between three quantities:

    the angular resolution you can achieve,
    the diameter of your mirror,
    and the wavelength of light you’re looking in.

    If your sources are closer together, or your telescope mirror is smaller, or you look using a longer wavelength of light, it becomes more and more challenging to resolve whatever you’re looking at: to tell whether there are multiple objects, or whether the object you’re viewing has bright-and-dark features. If your resolution is insufficient, everything appears as nothing more than a blurry, unresolved single spot.
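Those three quantities can be combined into a simple yes-or-no check. This sketch applies the Rayleigh criterion to a hypothetical binary (the 10 AU separation and 100 light-year distance are made-up illustrative numbers):

```python
RAD_TO_ARCSEC = 206265.0       # arcseconds per radian
AU_PER_LIGHT_YEAR = 63241.0    # astronomical units in one light-year

def can_resolve(separation_au: float, distance_ly: float,
                diameter_m: float, wavelength_m: float) -> bool:
    """True if the pair's angular separation exceeds the diffraction limit."""
    angle = separation_au / (distance_ly * AU_PER_LIGHT_YEAR) * RAD_TO_ARCSEC
    limit = 1.22 * wavelength_m / diameter_m * RAD_TO_ARCSEC
    return angle > limit

# A hypothetical binary, 10 AU apart at 100 light-years, in visible light:
print(can_resolve(10, 100, 2.4, 550e-9))  # True  (a Hubble-sized 2.4 m mirror)
print(can_resolve(10, 100, 0.1, 550e-9))  # False (a 10 cm backyard telescope)
```

Shrink the separation, stretch the wavelength, or shrink the mirror, and the same pair collapses back into a single blurry spot.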

    The limits of resolution are determined by three factors: the diameter of your telescope, the wavelength of light you’re viewing in, and the quality of your optics. If you have perfect optics, you can resolve all the way down to the Rayleigh limit, which grants you the highest-possible resolution allowed by physics. (SPENCER BLIVEN / PUBLIC DOMAIN)

    So that’s the basics of how any large, single-dish telescope works. The light comes in from the source, with every point in space — even different points originating from the same object — emitting its own light with its own unique properties. The resolution is determined by the number of wavelengths of light that can fit across our primary mirror.

    If our detectors are sensitive enough, we’ll be able to resolve all sorts of features on an object. Hot-and-cold regions of a star, like sunspots, can appear. We can make out features like volcanoes, geysers, icecaps and basins on planets and moons. And the extent of light-emitting gas or plasma, along with their temperatures and densities, can be imaged as well. It’s a fantastic achievement that only depends on the physical and optical properties of your telescope.

    The second-largest black hole as seen from Earth, the one at the center of the galaxy Messier 87, is shown in three views here. At the top is optical from Hubble, at the lower-left is radio from NRAO, and at the lower-right is X-ray from Chandra. These differing views have different resolutions dependent on the optical sensitivity, wavelength of light used, and size of the telescope mirrors used to observe them. The Chandra X-ray observations provide exquisite resolution despite having an effective 8-inch (20 cm) diameter mirror, owing to the extremely short-wavelength nature of the X-rays it observes. (TOP, OPTICAL, HUBBLE SPACE TELESCOPE / NASA / WIKISKY; LOWER LEFT, RADIO, NRAO / VERY LARGE ARRAY (VLA); LOWER RIGHT, X-RAY, NASA / CHANDRA X-RAY TELESCOPE)

    NASA/ESA Hubble Telescope

    NRAO/Karl G. Jansky Very Large Array, on the Plains of San Agustin, fifty miles west of Socorro, NM, USA, at an elevation of 6,970 ft (2,124 m)

    NASA/Chandra X-ray Telescope

    But maybe you don’t need the entire telescope. Building a giant telescope is expensive and resource intensive, and it actually serves two purposes to build them so large.

    The larger your telescope, the better your resolution, based on the number of wavelengths of light that fit across your primary mirror.
    The larger your telescope’s collecting area, the more light you can gather, which means you can observe fainter objects and finer details than you could with a lower-area telescope.

    If you took your large telescope mirror and started darkening out some spots — like you were applying a mask to your mirror — you’d no longer be able to receive light from those locations. As a result, your ability to see faint objects would worsen in proportion to the lost surface area (light-gathering area) of your telescope. But the resolution would still be set by the maximum separation between the various portions of the mirror.
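That masking thought-experiment can be put into numbers: collecting area tracks the surviving glass, while resolution tracks the widest surviving separation. A toy model, with made-up aperture positions:

```python
import math

def masked_mirror(apertures):
    """apertures: list of (x, y, radius) unmasked circular patches, in meters.
    Returns (collecting area, longest baseline): area tracks the surviving
    glass; resolution tracks the widest edge-to-edge separation."""
    area = sum(math.pi * r**2 for _, _, r in apertures)
    baseline = max(
        math.hypot(x1 - x2, y1 - y2) + r1 + r2
        for x1, y1, r1 in apertures
        for x2, y2, r2 in apertures
    )
    return area, baseline

# Two 1 m-radius patches kept at opposite edges of a 10 m mirror:
area, baseline = masked_mirror([(-4.0, 0.0, 1.0), (4.0, 0.0, 1.0)])
# area ~6.3 m^2, a small fraction of the filled ~78.5 m^2 dish,
# yet the baseline is still the full 10.0 m, preserving the resolution.
```

This is exactly the bargain an array of telescopes strikes: sacrifice most of the collecting area, keep the full baseline.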

    ESO/NRAO/NAOJ ALMA Array, on the Chajnantor plateau in Chile’s Atacama Desert, at 5,000 meters

    ALMA is perhaps the most advanced and most complex array of radio telescopes in the world. It is capable of imaging unprecedented details in protoplanetary disks, and is also an integral part of the Event Horizon Telescope.

    This is the principle on which arrays of telescopes are based. There are many sources out there, particularly in the radio portion of the spectrum, that are extremely bright, so you don’t need all that collecting area that comes with building an enormous, single dish.

    Instead, you can build an array of dishes. Because the light from a distant source will spread out, you want to collect light over as large an area as possible. You don’t need to invest all your resources in constructing an enormous dish with supreme light-gathering power, but you still need that same superior resolution. And that’s where the idea of using a giant array of radio telescopes comes from. With a linked array of telescopes all over the world, we can resolve some of the radio-brightest but smallest angular-size objects out there.

    EHT map

    This diagram shows the location of all of the telescopes and telescope arrays used in the 2017 Event Horizon Telescope observations of M87. Only the South Pole Telescope was unable to image M87, as it is located on the wrong part of the Earth to ever view that galaxy’s center. Every one of these locations is outfitted with an atomic clock, among other pieces of equipment. (NRAO)

    Functionally, there is no difference between thinking about the following two scenarios.

    1. The Event Horizon Telescope is a single mirror with a lot of masking tape over portions of it. The light gets collected and focused from all these disparate locations across the Earth into a single point, and then synthesized together into an image that reveals the differing brightnesses and properties of your target in space, up to your maximal resolution.
    2. The Event Horizon Telescope is itself an array of many different individual telescopes and individual telescope arrays. The light gets collected, timestamped with an atomic clock (for syncing purposes), and recorded as data at each individual site. That data is then stitched and processed together appropriately to create an image that reveals the brightnesses and properties of whatever you’re looking at in space.

    The only difference is in the techniques you have to use to make it happen, but that’s why we have the science of VLBI: very long-baseline interferometry.

    In VLBI, the radio signals are recorded at each of the individual telescopes before being shipped to a central location. Each data point that’s received is stamped with an extremely accurate, high-frequency atomic clock alongside the data in order to help scientists get the synchronization of the observations correct. (PUBLIC DOMAIN / WIKIPEDIA USER RNT20)

    You might immediately start thinking of wild ideas, like launching a radio telescope into deep space and using that, networked with the telescopes on Earth, to extend your baseline. It’s a great plan, but you must understand that there’s a reason we didn’t just build the Event Horizon Telescope with two well-separated sites: we want that incredible resolution in all directions.

    We want to get full two-dimensional coverage of the sky, which means ideally we’d have our telescopes arranged in a large ring to get those enormous separations. That’s not feasible, of course, on a world with continents and oceans and cities and nations and other borders, boundaries and constraints. But with eight independent sites across the world (seven of which were useful for the M87 image), we were able to do incredibly well.

    The Event Horizon Telescope’s first released image achieved resolutions of 22.5 microarcseconds, enabling the array to resolve the event horizon of the black hole at the center of M87. A single-dish telescope would have to be 12,000 km in diameter to achieve this same sharpness. Note the differing appearances between the April 5/6 images and the April 10/11 images, which show that the features around the black hole are changing over time. This helps demonstrate the importance of syncing the different observations, rather than just time-averaging them. (EVENT HORIZON TELESCOPE COLLABORATION)
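The caption’s 12,000 km figure is easy to check. A short sketch, assuming the roughly 1.3 mm observing wavelength of the EHT’s 2017 campaign:

```python
import math

RAD_PER_MICROARCSEC = math.pi / (180.0 * 3600.0e6)  # radians per microarcsecond

def equivalent_dish_diameter_m(resolution_uas: float, wavelength_m: float) -> float:
    """Diameter a single filled dish would need for this resolution:
    D ~ lambda / theta, with theta in radians."""
    return wavelength_m / (resolution_uas * RAD_PER_MICROARCSEC)

# The EHT observed at ~1.3 mm and resolved 22.5 microarcseconds:
d_m = equivalent_dish_diameter_m(22.5, 1.3e-3)
print(f"{d_m / 1000:,.0f} km")  # ~12,000 km: roughly the diameter of the Earth
```

That is why the array can only ever be as sharp as its longest baseline allows: the Earth itself sets the ceiling.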

    Right now, the Event Horizon Telescope is limited to Earth, limited to the dishes that are presently networked together, and limited by the particular wavelengths it can measure. If it could be modified to observe at shorter wavelengths, and could overcome the atmospheric opacity at those wavelengths, we could achieve higher resolutions with the same equipment. In principle, we might be able to see features three-to-five times as sharp without needing a single new dish.

    By making these simultaneous observations all across the world, the Event Horizon Telescope really does behave as a single telescope. It only has the light-gathering power of the individual dishes added together, but it can achieve the resolution corresponding to the distance between the dishes, in the direction that the dishes are separated.

    By spanning the diameter of Earth with many different telescopes (or telescope arrays) simultaneously, we were able to obtain the data necessary to resolve the event horizon.

    The Event Horizon Telescope behaves like a single telescope because of the incredible advances in the techniques we use and the increases in computational power and novel algorithms that enable us to synthesize this data into a single image. It’s not an easy feat, and took a team of over 100 scientists working for many years to make it happen.

    But optically, the principles are the same as using a single mirror. We have light coming in from different spots on a single source, all spreading out, and all arriving at the various telescopes in the array. It’s just as though they’re arriving at different locations along an extremely large mirror. The key is in how we synthesize that data together, and use it to reconstruct an image of what’s actually occurring.

    Now that the Event Horizon Telescope team has successfully done exactly that, it’s time to set our sights on the next target: learning as much as we can about every black hole we’re capable of viewing. Like all of you, I can hardly wait.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 9:07 am on May 11, 2019
    Tags: "Cosmology’s Biggest Conundrum Is A Clue, Not A Controversy", Ethan Siegel

    From Ethan Siegel: “Cosmology’s Biggest Conundrum Is A Clue, Not A Controversy” 

    From Ethan Siegel
    May 10, 2019

    The expanding Universe, full of galaxies and the complex structure we observe today, arose from a smaller, hotter, denser, more uniform state. It took thousands of scientists working for hundreds of years for us to arrive at this picture, and yet the lack of a consensus on what the expansion rate actually is tells us that either something is dreadfully wrong, we have an unidentified error somewhere, or there’s a new scientific revolution just on the horizon. (C. FAUCHER-GIGUÈRE, A. LIDZ, AND L. HERNQUIST, SCIENCE 319, 5859 (47))

    How fast is the Universe expanding? The results might be pointing to something incredible.

    If you want to know how something in the Universe works, all you need to do is figure out how some measurable quantity will give you the necessary information, go out and measure it, and draw your conclusions. Sure, there will be biases and errors, along with other confounding factors, and they might lead you astray if you’re not careful. The antidote for that? Make as many independent measurements as you can, using as many different techniques as you can, to determine those natural properties as robustly as possible.

    If you’re doing everything right, every one of your methods will converge on the same answer, and there will be no ambiguity. If one measurement or technique is off, the others will point you in the right direction. But when we try to apply this technique to the expanding Universe, a puzzle arises: we get one of two answers, and they’re not compatible with each other. It’s cosmology’s biggest conundrum, and it might be just the clue we need to unlock the biggest mysteries about our existence.

    The redshift-distance relationship for distant galaxies. The points that don’t fall exactly on the line owe the slight mismatch to the differences in peculiar velocities, which offer only slight deviations from the overall observed expansion. The original data from Edwin Hubble, first used to show the Universe was expanding, all fit in the small red box at the lower-left. (ROBERT KIRSHNER, PNAS, 101, 1, 8–13 (2004))

    We’ve known since the 1920s that the Universe is expanding, with the rate of expansion known as the Hubble constant. Ever since, it’s been a quest for the generations to determine “by how much?”

    Edwin Hubble at the 100-inch Hooker Telescope at Mount Wilson in Southern California, where his 1929 observations demonstrated that the Universe is expanding.

    Early on, there was only one class of technique: the cosmic distance ladder. This technique was incredibly straightforward, and involved just four steps.

    Choose a class of object whose properties are intrinsically known, where if you measure something observable about it (like its period of brightness fluctuation), you know something inherent to it (like its intrinsic brightness).
    Measure the observable quantity, and determine what its intrinsic brightness is.
    Then measure the apparent brightness, and use what you know about cosmic distances in an expanding Universe to determine how far away it must be.
    Finally, measure the redshift of the object in question.

    The farther a galaxy is, the faster it expands away from us, and the more its light appears redshifted.
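The four steps above can be sketched end-to-end in a few lines. All of the numbers below (the period-luminosity coefficients, the period, the apparent magnitude, and the redshift) are illustrative mock values rather than real measurements:

```python
import math

C_KM_S = 299_792.458  # speed of light, km/s

def cepheid_abs_mag(period_days):
    # Step 1-2: an approximate Leavitt-law calibration (illustrative coefficients)
    # turns the observed pulsation period into an intrinsic brightness.
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_mpc(apparent_mag, absolute_mag):
    # Step 3: the distance modulus m - M = 5 log10(d / 10 pc) gives the distance.
    d_pc = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_pc / 1e6

# Mock observations of a single Cepheid in a distant galaxy:
M = cepheid_abs_mag(30.0)   # 30-day pulsation period
d = distance_mpc(25.0, M)   # apparent magnitude of 25.0
v = C_KM_S * 0.0026         # step 4: redshift z = 0.0026 -> recession velocity
print(f"H0 ~ {v / d:.0f} km/s/Mpc")
```

Repeating this over many galaxies at many distances, and fitting the slope of velocity versus distance, is what yields the distance-ladder value of the Hubble constant.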

    A galaxy moving with the expanding Universe will be an even greater number of light-years away, today, than the number of years (multiplied by the speed of light) that it took the light emitted from it to reach us. But how fast the Universe is expanding is something that astronomers using different techniques cannot agree on. (LARRY MCNISH OF RASC CALGARY CENTER)

    The redshift is what ties it all together. As the Universe expands, any light traveling through it will also stretch. Light, remember, is a wave, and has a specific wavelength. That wavelength determines what its energy is, and every atom and molecule in the Universe has a specific set of emission and absorption lines that only occur at specific wavelengths. If you can measure at what wavelength those specific spectral lines appear in a distant galaxy, you can determine how much the Universe has expanded from the time it left the object until it arrived at your eyes.
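As a concrete illustration, consider a hypothetical galaxy whose hydrogen-alpha emission line (rest wavelength 656.28 nm) arrives at 721.9 nm; the line's shift directly gives the stretch factor:

```python
H_ALPHA_REST_NM = 656.28  # hydrogen-alpha rest wavelength, nm

def redshift(observed_nm, rest_nm=H_ALPHA_REST_NM):
    # z is defined by the fractional stretching of the wavelength.
    return observed_nm / rest_nm - 1.0

z = redshift(721.9)
print(f"z = {z:.3f}; wavelengths stretched by a factor of {1 + z:.3f} in transit")
```

A redshift of z = 0.1 means every wavelength, and indeed space itself along the light's path, has stretched by 10% since the light was emitted.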

    Combine the redshift and the distance for a variety of objects all throughout the Universe, and you can figure out how fast it’s expanding in all directions, as well as how the expansion rate has changed over time.

    The history of the expanding Universe, including what it’s composed of at present. It is only by measuring how light redshifts as it travels through the expanding Universe that we can come to understand it as we do, and that requires a large series of independent measurements.(ESA AND THE PLANCK COLLABORATION (MAIN), WITH MODIFICATIONS BY E. SIEGEL; NASA / WIKIMEDIA COMMONS USER 老陳 (INSET))

    All throughout the 20th century, scientists used this technique to try and determine as much as possible about our cosmic history. Cosmology — the scientific study of what the Universe is made of, where it came from, how it came to be the way it is today, and what its future holds — was derided by many as a quest for two parameters: the current expansion rate and how the expansion rate evolved over time. Until the 1990s, scientists couldn’t even agree on the first of these.

    They were all using the same technique, but made different assumptions. Some groups used different types of astronomical objects from one another; others used different instruments with different measurement errors. Some classes of object turned out to be more complicated than we originally thought they’d be. Many problems showed up regardless.

    Standard candles (L) and standard rulers (R) are two different techniques astronomers use to measure the expansion of space at various times/distances in the past. Based on how quantities like luminosity or angular size change with distance, we can infer the expansion history of the Universe. Using the candle method is part of the distance ladder, yielding 73 km/s/Mpc. Using the ruler is part of the early signal method, yielding 67 km/s/Mpc. (NASA / JPL-CALTECH)

    If the Universe were expanding too quickly, there wouldn’t have been enough time to form planet Earth. If we can find the oldest stars in our galaxy, we know the Universe has to be at least as old as the stars within it. And if the expansion rate evolved over time, because there was something other than matter or radiation in it — or a different amount of matter than we’d assumed — that would show up in how the expansion rate changed over time.

    Resolving these early controversies was the primary scientific motivation for building the Hubble Space Telescope.

    NASA/ESA Hubble Telescope

    Its key project was to make this measurement, and it was tremendously successful. The rate it obtained was 72 km/s/Mpc, with just a 10% uncertainty. This result, published in 2001, resolved a controversy as old as Hubble’s law itself. Alongside the discoveries of dark matter and dark energy, it seemed to give us a fully accurate and self-consistent picture of the Universe.

    The construction of the cosmic distance ladder involves going from our Solar System to the stars to nearby galaxies to distant ones. Each “step” carries along its own uncertainties, especially the Cepheid variable and supernovae steps; it also would be biased towards higher or lower values if we lived in an underdense or overdense region. There are enough independent methods used to construct the cosmic distance ladder that we can no longer reasonably fault one ‘rung’ on the ladder as the cause of our mismatch between different methods. (NASA, ESA, A. FEILD (STSCI), AND A. RIESS (STSCI/JHU))

    The distance ladder group has grown far more sophisticated over the intervening time. There are now an incredibly large number of independent ways to measure the expansion history of the Universe:

    using distant gravitational lenses,
    using supernova data,
    using rotational and dispersion properties of distant galaxies,
    or using surface brightness fluctuations from face-on spirals,

    and they all yield the same result. Regardless of whether you calibrate them with Cepheid variable stars, RR Lyrae stars, or red giant stars about to undergo helium fusion, you get the same value: ~73 km/s/Mpc, with uncertainties of just 2–3%.

    The Variable Star RS Puppis, with its light echoes shining through the interstellar clouds. Variable stars come in many varieties; one of them, Cepheid variables, can be measured both within our own galaxy and in galaxies up to 50–60 million light years away. This enables us to extrapolate distances from our own galaxy to far more distant ones in the Universe. Other classes of individual star, such as a star at the tip of the AGB or an RR Lyrae variable, can be used instead of Cepheids, yielding similar results and the same cosmic conundrum over the expansion rate. (NASA, ESA, AND THE HUBBLE HERITAGE TEAM)

    It would be a tremendous victory for cosmology, except for one problem. It’s now 2019, and there’s a second way to measure the expansion rate of the Universe. Instead of looking at distant objects and measuring how the light they’ve emitted has evolved, we can use relics from the earliest stages of the Big Bang. When we do, we get values of ~67 km/s/Mpc, with a claimed uncertainty of just 1–2%. These numbers differ from one another by 9%, and the uncertainties do not overlap.
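Using the rough percentages quoted above (taken here as ~2% for the distance ladder and ~1% for the early-relic method; the exact published uncertainties vary from analysis to analysis), one can check directly that the two error bars fail to overlap:

```python
# Illustrative central values and fractional uncertainties from the text above.
h_ladder, h_ladder_err = 73.0, 73.0 * 0.02   # distance ladder, ~2% uncertainty
h_relic,  h_relic_err  = 67.0, 67.0 * 0.01   # early relics, ~1% uncertainty

gap_percent = (h_ladder - h_relic) / h_relic * 100
overlap = (h_ladder - h_ladder_err) <= (h_relic + h_relic_err)
print(f"discrepancy: {gap_percent:.0f}%, error bars overlap: {overlap}")
```

Even with the most generous ends of the quoted error ranges, the lower edge of the ladder measurement sits well above the upper edge of the early-relic one.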

    Modern measurement tensions from the distance ladder (red) with early signal data from the CMB and BAO (blue) shown for contrast. It is plausible that the early signal method is correct and there’s a fundamental flaw with the distance ladder; it’s plausible that there’s a small-scale error biasing the early signal method and the distance ladder is correct, or that both groups are right and some form of new physics (shown at top) is the culprit. But right now, we cannot be sure. (ADAM RIESS (PRIVATE COMMUNICATION))

    This time, however, things are different. We can no longer expect that one group will be right and the other will be wrong. Nor can we expect that the answer will be somewhere in the middle, and that both groups are making some sort of error in their assumptions. The reason we can’t count on this is that there are too many independent lines of evidence. If we try to explain one measurement with an error, it will contradict another measurement that’s already been made.

    The total amount of stuff that’s in the Universe is what determines how the Universe expands over time. Einstein’s General Relativity ties the energy content of the Universe, the expansion rate, and the overall curvature together. If the Universe expands too quickly, that implies that there’s less matter and more dark energy in it, and that will conflict with observations.

    Before Planck, the best-fit to the data indicated a Hubble parameter of approximately 71 km/s/Mpc, but a value of approximately 69 or above would now be too great for both the dark matter density (x-axis) we’ve seen via other means and the scalar spectral index (right side of the y-axis) that we require for the large-scale structure of the Universe to make sense. (P.A.R. ADE ET AL. AND THE PLANCK COLLABORATION (2015))

    For example, we know that the total amount of matter in the Universe has to be around 30% of the critical density, as seen from the large-scale structure of the Universe, galaxy clustering, and many other sources. We also see that the scalar spectral index — a parameter that tells us how gravitation will form bound structures on small versus large scales — has to be slightly less than 1.
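The critical density itself follows directly from the expansion rate, via ρ_c = 3H₀²/(8πG). A quick sketch, assuming a Planck-like H₀ of 67.4 km/s/Mpc:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
MPC_M = 3.0857e22   # metres per megaparsec

def critical_density(h0_km_s_mpc):
    """Critical density rho_c = 3 H0^2 / (8 pi G), in kg/m^3."""
    h0_si = h0_km_s_mpc * 1000 / MPC_M   # convert km/s/Mpc to 1/s
    return 3 * h0_si**2 / (8 * math.pi * G)

rho_c = critical_density(67.4)
print(f"rho_c ~ {rho_c:.2e} kg/m^3")  # equivalent to a few hydrogen atoms per cubic metre
```

Since ρ_c scales as H₀², a higher measured expansion rate directly changes the density budget that the matter fraction is measured against.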

    If the expansion rate is too high, you not only get a Universe with too little matter and too high of a scalar spectral index to agree with the Universe we have, you get a Universe that’s too young: 12.5 billion years old instead of 13.8 billion years old. Since we live in a galaxy with stars that have been identified as being more than 13 billion years old, this would create an enormous conundrum: one that cannot be reconciled.
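The link between the expansion rate and the age of the Universe can be made explicit. For a flat universe containing only matter and a cosmological constant, the age has the closed form t₀ = (2 / (3H₀√Ω_Λ)) · asinh(√(Ω_Λ/Ω_m)). A sketch with a Planck-like Ω_m ≈ 0.315 (radiation neglected; the exact figures depend on the parameters assumed):

```python
import math

GYR_PER_HUBBLE_UNIT = 977.79  # 1 / (1 km/s/Mpc), expressed in gigayears

def age_gyr(h0, omega_m=0.315):
    """Age of a flat matter + Lambda universe, in Gyr (radiation neglected)."""
    omega_l = 1.0 - omega_m
    hubble_time = GYR_PER_HUBBLE_UNIT / h0
    return hubble_time * (2 / (3 * math.sqrt(omega_l))) * math.asinh(math.sqrt(omega_l / omega_m))

print(f"H0 = 67.4 -> age ~ {age_gyr(67.4):.1f} Gyr")  # ~13.8 Gyr
print(f"H0 = 73.0 -> age ~ {age_gyr(73.0):.1f} Gyr")  # ~12.7 Gyr
```

With these parameters the higher expansion rate gives an age of roughly 12.7 billion years, in the same ballpark as the too-young Universe described above, and younger than the oldest known stars.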

    Located around 4,140 light-years away in the galactic halo, SDSS J102915+172927 is an ancient star that contains just 1/20,000th the heavy elements the Sun possesses, and should be over 13 billion years old: one of the oldest in the Universe, and having possibly formed before even the Milky Way. The existence of stars like this informs us that the Universe cannot have properties that lead to an age younger than the stars within it. (ESO, DIGITIZED SKY SURVEY 2)

    But perhaps no one is wrong. Perhaps the early relics point to a true set of facts about the Universe:

    it is 13.8 billion years old,
    it does have roughly a 70%/25%/5% ratio of dark energy to dark matter to normal matter,
    it does appear to be consistent with an expansion rate on the lower end: ~67 km/s/Mpc.

    And perhaps the distance ladder also points to a true set of facts about the Universe, where it’s expanding at a larger rate today on cosmically nearby scales.

    Although it sounds bizarre, both groups could be correct. The reconciliation could come from a third option that most people aren’t yet willing to consider. Instead of the distance ladder group being wrong or the early relics group being wrong, perhaps our assumptions about the laws of physics or the nature of the Universe are wrong. In other words, perhaps we’re not dealing with a controversy; perhaps what we’re seeing is a clue of new physics.

    A doubly-lensed quasar, like the one shown here, is caused by a gravitational lens. If the time-delay of the multiple images can be understood, it may be possible to reconstruct an expansion rate for the Universe at the distance of the quasar in question. The earliest results now show a total of four lensed quasar systems, providing an estimate for the expansion rate consistent with the distance ladder group. (NASA HUBBLE SPACE TELESCOPE, TOMMASO TREU/UCLA, AND BIRRER ET AL)

    It is possible that the ways we measure the expansion rate of the Universe are actually revealing something novel about the nature of the Universe itself. Something about the Universe could be changing with time, which would be yet another explanation for why these two different classes of technique could yield different results for the Universe’s expansion history. Some options include:

    our local region of the Universe has unusual properties compared to the average (which is already disfavored),
    dark energy is changing in an unexpected fashion over time,
    gravity behaves differently than we’ve anticipated on cosmic scales,
    or there is a new type of field or force permeating the Universe.

    The option of evolving dark energy is of particular interest and importance, as this is exactly what NASA’s future flagship mission for astrophysics, WFIRST, is being explicitly designed to measure.


    The viewing area of Hubble (top left) as compared to the area that WFIRST will be able to view, at the same depth, in the same amount of time. The wide-field view of WFIRST will allow us to capture a greater number of distant supernovae than ever before, and will enable us to perform deep, wide surveys of galaxies on cosmic scales never probed before. It will bring a revolution in science, regardless of what it finds. (NASA / GODDARD / WFIRST)

    Right now, we say that dark energy is consistent with a cosmological constant. What this means is that, as the Universe expands, dark energy’s density remains a constant, rather than becoming less dense (like matter does). Dark energy could also strengthen over time, or it could change in behavior: pushing space inwards or outwards by different amounts.

    Our best constraints on this today, in a pre-WFIRST world, show that dark energy is consistent with a cosmological constant to about the 10% level. With WFIRST, we’ll be able to measure any departures down to the 1% level: enough to test whether evolving dark energy holds the answer to the expanding Universe controversy. Until we have that answer, all we can do is continue to refine our best measurements, and look at the full suite of evidence for clues as to what the solution might be.
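This behavior is usually parameterized by an equation of state w: a component's energy density scales with the cosmic scale factor a as a^(−3(1+w)). Matter has w = 0 (diluting as a⁻³), radiation w = 1/3 (a⁻⁴), and a cosmological constant w = −1 (constant density). A minimal sketch:

```python
def density_ratio(a, w):
    """Energy density relative to today for a component with equation of state w."""
    return a ** (-3 * (1 + w))

# When the Universe was half its current size (a = 0.5):
for name, w in [("matter", 0.0), ("radiation", 1 / 3),
                ("Lambda (w = -1)", -1.0), ("evolving DE (w = -0.9)", -0.9)]:
    print(f"{name:24s}: density was x{density_ratio(0.5, w):.3g} today's value")
```

The 1% departure WFIRST aims to detect corresponds to w differing from −1 by a small amount, which this scaling law turns into a measurable drift in dark energy's density over cosmic time.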

    While matter (both normal and dark) and radiation become less dense as the Universe expands owing to its increasing volume, dark energy is a form of energy inherent to space itself. As new space gets created in the expanding Universe, the dark energy density remains constant. If dark energy changes over time, we could discover not only a possible solution to this conundrum concerning the expanding Universe, but a revolutionary new insight concerning the nature of existence. (E. SIEGEL / BEYOND THE GALAXY)

    This is not some fringe idea, where a few contrarian scientists are overemphasizing a small difference in the data. If both groups are correct — and no one can find a flaw in what either one has done — it might be the first clue we have in taking our next great leap in understanding the Universe. Nobel Laureate Adam Riess, perhaps the most prominent figure presently researching the cosmic distance ladder, was kind enough to record a podcast with me, discussing exactly what all of this might mean for the future of cosmology.

    It’s possible that, somewhere along the way, we have made a mistake. It’s possible that when we identify it, everything will fall into place just as it should, and there won’t be a controversy or a conundrum any longer. But it’s also possible that the mistake lies in our assumptions about the simplicity of the Universe, and that this discrepancy will pave the way to a deeper understanding of our fundamental cosmic truths.

    See the full article here.



  • richardmitnick 1:09 pm on May 8, 2019
    Tags: "What Was It Like When Life’s Complexity Exploded?", Ethan Siegel

    From Ethan Siegel: “What Was It Like When Life’s Complexity Exploded?” 

    From Ethan Siegel
    May 8, 2019

    During the Cambrian era in Earth’s history, some 550–600 million years ago, many examples of multicellular, sexually-reproducing, complex and differentiated life forms emerged for the first time. This period is known as the Cambrian explosion, and heralds an enormous leap in the complexity of organisms found on Earth. (GETTY)

    We’re a long way from the beginnings of life on Earth. Here’s the key to how we got there.

    The Universe was already two-thirds of its present age by the time the Earth formed, with life emerging on its surface shortly thereafter. But for billions of years, life remained in a relatively primitive state. It took nearly four billion years before the Cambrian explosion came, when macroscopic, multicellular, complex organisms — including animals, plants, and fungi — became the dominant lifeforms on Earth.

    As surprising as it may seem, there were really only a handful of critical developments that were necessary in order to go from single-celled, simple life to the extraordinarily diverse sets of creatures we’d recognize today. We do not know if this path is one that’s easy or hard among planets where life arises. We do not know whether complex life is common or rare. But we do know that it happened on Earth. Here’s how.

    This coastline consists of quartzite Pre-cambrian rocks, many of which may have once contained evidence of the fossilized lifeforms that gave rise to modern plants, animals, fungi, and other multicellular, sexually-reproducing creatures. These rocks have undergone intensive folding over their long and ancient history, and do not display the rich evidence for complex life that later, Cambrian-era rocks do. (GETTY)

    Once the first living organisms arose, our planet was filled with organisms harvesting energy and resources from the environment, metabolizing them to grow, adapt, reproduce, and respond to external stimuli. As the environment changed due to resource scarcity, competition, climate change and many other factors, certain traits increased the odds of survival, while other traits decreased them. Owing to the phenomenon of natural selection, the organisms most adaptable to change survived and thrived.

    Relying on random mutations alone, and passing those traits onto offspring, is extremely limiting as far as evolution goes. If mutating your genetic material and passing it onto your offspring is the only mechanism you have for evolution, you might not ever achieve complexity.

    Acidobacteria, like the example shown here, are likely some of the first photosynthetic organisms of all. They have no internal structure or membranes, loose, free-floating DNA, and are anoxygenic: they do not produce oxygen from photosynthesis. These are prokaryotic organisms that are very similar to the primitive life found on Earth some ~2.5–3 billion years ago. (US DEPARTMENT OF ENERGY / PUBLIC DOMAIN)

    But many billions of years ago, life developed the ability to engage in horizontal gene transfer, where genetic material can move from one organism to another via mechanisms other than asexual reproduction. Transformation, transduction, and conjugation are all mechanisms for horizontal gene transfer, but they all have something in common: single-celled, primitive organisms that develop a genetic sequence that’s useful for a particular purpose can transfer that sequence into other organisms, granting them the abilities that they worked so hard to evolve for themselves.

    This is the primary mechanism by which modern-day bacteria develop antibiotic resistance. If one primitive organism can develop a useful adaptation, other organisms can develop that same adaptation without having to evolve it from scratch.

    The three mechanisms by which a bacterium can acquire genetic information horizontally, rather than vertically (through reproduction), are transformation, transduction, and conjugation. (NATURE, FURUYA AND LOWY (2006) / UNIVERSITY OF LEICESTER)

    The second major evolutionary step involves the development of specialized components within a single organism. The most primitive creatures have freely-floating bits of genetic material enclosed with some protoplasm inside a cell membrane, with nothing more specialized than that. These are the prokaryotic organisms of the world: the first forms of life thought to exist.

    But more evolved creatures contain within them the ability to create miniature factories, capable of specialized functions. These mini-organs, known as organelles, herald the rise of the eukaryotes. Eukaryotes are larger than prokaryotes, have longer DNA sequences, but also have specialized components that perform their own unique functions, independent of the cell they inhabit.

    Unlike their more primitive prokaryotic counterparts, eukaryotic cells have differentiated cell organelles, with their own specialized structure and function that allows them to perform many of the cell’s life processes in a relatively independent fashion from the rest of the cell’s functioning. (CNX OPENSTAX)

    These organelles include a cell nucleus, the lysosomes, chloroplasts, golgi bodies, endoplasmic reticulum, and the mitochondria. Mitochondria themselves are incredibly interesting, because they provide a window into life’s evolutionary past.

    If you take an individual mitochondrion out of a cell, it can survive on its own. Mitochondria have their own DNA and can metabolize nutrients: they meet all of the definitions of life on their own. But they are also produced by practically all eukaryotic cells. Contained within the more complicated, more highly-evolved cells are the genetic sequences that enable them to create components of themselves that appear identical to earlier, more primitive organisms. Contained within the DNA of complex creatures is the ability to create their own versions of simpler creatures.

    Scanning electron microscope image at the sub-cellular level. While DNA is an incredibly complex, long molecule, it is made of the same building blocks (atoms) as everything else. To the best of our knowledge, the DNA structure that life is based on predates the fossil record. The longer and more complex a DNA molecule is, the more potential structures, functions, and proteins it can encode. (PUBLIC DOMAIN IMAGE BY DR. ERSKINE PALMER, USCDCP)

    In biology, structure and function is arguably the most basic relationship of all. If an organism develops the ability to perform a specific function, then it will have a genetic sequence that encodes the information for forming a structure that performs it. If you gain that genetic code in your own DNA, then you, too, can create a structure that performs the specific function in question.

    As creatures grew in complexity, they accumulated large numbers of genes that encoded for specific structures that performed a variety of functions. When you form those novel structures yourself, you gain the abilities to perform those functions that couldn’t be performed without those structures. While simpler, single-celled organisms may reproduce faster, organisms capable of performing more functions are often more adaptable, and more resilient to change.

    Mitochondria, which are some of the specialized organelles found inside eukaryotic cells, are themselves reminiscent of prokaryotic organisms. They even have their own DNA (the black dots), which clusters together at discrete focal points. With many independent components, a eukaryotic cell can thrive under a variety of conditions that its simpler, prokaryotic counterparts cannot. But there are drawbacks to increased complexity, too. (FRANCISCO J IBORRA, HIROSHI KIMURA AND PETER R COOK (BIOMED CENTRAL LTD))

    By the time the Huronian glaciation ended and Earth was once again a warm, wet world with continents and oceans, eukaryotic life was common. Prokaryotes still existed (and still do), but were no longer the most complex creatures on our world. For life’s complexity to explode, however, there were two more steps that needed to not only occur, but to occur in tandem: multicellularity and sexual reproduction.

    Multicellularity, according to the biological record left behind on planet Earth, is something that evolved numerous independent times. Early on, single-celled organisms gained the ability to make colonies, with many stitching themselves together to form microbial mats. This type of cellular cooperation enables a group of organisms, working together, to achieve a greater level of success than any of them could individually.

    Green algae, shown here, is an example of a true multicellular organism, where a single specimen is composed of multiple individual cells that all work together for the good of the organism as a whole. (FRANK FOX / MIKRO-FOTO.DE)

    Multicellularity offers an even greater advantage: the ability to have “freeloader” cells, or cells that can reap the benefits of living in a colony without having to do any of the work. In the context of unicellular organisms, freeloader cells are inherently limited, as producing too many of them will destroy the colony. But in the context of multicellularity, not only can the production of freeloader cells be turned on or off, but those cells can develop specialized structures and functions that assist the organism as a whole. The big advantage that multicellularity confers is the possibility of differentiation: having multiple types of cells working together for the optimal benefit of the entire biological system.

    Rather than having individual cells within a colony competing for the genetic edge, multicellularity enables an organism to harm or destroy various parts of itself to benefit the whole. According to mathematical biologist Eric Libby:

    “[A] cell living in a group can experience a fundamentally different environment than a cell living on its own. The environment can be so different that traits disastrous for a solitary organism, like increased rates of death, can become advantageous for cells in a group.”

    Shown are representatives of all major lineages of eukaryotic organisms, color coded for occurrence of multicellularity. Solid black circles indicate major lineages composed entirely of unicellular species. Other groups shown contain only multicellular species (solid red), some multicellular and some unicellular species (red and black circles), or some unicellular and some colonial species (yellow and black circles). Colonial species are defined as those that possess multiple cells of the same type. There is ample evidence that multicellularity evolved independently in all the lineages shown separately here. (2006 NATURE EDUCATION MODIFIED FROM KING ET AL. (2004))

    There are multiple lineages of eukaryotic organisms, with multicellularity evolving from many independent origins. Plasmodial slime molds, land plants, red algae, brown algae, animals, and many other classifications of living creatures have all evolved multicellularity at different times throughout Earth’s history. The very first multicellular organism, in fact, may have arisen as early as 2 billion years ago, with some evidence supporting the idea that an early aquatic fungus came about even earlier.

    But it wasn’t through multicellularity alone that modern animal life became possible. Eukaryotes require more time and resources to develop to maturity than prokaryotes do, and multicellular eukaryotes have an even greater timespan from generation to generation. Complex organisms face an enormous barrier: the simpler organisms they’re competing with can change and adapt more quickly.

    A fascinating class of organisms known as siphonophores is itself a collection of small animals working together to form a larger colonial organism. These lifeforms straddle the boundary between a multicellular organism and a colonial organism. (KEVIN RASKOFF, CAL STATE MONTEREY / CRISCO 1492 FROM WIKIMEDIA COMMONS)

    Evolution, in many ways, is like an arms race. The different organisms that exist are continuously competing for limited resources: space, sunlight, nutrients and more. They also attempt to destroy their competitors through direct means, such as predation. A prokaryotic bacterium with a single critical mutation can have millions of generations of chances to take down a large, long-lived complex creature.

    There’s a critical mechanism that modern plants and animals have for competing with their rapidly-reproducing single-celled counterparts: sexual reproduction. If a competitor has millions of generations to figure out how to destroy a larger, slower organism for every generation the latter has, the more rapidly-adapting organism will win. But sexual reproduction allows for offspring to be significantly different from the parent in a way that asexual reproduction cannot match.

    Sexually-reproducing organisms only deliver 50% of their DNA apiece to their children, with many random elements determining which particular 50% gets passed on. This is why offspring only have 50% of their DNA in common with their parents and with their siblings, unlike asexually-reproducing lifeforms. (PETE SOUZA / PUBLIC DOMAIN)

    To survive, an organism must correctly encode all of the proteins responsible for its functioning. A single mutation in the wrong spot can send that awry, which emphasizes how important it is to copy every nucleotide in your DNA correctly. But imperfections are inevitable, and even with the mechanisms organisms have developed for checking and error-correcting, somewhere between 1-in-10,000,000 and 1-in-10,000,000,000 of the copied base pairs will have an error.
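    To get a feel for those error rates, here’s a quick back-of-the-envelope calculation; the genome size below is an assumed, human-scale figure for illustration, not a number from the article:

```python
# Rough expected copying errors per genome replication, using the
# error-rate range quoted above (1-in-10^7 to 1-in-10^10 per base pair).
GENOME_BP = 3.2e9  # base pairs (assumption: roughly a human haploid genome)

for rate in (1e-7, 1e-10):
    expected_errors = GENOME_BP * rate
    print(f"error rate {rate:.0e}/bp -> ~{expected_errors:.2f} errors per replication")
```

    Even at the careful, error-corrected end of the range, a large genome picks up a fresh mutation every few replications; at the sloppy end, hundreds per replication.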

    For an asexually-reproducing organism, this is the only source of genetic variation from parent to child. But for sexually-reproducing organisms, 50% of each parent’s DNA will compose the child, with some ~0.1% of the total DNA varying from specimen to specimen. This randomization means that even a single-celled organism which is well-adapted to outcompeting a parent will be poorly-adapted when faced with the challenges of the child.

    In sexual reproduction, all organisms have two sets of chromosomes, with each parent contributing 50% of their DNA (one copy of each chromosome) to the child. Which 50% you get is a random process, allowing for enormous genetic variation from sibling to sibling, significantly different from either of the parents. (MAREK KULTYS / WIKIMEDIA COMMONS)
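    The chromosome shuffling described in the caption can be sketched as a toy simulation (ignoring crossing-over, and assuming a human-style count of 23 chromosome pairs): each child receives one randomly chosen copy of each chromosome from each parent, and two siblings end up matching on about half of them.

```python
import random

random.seed(0)

N_CHROMOSOMES = 23  # human-style chromosome count (illustrative assumption)

def gamete():
    """One gamete: for each chromosome, pick one of the parent's two copies."""
    return [random.randint(0, 1) for _ in range(N_CHROMOSOMES)]

def child():
    """A child is one gamete from each parent: (maternal picks, paternal picks)."""
    return (gamete(), gamete())

def shared_fraction(a, b):
    """Fraction of chromosome copies two siblings inherited identically."""
    matches = sum(x == y for ca, cb in zip(a, b) for x, y in zip(ca, cb))
    return matches / (2 * N_CHROMOSOMES)

# Averaged over many sibling pairs, the sharing approaches 50%.
fractions = [shared_fraction(child(), child()) for _ in range(10_000)]
avg = sum(fractions) / len(fractions)
print(f"average sibling sharing: {avg:.3f}")  # close to 0.5
```

    Any individual pair of siblings can share noticeably more or less than 50%; it is only the average that sits at one-half.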

    Sexual reproduction also means that organisms will have an opportunity to adapt to a changing environment in far fewer generations than their asexual counterparts. Mutations are only one mechanism for change from one generation to the next; the other is variability in which traits get passed down from parent to offspring.

    If there is a wider variety among offspring, there is a greater chance of surviving when many members of a species will be selected against. The survivors can reproduce, passing on the traits that are preferential at that moment in time. This is why plants and animals can live decades, centuries, or millennia, and can still survive the continuous onslaught of organisms that reproduce hundreds of thousands of generations per year.
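    A minimal, deterministic sketch of this idea, assuming normally distributed offspring traits and an arbitrary survival threshold (both are illustrative assumptions, not the article’s own model):

```python
from math import erf, sqrt

def survival_fraction(trait_mean, trait_sd, threshold):
    """Fraction of offspring whose trait exceeds a survival threshold,
    assuming traits are normally distributed (a toy model)."""
    z = (threshold - trait_mean) / trait_sd
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))  # 1 - Phi(z), the upper tail

# Two lineages with the same average trait but different offspring variety;
# a sudden environmental shift now demands trait > 2.0 to survive.
low_variety  = survival_fraction(trait_mean=0.0, trait_sd=0.5, threshold=2.0)
high_variety = survival_fraction(trait_mean=0.0, trait_sd=2.0, threshold=2.0)

print(f"low-variety survivors:  {low_variety:.4%}")   # ~0.003%
print(f"high-variety survivors: {high_variety:.4%}")  # ~15.9%
```

    With identical average traits, the lineage producing more varied offspring carries roughly one in six of a generation through the shift, while the uniform lineage is all but wiped out.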

    It is no doubt an oversimplification to state that horizontal gene transfer, the development of eukaryotes, multicellularity, and sexual reproduction are all it takes to go from primitive life to complex, differentiated life dominating a world. We know that this happened here on Earth, but we do not know what its likelihood was, or whether the billions of years it needed on Earth are typical or far more rapid than average.

    What we do know is that life existed on Earth for nearly four billion years before the Cambrian explosion, which heralds the rise of complex animals. The story of early life on Earth is the story of most life on Earth, with only the last 550–600 million years showcasing the world as we’re familiar with it. After a 13.2 billion year cosmic journey, we were finally ready to enter the era of complex, differentiated, and possibly intelligent life.

    The Burgess Shale fossil deposit, dating to the mid-Cambrian, is arguably the most famous and well-preserved fossil deposit on Earth dating back to such early times. At least 280 species of complex, differentiated plants and animals have been identified, signifying one of the most important epochs in Earth’s evolutionary history: the Cambrian explosion. This diorama shows a model-based reconstruction of what the living organisms of the time might have looked like in true color. (JAMES ST. JOHN / FLICKR)

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 5:07 pm on May 7, 2019
    Tags: Ethan Siegel, Our sun's future- not good

    From Ethan Siegel: “This Is What Our Sun’s Death Will Look Like, With Pictures From NASA’s Hubble” 

    From Ethan Siegel
    May 6, 2019

    The planetary nebula NGC 6369’s blue-green ring marks the location where energetic ultraviolet light has stripped electrons from oxygen atoms in the gas. Our Sun, a single star on the slowly rotating end of the stellar population, is very likely going to wind up looking akin to this nebula after perhaps another 6 or 7 billion years. (NASA AND THE HUBBLE HERITAGE TEAM (STSCI/AURA))

    NASA/ESA Hubble Telescope

    Our Sun will someday run out of fuel. Here’s what it will look like when that happens.

    The fate of our Sun is unambiguous, determined solely by its mass.

    If all else fails, we can be certain that the evolution of the Sun will be the death of all life on Earth. Long before we reach the red giant stage, stellar evolution will cause the Sun’s luminosity to increase significantly enough to boil Earth’s oceans, which will surely eradicate humanity, if not all life on Earth. (OLIVERBEATSON OF WIKIMEDIA COMMONS / PUBLIC DOMAIN)

    Too small to go supernova, it’s still massive enough to become a red giant when its core’s hydrogen is exhausted.

    As the Sun becomes a true red giant, the Earth itself may be swallowed or engulfed, but will definitely be roasted as never before. The Sun’s outer layers will swell to more than 100 times their present diameter. (WIKIMEDIA COMMONS/FSGREGS)

    As the inner regions contract and heat up, the outer portions expand, becoming tenuous and rarefied.

    Near the end of a Sun-like star’s life, it begins to blow off its outer layers into the depths of space, forming a protoplanetary nebula like the Egg Nebula, seen here. Its outer layers have not yet been heated to sufficient temperatures by the central, contracting star to create a true planetary nebula just yet. (NASA AND THE HUBBLE HERITAGE TEAM (STSCI / AURA), HUBBLE SPACE TELESCOPE / ACS)

    NASA Hubble Advanced Camera for Surveys

    The interior fusion reactions generate intense stellar winds, which gently expel the star’s outer layers.

    The Eight Burst Nebula, NGC 3132, is not well-understood in terms of its shape or formation. The different colors in this image represent gas that radiates at different temperatures. It appears to have just a single star inside, which can be seen contracting down to form a white dwarf near the center of the nebula. (THE HUBBLE HERITAGE TEAM (STSCI/AURA/NASA))

    Single stars often shed their outer layers spherically; such spherical shells account for about 20% of planetary nebulae.

    The spiral structure around the old, giant star R Sculptoris is due to winds blowing off outer layers of the star as it undergoes its AGB phase, where copious amounts of neutrons (from carbon-13 + helium-4 fusion) are produced and captured. The spiral structure is likely due to the presence of another large mass that periodically orbits the dying star: a binary companion. (ALMA (ESO/NAOJ/NRAO)/M. MAERCKER ET AL.)

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    Stars with binary companions frequently produce spirals or other asymmetrical configurations.

    When our Sun runs out of fuel, it will become a red giant, followed by a planetary nebula with a white dwarf at the center. The Cat’s Eye nebula is a visually spectacular example of this potential fate, with the intricate, layered, asymmetrical shape of this particular one suggesting a binary companion. (NASA, ESA, HEIC, AND THE HUBBLE HERITAGE TEAM (STSCI/AURA); ACKNOWLEDGMENT: R. CORRADI (ISAAC NEWTON GROUP OF TELESCOPES, SPAIN) AND Z. TSVETANOV (NASA))

    Isaac Newton Group of Telescopes located at Roque de los Muchachos Observatory on La Palma in the Canary Islands

    The Twin Jet nebula, shown here, is a stunning example of a bipolar nebula, which is thought to originate from either a rapidly rotating star, or a star that’s part of a binary system when it dies. We’re still working to understand exactly how our Sun will appear when it becomes a planetary nebula in the distant future. (ESA, HUBBLE & NASA, ACKNOWLEDGEMENT: JUDY SCHMIDT)

    The leading explanation is that many stars rotate rapidly, which generates large-scale magnetic fields.

    Known as the Rotten Egg Nebula owing to the abundance of sulfur found inside, this is a planetary nebula in the earliest stages, one expected to grow significantly over the coming centuries. The gas being expelled is moving at an incredible speed of about 1,000,000 km/hr, or about 0.1% the speed of light. (ESA/HUBBLE & NASA, ACKNOWLEDGEMENT: JUDY SCHMIDT)
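    The quoted speed is easy to verify with a one-line unit conversion, using only the figures in the caption:

```python
# Sanity check: 1,000,000 km/hr as a fraction of the speed of light.
speed_m_per_s = 1_000_000 * 1000 / 3600  # km/hr -> m/s
c = 299_792_458                          # speed of light, m/s
print(f"{speed_m_per_s:,.0f} m/s = {speed_m_per_s / c:.3%} of c")
```

    The outflow works out to roughly 278 km/s, just under 0.1% of the speed of light, as stated.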

    Those fields accelerate the loosely-held particles populating the outer stellar regions along the dying star’s poles.

    The Ant Nebula, also known as Menzel 3, is showcased in this image. The leading candidate explanation for its appearance is that the dying, central star is spinning, which winds its strong magnetic fields up into shapes that get entangled, like spaghetti twirled too long with a giant fork. The charged particles interact with those field lines, heating up, emitting radiation, and then get ejected, where they’ll disappear off into interstellar space. (NASA, ESA & THE HUBBLE HERITAGE TEAM (STSCI/AURA); ACKNOWLEDGMENT: R. SAHAI (JET PROPULSION LAB), B. BALICK (UNIVERSITY OF WASHINGTON))

    NASA’s Hubble Space Telescope delivers the most spectacular images of this natural phenomenon.

    Nitrogen, hydrogen and oxygen are highlighted in the planetary nebula above, known as the Hourglass Nebula for its distinctive shape. The assigned colors distinctly show the locations of the various elements, which are segregated from one another. (NASA/HST/WFPC2; R SAHAI AND J TRAUGER (JPL))

    NASA/Hubble WFPC2. No longer in service.

    By assigning colors to specific elemental and spectral data, scientists create spectacular visualizations of these signatures.

    The nebula, officially known as Hen 2–104, appears to have two nested hourglass-shaped structures that were sculpted by a whirling pair of stars in a binary system. The duo consists of an aging red giant star and a burned-out star, a white dwarf. This image is a composite of observations taken in various colors of light that correspond to the glowing gases in the nebula, where red is sulfur, green is hydrogen, orange is nitrogen, and blue is oxygen. (NASA, ESA, AND STSCI)

    The cold, neutral gas will be boiled off by the central white dwarf in just ~10,000 years.

    The Helix Nebula may appear to be spherical in nature, but a detailed analysis has revealed a far more complex structure. By mapping out its 3D structure, we learn that its ring-like appearance is merely an artifact of the particular orientation and time at which we view it. Nebulae such as these are short-lived, lasting for only about 10,000 years until they fade away. (NASA, ESA, C.R. O’DELL (VANDERBILT UNIVERSITY), AND M. MEIXNER, P. MCCULLOUGH, AND G. BACON ( SPACE TELESCOPE SCIENCE INSTITUTE))

    In approximately 7 billion years, our Sun’s anticipated death should proceed in exactly this manner.

    This planetary nebula may be known as the ‘Butterfly Nebula’, but in reality it’s hot, ionized luminous gas blown off in the death throes of a dying star. The outer portions are illuminated by the hot, white dwarf this dying star leaves behind. Our Sun is likely in for a similar fate at the end of its red giant, helium-burning phase. (STSCI / NASA, ESA, AND THE HUBBLE SM4 ERO TEAM)


  • richardmitnick 11:26 am on May 5, 2019
    Tags: 'Where Does A Proton’s Mass Come From?', 99.8% of the proton’s mass comes from gluons, Antiquarks, Asymptotic freedom: the particles that mediate this force are known as gluons, Ethan Siegel, The production of Higgs bosons is dominated by gluon-gluon collisions at the LHC, The strong interaction is the most powerful interaction in the entire known Universe

    From Ethan Siegel: “Ask Ethan: ‘Where Does A Proton’s Mass Come From?'” 

    From Ethan Siegel
    May 4, 2019

    The three valence quarks of a proton contribute to its spin, but so do the gluons, sea quarks and antiquarks, and orbital angular momentum as well. The electrostatic repulsion and the attractive strong nuclear force, in tandem, are what give the proton its size, and the properties of quark mixing are required to explain the suite of free and composite particles in our Universe. (APS/ALAN STONEBRAKER)

    The whole should equal the sum of its parts, but doesn’t. Here’s why.

    The whole is equal to the sum of its constituent parts. That’s how everything works, from galaxies to planets to cities to molecules to atoms. If you take all the components of any system and look at them individually, you can clearly see how they all fit together to add up to the entire system, with nothing missing and nothing left over. The total amount you have is equal to the amounts of all the different parts of it added together.

    So why isn’t that the case for the proton? It’s made of three quarks, but if you add up the quark masses, they not only don’t equal the proton’s mass, they don’t come close. This is the puzzle that Barry Duffey wants us to address, asking:

    “What’s happening inside protons? Why does [its] mass so greatly exceed the combined masses of its constituent quarks and gluons?”

    In order to find out, we have to take a deep look inside.

    The composition of the human body, by atomic number and by mass. The whole of our bodies is equal to the sum of its parts, until you get down to an extremely fundamental level. At that point, we can see that we’re actually more than the sum of our constituent components. (ED UTHMAN, M.D., VIA WEB2.AIRMAIL.NET/UTHMAN (L); WIKIMEDIA COMMONS USER ZHAOCAROL (R))

    There’s a hint that comes just from looking at your own body. If you were to divide yourself up into smaller and smaller bits, you’d find — in terms of mass — the whole was equal to the sum of its parts. Your body’s bones, fat, muscles and organs sum up to an entire human being. Breaking those down further, into cells, still allows you to add them up and recover the same mass you have today.

    Cells can be divided into organelles, organelles are composed of individual molecules, molecules are made of atoms; at each stage, the mass of the whole is no different than that of its parts. But when you break atoms into protons, neutrons and electrons, something interesting happens. At that level, there’s a tiny but noticeable discrepancy: the summed masses of the individual protons, neutrons and electrons come out right around 1% away from the mass of an entire human. The difference is real.

    From macroscopic scales down to subatomic ones, the sizes of the fundamental particles play only a small role in determining the sizes of composite structures. Whether the building blocks are truly fundamental and/or point-like particles is still not known. (MAGDALENA KOWALSKA / CERN / ISOLDE TEAM)


    Like all known organisms, human beings are carbon-based life forms. Carbon atoms are made up of six protons and six neutrons, but if you look at the mass of a carbon atom, it’s approximately 0.8% lighter than the sum of the individual component particles that make it up. The culprit here is nuclear binding energy; when you have atomic nuclei bound together, their total mass is smaller than the mass of the protons and neutrons that comprise them.

    The way carbon is formed is through the nuclear fusion of hydrogen into helium and then helium into carbon; the energy released is what powers most types of stars in both their normal and red giant phases. That “lost mass” is where the energy powering stars comes from, thanks to Einstein’s E = mc². As stars burn through their fuel, they produce more tightly-bound nuclei, releasing the energy difference as radiation.
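    The ~0.8% figure is easy to check. Using standard particle masses in atomic mass units (textbook/CODATA-style values, not numbers taken from the article), the bound carbon-12 nucleus weighs measurably less than its free parts:

```python
# Mass defect of carbon-12: the bound nucleus weighs less than its free
# protons and neutrons; the difference was released as energy via E = mc^2.
# Masses in unified atomic mass units (standard values, rounded).
m_proton   = 1.007276
m_neutron  = 1.008665
m_electron = 0.000549
m_c12_nucleus = 12.000000 - 6 * m_electron  # atomic mass minus 6 electrons

parts = 6 * m_proton + 6 * m_neutron
defect = parts - m_c12_nucleus
print(f"mass defect: {defect:.4f} u = {defect / parts:.2%} of the free parts")
```

    The defect comes out near 0.099 u, or about 0.8% of the mass of the free nucleons, matching the figure above.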

    In between the 2nd and 3rd brightest stars of the constellation Lyra, the blue giant stars Sheliak and Sulafat, the Ring Nebula shines prominently in the night skies. Throughout all phases of a star’s life, including the giant phase, nuclear fusion powers them, with the nuclei becoming more tightly bound and the energy emitted as radiation coming from the transformation of mass into energy via E = mc². (NASA, ESA, DIGITIZED SKY SURVEY 2)

    NASA/ESA Hubble Telescope

    ESO Online Digitized Sky Survey Telescopes

    Caltech Palomar Samuel Oschin 48 inch Telescope, located in San Diego County, California, United States, altitude 1,712 m (5,617 ft)

    Australian Astronomical Observatory, Siding Spring Observatory, near Coonabarabran, New South Wales, Australia, 1.2m UK Schmidt Telescope, Altitude 1,165 m (3,822 ft)

    From http://archive.eso.org/dss/dss

    This is how most types of binding energy work: the reason it’s harder to pull apart multiple things that are bound together is because they released energy when they were joined, and you have to put energy in to free them again. That’s why it’s such a puzzling fact that when you take a look at the particles that make up the proton — the up, up, and down quarks at the heart of them — their combined masses are only 0.2% of the mass of the proton as a whole. But the puzzle has a solution that’s rooted in the nature of the strong force itself.

    The way quarks bind into protons is fundamentally different from all the other forces and interactions we know of. Instead of the force getting stronger when objects get closer, like the gravitational, electric, or magnetic forces, the attractive force goes down to zero when quarks get arbitrarily close. And instead of the force getting weaker when objects get farther away, the force pulling quarks back together gets stronger the farther away they get.

    The internal structure of a proton, with quarks, gluons, and quark spin shown. The nuclear force acts like a spring, with negligible force when unstretched but large, attractive forces when stretched to large distances. (BROOKHAVEN NATIONAL LABORATORY)

    This property of the strong nuclear force is known as asymptotic freedom, and the particles that mediate this force are known as gluons. Somehow, the energy binding the proton together, responsible for the other 99.8% of the proton’s mass, comes from these gluons. The whole of matter, somehow, weighs much, much more than the sum of its parts.

    This might sound like an impossibility at first, as the gluons themselves are massless particles. But you can think of the forces they give rise to as springs: asymptoting to zero when the springs are unstretched, but becoming very large the greater the amount of stretching. In fact, the amount of energy between two quarks whose distance gets too large can become so great that it’s as though additional quark/antiquark pairs exist inside the proton: sea quarks.
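    The spring picture can be made semi-quantitative with the Cornell potential, a standard parameterization of the quark-antiquark interaction used in quarkonium models (an illustration here, not the article’s own math; the coupling and string-tension values below are typical fitted choices):

```python
# A toy "spring" model of confinement: a Coulomb-like short-range term
# plus a linearly rising confining term (the Cornell potential).
ALPHA_S = 0.4   # effective strong coupling (illustrative value)
KAPPA   = 0.9   # string tension in GeV/fm (typical fitted value)

def cornell_potential(r_fm):
    """Quark-antiquark potential energy in GeV at separation r (in fm)."""
    hbar_c = 0.1973  # GeV*fm, converts 1/r to energy units
    return -(4.0 / 3.0) * ALPHA_S * hbar_c / r_fm + KAPPA * r_fm

for r in (0.1, 0.5, 1.0, 2.0):
    print(f"r = {r:4.1f} fm -> V = {cornell_potential(r):+.2f} GeV")
```

    Stretch the “spring” to a couple of femtometers and the stored energy passes ~1.7 GeV, far more than enough to materialize a light quark/antiquark pair, which is exactly how the sea quarks described above appear.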

    When two protons collide, it isn’t just the quarks making them up that can collide, but the sea quarks, gluons, and beyond that, field interactions. All can provide insights into the spin of the individual components, and allow us to create potentially new particles if high enough energies and luminosities are reached. (CERN / CMS COLLABORATION)

    Those of you familiar with quantum field theory might have the urge to dismiss the gluons and the sea quarks as just being virtual particles: calculational tools used to arrive at the right result. But that’s not true at all, and we’ve demonstrated that with high-energy collisions between either two protons or a proton and another particle, like an electron or photon.

    The collisions performed at the Large Hadron Collider at CERN are perhaps the greatest test of all for the internal structure of the proton. When two protons collide at these ultra-high energies, most of them simply pass by one another, failing to interact. But when two internal, point-like particles collide, we can reconstruct exactly what it was that smashed together by looking at the debris that comes out.

    A Higgs boson event as seen in the Compact Muon Solenoid detector at the Large Hadron Collider. This spectacular collision is 15 orders of magnitude below the Planck energy, but it’s the precision measurements of the detector that allow us to reconstruct what happened back at (and near) the collision point. Theoretically, the Higgs gives mass to the fundamental particles; however, the proton’s mass is not due to the mass of the quarks and gluons that compose it. (CERN / CMS COLLABORATION)

    Under 10% of the collisions occur between two quarks; the overwhelming majority are gluon-gluon collisions, with quark-gluon collisions making up the remainder. Moreover, not every quark-quark collision in protons occurs between either up or down quarks; sometimes a heavier quark is involved.

    Although it might make us uncomfortable, these experiments teach us an important lesson: the particles that we use to model the internal structure of protons are real. In fact, the discovery of the Higgs boson itself was only possible because of this, as the production of Higgs bosons is dominated by gluon-gluon collisions at the LHC. If all we had were the three valence quarks to rely on, we would have seen different rates of production of the Higgs than we did.

    Before the mass of the Higgs boson was known, we could still calculate the expected production rates of Higgs bosons from proton-proton collisions at the LHC. The top channel is clearly production by gluon-gluon collisions. I (E. Siegel) have added the yellow highlighted region to indicate where the Higgs boson was discovered. (CMS COLLABORATION (DORIGO, TOMMASO FOR THE COLLABORATION) ARXIV:0910.3489)

    As always, though, there’s still plenty more to learn. We presently have a solid model of the average gluon density inside a proton, but if we want to know where the gluons are actually more likely to be located, that requires more experimental data, as well as better models to compare the data against. Recent advances by theorists Björn Schenke and Heikki Mäntysaari may be able to provide those much needed models. As Mäntysaari detailed:

    “It is very accurately known how large the average gluon density is inside a proton. What is not known is exactly where the gluons are located inside the proton. We model the gluons as located around the three [valence] quarks. Then we control the amount of fluctuations represented in the model by setting how large the gluon clouds are, and how far apart they are from each other. […] The more fluctuations we have, the more likely this process [producing a J/ψ meson] is to happen.”

    A schematic of the world’s first electron-ion collider (EIC). Adding an electron ring (red) to the Relativistic Heavy Ion Collider (RHIC) at Brookhaven would create the eRHIC: a proposed deep inelastic scattering experiment that could improve our knowledge of the internal structure of the proton significantly. (BROOKHAVEN NATIONAL LABORATORY-CAD ERHIC GROUP)

    The combination of this new theoretical model and the ever-improving LHC data will better enable scientists to understand the internal, fundamental structure of protons, neutrons and nuclei in general, and hence to understand where the mass of the known objects in the Universe comes from. From an experimental point of view, the greatest boon would be a next-generation electron-ion collider, which would enable us to perform deep inelastic scattering experiments to reveal the internal makeup of these particles as never before.

    But there’s another theoretical approach that can take us even farther into the realm of understanding where the proton’s mass comes from: Lattice QCD.

    A better understanding of the internal structure of a proton, including how the “sea” quarks and gluons are distributed, has been achieved through both experimental improvements and new theoretical developments in tandem. (BROOKHAVEN NATIONAL LABORATORY)

    The difficult part with the quantum field theory that describes the strong force — quantum chromodynamics (QCD) — is that the standard approach we take to doing calculations is no good. Typically, we’d look at the effects of particle couplings: the charged quarks exchange a gluon and that mediates the force. They could exchange gluons in a way that creates a particle-antiparticle pair or an additional gluon, and that should be a correction to a simple one-gluon exchange. They could create additional pairs or gluons, which would be higher-order corrections.

    We call this approach taking a perturbative expansion in quantum field theory, with the idea that calculating higher and higher-order contributions will give us a more accurate result.

    Today, Feynman diagrams are used in calculating every fundamental interaction spanning the strong, weak, and electromagnetic forces, including in high-energy and low-temperature/condensed conditions. But this approach, which relies on a perturbative expansion, is only of limited utility for the strong interaction, because the series diverges, rather than converges, when you add more and more loops for QCD. (DE CARVALHO, VANUILDO S. ET AL., NUCL. PHYS. B875 (2013) 738–756)

    Richard Feynman © Open University

    But this approach, which works so well for quantum electrodynamics (QED), fails spectacularly for QCD. The strong force works differently, and so these corrections get very large very quickly. Adding more terms, instead of converging towards the correct answer, diverges and takes you away from it. Fortunately, there is another way to approach the problem: non-perturbatively, using a technique called Lattice QCD.

    By treating space and time as a grid (or lattice of points) rather than a continuum, where the lattice is arbitrarily large and the spacing is arbitrarily small, you overcome this problem in a clever way. Whereas in standard, perturbative QCD, the continuous nature of space means that you lose the ability to calculate interaction strengths at small distances, the lattice approach means there’s a cutoff at the size of the lattice spacing. Quarks exist at the intersections of grid lines; gluons exist along the links connecting grid points.

    As your computing power increases, you can make the lattice spacing smaller, which improves your calculational accuracy. Over the past three decades, this technique has led to an explosion of solid predictions, including the masses of light nuclei and the reaction rates of fusion under specific temperature and energy conditions. The mass of the proton, from first principles, can now be theoretically predicted to within 2%.
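    In practice, shrinking the lattice spacing feeds into a continuum extrapolation: compute an observable at several spacings a and extrapolate to a = 0, since discretization errors typically scale as a². A sketch with synthetic data (the numbers below are invented for illustration, not real simulation results):

```python
import numpy as np

# Synthetic "proton mass" measurements at three lattice spacings, with
# an assumed pure a^2 discretization error added to the true value.
a = np.array([0.12, 0.09, 0.06])      # lattice spacings in fm (assumed)
m_measured = 0.938 + 0.8 * a**2       # fake data in GeV

# Fit m(a) = m0 + c * a^2 and read off the a -> 0 intercept.
coeffs = np.polyfit(a**2, m_measured, 1)  # returns [c, m0]
m_continuum = coeffs[1]
print(f"continuum-limit mass: {m_continuum:.3f} GeV")
```

    Because the fake data are built with a pure a² error, the linear fit in a² recovers the continuum value exactly; real lattice results carry statistical errors and higher-order terms, which is where the ~2% uncertainty comes from.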

    As computational power and Lattice QCD techniques have improved over time, so has the accuracy to which various quantities about the proton, such as its component spin contributions, can be computed. By reducing the lattice spacing size, which can be done simply by raising the computational power employed, we can better predict the mass of not only the proton, but of all the baryons and mesons. (LABORATOIRE DE PHYSIQUE DE CLERMONT / ETM COLLABORATION)

    It’s true that the individual quarks, whose masses are determined by their coupling to the Higgs boson, cannot even account for 1% of the mass of the proton. Rather, it’s the strong force, described by the interactions between quarks and the gluons that mediate them, that is responsible for practically all of it.

    The strong nuclear force is the most powerful interaction in the entire known Universe. When you go inside a particle like the proton, it’s so powerful that it — not the mass of the proton’s constituent particles — is primarily responsible for the total energy (and therefore mass) of the normal matter in our Universe. Quarks may be point-like, but the proton is huge by comparison: about 8.4 × 10^-16 m in radius. Confining its component particles, which the binding energy of the strong force does, is what’s responsible for 99.8% of the proton’s mass.


  • richardmitnick 12:52 pm on May 4, 2019 Permalink | Reply
    Tags: "At Last, Scientists Have Found The Galaxy’s Missing Exoplanets: Cold Gas Giants", Ethan Siegel

    From Ethan Siegel: “At Last, Scientists Have Found The Galaxy’s Missing Exoplanets: Cold Gas Giants” 

    From Ethan Siegel
    Apr 30, 2019

    There are four known exoplanets orbiting the star HR 8799, all of which are more massive than the planet Jupiter. These planets were all detected by direct imaging taken over a period of seven years, with the periods of these worlds ranging from decades to centuries. (JASON WANG / CHRISTIAN MAROIS)

    Our outer Solar System, from Jupiter to Neptune, isn’t unique after all.

    In the early 1990s, scientists began detecting the first planets orbiting stars other than the Sun: exoplanets. The easiest ones to see had the largest masses and the shortest orbits, as those are the planets with the greatest observable effects on their parent stars. The second type of planet found was at the other extreme: massive enough to emit its own infrared light, but so distant from its star that it could be independently resolved by a powerful enough telescope.

    Today, there are over 4,000 known exoplanets, but the overwhelming majority either orbit very close to or very far from their parent star. At long last, however, a team of scientists has discovered a bevy of those missing worlds [Astronomy and Astrophysics]: at the same distance our own Solar System’s gas giants orbit. Here’s how they did it.

    In our own Solar System, the planets Jupiter and Saturn produce the greatest gravitational influence on the Sun, which will lead to our parent star moving relative to the Solar System’s center-of-mass by a substantial amount over the timescales it takes those giant planets to orbit. This motion results in a periodic redshift and blueshift that should be detectable over long enough observational timescales. (NASA’S THE SPACE PLACE)

    When you look at a star, you’re not simply seeing the light it emits from one constant, point-like surface. Instead, there’s a lot of physics going on inside that contributes to what you see.

    the star itself isn’t a solid surface, but emits the light you see from many layers extending down hundreds or even thousands of kilometers,
    the star itself rotates, meaning one side moves towards you and the other away from you,
    the star has planets that move around it, occasionally blocking a portion of its light,
    the orbiting planets also gravitationally tug on the star, causing it to periodically “wobble” in time with the planet orbiting it,
    and the star moves throughout the galaxy, changing its motion relative to us.

    All of these, in some way, matter for detecting planets around a star.

    At the photosphere, we can observe the properties, elements, and spectral features present at the outermost layers of the Sun. The top of the photosphere is about 4400 K, while the bottom, 500 km down, is more like 6000 K. The solar spectrum is a sum of all of these blackbodies, and every star we know of has similar properties to their photospheres. (NASA’S SOLAR DYNAMICS OBSERVATORY / GSFC)


    That first point, which might seem the least important, is actually vital to the way we detect and confirm exoplanets. Our Sun, like all stars, is hotter towards the core and cooler towards the limb. At the hottest temperatures, all the atoms inside the star are fully ionized, but as you move to the outer, cooler portions, electrons remain in bound states.

    With the energy relentlessly coming from its environment, these electrons can move to different orbitals, absorbing a portion of the star’s energy. When they do, they leave a characteristic signature in the star’s light spectrum: an absorption feature. When we look at the absorption lines of stars, they can tell us what elements they’re made of, what temperature they’re emitting at, and how quickly they’re moving, both rotationally and with respect to our motion.

    The solar spectrum shows a significant number of features, each corresponding to absorption properties of a unique element in the periodic table or a molecule or ion with electrons bound to it. Absorption features are redshifted or blueshifted if the object moves towards or away from us. (NIGEL A. SHARP, NOAO/NSO/KITT PEAK FTS/AURA/NSF)

    Kitt Peak National Observatory, in the Quinlan Mountains of the Arizona-Sonoran Desert on the Tohono O’odham Nation, 88 kilometers (55 mi) west-southwest of Tucson, Arizona, at an altitude of 2,096 m (6,877 ft)

    The more accurately you can measure the wavelength of a particular absorption feature, the more accurately you can determine the star’s velocity relative to your line-of-sight. If the star you’re observing moves towards you, that light gets shifted towards shorter wavelengths: a blueshift. Similarly, if the star you’re monitoring is moving away from you, that light will be shifted towards longer wavelengths: a redshift.

    This is simply the Doppler shift, which occurs for all waves. Whenever there’s relative motion between the source and the observer, the waves received will either be stretched towards longer or shorter wavelengths compared to what was emitted. This is true for sound waves when the ice cream truck goes by, and it’s equally true for light waves when we observe another star.
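    For the small velocities of a stellar wobble, the Doppler shift reduces to a simple proportionality: velocity ≈ c × (fractional wavelength shift). A minimal sketch (the hydrogen-alpha wavelength and the 50 m/s wobble are illustrative values):

```python
# Non-relativistic Doppler shift: v ≈ c * (λ_observed - λ_emitted) / λ_emitted.
# A positive result means the star is receding (redshift); negative, approaching.
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(lambda_emitted_nm, lambda_observed_nm):
    """Line-of-sight velocity (m/s) inferred from a shifted absorption line."""
    return C * (lambda_observed_nm - lambda_emitted_nm) / lambda_emitted_nm

# Example: a ~50 m/s stellar wobble shifts the hydrogen-alpha line at
# 656.281 nm by only ~0.0001 nm, which is why precise, stable
# spectrographs are essential for this technique.
shift_nm = 656.281 * 50.0 / C
print(radial_velocity(656.281, 656.281 + shift_nm))  # ~50.0 m/s by construction
```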

    A light-emitting object moving relative to an observer will have the light that it emits appear shifted dependent on the location of an observer. Someone on the left will see the source moving away from it, and hence the light will be redshifted; someone to the right of the source will see it blueshifted, or shifted to higher frequencies, as the source moves towards it. (WIKIMEDIA COMMONS USER TXALIEN)

    When the first detection of exoplanets around stars was announced, it came from an extraordinary application of this property of matter and light. If you had an isolated star that moved through space, the wavelength of these absorption lines would only change over long periods of time: as the star we were watching moved relative to our Sun in the galaxy.

    But if the star weren’t isolated, but rather had planets orbiting it, those planets would cause the star to wobble in its orbit. As the planet moved in an ellipse around the star, the star would similarly move in a (much smaller) ellipse in time with the planet: keeping their mutual center-of-mass in the same place.

    The radial velocity (or stellar wobble) method for finding exoplanets relies on measuring the motion of the parent star, as caused by the gravitational influence of its orbiting planets. Even though the planet itself may not be visible directly, their unmistakable influence on the star leaves a measurable signal behind in the periodic relative redshift and blueshift of the photons coming from it. (ESO)

    In a system with multiple planets, these patterns would simply superimpose themselves atop one another; there would be a separate signal for every planet you could identify. The strongest signals would come from the most massive planets, and the fastest signals — from the planets orbiting most closely to their stars — would be the easiest to identify.

    These are the properties that the very first exoplanets had: the so-called “hot Jupiters” of the galaxy. They were the easiest to find because, with very large masses, they could change the motion of their stars by hundreds or even thousands of meters-per-second. Similarly, with short periods and close orbital distances, many cycles of sinusoidal motion could be revealed with only a few weeks or months of observations. Massive, inner worlds are the easiest to find.
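    Why hot Jupiters were easiest can be made concrete. A back-of-the-envelope sketch of my own, assuming circular orbits, an edge-on view, and rough textbook constants: momentum balance about the center of mass gives the star a reflex speed of v_planet × (m_planet / M_star).

```python
import math

# Circular-orbit sketch: the star's reflex ("wobble") speed follows from
# momentum balance about the centre of mass. All values are illustrative.
G = 6.674e-11     # gravitational constant, SI units
M_SUN = 1.989e30  # kg
M_JUP = 1.898e27  # kg
AU = 1.496e11     # m

def stellar_wobble(m_planet_kg, a_m, m_star_kg=M_SUN):
    """Star's reflex speed (m/s) for a circular planetary orbit of radius a."""
    v_planet = math.sqrt(G * m_star_kg / a_m)       # planet's orbital speed
    return v_planet * m_planet_kg / m_star_kg       # momentum balance

hot_jupiter = stellar_wobble(M_JUP, 0.05 * AU)   # roughly 127 m/s: easy to see
cold_jupiter = stellar_wobble(M_JUP, 5.2 * AU)   # roughly 12.5 m/s: much harder
print(f"{hot_jupiter:.0f} m/s vs {cold_jupiter:.1f} m/s")
```

    The close-in giant both moves faster and completes many orbits in a short observing campaign, which is exactly the bias described above.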

    A composite image of the first exoplanet ever directly imaged (red) and its brown dwarf parent star, as seen in the infrared. A true star would be much physically larger and higher in mass than the brown dwarf shown here, but the large physical separation, which corresponds to a large angular separation at distances of under a few hundred light years, means that the world’s greatest current observatories make imaging like this possible. (EUROPEAN SOUTHERN OBSERVATORY (ESO))

    On the complete opposite end of the spectrum, some planets that are equal to or greater than Jupiter’s mass are extremely well-separated from their star: more distant than even Neptune is from the Sun. When you encounter a system such as this, the massive planet is so hot in its core that it can emit more infrared radiation than it reflects from the star it orbits.

    With a large enough separation, telescopes like Hubble can resolve both the main star and its large planetary companion. These two locations — the inner solar system and the extreme outer solar system — were the only places where we had found planets up until the explosion of exoplanets brought about by NASA’s Kepler spacecraft.

    NASA/Kepler Telescope and K2 mission, March 7, 2009 until November 15, 2018

    Until then, it was only high-mass planets, and only in the places where they aren’t found in our own Solar System.

    Today, we know of over 4,000 confirmed exoplanets, with more than 2,500 of those found in the Kepler data. These planets range in size from larger than Jupiter to smaller than Earth. Yet because of the limitations on the size of Kepler and the duration of the mission, the majority of planets are very hot and close to their star, at small angular separations. TESS has the same issue with the first planets it’s discovering: they’re preferentially hot and in close orbits. Only through dedicated, long-period observations (or direct imaging) will we be able to detect planets with longer period (i.e., multi-year) orbits. (NASA/AMES RESEARCH CENTER/JESSIE DOTSON AND WENDY STENZEL; MISSING EARTH-LIKE WORLDS BY E. SIEGEL)

    NASA/MIT TESS replaced Kepler in search for exoplanets

    Kepler brought about a revolution because it used an entirely different method: the transit method.

    Planet transit. NASA/Ames

    When a planet passes in front of its parent star, relative to our line-of-sight, it blocks a tiny portion of the star’s light, revealing its presence to us. When the same planet transits its star multiple times, we can learn properties like its radius, orbital period, and the orbital distance from its star.

    But this was limited, too. While it was capable of revealing very low-mass planets compared to the earlier (stellar wobble/radial velocity) method, the primary mission only lasted for three years. This meant that any planet that took longer than about a year to orbit its star couldn’t be seen by Kepler. Ditto for any planet that didn’t happen to block its star’s light from our perspective, which you’re less likely to get the farther away from the star you look.

    The intermediate distance planets, at the distance of Jupiter and beyond, were still elusive.
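    Both transit-method limitations can be put in rough numbers. A sketch under simple assumptions (illustrative solar-system values; a randomly oriented circular orbit crosses our line of sight with probability of roughly R_star / a):

```python
# Two transit-method facts, sketched numerically with illustrative values:
# 1) transit depth = (R_planet / R_star)^2: the fraction of starlight blocked
# 2) a randomly oriented circular orbit transits with probability ~ R_star / a
R_SUN = 6.957e8     # m
R_JUP = 7.149e7     # m
R_EARTH = 6.371e6   # m
AU = 1.496e11       # m

depth_jupiter = (R_JUP / R_SUN) ** 2    # ~1.1% dip in starlight
depth_earth = (R_EARTH / R_SUN) ** 2    # ~0.008%: far harder to measure

p_hot = R_SUN / (0.05 * AU)   # ~9% transit chance for a 0.05 AU orbit
p_jup = R_SUN / (5.2 * AU)    # ~0.09% at Jupiter's distance: 100x less likely

print(f"{depth_jupiter:.2%}, {depth_earth:.4%}, {p_hot:.1%}, {p_jup:.3%}")
```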

    The planets of the Solar System are difficult to detect using present technology. Inner planets that are aligned with the observer’s line-of-sight must be large and massive enough to produce an observable effect, while outer worlds require long-period monitoring to reveal their presence. Even then, they need enough mass so that the stellar wobble technique is effective enough to reveal them. (SPACE TELESCOPE SCIENCE INSTITUTE, GRAPHICS DEPT.)

    That’s where a dedicated, long-period study of stars comes in to fill that gap. A large team of scientists, led by Emily Rickman, conducted an enormous survey using the CORALIE spectrograph at La Silla observatory.

    ESO Swiss 1.2 meter Leonhard Euler Telescope at La Silla, using the CORALIE spectrograph

    They measured the light coming from a large number of stars within about 170 light-years on a nearly continuous basis, beginning in 1998.

    By using the same instrument and leaving virtually no long-term gaps in the data, long-term, precise Doppler measurements finally became possible. A total of five brand new planets, one confirmation of a suggested planet, and three updated planets were announced in this latest study, bringing the total number of Jupiter-or-larger planets beyond the Jupiter-Sun distance up to 26. It shows us what we’d always hoped for: that our Solar System isn’t so unusual in the Universe; it’s just difficult to observe and detect planets like the ones we have.

    While close-in planets are typically discoverable with stellar wobble or transit method observations, and extreme outer planets can be found with direct imaging, these in-between worlds require long-period monitoring that’s just beginning now. These newly-discovered worlds, down the line, may become excellent candidates for direct imaging as well. (E. L. RICKMAN ET AL., A&A ACCEPTED (2019), ARXIV:1904.01573)

    Even with these latest results, however, we still aren’t sensitive to the worlds we actually have in our Solar System. While the periods of these new worlds range from 15 to 40 years, even the smallest one is nearly three times as massive as Jupiter. Until we develop more sensitive measurement capabilities and make those observations over decadal timescales, real-life Jupiters, Saturns, Uranuses and Neptunes will remain undetected.

    Our view of the Universe will always be incomplete, as the techniques we develop will always be inherently biased to favor detections in one type of system. But the irreplaceable asset that will open up more of the Universe to us isn’t technique-based at all; it’s simply an increase in observing time. With longer and more sensitive observations of stars, closely tracking their motions, we can reveal lower-mass planets and worlds at greater distances.

    This is true of both the stellar wobble/radial velocity method and also the transit method, which hopefully will reveal even smaller-mass worlds with longer periods. There is still so much to learn about the Universe, but every step we take brings us closer to understanding the ultimate truths about reality. Although we might have worried that our Solar System was in some way unusual, we now know one more way we’re not. Having gas giant worlds in the outer solar system may pose a challenge for detections, but those worlds are out there and relatively common. Perhaps, then, so are solar systems like our own.

    See the full article here.



  • richardmitnick 2:04 pm on April 25, 2019 Permalink | Reply
    Tags: "Quarks Don’t Actually Have Colors", Ethan Siegel

    From Ethan Siegel: “Quarks Don’t Actually Have Colors” 

    From Ethan Siegel
    Apr 25, 2019

    A visualization of QCD illustrates how particle/antiparticle pairs pop out of the quantum vacuum for very small amounts of time as a consequence of Heisenberg uncertainty. Note that the quarks and antiquarks themselves come with specific color assignments that are always on opposite sides of the color wheel from one another. In the rules of the strong interaction, only colorless combinations are permitted in nature. (DEREK B. LEINWEBER)

    Red, green, and blue? What we call ‘color charge’ is far more interesting than that.

    At a fundamental level, reality is determined by only two properties of our Universe: the quanta that make up everything that exists and the interactions that take place between them. While the rules that govern all of this might appear complicated, the concept is extremely straightforward. The Universe is made up of discrete bits of energy that are bound up into quantum particles with specific properties, and those particles interact with one another according to the laws of physics that underlie our reality.

    Some of these quantum properties govern whether and how a particle will interact under a certain force. Everything has energy, and therefore everything experiences gravity. Only the particles with the right kinds of charges experience the other forces, however, as those charges are necessary for couplings to occur. In the case of the strong nuclear force, particles need a color charge to interact. Only, quarks don’t actually have colors. Here’s what’s going on instead.

    The particles and antiparticles of the Standard Model are predicted to exist as a consequence of the laws of physics. Although we depict quarks, antiquarks and gluons as having colors or anticolors, this is only an analogy. The actual science is even more fascinating. (E. SIEGEL / BEYOND THE GALAXY)

    While we might not understand everything about this reality, we have uncovered all the particles of the Standard Model and the nature of the four fundamental forces — gravity, electromagnetism, the weak nuclear force, and the strong nuclear force — that govern their interactions. But not every particle experiences every interaction; you need the right type of charge for that.

    Of the four fundamental forces, every particle has an energy inherent to it, even massless particles like photons. So long as you have energy, you experience the gravitational force. Moreover, there’s only one type of gravitational charge: positive energy (or mass). For this reason, the gravitational force is always attractive, and occurs between everything that exists in the Universe.

    An animated look at how spacetime responds as a mass moves through it helps showcase exactly how, qualitatively, it isn’t merely a sheet of fabric. Instead, all of space itself gets curved by the presence and properties of the matter and energy within the Universe. Note that the gravitational force is always attractive, as there is only one (positive) type of mass/energy. (LUCASVB)

    Electromagnetism is a little more complicated. Instead of one type of fundamental charge, there are two: positive and negative electric charges. When like charges (positive and positive or negative and negative) interact, they repel, while when opposite charges (positive and negative) interact, they attract.

    This offers an exciting possibility that gravity doesn’t: the ability to have a bound state that doesn’t exert a net force on an external, separately-charged object. When equal amounts of positive and negative charges bind together into a single system, you get a neutral object: one with no net charge to it. Free charges exert attractive and/or repulsive forces, but uncharged systems do not. That’s the biggest difference between gravitation and electromagnetism: the ability to have neutral systems composed of non-zero electric charges.

    Newton’s law of universal gravitation (L) and Coulomb’s law for electrostatics (R) have almost identical forms, but the fundamental difference of one type vs. two types of charge open up a world of new possibilities for electromagnetism. (DENNIS NILSSON / RJB1 / E. SIEGEL)

    If we were to envision these two forces side-by-side, you might think of electromagnetism as having two directions, while gravitation only has a single direction. Electric charges can be positive or negative, and the various combinations of positive-positive, positive-negative, negative-positive, and negative-negative allow for both attraction and repulsion. Gravitation, on the other hand, only has one type of charge, and therefore only one type of force: attraction.
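    Because Newton's and Coulomb's laws share the same 1/r² form, the relative strength of the two forces between a given pair of particles is independent of distance. An illustrative comparison, using rough standard constants:

```python
# Illustrative comparison with rough standard constants: since both laws fall
# off as 1/r^2, the ratio of electric to gravitational force between two
# charged particles is the same at every separation.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
K_E = 8.988e9    # Coulomb constant, N m^2 C^-2
M_E, M_P = 9.109e-31, 1.673e-27   # electron and proton masses, kg
Q_E = 1.602e-19                   # elementary charge, C

ratio = (K_E * Q_E**2) / (G * M_E * M_P)
print(f"F_electric / F_gravity ≈ {ratio:.1e}")   # ~2e39: why atoms ignore gravity
```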

    Even though there are two types of electric charge, it only takes one particle to take care of the attractive and repulsive action of electromagnetism: the photon. The electromagnetic force has a relatively simple structure — two charges, where like ones repel and opposites attract — and a single particle, the photon, can account for both electric and magnetic effects. In theory, a single particle, the graviton, could do the same thing for gravitation.

    Today, Feynman diagrams are used in calculating every fundamental interaction spanning the strong, weak, and electromagnetic forces, including in high-energy and low-temperature/condensed conditions. The electromagnetic interactions, shown here, are all governed by a single force-carrying particle: the photon. (DE CARVALHO, VANUILDO S. ET AL. NUCL.PHYS. B875 (2013) 738–756)

    But then, on an entirely different footing, there’s the strong force. It’s similar to both gravity and electromagnetism, in the sense that there is a new type of charge and new possibilities for a force associated with it.

    If you think about an atomic nucleus, you immediately run into a puzzle: there must be an additional force stronger than the electric force, otherwise the nucleus, made of protons and neutrons, would fly apart due to electric repulsion. The creatively-named strong nuclear force is the responsible party, as the constituents of protons and neutrons, quarks, have both electric charges and a new type of charge: color charge.

    The red-green-blue color analogy, similar to the dynamics of QCD, is how certain phenomena within and beyond the Standard Model are often conceptualized. The analogy is often taken even further than the concept of color charge, such as via the extension known as technicolor. (WIKIPEDIA USER BB3CXV)

    Contrary to what you might expect, though, there’s no color involved at all. The reason we call it color charge is because instead of one fundamental, attractive type of charge (like gravity), or two opposite types of fundamental charge (positive and negative, like electromagnetism), the strong force is governed by three fundamental types of charge, and they obey very different rules than the other, more familiar forces.

    For electric charges, a positive charge can be cancelled out by an equal and opposite charge — a negative charge — of the same magnitude. But for color charges, you have three fundamental types of charge. In order to cancel out a single color charge of one type, you need one of each of the second and third types. Equal numbers of all three types add up to what we call a “colorless” state, and only colorless combinations can form stable composite particles.

    Quarks and antiquarks, which interact with the strong nuclear force, have color charges that correspond to red, green and blue (for the quarks) and cyan, magenta and yellow (for the antiquarks). Any colorless combination, of either red + green + blue, cyan + yellow + magenta, or the appropriate color/anticolor combination, is permitted under the rules of the strong force. (ATHABASCA UNIVERSITY / WIKIMEDIA COMMONS)

    This works independently for quarks, which have a positive color charge, and antiquarks, which have a negative color charge. If you picture a color wheel, you might put red, green and blue at three equidistant locations, like an equilateral triangle. But between red and green would be yellow; between green and blue would be cyan; between red and blue would be magenta.

    These in-between color charges correspond to the colors of the antiparticles: the anticolors. Cyan is the same as anti-red; magenta is the same as anti-green; yellow is the same as anti-blue. Just as you could add up three quarks with red, green and blue colors to make a colorless combination (like a proton), you could add up three antiquarks with cyan, magenta and yellow colors to make a colorless combination (like an antiproton).

    Combinations of three quarks (RGB) or three antiquarks (CMY) are colorless, as are appropriate combinations of quarks and antiquarks. The gluon exchanges that keep these entities stable are quite complicated. (MASCHEN / WIKIMEDIA COMMONS)

    If you know anything about color, you might start thinking of other ways to generate a colorless combination. If three different colors or three different anticolors could work, maybe the right color-anticolor combination could get you there?

    In fact, it can. You could mix together the right combination of a quark and an antiquark to produce a colorless composite particle, known as a meson. This works, because:

    red and cyan,
    green and magenta,
    and blue and yellow

    are all colorless combinations. So long as you add up to a colorless net charge, the rules of the strong force permit you to exist.
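    These rules can be captured in a toy model, a mnemonic of my own rather than real SU(3) group theory: place the three color charges at 120-degree intervals on a "color wheel" in the complex plane, with each anticolor directly opposite (the negative of) its color. A combination is "colorless" exactly when its charges sum to zero.

```python
import cmath
import math

# Toy model (a mnemonic, not actual SU(3) group theory): colors sit at
# 120-degree intervals on the unit circle; each anticolor is the negative
# of its color. "Colorless" means the charges sum to zero.
RED   = cmath.exp(0j)
GREEN = cmath.exp(2j * math.pi / 3)
BLUE  = cmath.exp(4j * math.pi / 3)
CYAN, MAGENTA, YELLOW = -RED, -GREEN, -BLUE  # anti-red, anti-green, anti-blue

def colorless(*charges):
    """True if the given color charges cancel (sum to zero)."""
    return abs(sum(charges)) < 1e-9

print(colorless(RED, GREEN, BLUE))        # baryon, e.g. a proton: True
print(colorless(CYAN, MAGENTA, YELLOW))   # antibaryon: True
print(colorless(RED, CYAN))               # meson (color + its anticolor): True
print(colorless(RED, GREEN))              # two quarks alone: False
print(colorless(RED, GREEN, BLUE, RED, CYAN))  # pentaquark (4q + 1aq): True
```

    Note that GREEN + BLUE equals CYAN in this picture, which is exactly the "green + blue is the same as cyan" observation made below.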

    The combination of a quark (RGB) and a corresponding antiquark (CMY) always ensure that the meson is colorless. (ARMY1987 / TIMOTHYRIAS OF WIKIMEDIA COMMONS)

    This might start your mind down some interesting paths. If red + green + blue is a colorless combination, but red + cyan is colorless too, does that mean that green + blue is the same as cyan?

    That’s absolutely right. It means that you can have a single (colored) quark paired with any of the following:

    two additional quarks,
    one antiquark,
    three additional quarks and one antiquark,
    one additional quark and two antiquarks,
    five additional quarks,

    or any other combination that leads to a colorless total. When you hear about exotic particles like tetraquarks (two quarks and two antiquarks) or pentaquarks (four quarks and one antiquark), know that they obey these rules.

    With six quarks and six antiquarks to choose from, where their spins can sum to 1/2, 3/2 or 5/2, there are expected to be more pentaquark possibilities than all baryon and meson possibilities combined. The only rule, under the strong force, is that all such combinations must be colorless. (CERN / LHC / LHCb COLLABORATION)

    CERN/LHCb detector

    But color is only an analogy, and that analogy will actually break down pretty quickly if you start looking at it in too much detail. For example, the way the strong force works is by exchanging gluons, which carry a color-anticolor combination with them. If you are a blue quark and you emit a gluon, you might transform into a red quark, which means the gluon you emitted contained a cyan (anti-red) and a blue color charge, enabling you to conserve color.

    You might think, then, with three colors and three anticolors, that there would be nine possible types of gluon that you could have. After all, if you matched each of red, green and blue with each of cyan, magenta and yellow, there are nine possible combinations. This is a good first guess, and it’s almost right.

    The strong force, operating as it does because of the existence of ‘color charge’ and the exchange of gluons, is responsible for the force that holds atomic nuclei together. A gluon must consist of a color/anticolor combination in order for the strong force to behave as it must, and does. (WIKIMEDIA COMMONS USER QASHQAIILOVE)

    As it turns out, though, there are only eight gluons that exist. Imagine you’re a red quark, and you emit a red/magenta gluon. You’re going to turn the red quark into a green quark, because that’s how you conserve color. That gluon will then find a green quark, where the magenta will annihilate with the green and leave the red color behind. In this fashion, colors get exchanged between interacting colored particles.

    This line of thinking is only good for six of the gluons, though:

    red/magenta,
    red/yellow,
    green/cyan,
    green/yellow,
    blue/cyan, and
    blue/magenta.

    When you run into the other three possibilities — red/cyan, green/magenta, and blue/yellow — there’s a problem: they’re all colorless.


    When you have three color/anticolor combinations that are possible and colorless, they will mix together, producing two ‘real’ gluons that are asymmetric between the various color/anticolor combinations, and one that’s completely symmetric. Only the two antisymmetric combinations result in real particles. (E. SIEGEL)

    In physics, whenever you have particles that have the same quantum numbers, they mix together. These three types of gluons, all being colorless, absolutely do mix together. The details of how they mix are quite deep and go beyond the scope of a non-technical article, but you wind up with two combinations that are an unequal mix of the three different colors and anticolors, along with one combination that’s a mix of all the colors/anticolor pairs equally.

    That last one is truly colorless, and cannot physically interact with any of the particles or antiparticles with color charges. Therefore, there are only eight physical gluons. The exchanges of gluons between quarks (and/or antiquarks), and of colorless particles between other colorless particles, are literally what bind atomic nuclei together.
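    The "nine minus one" count can be sketched with a color-wheel mnemonic of my own (not actual SU(3) representation theory): pair each color with each anticolor, flag the pairs whose net charge vanishes, and note that those three colorless pairs mix into two physical gluons plus one non-interacting singlet.

```python
import cmath
import math

# Counting sketch (a mnemonic, not SU(3) representation theory): colors at
# 120-degree intervals on the unit circle, anticolors as their negatives.
colors = {"red": cmath.exp(0j),
          "green": cmath.exp(2j * math.pi / 3),
          "blue": cmath.exp(4j * math.pi / 3)}
anticolors = {"cyan": -colors["red"],       # anti-red
              "magenta": -colors["green"],  # anti-green
              "yellow": -colors["blue"]}    # anti-blue

# All nine naive color/anticolor pairings...
pairs = [(c, a) for c in colors for a in anticolors]
# ...of which exactly three are colorless (net charge zero):
colorless = [(c, a) for c, a in pairs
             if abs(colors[c] + anticolors[a]) < 1e-9]

print(len(pairs))   # 9 naive combinations
print(colorless)    # [('red', 'cyan'), ('green', 'magenta'), ('blue', 'yellow')]
# The three colorless pairs mix into 2 physical gluons + 1 sterile singlet:
print(len(pairs) - len(colorless) + 2)   # 8 physical gluons
```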


    Individual protons and neutrons may be colorless entities, but there is still a residual strong force between them. All the known matter in the Universe can be divided into atoms, which can be divided into nuclei and electrons, where nuclei can be divided even further. We may not have even yet reached the limit of division, or the ability to cut a particle into multiple components, but what we call color charge, or charge under the strong interactions, appears to be a fundamental property of quarks, antiquarks and gluons. (WIKIMEDIA COMMONS USER MANISHEARTH)

    We may call it color charge, but the strong nuclear force obeys rules that are unique among all the phenomena in the Universe. While we ascribe colors to quarks, anticolors to antiquarks, and color-anticolor combinations to gluons, it’s only a limited analogy. In truth, none of the particles or antiparticles have a color at all, but merely obey the rules of an interaction that has three fundamental types of charge, and only combinations that have no net charge under this system are allowed to exist in nature.

    This intricate interaction is the only known force that can overcome the electromagnetic force and keep two particles of like electric charge bound together into a single, stable structure: the atomic nucleus. Quarks don’t actually have colors, but they do have charges as governed by the strong interaction. Only with these unique properties can the building blocks of matter combine to produce the Universe we inhabit today.

    See the full article here.



  • richardmitnick 8:33 am on April 25, 2019 Permalink | Reply
    Tags: "Could An Incompleteness In Quantum Mechanics Lead To Our Next Scientific Revolution?", Ethan Siegel

    From Ethan Siegel: “Could An Incompleteness In Quantum Mechanics Lead To Our Next Scientific Revolution?” 

    From Ethan Siegel
    Apr 24, 2019

    The proton’s structure, modeled along with its attendant fields, show how even though it’s made out of point-like quarks and gluons, it has a finite, substantial size which arises from the interplay of the quantum forces and fields inside it. The proton, itself, is a composite, not fundamental, quantum particle. (BROOKHAVEN NATIONAL LABORATORY)

    A single thought experiment reveals a paradox. Could quantum gravity be the solution?

    Sometimes, if you want to understand how nature truly works, you need to break things down to the simplest levels imaginable. The macroscopic world is composed of particles that are — if you divide them until they can be divided no more — fundamental. They experience forces that are determined by the exchange of additional particles (or the curvature of spacetime, for gravity), and react to the presence of objects around them.

    At least, that’s how it seems. The closer two objects are, the greater the forces they exert on one another. If they’re too far away, the forces drop off to zero, just like your intuition tells you they should. This is called the principle of locality, and it holds true in almost every instance. But in quantum mechanics, it’s violated all the time. Locality may be nothing but a persistent illusion, and seeing through that facade may be just what physics needs.

    Quantum gravity tries to combine Einstein’s general theory of relativity with quantum mechanics. Quantum corrections to classical gravity are visualized as loop diagrams, as the one shown here in white. We typically view objects that are close to one another as capable of exerting forces on one another, but that might be an illusion, too. (SLAC NATIONAL ACCELERATOR LAB)

    Imagine that you had two objects located in close proximity to one another. They would attract or repel one another based on their charges and the distance between them. You might visualize this as one object generating a field that affects the other, or as two objects exchanging particles that impart either a push or a pull to one or both of them.

    You’d expect, of course, that there would be a speed limit to this interaction: the speed of light. Relativity gives you no other way out, since the particles responsible for the forces must themselves travel through space, and no particle in the Universe can ever exceed the speed of light. It seems so straightforward, and yet the Universe is full of surprises.

    An example of a light cone, the three-dimensional surface of all possible light rays arriving at and departing from a point in spacetime. The more you move through space, the less you move through time, and vice versa. Only things contained within your past light-cone can affect you today; only things contained within your future light-cone can be perceived by you in the future. (WIKIMEDIA COMMONS USER MISSMJ)

    We have this notion of cause-and-effect that’s been hard-wired into us by our experience with reality. Physicists call this causality, and it’s one of the rare physics ideas that actually conforms to our intuition. Every observer in the Universe, from its own perspective, has a set of events that exist in its past and in its future.

    In relativity, these are events contained within either your past light-cone (for events that can causally affect you) or your future light-cone (for events that you can causally affect). Events that can be seen, perceived, or can otherwise have an effect on an observer are known as causally connected. Signals and physical effects, both from the past and into the future, can propagate at up to the speed of light, but no faster. At least, that’s what your intuitive notions about reality tell you.
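    The light-cone criterion can be stated quantitatively: two events can be causally connected only if their spacetime separation is timelike or lightlike, i.e., if light (or something slower) has time to cross the distance between them. A minimal sketch of that check (the function name is my own):

    ```python
    # Classify whether two events can be causally connected in special relativity.
    # A signal traveling at speed <= c can link them only if (c*dt)^2 - dx^2 >= 0,
    # i.e., the separation is timelike or lightlike (inside or on the light cone).

    C = 299_792_458.0  # speed of light, m/s

    def causally_connectable(dt_seconds, dx_meters):
        """Return True if a signal at speed <= c can link the two events."""
        interval_sq = (C * dt_seconds) ** 2 - dx_meters ** 2
        return interval_sq >= 0.0

    # The Moon is about 3.84e8 m away; light needs ~1.3 s to get there.
    print(causally_connectable(2.0, 3.84e8))   # 2 seconds is enough: True
    print(causally_connectable(1.0, 3.84e8))   # 1 second is not: False
    ```

    Anything outside that cone, like the distant entangled measurement discussed below, cannot have sent you a signal.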

    Schrödinger’s cat. Inside the box, the cat will be either alive or dead, depending on whether a radioactive particle decayed or not. If the cat were a true quantum system, the cat would be neither alive nor dead, but in a superposition of both states until observed. (WIKIMEDIA COMMONS USER DHATFIELD)

    But in the quantum Universe, this notion of relativistic causality isn’t as straightforward or universal as it would seem. There are many properties that a particle can have — such as its spin or polarization — that are fundamentally indeterminate until you make a measurement. Prior to observing the particle, or interacting with it in such a way that it’s forced to be in either one state or the other, it’s actually in a superposition of all possible outcomes.

    Well, you can also take two quantum particles and entangle them, so that these very same quantum properties are linked between the two entangled particles. Whenever you interact with one member of the entangled pair, you not only gain information about which particular state it’s in, but also information about its entangled partner.

    By creating two entangled photons from a pre-existing system and separating them by great distances, we can ‘teleport’ information about the state of one by measuring the state of the other, even from extraordinarily different locations. (MELISSA MEISTER, OF LASER PHOTONS THROUGH A BEAM SPLITTER)

    This wouldn’t be so bad, except for the fact that you can set up an experiment as follows.

    You can create your pair of entangled particles at a particular location in space and time.
    You can transport them an arbitrarily large distance apart from one another, all while maintaining that quantum entanglement.
    Finally, you can make those measurements (or force those interactions) as close to simultaneously as possible.

    In every instance where you do this, you’ll find the member you measure in a particular state, and instantly “know” some information about the other entangled member.
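    That recipe can be caricatured in a few lines of code. To be clear about what this is: a toy bookkeeping model of perfect anticorrelation in a spin singlet, not a simulation of real quantum dynamics (no local mechanism like this one actually produces the quantum correlations):

    ```python
    import random

    def measure_entangled_pair(rng):
        """Toy model of a spin singlet measured along the same axis on both sides.

        Each local outcome is individually 50/50, but the two outcomes are
        always perfectly anticorrelated: knowing yours fixes your partner's.
        """
        alice = rng.choice(["+1/2", "-1/2"])
        bob = "-1/2" if alice == "+1/2" else "+1/2"
        return alice, bob

    rng = random.Random(0)
    results = [measure_entangled_pair(rng) for _ in range(10_000)]

    # Perfect anticorrelation on every single trial:
    assert all(a != b for a, b in results)

    # Yet Alice's marginal statistics alone look like a fair coin, so no
    # message can be encoded in them: nothing travels faster than light.
    ups = sum(a == "+1/2" for a, _ in results)
    print(ups / len(results))  # close to 0.5
    ```

    The point of the toy model is the last two checks: the correlation is perfect, but each side's local data is pure noise until the two records are compared.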

    A photon can have two types of circular polarizations, arbitrarily defined so that one is + and one is -. By devising an experiment to test correlations between the directional polarization of entangled particles, one can attempt to distinguish between certain formulations of quantum mechanics that lead to different experimental results. (DAVE3457 / WIKIMEDIA COMMONS)

    What’s puzzling is that you cannot check whether this information is true or not until much later, because it takes a finite amount of time for a light signal to arrive from the other member. When the signal does arrive, it always confirms what you’d known just by measuring your member of the entangled pair: your expectation for the state of the distant particle agreed 100% with what its measurement indicated.

    Only, there seems to be a problem. You “knew” information about a measurement that was taking place non-locally, which is to say that the measurement that occurred is outside of your light cone. Yet somehow, you weren’t entirely ignorant about what was going on over there. Even though no information was transmitted faster than the speed of light, this measurement describes a troubling truth about quantum physics: it is fundamentally a non-local theory.

    Schematic of the third Aspect experiment testing quantum non-locality. Entangled photons from the source are sent to two fast switches that direct them to polarizing detectors. The switches change settings very rapidly, effectively changing the detector settings for the experiment while the photons are in flight. (CHAD ORZEL)

    There are limits to this, of course.

    It isn’t as clean as you might want: measuring the state of your particle doesn’t tell you the exact state of its entangled partner, just probabilistic information about it.

    There is still no way to send a signal faster than light; you can only use this non-locality to predict a statistical average of entangled particle properties.

    And even though it has been the dream of many, from Einstein to Schrödinger to de Broglie, no one has ever come up with an improved version of quantum mechanics that tells you anything more than its original formulation.

    But there are many who still dream that dream.
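    The "statistical average" caveat is exactly what Aspect-style experiments quantify, via the CHSH combination of correlations: any local hidden-variable theory obeys |S| ≤ 2, while quantum mechanics predicts a singlet-state correlation E(a, b) = −cos(a − b) and reaches 2√2 ≈ 2.83 at the right angle settings. A quick check of that textbook prediction (not of any particular experiment's data):

    ```python
    import math

    def E(a, b):
        """Quantum correlation of spin results at analyzer angles a, b (singlet)."""
        return -math.cos(a - b)

    # Standard CHSH angle choices, in radians:
    a, a2 = 0.0, math.pi / 2
    b, b2 = math.pi / 4, 3 * math.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))       # ~2.828, i.e. 2 * sqrt(2), the Tsirelson bound
    print(abs(S) > 2)   # exceeds the local-hidden-variable limit: True
    ```

    Measured values of |S| above 2 are what rule out local, real, deterministic hidden-variable accounts of the correlations.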

    If two particles are entangled, they have complementary wavefunction properties, and measuring one places meaningful constraints on the properties of the other. (WIKIMEDIA COMMONS USER DAVID KORYAGIN)

    One of them is Lee Smolin, who cowrote a paper [Physical Review D] with Fotini Markopoulou back in 2003 that showed an intriguing link between general ideas in quantum gravity and the fundamental non-locality of quantum physics. Although we don’t have a successful quantum theory of gravity, we have established a number of important properties that any quantum theory of gravity must possess in order to remain consistent with the known Universe.

    A variety of quantum interpretations and their differing assignments of a variety of properties. Despite their differences, there are no experiments known that can tell these various interpretations apart from one another, although certain interpretations, like those with local, real, deterministic hidden variables, can be ruled out. (ENGLISH WIKIPEDIA PAGE ON INTERPRETATIONS OF QUANTUM MECHANICS)

    There are many reasons to be skeptical that this conjecture will hold up to further scrutiny. For one, we don’t truly understand quantum gravity at all, and anything we can say about it is extraordinarily provisional. For another, replacing the non-local behavior of quantum mechanics with the non-local behavior of quantum gravity is arguably making the problem worse, not better. And, as a third reason, there is nothing thought to be observable or testable about these non-local variables that Markopoulou and Smolin claim could explain this bizarre property of the quantum Universe.

    Fortunately, we’ll have the opportunity to hear the story directly from Smolin himself and evaluate it on our own. You see, at 7 PM ET (4 PM PT) on April 17, Lee Smolin is giving a public lecture on exactly this topic at Perimeter Institute, and you can watch it right here.


    I’ll be watching along with you, curious about what Smolin is calling Einstein’s Unfinished Revolution, which is the ultimate quest to supersede our two current (but mutually incompatible) descriptions of reality: General Relativity and quantum mechanics.


    Best of all, I’ll be giving you my thoughts and commentary below in the form of a live-blog, beginning 10 minutes before the start of the talk. [See the full article.]

    Find out where we are in the quest for quantum gravity, and what promises it may (or may not) have for revolutionizing one of the greatest counterintuitive mysteries about the quantum nature of reality!

    Thanks for joining me for an interesting lecture and discussions on science, and just maybe, someday, we’ll have some interesting progress to report on this topic. Until then, you don’t have to shut up, but you still do have to calculate!

    See the full article here.



  • richardmitnick 12:30 pm on April 13, 2019 Permalink | Reply
    Tags: "Ask Ethan: What Is An Electron?", Electrons are leptons and thus fermions, Electrons were the first fundamental particles discovered, Ethan Siegel, Sometimes the simplest questions of all are the most difficult to meaningfully answer.   

    From Ethan Siegel: “Ask Ethan: What Is An Electron?” 

    From Ethan Siegel
    Apr 13, 2019

    This artist’s illustration shows an electron orbiting an atomic nucleus, where the electron is a fundamental particle but the nucleus can be broken up into still smaller, more fundamental constituents. (NICOLLE RAGER FULLER, NSF)

    Sometimes, the simplest questions of all are the most difficult to meaningfully answer.

    If you were to take any tiny piece of matter in our known Universe and break it up into smaller and smaller constituents, you’d eventually reach a stage where what you were left with was indivisible. Everything on Earth is composed of atoms, which can further be divided into protons, neutrons, and electrons. While protons and neutrons can be divided still further, electrons cannot. They were the first fundamental particles discovered, and over 100 years later, we still know of no way to split electrons apart. But what, exactly, are they? That’s what Patreon supporter John Duffield wants to know, asking:

    “Please will you describe the electron… explaining what it is, and why it moves the way it does when it interacts with a positron. If you’d also like to explain why it moves the way that it does in an electric field, a magnetic field, and a gravitational field, that would be nice. An explanation of charge would be nice too, and an explanation of why the electron has mass.”

    Here’s what we know, at the deepest level, about one of the most common fundamental particles around.

    The hydrogen atom, one of the most important building blocks of matter, exists in an excited quantum state with a particular magnetic quantum number. Even though its properties are well-defined, certain questions, like ‘where is the electron in this atom,’ only have probabilistically-determined answers. (WIKIMEDIA COMMONS USER BERNDTHALLER)

    In order to understand the electron, you have to first understand what it means to be a particle. In the quantum Universe, everything is both a particle and a wave simultaneously, and many of its exact properties cannot be perfectly known. The more you try to pin down a particle’s position, the more you destroy information about its momentum, and vice versa. If the particle is unstable, the duration of its lifetime will affect how well you’re able to know its mass or intrinsic energy. And if the particle has an intrinsic spin to it, measuring its spin in one direction destroys all the information you could know about how it’s spinning in the other directions.

    Electrons, like all spin-1/2 fermions, have two possible spin orientations when placed in a magnetic field. Performing an experiment like this determines their spin orientation in one dimension, but destroys any information about their spin orientation in the other two dimensions as a result. This is a frustrating property inherent to quantum mechanics. (CK-12 FOUNDATION / WIKIMEDIA COMMONS)

    If you measure a particle at one particular moment in time, information about its future properties cannot be known to arbitrary accuracy, even if the laws governing it are completely understood. In the quantum Universe, many physical properties have a fundamental, inherent uncertainty to them.
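    The spin example can be worked out explicitly with the standard two-amplitude description of a spin-1/2 particle. This sketch (function names are mine) shows how pinning down the spin along z wipes out what was known along x:

    ```python
    import math

    # A qubit state as amplitudes (amp_up, amp_down) in the z basis.
    SQRT_HALF = 1 / math.sqrt(2)

    def prob_z_up(state):
        """Probability of measuring spin-up along z."""
        up, _ = state
        return abs(up) ** 2

    def prob_x_up(state):
        """Probability of spin-up along x: |<+x|state>|^2, with
        |+x> = (|up> + |down>) / sqrt(2)."""
        up, down = state
        return abs(SQRT_HALF * up + SQRT_HALF * down) ** 2

    # Prepare the spin definitely along +x:
    plus_x = (SQRT_HALF, SQRT_HALF)
    print(prob_x_up(plus_x))   # ~1.0: the x-spin is fully known
    print(prob_z_up(plus_x))   # ~0.5: the z-spin is completely uncertain

    # Measure z and suppose the result is "up": the state collapses to |up>.
    collapsed = (1.0, 0.0)
    print(prob_z_up(collapsed))  # ~1.0: z-spin is now known...
    print(prob_x_up(collapsed))  # ~0.5: ...and the x-spin info is destroyed
    ```

    Before the z measurement, an x measurement was certain to give "up"; afterwards it is a coin flip, which is the uncertainty trade-off described above.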

    But that’s not true of everything. The quantum rules that govern the Universe are more complex than just the counterintuitive parts, like Heisenberg uncertainty.

    An illustration between the inherent uncertainty between position and momentum at the quantum level. There is a limit to how well you can measure these two quantities simultaneously, and uncertainty shows up in places where people often least expect it. (E. SIEGEL / WIKIMEDIA COMMONS USER MASCHEN)

    The Universe is made up of quanta: those components of reality that cannot be further divided into smaller components. The most successful model of the smallest, fundamental components that compose our reality comes to us in the form of the creatively-named Standard Model.

    In the Standard Model, there are two separate classes of quanta:

    the particles that make up the matter and antimatter in our material Universe, and
    the particles responsible for the forces that govern their interactions.

    The former class of particles are known as fermions, while the latter class are known as bosons.

    The particles of the standard model, with masses (in MeV) in the upper right. The fermions make up the three leftmost columns and possess half-integer spins; the bosons populate the two columns on the right and have integer spins. While all particles have a corresponding antiparticle, only the fermions can be matter or antimatter. (WIKIMEDIA COMMONS USER MISSMJ, PBS NOVA, FERMILAB, OFFICE OF SCIENCE, UNITED STATES DEPARTMENT OF ENERGY, PARTICLE DATA GROUP)

    Even though, in the quantum Universe, many properties have an intrinsic uncertainty to them, there are some properties that we can know exactly. We call these quantum numbers, which are conserved quantities not only in individual particles, but in the Universe as a whole. In particular, these include properties like:

    electric charge,
    color charge,
    magnetic charge,
    angular momentum,
    baryon number,
    lepton number,
    and lepton family number.

    These are properties that are always conserved, as far as we can tell.
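    Conservation of these additive quantum numbers is what decides which reactions are allowed at all. As a concrete illustration (the particle table and checker function are my own sketch; the values themselves are standard), here is neutron beta decay passing the test while a lepton-number-violating decay fails it:

    ```python
    # Additive quantum numbers for a few Standard Model particles,
    # listed as (electric charge, baryon number, lepton number).
    QUANTUM_NUMBERS = {
        "neutron":         (0, 1, 0),
        "proton":          (+1, 1, 0),
        "electron":        (-1, 0, +1),
        "anti-e-neutrino": (0, 0, -1),
        "photon":          (0, 0, 0),
    }

    def conserved(initial, final):
        """Check that each additive quantum number sums equally on both sides."""
        def totals(names):
            return tuple(
                sum(QUANTUM_NUMBERS[n][i] for n in names) for i in range(3)
            )
        return totals(initial) == totals(final)

    # Neutron beta decay: n -> p + e- + anti-nu_e  (allowed)
    print(conserved(["neutron"], ["proton", "electron", "anti-e-neutrino"]))  # True

    # A forbidden decay: n -> p + e-  (lepton number would jump from 0 to +1)
    print(conserved(["neutron"], ["proton", "electron"]))  # False
    ```

    The same bookkeeping, extended to every conserved charge, is how physicists rule reactions in or out before any dynamics are computed.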

    The quarks, antiquarks, and gluons of the standard model have a color charge, in addition to all the other properties like mass and electric charge that other particles and antiparticles possess. All of these particles, to the best we can tell, are truly point-like, and come in three generations. At higher energies, it is possible that still additional types of particles will exist, but they would go beyond the Standard Model’s description. (E. SIEGEL / BEYOND THE GALAXY)

    In addition, there are a few other properties that are conserved in the strong and electromagnetic interactions, but whose conservation can be violated by the weak interactions. These include:

    weak hypercharge,
    weak isospin,
    and quark flavor numbers (like strangeness, charm, bottomness, or topness).

    Every quantum particle that exists takes on specific, allowed values for these quantum numbers. Some of them, like electric charge, never change, as an electron will always have an electric charge of -1 and an up quark will always have an electric charge of +⅔. But others, like angular momentum, can take on various values, which can be either +½ or -½ for an electron, or -1, 0, or +1 for a W-boson.

    The pattern of weak isospin, T3, and weak hypercharge, Y_W, and color charge of all known elementary particles, rotated by the weak mixing angle to show electric charge, Q, roughly along the vertical. The neutral Higgs field (gray square) breaks the electroweak symmetry and interacts with other particles to give them mass. (CJEAN42 OF WIKIMEDIA COMMONS)

    The particles that make up matter, known as the fermions, all have antimatter counterparts: the anti-fermions. The bosons, which are responsible for the forces and interactions between the particles, are neither matter nor antimatter, but can interact with either one, as well as themselves.

    The way we view these interactions is by exchanges of bosons between fermions and/or anti-fermions. You can have a fermion interact with a boson and give rise to another fermion; you can have a fermion and an anti-fermion interact and give rise to a boson; you can have an anti-fermion interact with a boson and give rise to another anti-fermion. As long as you conserve all the total quantum numbers you are required to conserve and obey the rules set forth by the Standard Model’s particles and interactions, anything that is not forbidden will inevitably occur with some finite probability.

    The characteristic signal of positron/electron annihilation at low energies, a 511 keV photon line, has been thoroughly measured by the ESA’s INTEGRAL satellite. (J. KNÖDLSEDER (CESR) AND SPI TEAM; THE ESA’S INTEGRAL OBSERVATORY)


    It’s important, before we enumerate what all the properties of the electron are, to note that this is merely the best understanding we have today of what the Universe is made of at a fundamental level. We do not know if there is a more fundamental description; we do not know if the Standard Model will someday be superseded by a more complete theory; we do not know if there are additional quantum numbers and when they might be (or might not be) conserved; we do not know how to incorporate gravity into the Standard Model.

    Although it should always go without saying, it warrants being stated explicitly here: these properties provide the best description of the electron as we know it today. In the future, they may turn out to be an incomplete description, or only an approximate description of what an electron (or a more fundamental entity that makes up our reality) truly is.

    This diagram displays the structure of the standard model (in a way that displays the key relationships and patterns more completely, and less misleadingly, than in the more familiar image based on a 4×4 square of particles). In particular, this diagram depicts all of the particles in the Standard Model (including their letter names, masses, spins, handedness, charges, and interactions with the gauge bosons: i.e., with the strong and electroweak forces). (LATHAM BOYLE AND MARDUS OF WIKIMEDIA COMMONS)

    With that said, an electron is:

    a fermion (and not an antifermion),
    with an electric charge of -1 (in units of fundamental electric charge),
    with zero magnetic charge
    and zero color charge,
    with a fundamental intrinsic angular momentum (or spin) of ½, meaning it can take on values of +½ or -½,
    with a baryon number of 0,
    with a lepton number of +1,
    with a lepton family number of +1 in the electron family, 0 in the muon family and 0 in the tau family,
    with a weak isospin of -½,
    and with a weak hypercharge of -1.

    Those are the quantum numbers of the electron. It does couple to the weak interaction (and hence, the W and Z bosons) and the electromagnetic interaction (and hence, the photon), and also the Higgs boson (and hence, it has a non-zero rest mass). It does not couple to the strong force, and therefore cannot interact with the gluons.
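    The list above can be written down as a small record, which also makes the electron-positron relationship concrete. This is an illustrative sketch of my own (the `Lepton` class is not any standard library), treating every listed number as an additive charge that flips sign under particle-antiparticle conjugation; that is a simplification for weak isospin, where chirality subtleties are being ignored:

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Lepton:
        """Additive quantum numbers from the list above (spin, +1/2 or -1/2,
        is left out since it isn't fixed for a given particle)."""
        electric_charge: float
        baryon_number: int
        lepton_number: int
        electron_family: int
        weak_isospin: float
        weak_hypercharge: float

    electron = Lepton(
        electric_charge=-1,
        baryon_number=0,
        lepton_number=+1,
        electron_family=+1,
        weak_isospin=-0.5,
        weak_hypercharge=-1,
    )

    # The positron flips the sign of every additive quantum number:
    positron = Lepton(*(-v for v in vars(electron).values()))
    print(positron.electric_charge)  # 1
    print(positron.lepton_number)    # -1

    # e- + e+ -> photons: every additive quantum number sums to zero,
    # matching the photon's all-zero quantum numbers.
    assert all(
        getattr(electron, f) + getattr(positron, f) == 0
        for f in vars(electron)
    )
    ```

    The final check is why annihilation purely into photons is allowed: the pair's conserved charges already total zero.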

    The Positronium Beam experiment at University College London, shown here, combines electrons and positrons to create the quasi-atom known as positronium, which decays with a mean lifetime of approximately 1 microsecond. The decay products are well-predicted by the Standard Model, and usually proceed into 2 or 3 photons, depending on the relative spins of the electron and positron composing positronium. (UCL)

    If an electron and a positron (which has some of the same quantum numbers and some quantum numbers which are opposites) interact, there are finite probabilities that they will interact through either the electromagnetic or the weak force.

    Most interactions will be dominated by the possibility that electrons and positrons will attract one another, owing to their opposite electric charges. They can form an unstable atom-like entity known as positronium, where they become bound together similar to how protons and electrons bind together, except the electron and positron are of equal mass.

    However, because the electron is matter and the positron is antimatter, they can also annihilate. Depending on a number of factors, such as their relative spins, there are finite probabilities for how they will decay: into 2, 3, 4, 5, or greater numbers of photons. (But 2 or 3 are most common.)
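    The energy scale of that annihilation follows directly from E = mc². For an electron and positron annihilating at rest into two photons, each photon carries away one particle's rest-mass energy, which is the 511 keV line mentioned in the INTEGRAL caption above. A quick check with the standard constants:

    ```python
    # Energy of each photon when an electron and positron annihilate at rest:
    # the two photons share the total rest-mass energy 2 * m_e * c^2 equally,
    # so each carries m_e * c^2.

    M_ELECTRON = 9.1093837015e-31   # electron mass, kg (CODATA)
    C = 299_792_458.0               # speed of light, m/s
    EV = 1.602176634e-19            # joules per electron-volt

    rest_energy_keV = M_ELECTRON * C**2 / EV / 1000
    print(round(rest_energy_keV))   # 511, the famous annihilation line in keV
    ```

    Decays into three or more photons simply split the same total energy unevenly, which is why only the two-photon channel gives a sharp line.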

    The rest masses of the fundamental particles in the Universe determine when and under what conditions they can be created, and also describe how they will curve spacetime in General Relativity. The properties of particles, fields, and spacetime are all required to describe the Universe we inhabit. (FIG. 15–04A FROM UNIVERSE-REVIEW.CA)

    When you subject an electron to an electric or magnetic field, photons interact with it to change its momentum; in simple terms, that means they cause an acceleration. Because an electron also has a rest mass associated with it, courtesy of its interactions with the Higgs boson, it also accelerates in a gravitational field. However, the Standard Model cannot account for this, nor can any quantum theory we know of.

    Until we have a quantum theory of gravity, we have to take the mass and energy of an electron and put it into General Relativity: our non-quantum theory of gravitation. This is sufficient to give us the correct answer for every experiment we’ve been able to design, but it’s going to break down at some fundamental level. For example, if you ask what happens to the gravitational field of a single electron as it passes through a double slit, General Relativity has no answer.

    The wave pattern for electrons passing through a double slit, one-at-a-time. If you measure “which slit” the electron goes through, you destroy the quantum interference pattern shown here. The rules of the Standard Model and of General Relativity do not tell us what happens to the gravitational field of an electron as it passes through a double slit; this would require something that goes beyond our current understanding, like quantum gravity. (DR. TONOMURA AND BELSAZAR OF WIKIMEDIA COMMONS)

    Electrons are incredibly important components of our Universe, as there are approximately 10^80 of them contained within our observable Universe. They are required for the assembly of atoms, which form molecules, humans, planets and more, and are used in our world for everything from magnets to computers to the macroscopic sensation of touch.

    But the reason they have the properties they do is because of the fundamental quantum rules that govern the Universe. The Standard Model is the best description we have of those rules today, and it also provides the best description of the ways that electrons can and do interact, as well as describing which interactions they cannot undergo.

    Why electrons have these particular properties is beyond the scope of the Standard Model, though. For all that we know, we can only describe how the Universe works. Why it works the way it does is still an open question that we have no satisfactory answer for. All we can do is continue to investigate, and work towards a more fundamental answer.

    See the full article here.


