Tagged: Astronomy

  • richardmitnick 9:49 am on October 20, 2019
    Tags: A lot in common with facial recognition at Facebook and other social media., Astronomy, Improving on standard methods for estimating the dark matter content of the universe through artificial intelligence., The scientists used their fully trained neural network to analyse actual dark matter maps from the KiDS-450 dataset., Using cutting-edge machine learning algorithms for cosmological data analysis.

    From ETH Zürich: “Artificial intelligence probes dark matter in the universe” 

    ETH Zurich bloc

    From ETH Zürich

    Oliver Morsch

    A team of physicists and computer scientists at ETH Zürich has developed a new approach to the problem of dark matter and dark energy in the universe. Using machine learning tools, they programmed computers to teach themselves how to extract the relevant information from maps of the universe.

    Excerpt from a typical computer-generated dark matter map used by the researchers to train the neural network. (Source: ETH Zürich)

    Understanding how our universe came to be what it is today, and what its final destiny will be, is one of the biggest challenges in science. The awe-inspiring display of countless stars on a clear night gives us some idea of the magnitude of the problem, and yet that is only part of the story. The deeper riddle lies in what we cannot see, at least not directly: dark matter and dark energy. With dark matter pulling the universe together and dark energy causing it to expand faster, cosmologists need to know exactly how much of those two components is out there in order to refine their models.

    At ETH Zürich, scientists from the Department of Physics and the Department of Computer Science have now joined forces to improve on standard methods for estimating the dark matter content of the universe through artificial intelligence. They used cutting-edge machine learning algorithms for cosmological data analysis that have a lot in common with those used for facial recognition by Facebook and other social media. Their results have recently been published in the scientific journal Physical Review D.

    Facial recognition for cosmology

    While there are no faces to be recognized in pictures taken of the night sky, cosmologists still look for something rather similar, as Tomasz Kacprzak, a researcher in the group of Alexandre Refregier at the Institute of Particle Physics and Astrophysics, explains: “Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy.” As dark matter cannot be seen directly in telescope images, physicists rely on the fact that all matter – including the dark variety – slightly bends the path of light rays arriving at the Earth from distant galaxies. This effect, known as “weak gravitational lensing”, distorts the images of those galaxies very subtly, much like far-away objects appear blurred on a hot day as light passes through layers of air at different temperatures.

    Weak gravitational lensing NASA/ESA Hubble

    Cosmologists can use that distortion to work backwards and create mass maps of the sky showing where dark matter is located. Next, they compare those dark matter maps to theoretical predictions in order to find which cosmological model most closely matches the data. Traditionally, this is done using human-designed statistics such as so-called correlation functions that describe how different parts of the maps are related to each other. Such statistics, however, are limited as to how well they can find complex patterns in the matter maps.
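    A two-point statistic of this kind can be sketched in a few lines. The snippet below is an illustrative stand-in rather than the researchers’ actual pipeline: it computes a radially binned power spectrum (the Fourier-space counterpart of a correlation function) for a toy random map, with the map size and bin count chosen arbitrarily.

```python
import numpy as np

def angular_power_spectrum(mass_map, n_bins=8):
    """Radially binned power spectrum of a 2D map: a simple stand-in
    for the human-designed two-point statistics described above."""
    fourier = np.fft.fftshift(np.fft.fft2(mass_map))
    power = np.abs(fourier) ** 2
    ny, nx = mass_map.shape
    y, x = np.indices((ny, nx))
    # Radial distance of each Fourier mode from the zero-frequency centre
    r = np.hypot(y - ny // 2, x - nx // 2)
    bins = np.linspace(0, r.max(), n_bins + 1)
    which = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    spectrum = np.bincount(which, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(which, minlength=n_bins)
    return spectrum / np.maximum(counts, 1)  # mean power per radial bin

rng = np.random.default_rng(0)
toy_map = rng.normal(size=(64, 64))        # stand-in for a dark matter map
spectrum = angular_power_spectrum(toy_map)
print(spectrum.shape)
```

    In a real analysis, the binned spectrum measured from observed mass maps would be compared against the same statistic computed on simulated maps for different cosmological models.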

    Neural networks teach themselves

    “In our recent work, we have used a completely new methodology”, says Alexandre Refregier. “Instead of inventing the appropriate statistical analysis ourselves, we let computers do the job.” This is where Aurelien Lucchi and his colleagues from the Data Analytics Lab at the Department of Computer Science come in. Together with Janis Fluri, a PhD student in Refregier’s group and lead author of the study, they used machine learning algorithms called deep artificial neural networks and taught them to extract the largest possible amount of information from the dark matter maps.

    Once the neural network has been trained, it can be used to extract cosmological parameters from actual images of the night sky. (Visualisations: ETH Zürich)

    In a first step, the scientists trained the neural networks by feeding them computer-generated data that simulates the universe. That way, they knew what the correct answer for a given cosmological parameter – for instance, the ratio between the total amount of dark matter and dark energy – should be for each simulated dark matter map. By repeatedly analysing the dark matter maps, the neural network taught itself to look for the right kind of features in them and to extract more and more of the desired information. In the Facebook analogy, it got better at distinguishing random oval shapes from eyes or mouths.
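    The training procedure described above can be caricatured in a few lines of Python. This is a heavily simplified stand-in, not the authors’ deep neural network: the toy “simulator”, the single hand-picked feature, and the linear fit are all assumptions made purely to illustrate the idea of learning a parameter from labelled simulated maps.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_map(param, size=32):
    # Toy stand-in for a simulated universe: the hidden cosmological
    # parameter controls the fluctuation amplitude of the map.
    return rng.normal(scale=param, size=(size, size))

# Build a labelled training set of (map, true parameter) pairs
params = rng.uniform(0.5, 2.0, size=200)
maps = [simulate_map(p) for p in params]

# "Learning" reduced to its bare bones: extract one feature from each
# map and fit it against the known labels by least squares.
features = np.array([m.std() for m in maps])
slope, intercept = np.polyfit(features, params, deg=1)

# Apply the trained model to a new, unseen map
true_param = 1.3
estimate = slope * simulate_map(true_param).std() + intercept
print(round(estimate, 2))
```

    The real work replaces the hand-picked feature with convolutional layers that learn, from the labelled simulations themselves, which features of the maps carry cosmological information.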

    More accurate than human-made analysis

    The results of that training were encouraging: the neural networks came up with values that were 30% more accurate than those obtained by traditional methods based on human-made statistical analysis. For cosmologists, that is a huge improvement as reaching the same accuracy by increasing the number of telescope images would require twice as much observation time – which is expensive.
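    The link between accuracy and observation time follows from simple statistics. Assuming the relevant error bars shrink as one over the square root of observing time (a standard rule of thumb for statistical noise, not a figure from the paper), a 30% tighter error bar costs about a factor of two in telescope time:

```python
# If a statistical error bar shrinks as 1/sqrt(observing time), then
# shrinking it by 30% (to 0.7 of its old size) requires (1/0.7)^2,
# i.e. roughly twice as much telescope time.
error_ratio = 0.7
time_factor = (1 / error_ratio) ** 2
print(round(time_factor, 2))  # → 2.04
```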

    Finally, the scientists used their fully trained neural network to analyse actual dark matter maps from the KiDS-450 dataset. “This is the first time such machine learning tools have been used in this context,” says Fluri, “and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications.”

    As a next step, he and his colleagues are planning to apply their method to bigger image sets such as the Dark Energy Survey.

    Also, more cosmological parameters and refinements such as details about the nature of dark energy will be fed to the neural networks.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ETH Zurich campus
    ETH Zürich is one of the leading international universities for technology and the natural sciences. It is well known for its excellent education, ground-breaking fundamental research and for implementing its results directly into practice.

    Founded in 1855, ETH Zürich today has more than 18,500 students from over 110 countries, including 4,000 doctoral students. To researchers it offers an inspiring working environment; to students, a comprehensive education.

    Twenty-one Nobel Laureates have studied, taught or conducted research at ETH Zürich, underlining the excellent reputation of the university.

  • richardmitnick 12:38 pm on October 19, 2019
    Tags: "Ask Ethan: How Dense Is A Black Hole?", Astronomy

    From Ethan Siegel: “Ask Ethan: How Dense Is A Black Hole?” 

    From Ethan Siegel
    Oct 19, 2019

    In April of 2017, all of the telescopes/telescope arrays associated with the Event Horizon Telescope pointed at Messier 87. This is what a supermassive black hole looks like, where the event horizon is clearly visible. (EVENT HORIZON TELESCOPE COLLABORATION ET AL.)

    It’s much more complex a question than dividing its mass by the volume of the event horizon. If you want to get a meaningful answer, you have to go deep.

    If you took any massive object in the Universe and compressed it into a small enough volume, you could transform it into a black hole. Mass curves the fabric of space, and if you collect enough mass in a small enough region of space, that curvature will be so severe that nothing, not even light, can escape from it. The boundary of that inescapable region is known as an event horizon, and the more massive a black hole is, the larger its event horizon will be. But what does that imply for the density of black holes? That’s what Patreon supporter Chad Marler wants to know, asking:

    “I have read that stellar-mass black holes are enormously dense, if you consider the volume of the black hole to be that space which is delineated by the event horizon, but that super-massive black holes are actually much less dense than even our own oceans. I understand that a black hole represents the greatest amount of entropy that can be squeezed into [any] region of space expressed… [so what happens to the density and entropy of two black holes when they merge]?”

    Chad Marler

    It’s a deep but fascinating question, and if we explore the answer, we can learn an awful lot about black holes, both inside and out.

    Computer simulations enable us to predict which gravitational wave signals should arise from merging black holes. The question of what happens to the information encoded on the surfaces of the event horizons, though, is still a fascinating mystery. (WERNER BENGER, CC BY-SA 4.0)

    Entropy and density are two very different things, and they’re both counterintuitive when it comes to black holes. Entropy, for a very long time, posed a big problem for physicists when they discussed black holes. Regardless of what you make a black hole out of — stars, atoms, normal matter, antimatter, charged or neutral or even exotic particles — only three properties matter for a black hole. Under the rules of General Relativity, black holes can have mass, electric charge, and angular momentum.

    Once you make a black hole, all the information (and hence, all the entropy) associated with the components of the black hole is completely irrelevant to the end-state of the black hole that we observe. If this were truly the case, though, all black holes would have an entropy of 0, and black holes would violate the second law of thermodynamics.

    An illustration of heavily curved spacetime, outside the event horizon of a black hole. As you get closer and closer to the mass’s location, space becomes more severely curved, creating a region where even light cannot escape: the event horizon. (PIXABAY USER JOHNSONMARTIN)

    Similarly, we conventionally think of density as the amount of mass (or energy) contained within a given volume of space. For a black hole, the mass/energy content is easy to understand, since it’s the primary factor that determines the size of your black hole’s event horizon. Therefore, the size of a black hole, the minimum distance from which light (or any other signal) can escape, is defined by the radial distance from the black hole’s center to the edge of the event horizon.

    This appears to give a natural scale for the volume of a black hole: the volume is determined by the amount of space enclosed by the surface area of the event horizon. A black hole’s density, consequently, can be obtained by dividing the mass/energy of the black hole by the volume of a sphere (or spheroid) that is found interior to the black hole’s event horizon. This is something that, at the very least, we know how to calculate.

    Both inside and outside the event horizon, space flows like either a moving walkway or a waterfall, even through the event horizon itself. Upon crossing it, you are dragged inevitably to the central singularity. (ANDREW HAMILTON / JILA / UNIVERSITY OF COLORADO)

    The question of entropy, in particular, poses a problem for physics as we understand it all on its own. If we can form a black hole (with zero entropy) out of matter (with non-zero entropy), then that means we destroy information, we lower the entropy of a closed system, and we violate the second law of thermodynamics. Any matter that falls into a black hole sees its entropy drop to zero; two neutron stars colliding to form a black hole sees the overall system’s entropy plummet. Something is amiss.

    But this was just a way of calculating a black hole’s entropy in General Relativity alone. If we add in the quantum rules that govern the particles and interactions in the Universe, we can immediately see that any particles that you’d either make a black hole from or add to the mass of a pre-existing black hole will have positive, non-zero entropies.

    Since entropy can never decrease, a black hole must have finite, non-zero, and positive entropy after all.

    Once you cross the threshold to form a black hole, everything inside the event horizon crunches down to a singularity that is, at most, one-dimensional. No 3D structures can survive intact. (ASK THE VAN / UIUC PHYSICS DEPARTMENT)

    Whenever a quantum particle falls into (and passes across) a black hole’s event horizon, it will, at that moment, possess a number of particle properties inherent to it. These properties include angular momentum, charge, and mass, but they also include properties that black holes don’t appear to care about, such as polarization, baryon number, lepton number, and many others.

    If the singularity at a black hole’s center doesn’t depend on those properties, there must be some other place capable of storing that information. John Wheeler was the first person to realize where it could be encoded: on the boundary of the event horizon itself. Instead of zero entropy, the entropy of a black hole would be defined by the number of quantum “bits” (or qubits) of information that could be encoded on the event horizon itself.

    Encoded on the outermost surface of the black hole, the event horizon, is its entropy. Each bit can be encoded on a surface area of the Planck length squared (~10^-70 m²); a black hole’s total entropy is given by the Bekenstein-Hawking formula. (T.B. BAKKER / DR. J.P. VAN DER SCHAAR, UNIVERSITEIT VAN AMSTERDAM)

    Given that a black hole will have an event horizon with a surface area that’s proportional to its radius squared (since mass and radius are directly proportional for black holes), and that the surface area required to encode one bit is the Planck length squared (~10^-70 m²), the entropy of even a small, low-mass black hole is enormous. If you were to double the mass of a black hole, you’d double its radius, which means its surface area would now be four times its previous value.

    If you compare the lowest-mass black holes we know of — which are somewhere in the ballpark of 3-to-5 solar masses — to the highest-mass ones (of tens of billions of solar masses), you’ll find enormous differences in entropy. Entropy, remember, is all about the number of possible quantum states a system can be configured in. For a 1 solar-mass black hole whose information is encoded on its surface, the entropy is approximately 10⁷⁸ k_b (where k_b is Boltzmann’s constant), with more massive black holes having that number increase by a factor of (M_BH/M_Sun)². For the black hole at the center of the Milky Way, the entropy is around 10⁹¹ k_b, while for the supermassive one at the center of Messier 87 — the first one imaged by the Event Horizon Telescope — the entropy is a little more than 10⁹⁷ k_b. The entropy of a black hole is, indeed, the maximum possible amount of entropy that can exist within a given particular region of space.

    The event horizon of a black hole is a spheroidal region from which nothing, not even light, can escape. Although conventional radiation originates outside the event horizon, it is unclear how the encoded entropy behaves in a merger scenario. (NASA; DANA BERRY, SKYWORKS DIGITAL, INC)

    As you can see, the more massive your black hole is, the more entropy (proportional to mass squared) it possesses.
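    That mass-squared scaling is easy to check numerically. The sketch below applies the Bekenstein-Hawking formula with standard SI constants; with the textbook prefactor the one-solar-mass value lands near 10⁷⁷ k_B, within an order of magnitude of the figures quoted above, and doubling the mass exactly quadruples the entropy.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
M_sun = 1.989e30   # solar mass, kg

def entropy_in_kb(mass):
    """Bekenstein-Hawking entropy S/k_B = A / (4 l_p^2) for a
    non-rotating black hole of the given mass (in kg)."""
    r_s = 2 * G * mass / c**2       # Schwarzschild radius
    area = 4 * math.pi * r_s**2     # event horizon surface area
    planck_area = hbar * G / c**3   # Planck length squared, ~2.6e-70 m^2
    return area / (4 * planck_area)

s_one = entropy_in_kb(M_sun)
s_two = entropy_in_kb(2 * M_sun)
print(f"{s_one:.1e}")        # on the order of 1e77 per solar mass
print(s_two / s_one)         # → 4.0
```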

    But then we come to density, and all our expectations break down. For a black hole of a given mass, its radius will be directly proportional to the mass, but the volume is proportional to the radius cubed. A black hole the mass of Earth would be just a little under 1 cm in radius; a black hole the mass of the Sun would be about 3 km in radius; the black hole at the center of the Milky Way is approximately 10⁷ km in radius (about 10 times the radius of the Sun); and the black hole at the center of M87 comes in at a little over 10¹⁰ km in radius, or about half a light-day.

    This means, if we were to calculate density by dividing the mass of a black hole by the volume it occupies, we’d find that the density of a black hole (in units of kg/m³) with the mass of:

    the Earth is 2 × 10³⁰ kg/m³,
    the Sun is 2 × 10¹⁹ kg/m³,
    the Milky Way’s central black hole is 1 × 10⁶ kg/m³, and
    M87’s central black hole is ~1 kg/m³,

    where that last value is about the same as the density of air at Earth’s surface.
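    Those densities follow directly from the Schwarzschild radius, r_s = 2GM/c². The sketch below is a rough check with assumed, representative masses (for instance, 6.5 billion solar masses for M87’s black hole, which puts the last figure within an order of magnitude of the value quoted above); since the naive density scales as 1/M², doubling the mass quarters it.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg

def naive_density(mass):
    """Mass divided by the volume enclosed by the event horizon of a
    non-rotating black hole; equivalent to 3 c^6 / (32 pi G^3 M^2)."""
    r_s = 2 * G * mass / c**2
    volume = 4 / 3 * math.pi * r_s**3
    return mass / volume

# Assumed representative masses, not figures taken from the article
for name, mass in [("Earth", 5.97e24),
                   ("Sun", M_sun),
                   ("Sgr A*", 4.0e6 * M_sun),
                   ("M87*", 6.5e9 * M_sun)]:
    print(f"{name:7s} {naive_density(mass):.1e} kg/m^3")

# Doubling the mass quarters the naive density (it scales as 1/M^2)
ratio = naive_density(2 * M_sun) / naive_density(M_sun)
print(ratio)  # → 0.25
```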

    Artist’s conception of two merging black holes, similar to those detected by LIGO. Credit: LIGO-Caltech/MIT/Sonoma State (Aurore Simonnet)

    Are we to believe, then, that if we take two black holes of roughly equal masses, and allow them to inspiral and merge together, that

    The entropy of the final black hole will be four times the entropy of each initial black hole,
    While the density of the final black hole will be one-fourth the density of each of the initial black holes?

    The answers, perhaps surprisingly, are “Yes” and “No,” respectively.

    For entropy, it is indeed true that merging a black hole (of mass M and entropy S) with another equal mass black hole (of mass M and entropy S) will give you a new black hole with double the mass (2M) but four times the entropy (4S), exactly as predicted by the Bekenstein-Hawking equation. If we calculate how the entropy of the Universe has evolved over time, it’s increased by approximately 15 orders of magnitude (a quadrillion) from the Big Bang until today. Almost all of that extra entropy is in the form of black holes; even the Milky Way’s central black hole has about 1,000 times the entropy of the entire Universe as it was immediately following the Big Bang.


    For density, however, it’s neither fair nor correct to take the mass of a black hole and divide it by the volume inside the event horizon. Black holes are not solid, uniform-density objects, and the laws of physics inside a black hole are expected to be no different than the laws of physics outside. The only difference is the strength of the conditions and the curvature of space, which means that any particles that fall in past the boundary of the event horizon will continue falling until they can fall no longer.

    From outside a black hole, all you can see is the boundary of the event horizon, but the most extreme conditions found in the Universe occur in the interiors of black holes. To the best of our knowledge, falling into a black hole — across the event horizon — means that you’ll inevitably head towards the central singularity in a black hole, something that’s an inescapable fate. If your black hole is non-rotating, the singularity is nothing but a mere point. If all the mass is compressed into a single, zero-dimensional point, then when you ask about density, you are asking “what happens when you divide a finite value (mass) by zero?”

    Spacetime flows continuously both outside and inside the (outer) event horizon for a rotating black hole, similar to the non-rotating case. The central singularity is a ring, rather than a point, while simulations break down at the inner horizon. (ANDREW HAMILTON / JILA / UNIVERSITY OF COLORADO)

    If you need a reminder, dividing by zero is mathematically bad; you get an undefined answer. Thankfully, perhaps, non-rotating black holes aren’t what we have in our physical Universe. Our realistic black holes rotate, and that means that the interior structure is much more complicated. Instead of a perfectly spherical event horizon, we get a spheroidal one that’s elongated along its plane of rotation. Instead of a point-like (zero-dimensional) singularity, we get a ring-like (one-dimensional) one, whose radius is proportional to the angular momentum-to-mass ratio.

    But perhaps most interestingly, when we examine the physics of a rotating black hole, we find that there isn’t one solution for an event horizon, but two: an inner and an outer horizon. The outer horizon is what we physically call the “event horizon” and what we observe with telescopes like the Event Horizon Telescope. But the inner horizon, if we understand our physics correctly, is actually inaccessible. Any object that falls into a black hole will see the laws of physics break down as it approaches that region of space.

    The exact solution for a black hole with both mass and angular momentum was found by Roy Kerr in 1963. Instead of a single event horizon with a point-like singularity, we get inner and outer event horizons, ergospheres, plus a ring-like singularity. (MATT VISSER, ARXIV:0706.0622)

    All the mass, charge, and angular momentum of a black hole is contained in a region even an infalling observer cannot access, but the size of that region varies depending on how large the angular momentum is, up to some maximum value (as a percentage of mass). The black holes we’ve observed are largely consistent with having angular momenta at or near that maximum value, so even though the “volume” we cannot access inside is smaller than the event horizon, it still increases precipitously (as mass squared) as we look to more and more massive black holes. Even the size of the ring singularity increases in direct proportion to mass, so long as the mass-to-angular momentum ratio remains constant.

    But there is no contradiction here, just some counterintuitive behavior. It teaches us that we probably can’t split a black hole in two without getting a whole bunch of extra entropy out. It teaches us that we have to be careful when using a quantity like density for a black hole, and that simply dividing its mass by the event horizon’s volume is not a responsible way to do so. And it teaches us, if we bother to calculate it, that the spatial curvature at the event horizon is enormous for low-mass black holes, but barely discernible for high-mass black holes. A non-rotating black hole has an infinite density, but a rotating one will have its mass spread out across a ring-like shape, with the rotational rate and the total mass determining the black hole’s linear density.

    Unfortunately for us, there’s no way we know of to test this experimentally or observationally. We might be able to calculate — to help us visualize — what we theoretically expect to happen inside of a black hole, but there’s no way to get the observational evidence.

    The closest we’ll be able to come is to look to gravitational wave detectors like LIGO, Virgo and KAGRA, and to measure the ringdowns (i.e., the physics in the immediate aftermath) of two merging black holes. It can help confirm certain details that will either validate or refute our current best picture of black hole interiors. So far, everything lines up exactly as Einstein predicted, and exactly as theorists expected.

    There’s still a lot to learn about what happens when two black holes merge, even for quantities like density and entropy, which we think we understand. With more and better data pouring in — and improved data on the near-term horizon — it’s almost time to start putting our assumptions to the ultimate experimental tests!

    See the full article here.



    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

  • richardmitnick 11:17 am on October 19, 2019
    Tags: Astronomy, Reading the Universe in Infrared

    From WIRED: “Space Photos of the Week: Reading the Universe in Infrared” 

    Wired logo

    From WIRED

    Telescopes that see things in a different spectrum show us the hidden secrets of the stars.

    The human eye can process light wavelengths in the range of 380 to 740 nanometers. However, there’s a whole swath of “light” that we are unable to see. Cue the fancy telescopes! This week we are going to look at photos of space that are filtered for the infrared—wavelengths from 700 nanometers to 1 millimeter in size. By filtering for infrared, scientists are able to peer through the visible stuff that gets in the way, like gas and dust and other material, to see heat, and in space there’s a lot of hot stuff. This is why NASA has telescopes like Spitzer that orbit the Earth looking at the universe in infrared, showing us stuff our puny eyes could never see on their own.

    NASA/Spitzer Infrared Telescope

    Here’s a space photo cool enough to make Andy Warhol proud: This four-part series shows the Whirlpool galaxy and its partner up above, a satellite galaxy called NGC 5195. This series serves as a good example of how different features can appear when cameras filter for different wavelengths of light. The far left image is taken in visible light, a remarkable scene even though the galaxy is more than 23 million light years from Earth. The second image adds a little extra: Visible light is shown in blue and green, and the bright red streaks are infrared—revealing new star activity and hot ionized material. Photograph: NASA/JPL-Caltech

    This infrared image of the Orion nebula allows astronomers to see dust that’s aglow from star formation. The central light-blue region is the hottest part of the nebula, and as the byproducts of the star factory are ejected out, they cool off and appear red. Photograph: ESA/NASA/JPL-Caltech

    Cygnus X is a ginormous star complex containing around 3 million solar masses and is also one of the largest known protostar factories. This image shows Cygnus X in infrared light, glowing hot. The bright white spots are where stars are forming, with the red tendrils showing the gas and dust being expelled after their births. Photograph: NASA Goddard

    Though this may appear like a scary pit of magma, we’re in fact looking at the Whirlpool galaxy seen earlier. By filtering out visible light and showing only the near-infrared, researchers can see the skeletal structure of the center of the galaxy, made of smooth, bending dust lanes. This dust clumps around stars, so an image like this can give researchers a good idea of how much dust is lingering in a galaxy. Photograph: NASA Goddard

    Talk about a butterfly effect: This space oddity is actually a busy stellar nursery called W40. The butterfly “wings” are large bubbles of hot interstellar gas blowing out from the violent births of these stars. Some stars in this region are so large that they are 10 times the mass of our Sun. Photograph: NASA/JPL-Caltech

    At the center of our Milky Way galaxy is the galactic core, glowing brightly with the many stars located there. Unencumbered by all the gas and dust, NASA’s Spitzer Space Telescope reveals the red glow of hot ionized material. In addition to a wealth of stars, the center of our galaxy boasts a massive black hole, 4 million times the mass of our Sun. As stars pass by this behemoth, they get devoured and hot energy is spat out—and that radiance helps us know what’s cooking in this active area. Photograph: NASA, JPL-Caltech, Susan Stolovy (SSC/Caltech) et al.

    Want to see things in a different light? Check out WIRED’s full collection of photos here.

    See the full article here.



  • richardmitnick 10:52 am on October 19, 2019
    Tags: Astronomy, NASA must meet extraordinary cleanliness measures to avoid the possibility of contaminating Martian samples with terrestrial contaminants.

    From NASA JPL-Caltech: “Mars 2020 Unwrapped and Ready for Testing” 

    NASA JPL Banner

    From NASA JPL-Caltech

    October 18, 2019

    DC Agle
    Jet Propulsion Laboratory, Pasadena, Calif.

    Alana Johnson
    NASA Headquarters, Washington

    NASA Mars 2020 rover schematic

    NASA Mars 2020 Rover

    Bunny-suited engineers remove the inner layer of protective foil on NASA’s Mars 2020 rover after it was moved to a different building at JPL for testing.

    “The Mars 2020 rover will be collecting samples for future return to Earth, so it must meet extraordinary cleanliness measures to avoid the possibility of contaminating Martian samples with terrestrial contaminants,” said Paul Boeder, contamination control lead for Mars 2020 at JPL. “To ensure we maintain cleanliness at all times, we need to keep things clean not only during assembly and testing, but also during the moves between buildings for these activities.”

    After removing the first layer of antistatic foil, the teams used 70% isopropyl alcohol to meticulously wipe down the remaining layer, seen here, along with the trailer carrying the rover. Later that day, the rover was moved into the larger main room of the Simulator Building. In the coming weeks, the rover will enter a massive vacuum chamber for surface thermal testing – a weeklong evaluation of how its instruments, systems and subsystems operate in the frigid, near-vacuum environment it will face on Mars.

    JPL is building and will manage operations of the Mars 2020 rover for NASA. The rover will launch on a United Launch Alliance Atlas V rocket in July 2020 from Space Launch Complex 41 at Cape Canaveral Air Force Station. NASA’s Launch Services Program, based at the agency’s Kennedy Space Center in Florida, is responsible for launch management.

    When the rover lands at Jezero Crater on Feb. 18, 2021, it will be the first spacecraft in the history of planetary exploration with the ability to accurately retarget its point of touchdown during the landing sequence.

    Charged with returning astronauts to the Moon by 2024, NASA’s Artemis lunar exploration plans will establish a sustained human presence on and around the Moon by 2028. We will use what we learn on the Moon to prepare to send astronauts to Mars.

    Interested K-12 students in U.S. public, private and home schools can enter the Mars 2020 Name the Rover essay contest. One grand prize winner will name the rover.

    For more information about the name contest, go to:


    For more information about the mission, go to:

    See the full article here.



    NASA JPL Campus

    Jet Propulsion Laboratory (JPL) is a federally funded research and development center and NASA field center located in the San Gabriel Valley area of Los Angeles County, California, United States. Although the facility has a Pasadena postal address, it is actually headquartered in the city of La Cañada Flintridge, on the northwest border of Pasadena. JPL is managed by the nearby California Institute of Technology (Caltech) for the National Aeronautics and Space Administration. The Laboratory’s primary function is the construction and operation of robotic planetary spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA’s Deep Space Network.

    Caltech Logo

    NASA image

  • richardmitnick 4:11 pm on October 18, 2019
    Tags: Astronomy

    From ESOblog: “50 years of CCDs” 

    ESO 50 Large

    From ESOblog

    18 October 2019

    The story of a detector that changed the course of astronomy.

    HighTech ESO

    In 1969, two researchers designed the basic structure of a CCD — or charge-coupled device — and defined its operating principles. These devices have since played a very important role in astronomy, as well as in our daily lives. We talk to Olaf Iwert, Josh Hopgood and Mark Downing, three CCD (detector systems) experts working here at ESO, to find out how CCDs revolutionised astronomy, how they have evolved during the last half century, and what their future might look like.

    A technician holds Hubble’s ACS WFC instrument, which contains a CCD device. Credit: NASA/ESA and the ACS Science Team

    Q. How were CCDs invented and why was there a need for them at the time?

Olaf Iwert (OI): The CCD was originally developed as a memory device, but people quickly realised that it could also be used for imaging. CCDs are now used in most astronomical imaging instruments, as well as in many digital cameras and early smartphone cameras.

    Interestingly, Kodak’s management initially completely ignored CCDs as they wanted to promote the film segment of their company. The strongest driver was actually most likely the Cold War and the resulting reconnaissance applications; the CCD technology used in the Hubble Space Telescope was not the first of its kind but a “left-over” of the United States’ Cold War reconnaissance programme! Surely the Cold War and the push for Hubble were very strong drivers of scientific CCD technology development.

    CCD-guru Jim Janesick was doing CCD research for Hubble at NASA’s Jet Propulsion Laboratory, but we don’t know whether the military were secretly using even more advanced devices before that time. We do know for sure that companies involved in military business played an important role in CCD development, and in my view, from then on civil astronomy and military CCD applications went hand-in-hand, as the requirements were quite similar. Now, to my knowledge all ESO instruments observing visible light use CCDs, as these detectors continue to be the state-of-the-art in this field.

    The use of CCDs in everyday photography happened about 20 years later than the application of scientific CCDs to astronomy; the astronomical CCDs were the pioneers of digital photography. But there is a big difference between the scientific CCD image sensors that we use at ESO for astronomy and commercial CCDs such as the ones used in video cameras. Scientific CCDs are thinned, backside illuminated, and surface-treated to collect as much information as possible about the observed object. Scientific CCDs are also typically monochrome, meaning that they don’t filter colours so they collect all of the available light with the highest possible efficiency. Commercial CCDs, on the other hand, mostly produce colour images.

    Q. What made the manufacturing of CCDs possible?

    Josh Hopgood (JH): The first CCD was developed by two physicists at Bell Labs, Willard Boyle and George Smith, who were originally interested in transistor technology, which was invented around 20 years before the CCD. As the first CCD was produced using a transistor manufacturing facility, I would argue that the invention of the CCD at least required the invention of the transistor. As is usually the case, the detectors themselves and the manufacturing technologies used are developed in parallel ⁠— scientists and engineers (and the occasional entrepreneur!) are constantly pushing the boundaries of what is possible. Modern CCDs rely heavily on a number of very well-developed manufacturing technologies, and it will probably be the case that novel uses of these technologies give rise to the next generation of detectors for astronomy.

The “Conveyor Belt” analogy to explain how CCDs work. A more thorough explanation of this can be found here. Credit: slideplayer.com/slide/4990634/

    Q. So how exactly does a CCD work?

    OI: A CCD is a two-dimensional array of millions of pixels, each of which collects photons of light and converts them into an electric charge when the CCD is exposed to light. Instead of using a wire to sense the charge from each pixel, the charge is first transferred vertically and then horizontally to reach a single output amplifier that measures the amount of charge from each pixel. The classic analogy is to think of a CCD as a set of rain-collecting buckets along a series of conveyor belts; first the conveyor belts move the buckets in one direction, onto a single conveyor belt that moves all the buckets in a perpendicular direction to pour the water into a measuring cylinder that measures the amount of water in each bucket one-by-one.
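The bucket-brigade analogy above can be sketched in a few lines of Python. This is purely a toy model of the transfer sequence, not how real CCD electronics work; the function name and the tiny 3×4 "image" are illustrative.

```python
import numpy as np

def ccd_readout(frame):
    """Toy model of CCD readout: rows are shifted one at a time into a
    serial register (the perpendicular 'conveyor belt'), which is then
    shifted pixel by pixel into a single output amplifier."""
    rows, cols = frame.shape
    charge = frame.astype(float).copy()
    measured = np.zeros_like(charge)
    for r in range(rows):
        # Vertical transfer: the row nearest the serial register moves into it
        serial_register = charge[0].copy()
        charge = np.roll(charge, -1, axis=0)
        charge[-1] = 0.0
        for c in range(cols):
            # Horizontal transfer: one charge packet reaches the amplifier
            measured[r, c] = serial_register[0]
            serial_register = np.roll(serial_register, -1)
            serial_register[-1] = 0.0
    return measured

frame = np.arange(12).reshape(3, 4)  # a tiny 3x4 'image'
out = ccd_readout(frame)             # reconstructs the frame, pixel by pixel
```

Funnelling every charge packet through the same output amplifier is what lets a CCD get away with a single, carefully optimised readout transistor, at the cost of reading the whole sensor out serially.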

JH: An interesting side note: CCDs convert light into an electrical signal by means of a physical principle called the “photoelectric effect”. It was his explanation of this effect that earned Einstein the Nobel Prize and helped lay the foundations of quantum physics!

    Q. What advantage do CCDs have over other types of detectors?

    OI: CCDs were the first two-dimensional array semiconductor imaging devices to be invented. Compared to their predecessors, they have a much higher spatial resolution, are better at imaging bright sources of light, are more rugged, and consume less power. And as they started being mass produced for commercial uses, they also became much cheaper.

    Every detector has several sources of noise, but CCDs have less noise than their predecessors. For example, converting electrons into a voltage at the output amplifier always creates some noise, but because CCDs “read out” the electrons produced by each pixel slowly and the geometry of the output transistor is optimised, this source of noise is very low.

    A CCD on the Very Large Telescope (VLT)’s ESPRESSO instrument. Containing 81 million pixels, this is one of the world’s largest monolithic CCDs. Credit: ESO/Olaf Iwert

    Q. How have CCDs changed over the last fifty years?

OI: They have improved greatly in so many ways. For a start, they now contain more pixels. The pixels can also be larger or smaller depending on the instrument’s optical design. We now also have optimised output transistors with less “readout noise” whilst operating at a higher speed, more efficient charge transfer, improved mechanical packaging for better cooling, higher quantum efficiency, less dark current noise, fewer defects inside the imaging area, better optical coatings, higher reliability…the list goes on! Another development is that CCDs can now be specially designed to be optimally sensitive to specific wavelengths of light. An example of this is the use of detectors optimised for the blue and red ends of the visible spectrum, as in the Very Large Telescope’s ESPRESSO instrument.

    Q. Josh, you previously worked at one of the major world producers of CCDs. What is it like to now work at an observatory where the same CCDs are used?

    JH: From a personal perspective I find it very rewarding to see the detectors put into use, especially for such grand-scale purposes! I would say that it’s often easy to contain oneself within a bubble, and then forget how your work impacts other people around the world. Taking the position at ESO was a real eye-opener for me!

    Q. Olaf mentioned CCDs being used in digital cameras; what other applications are there?

    JH: CCDs are used for lots of different things! Here’s a quick selection of some of the more interesting applications:

    CCDs perform particularly well whenever you want to take a picture of something bright and something faint at the same time. For this reason, they are employed not only in astronomy, but also in life sciences research. For example, it has been possible to use a CCD to image the fluorescence from a marker molecule inside the brain of a living mouse!
    As well as being sensitive to visible light, CCDs are also sensitive to X-rays, and therefore the dental market is quite significant.
    Line-scan CCDs have extremely fast frame rates, and are used for quality-control checks on production lines for items such as circuit boards.
    When combined with other technologies, CCDs make extremely good night vision cameras, and are therefore employed in search and rescue cameras, as well as military applications.
    Though becoming less favoured, CCDs are indeed still employed in many high-end digital photography/videography systems.

    Q. Do you know approximately how many CCDs are sold worldwide every year?

    JH: For large detectors that are at the core of space-based and ground-based astronomical research, such as the ones used at ESO, I would estimate that a few dozen detectors are delivered worldwide to customers each year; perhaps more than 50, but probably not as many as 100. For other applications, the number could vary from a few hundred to a few thousand per year, but this is steadily decreasing as new technologies offer cheaper solutions with similar performances. We’re already seeing the gradual decline of the CCD market due to other competing technologies, which has led to the closure of some CCD manufacturing lines.

    The OmegaCAM camera lies at the heart of the VST. This view shows its 32 CCD detectors that together create incredibly detailed 268-megapixel images. Each detector measures about 6 cm by 3 cm. Credit: ESO/INAF-VST/OmegaCAM/O.Iwert

    Detail view of the upper left corner of the CCD. The individual pixels are clearly visible. Image taken in 1994. Credit: ESO/H.H.Heyer

    Q. So do you think that another type of detector will become more common in the future?

    JH: CCDs are certainly still relevant for ground-based astronomy, and will be for at least another 5–10 years because their large number of pixels and high dynamic range are practically unrivalled by the current generation of new technologies. However, the ground-based astronomy market is a very small one, and it therefore tends to be a technology-follower rather than a technology-driver.

What I mean by this is that larger markets such as space-based astronomy tend to dictate developments in sensor technology, and ground-based astronomy will be forced to adopt these new technologies as the older sensor types become obsolete and/or are no longer produced. The most likely successor to CCD technology is CMOS Image Sensors; however, I would argue that a significant amount of development is required to bring this technology up to the standards that astronomers are used to when making observations with CCD-based systems. In the longer term, I am looking forward to the development of an MKID-based imaging sensor for astronomy, as these should be able to tell us not only the intensity of a light source, but also the colour and arrival time of each photon!

Mark Downing: I agree. Unfortunately, technology moves on, and while CCDs are almost perfect detectors, newer, cheaper technologies such as CMOS Image Sensors are on the horizon, and these will replace our scientific CCDs in the next few years. This has already happened in commercial cameras and mobile phones. The mobile phone industry is very large and drives technology innovation in so many areas, but ESO is also at the forefront of innovation with its own CMOS Image Sensor development programmes.

    Q: Mark, you work with a type of CCD specially designed for adaptive optics. Could you explain what this means and why CCDs are the right tool for the job?

    MD: Some ESO telescopes make use of deformable mirrors to reduce the distorting effect of the atmosphere on starlight. We use these high-speed CCDs to detect the “twinkling of the stars”. The images we observe with the CCDs tell us how to move our deformable mirrors to take out the “twinkling” to obtain very sharp images. This is called adaptive optics. Without this improvement in image quality, large telescopes like the Extremely Large Telescope (ELT) would not be feasible.

    High frame rates are required to track the profile of the atmosphere which changes on very short timescales. The technologies we use for these types of CCDs are very special because they allow us to read them out at high frame rates with almost zero noise. This is essential because we choose “guide stars” close to objects we want to observe to correct for atmospheric distortions, and these are often very faint.

    The ELT with two of its cameras, ALICE and LISA. ALICE will use a CCD developed by Teledyne e2v on behalf of ESO. LISA will use a new CMOS Image Sensor which is currently under development also by Teledyne e2v. Credit: ESO

    Q. Do CCDs require special conditions to work? If so, how does ESO ensure these conditions?

    MD: To get the best performance out of our CCDs, we cool them to very low temperatures in the range of -40°C to -120°C to reduce what we call “dark current”. Dark current is created by thermally-excited electrons (rather than light-excited electrons), and is the noise signal you get when there is no light. The lower the temperature the lower the dark current. If you want to see the faintest objects (which are often the most interesting!) then you want to reduce the dark current to almost zero.
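As a rough illustration of why such deep cooling is worthwhile: a common rule of thumb (an assumption here, not a figure from the interview) is that silicon dark current roughly doubles for every ~6 °C rise in temperature. A minimal sketch:

```python
def relative_dark_current(temp_c, ref_temp_c=20.0, doubling_step_c=6.0):
    """Relative dark current under the rule of thumb that dark current
    roughly doubles for every ~6 degC increase in temperature."""
    return 2.0 ** ((temp_c - ref_temp_c) / doubling_step_c)

# Cooling from room temperature (20 degC) down to -120 degC suppresses
# dark current by a factor of 2**(140/6) -- on the order of ten million.
suppression = relative_dark_current(20.0) / relative_dark_current(-120.0)
```

Under this rule of thumb, each additional 6 °C of cooling halves the thermal background, which is why observatories accept the complexity of cryostats and liquid-nitrogen cooling.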

    To achieve this, we mount the CCDs inside a sealed metal housing called a cryostat, in which there is no air. The principle is similar to keeping a hot drink warm in a thermos, except instead of keeping something warm, we want to keep our CCDs cold. Inside the cryostat with the CCDs is a cooling source such as a tank of liquid nitrogen, an electrical cooling device, or an electro-mechanical machine called a cryocooler.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Visit ESO in Social Media-




    ESO Bloc Icon

    ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 16 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom, along with the host state of Chile. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is a major partner in ALMA, the largest astronomical project in existence. And on Cerro Armazones, close to Paranal, ESO is building the 39-metre European Extremely Large Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

ESO VLT at Cerro Paranal in the Atacama Desert: ANTU (UT1; The Sun), KUEYEN (UT2; The Moon), MELIPAL (UT3; The Southern Cross), and YEPUN (UT4; Venus, as evening star); elevation 2,635 m (8,645 ft). Credit: J.L. Dauvergne & G. Hüdepohl/atacamaphoto

Glistening against the awesome backdrop of the night sky above ESO’s Paranal Observatory, four laser beams project out into the darkness from Unit Telescope 4 (UT4) of the VLT, a major asset of the Adaptive Optics system

ESO La Silla
ESO/Cerro La Silla, 600 km north of Santiago de Chile at an altitude of 2,400 metres.

    ESO VLT 4 lasers on Yepun

    ESO Vista Telescope
    ESO/Vista Telescope at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level.

    ESO/NTT at Cerro LaSilla 600 km north of Santiago de Chile at an altitude of 2400 metres.

    ESO VLT Survey telescope
    VLT Survey Telescope at Cerro Paranal with an elevation of 2,635 metres (8,645 ft) above sea level.

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

ESO/E-ELT, to be located at the summit of Cerro Armazones in the Atacama Desert of northern Chile, at an altitude of 3,060 metres (10,040 ft).

    APEX Atacama Pathfinder 5,100 meters above sea level, at the Llano de Chajnantor Observatory in the Atacama desert.

    Leiden MASCARA instrument, La Silla, located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft)

    Leiden MASCARA cabinet at ESO Cerro la Silla located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft)

ESO Next Generation Transit Survey at Cerro Paranal, 2,635 metres (8,645 ft) above sea level

ESO SPECULOOS telescopes: four 1-metre-diameter robotic telescopes at ESO Paranal Observatory, 2,635 metres (8,645 ft) above sea level

    ESO TAROT telescope at Paranal, 2,635 metres (8,645 ft) above sea level

    ESO ExTrA telescopes at Cerro LaSilla at an altitude of 2400 metres

  • richardmitnick 3:10 pm on October 18, 2019 Permalink | Reply
    Tags: "Does Io Have a Magma Ocean?", Astronomy, , , , , Future space missions will further our knowledge of tidal heating and orbital resonances- processes thought to create spectacular volcanism and oceans of magma or water on other worlds.   

    From Eos: “Does Io Have a Magma Ocean?” 

    Eos news bloc

    From Eos

    Alfred McEwen
    Katherine de Kleer
    Ryan Park

    Future space missions will further our knowledge of tidal heating and orbital resonances, processes thought to create spectacular volcanism and oceans of magma or water on other worlds.

    Ganymede, Europa, and Io (from left to right) are in resonant orbits around Jupiter, leading to intense tidal heating of Io, moderate heating of Europa, and perhaps past heating of Ganymede. Credit: NASA/Jet Propulsion Laboratory

    The evolution and habitability of Earth and other worlds are largely products of how much these worlds are warmed by their parent stars, by the decay of radioactive elements in their interiors, and by other external and internal processes.

    Of these processes, tidal heating caused by gravitational interactions among nearby stars, planets, and moons is key to the way that many worlds across our solar system and beyond have developed. Jupiter’s intensely heated moon Io, for example, experiences voluminous lava eruptions like those associated with mass extinctions on ancient Earth, courtesy of tidal heating. Meanwhile, less intense tidal heating of icy worlds sometimes maintains subsurface oceans—thought to be the case on Saturn’s moon Enceladus and elsewhere—greatly expanding the habitable zones around stars.

    Tidal heating results from the changing gravitational attraction between a parent planet and a close-in moon that revolves around that planet in a noncircular orbit. (The same goes for planets in close noncircular orbits around parent stars.) Because its orbit is not circular, the distance between such a moon and its parent planet varies depending on where it is in its orbit, which means it experiences stronger or weaker gravitational attraction to its parent body at different times. These tightening and relaxing responses of the gravitational attraction change the orbiting moon’s shape over the course of each orbit and generate friction and heat internally as rock, ice, and viscous magma are pushed and pulled. (The same process causes Earth’s ocean tides, although the reshaping of the ocean generates relatively little heat because of water’s low viscosity.)
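The distance swing described above is easy to quantify for a Keplerian orbit: the moon–planet separation varies between a(1 − e) at periapsis and a(1 + e) at apoapsis, where a is the semi-major axis and e the eccentricity. The Io values below are approximate, for illustration only.

```python
def orbital_distance_range(a_km, e):
    """Periapsis and apoapsis distances for an orbit with
    semi-major axis a (km) and eccentricity e."""
    return a_km * (1.0 - e), a_km * (1.0 + e)

# Approximate values for Io: a ~ 421,700 km, e ~ 0.0041
peri_km, apo_km = orbital_distance_range(421_700, 0.0041)
swing_km = apo_km - peri_km  # roughly 3,500 km per 1.77-day orbit
```

Because the tide-raising force falls off as 1/r³, even this small eccentricity produces a significant periodic change in the tidal distortion of Io, which is the flexing that ends up as heat.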

    The magnitude and phase of a moon’s tidally induced deformation depend on its interior structure. Bodies with continuous liquid regions below the surface, such as a subsurface water or magma ocean, are expected to show larger tidal responses and perhaps distinctive rotational parameters compared with bodies without these large fluid regions. Tidal deformation is thus central to understanding a moon’s energy budget and probing its internal structure.

    The dissipation of tidal energy (or the conversion of orbital energy into heat) within a parent planet causes the planet’s moons to migrate outward. This process frequently drives the satellites into what are called mean-motion resonances with each other, in which their orbital periods—the time it takes a satellite to complete a revolution around its parent—are related by integer ratios. The multiple satellites within such orbital resonances exert periodic gravitational influences on each other that serve to maintain noncircular orbits (orbits with nonzero eccentricities), which drive tidal heating. Simultaneously, tidal dissipation and heating within orbiting satellites damp the orbital eccentricity excited by mean-motion resonances, move orbits inward, and power tectonism and potential volcanic activity. Without resonances, continued tidal energy dissipation would eventually lead to circular orbits that would minimize tidal heating.

For all that we know, fundamental gaps remain in our understanding of tidal heating. At a Keck Institute for Space Studies workshop [de Kleer et al., 2019] in October 2018, participants discussed the current state of knowledge about tidal heating as well as how future spacecraft missions to select solar system targets could help address these gaps.

    Jupiter and the Galilean Satellites

    Each time Ganymede orbits Jupiter once, Europa completes two orbits, and Io completes four orbits (Figure 1). This 1:2:4 resonance was discovered by Pierre-Simon Laplace in 1771, but its significance was realized only 200 years later when Peale et al. [1979] published their prediction that the resonance would lead to tidal heating and melting of Io, just before the Voyager 1 mission discovered Io’s active volcanism. The periodic alignment of these three large moons results in forced eccentric orbits, so the shapes of these moons periodically change as they orbit massive Jupiter, with the most intense deformation and heating occurring at innermost Io. Meanwhile, tidal heating of Europa (and of Saturn’s moon Enceladus) maintains a subsurface ocean that’s below a relatively thin ice shell and in contact with the moon’s silicate core, providing key ingredients for habitability.
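The 1:2:4 pattern can be checked directly from the moons' orbital periods; the figures below are the commonly quoted approximate values in Earth days.

```python
# Approximate orbital periods in Earth days
periods = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155}

# Each period divided by Io's is close to an integer: 1, 2, 4
ratios = {name: p / periods["Io"] for name, p in periods.items()}
```

Note that the raw ratios come out near, but not exactly at, 2 and 4; the precisely maintained quantity in the Laplace resonance is a combination of the three mean motions, which is what keeps the orbits' eccentricities forced rather than letting them damp away.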

    Fig. 1. The Jovian and Saturnian systems: The top of each diagram shows the orbital architecture of the system, with the host planet and orbits to scale. Relevant mean-motion resonances are identified in red. The bottom of each diagram shows the moons to scale with one another. Physical parameters listed for each planet and moon include the diameter d, bulk density ρ, and rotational period P, which for all the moons is equal to the orbital period, as they are tidally locked with their host planet. Credit: James Tuttle Keane/Keck Institute for Space Studies

    Although Peale et al. [1979] [Science] predicted the presence of a thin lithosphere over a magma ocean on Io, Voyager 1 revealed mountains more than 10 kilometers high (Figure 2).

    NASA/Voyager 1

    This suggests that Io has a thick, cold lithosphere formed by rapid volcanic resurfacing and subsidence of crustal layers. The idea of a magma ocean inside Io generally lost favor in subsequent studies, until Khurana et al. [2011] [Science] presented evidence from Galileo mission data of an induced magnetic signature from Io.

    NASA/Galileo 1989-2003

Induced signatures from Europa, Ganymede, and Callisto (another of Jupiter’s moons that, together with Io, Europa, and Ganymede, makes up the group known as the Galilean satellites) had previously been interpreted as being caused by salty oceans, which are electrically conducting—and molten silicates are also electrically conducting. Considerable debate persists about whether Io has a magma ocean.

    The Jovian system provides the greatest potential for advances in our understanding of tidal heating in the next few decades. This is because NASA’s Europa Clipper and the European Space Agency’s Jupiter Icy Moons Explorer (JUICE) will provide in-depth studies of Europa and Ganymede in the 2030s, and the Juno mission orbiting Jupiter may have close encounters with the Galilean satellites in an extended mission.

    NASA/Europa Clipper annotated

    ESA JUICE Schematic


    However, our understanding of this system will continue to be limited unless there is also a dedicated mission with close encounters of Io. The easily observed heat flow on Io (at least 20 times greater than that on Earth) from hundreds of continually erupting volcanoes makes it the ideal target for further investigation and key to understanding the Laplace resonance and tidal heating.

    Advances from the Saturnian System

    As discovered by Hermann Struve in 1890, the Saturnian system contains two pairs of satellites that each display 1:2 orbital resonance (Figure 1): Tethys-Mimas and Dione-Enceladus. More recently, the Cassini mission discovered that Enceladus and Titan are ocean worlds, hosting large bodies of liquid water beneath icy crusts.

    NASA/ESA/ASI Cassini-Huygens Spacecraft

    Precise measurements of the Saturnian moon orbits, largely based on Cassini radio tracking during close encounters, have revealed outward migration rates much faster than expected. But extrapolating the Cassini migration measurement backward in time while using the conventional assumption of a constant tidal dissipation parameter Q, which measures a body’s response to tidal distortion, implies that the Saturnian moons would have, impossibly, been inside Saturn in far less time than the lifetime of the solar system. To resolve this contradiction, Fuller et al. [2016] proposed a new theory for tidally excited systems that describes how orbital migrations could accelerate over time.

    The theory is based on the idea that the internal structures of gas giant planets can evolve on timescales comparable to their ages, causing the frequencies of a planetary oscillation mode (i.e., the planet’s vibrations) to gradually change. This evolution enables “resonance locking” in which a planetary oscillation mode stays nearly resonant with the forcing created by a moon’s orbital period, producing outward migration of the moon that occurs over a timescale comparable to the age of the solar system. This model predicts similar migration timescales but different Q values for each moon. Among other results, this hypothesis explains the present-day heat flux of Enceladus without requiring it to have formed recently, a point relevant to its current habitability and a source of debate among researchers.

    Observing Tidally Heated Exoplanets

    Beyond our solar system, tidal heating of exoplanets and their satellites significantly enlarges the total habitable volume in the galaxy. And as exoplanets continue to be confirmed, researchers are increasingly studying the process in distant star systems. For example, seven roughly Earth-sized planets orbit close to TRAPPIST-1 (Figure 3), a low-mass star about 40 light-years from us, with periods of a few Earth days and with nonzero eccentricities. Barr et al. [2018] concluded that two of these planets undergo sufficient tidal heating to support magma oceans and the other five could maintain water oceans.

    Fig. 3. The TRAPPIST-1 system includes seven known Earth-sized planets. Intense tidal heating of the innermost planets is likely. The projected habitable zone is shaded in green for the TRAPPIST-1 system, and the solar system is shown for comparison. Credit: NASA/JPL-Caltech

    Highly volcanic exoplanets are considered high-priority targets for future investigations because they likely exhibit diverse compositions and volcanic eruption styles. They are also relatively easy to characterize because of how readily volcanic gases can be studied with spectroscopy, their bright flux in the infrared spectrum, and their preferential occurrence in short orbital periods. The latter point means that they can be observed relatively often as they frequently transit their parent stars, resulting in a periodic slight dimming of the starlight.
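The depth of that dimming follows from simple geometry: the fraction of starlight blocked is the ratio of the planet's disk area to the star's, (Rp/Rs)². The radii below are rough illustrative values.

```python
def transit_depth(r_planet_km, r_star_km):
    """Fractional dimming during a transit: ratio of disk areas."""
    return (r_planet_km / r_star_km) ** 2

# Rough values: an Earth-sized planet (~6,371 km radius) transiting
# TRAPPIST-1 (~0.12 solar radii, roughly 84,000 km)
depth = transit_depth(6_371, 84_000)  # about 0.006, a ~0.6% dip
```

The same Earth-sized planet transiting a Sun-like star (radius ~696,000 km) would dim it by less than 0.01%, which is one reason small stars like TRAPPIST-1 are such favourable transit targets.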

    Directions in Tidal Heating Research

    The Keck Institute of Space Studies workshop identified five key questions to drive future research and exploration:

    1. What do volcanic eruptions tell us about planetary interiors? Active eruptions in the outer solar system are found on Io and Enceladus, and there are suggestions of such activity on Europa and on Neptune’s large moon Triton. Volcanism is especially important for the study of planetary interiors, as it provides samples from depth and shows that there is sufficient internal energy to melt the interior. Eruption styles place important constraints on the density and stress distribution in the subsurface. And for tidally heated bodies, the properties of the erupted material can place strong constraints on the temperature and viscosity structure of a planet with depth, which is critical information for modeling the distribution and extent of tidal dissipation.

2. How is tidal dissipation partitioned between solid and liquid materials? Tidal energy can be dissipated as heat in both the solid and liquid regions of a body. The dissipation response of planetary materials depends on their microstructural characteristics, such as grain size and melt distribution, as well as on the timescales of forcing. If forcing occurs at high frequency, planetary materials respond via instantaneous elastic deformation. If forcing occurs at very low frequency, in a quasi-steady-state manner, materials respond with permanent viscous deformation. Between these ends of the spectrum, on timescales most relevant to tidal flexing of planetary materials, the response is anelastic, with a time lag between an applied stress and the resulting deformation.

    Decades of experimental studies have focused on studying seismic wave attenuation here on Earth. However, seismic waves have much smaller stress amplitudes and much higher frequencies than tidal forcing, so the type of forcing relevant to tidally heated worlds remains poorly explored experimentally. For instance, it is not clear under what conditions tidal stress could alter existing grain sizes and/or melt distributions within the material being stressed.

    3. Does Io have a magma ocean? To understand Io’s dynamics, such as where tidal heating occurs in the interior, we need to better understand its interior structure. Observations collected during close spacecraft flybys can determine whether Io has a magma ocean or another melt distribution (Table 1 [see full article] and Figure 4). One means to study this is from magnetic measurements. Such measurements would be similar to the magnetic field measurements made by the Galileo spacecraft near Io but with better data on Io’s plasma environment (which is a major source of noise), flybys optimized to the best times and places for measuring variations in the magnetic field, and new laboratory measurements of electrical conductivities of relevant planetary materials.

    A second method to investigate Io’s interior is with gravity science, in which the variables k2 and h2 (Table 1), called Love numbers, express how a body’s gravitational potential responds on a tidal timescale and its radial surface deformation, respectively. Each of these variables alone can confirm or reject the hypothesis of a liquid layer decoupled from the lithosphere because their values are roughly 5 times larger for a liquid than a solid body. Although k2 can be measured through radio science (every spacecraft carries a radio telecommunication system capable of this), the measurement of h2 requires an altimeter or high-resolution camera as well as good knowledge of the spacecraft’s position in orbit and orientation.
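A toy illustration of how a single k2 measurement could discriminate between the two hypotheses, using the article's point that the Love numbers are roughly five times larger when a liquid layer decouples the lithosphere. The reference values here are placeholders, not published predictions for Io.

```python
def interior_from_k2(k2_measured, k2_solid=0.1, k2_liquid=0.5):
    """Classify the interior by which predicted Love number the
    measurement falls closer to (placeholder reference values
    differing by the article's rough factor of ~5)."""
    if abs(k2_measured - k2_liquid) < abs(k2_measured - k2_solid):
        return "liquid layer (magma ocean)"
    return "solid interior"
```

Because the two predictions differ by a factor of ~5, even a modestly precise radio-science measurement of k2 lands unambiguously near one value or the other.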

    Libration amplitude provides an independent test for a detached lithosphere. The orbit of Io is eccentric, which causes its orbital speed to vary as it goes around Jupiter. Its rotational speed, on the other hand, is nearly uniform. Therefore, as seen from Jupiter, Io appears to wobble backward and forward, as the Moon does from the vantage of Earth. Longitudinal libration arises in Io’s orbit because of the torque applied by Jupiter on Io’s static tidal and rotational bulge while it is misaligned with the direction toward Jupiter. If there is a continuous liquid layer within Io and the overlying lithosphere is rigid (as is thought to be needed to support tall mountains), libration amplitudes greater than 500 meters are expected—a scale easily measurable with repeat images taken by a spacecraft.

    Fig. 4. Four scenarios for the distribution of heating and melt in Io. Credit: Chuck Carter and James Tuttle Keane/Keck Institute for Space Studies

    4. Is Jupiter’s Laplace system in equilibrium? The Io-Europa-Ganymede system is a complex tidal engine that powers Io’s extreme volcanism and warms Europa’s water ocean. Ultimately, Jupiter’s rotational energy is converted into a combination of gravitational potential energy (in the orbits of the satellites) and heat via dissipation in both the planet and its satellites. However, we do not know whether this system is currently in equilibrium or whether tidal migration and heating rates and volcanic activity vary over time.

    The orbital evolution of the system can be determined from observing the positions of the Galilean satellites over time. A way of verifying that the system is in equilibrium is to measure the rate of change of the semimajor axis for the three moons in the Laplace resonance. If the system is in equilibrium, the tidal migration timescale must be identical for all three moons. Stability of the Laplace resonance implies a specific equilibrium between energy exchanges in the whole Jovian system and has implications for its past and future evolution.
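The equilibrium test described above rests on the Laplace resonance itself, which can be checked numerically from the moons' published orbital periods. This sketch is illustrative bookkeeping, not a result from the article:

```python
import math

# Orbital periods of Io, Europa, and Ganymede in days (standard published
# values). The Laplace resonance locks the mean motions so that the
# combination n_Io - 3*n_Eu + 2*n_Ga nearly vanishes; a secular drift in
# the orbits away from this relation would signal disequilibrium.
P_io, P_eu, P_ga = 1.769138, 3.551181, 7.154553
n_io, n_eu, n_ga = (2 * math.pi / P for P in (P_io, P_eu, P_ga))  # mean motions, rad/day

residual = n_io - 3 * n_eu + 2 * n_ga
print(f"Laplace residual: {residual:.2e} rad/day")   # very close to zero
```

Measuring the rate of change of each semimajor axis over time tests whether this locked configuration is migrating together, as equilibrium requires.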

    5. Can stable isotopes inform our understanding of the long-term evolution of tidal heating? We lack knowledge about the long-term evolution of tidally heated systems, in part because their geologic activity destroys the older geologic record. Isotopic ratios, which preserve long-term records of processes, provide a potential window into these histories. If processes like volcanic eruptions and volatile loss lead to the preferential loss of certain isotopes from a moon or planet, significant fractionation of a species may occur over the age of the solar system. However, to draw robust conclusions, we must understand the current and past processes that affect the fractionation of these species (Figure 5), as well as the primordial isotopic ratios from the body of interest. Measurements of isotopic mass ratios—in, for example, the atmospheres and volcanic plumes of moons or planets of interest—in combination with a better understanding of these fractionation processes can inform long-term evolution.
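The fractionation idea can be sketched with the classical Rayleigh distillation law. The law itself is standard; the fractionation factor and loss fraction below are hypothetical numbers, not values from the article:

```python
# Rayleigh fractionation: if a loss process removes the light isotope
# slightly faster (fractionation factor alpha < 1 for the escaping flux),
# the heavy/light ratio R of the remaining reservoir evolves as
# R/R0 = f**(alpha - 1), where f is the fraction of the reservoir left.
def rayleigh_ratio(f_remaining, alpha):
    """Isotope ratio of the residue relative to the initial ratio."""
    return f_remaining ** (alpha - 1.0)

# Losing 99% of a volatile reservoir with alpha = 0.99 (hypothetical)
# enriches the residue in the heavy isotope by roughly 5%.
enrichment = rayleigh_ratio(0.01, 0.99)
print(f"R/R0 after 99% loss: {enrichment:.4f}")
```

This is why sustained volcanic outgassing and atmospheric escape over the age of the solar system can leave a measurable isotopic fingerprint, provided the primordial ratio and the relevant fractionation factors are known.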

    Fig. 5. There are many potential sources, sinks, and transport processes affecting chemical and isotopic species at Io. Credit: Keck Institute for Space Studies

    Missions to the Moons

    Both the Europa Clipper and JUICE are currently in development and are expected to arrive at Jupiter in the late 2020s or early 2030s. One of the most important measurements made during these missions could be precision ranging during close flybys to detect changes in the orbits of Europa, Ganymede, and Callisto, which would provide a key constraint on equilibrium of the Jovian system if we can acquire comparable measurements of Io. JUICE will be the first spacecraft to orbit a satellite (Ganymede), providing excellent gravity, topography, magnetic induction, and mass spectrometry measurements.

    The Dragonfly mission to Titan includes a seismometer and electrodes on the landing skids to sense electric fields that may probe the depth to Titan’s interior water ocean.

    NASA The Dragonfly mission to Titan

    Potential Europa and Enceladus landers could also host seismometers. The ice giants Uranus and Neptune may also finally get a dedicated mission in the next decade. The Uranian system contains five medium-sized moons and may provide another test of the resonance locking hypothesis of Fuller et al. [2016], and Neptune’s active moon Triton is another strong candidate to host an ocean.

    The most promising avenue to address the five key questions noted at the Keck Institute for Space Studies workshop is a new spacecraft mission that would make multiple close flybys of Io [McEwen et al., 2019], combined with laboratory experiments and Earth-based telescopic observations. An Io mission could characterize volcanic processes to address question 1, test interior models via geophysical measurements coupled with laboratory experiments and theory to address questions 2 and 3, measure the rate of Io’s orbital migration to determine whether the Laplace resonance is in equilibrium to address question 4, and determine neutral compositions and measure stable isotopes in Io’s atmosphere and plumes to address question 5.


    Barr, A. C., V. Dobos, and L. L. Kiss (2018), Interior structures and tidal heating in the TRAPPIST-1 planets, Astron. Astrophys., 613, A37, https://doi.org/10.1051/0004-6361/201731992.

    de Kleer, K., et al. (2019), Tidal heating: Lessons from Io and the Jovian system, final report, Keck Inst. for Space Studies, Pasadena, Calif., kiss.caltech.edu/final_reports/Tidal_Heating_final_report.pdf.

    Fuller, J., J. Luan, and E. Quataert (2016), Resonance locking as the source of rapid tidal migration in the Jupiter and Saturn moon systems, Mon. Not. R. Astron. Soc., 458(4), 3,867–3,879, https://doi.org/10.1093/mnras/stw609.

    Khurana, K. K., et al. (2011), Evidence of a global magma ocean in Io’s interior, Science, 332(6034), 1,186–1,189, https://doi.org/10.1126/science.1201425.

    McEwen, A. S., et al. (2019), The Io Volcano Observer (IVO): Follow the heat!, Lunar Planet. Sci. Conf., 50, Abstract 1316, http://archive.space.unibe.ch/staff/wurz/McEwen_LPSC_1316.pdf.

    Peale, S. J., P. Cassen, and R. T. Reynolds (1979), Melting of Io by tidal dissipation, Science, 203(4383), 892–894, https://doi.org/10.1126/science.203.4383.892.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

  • richardmitnick 1:35 pm on October 18, 2019 Permalink | Reply
    Tags: "Did comet impacts jump-start life on Earth?", , Astronomy, , ,   

    From Astrobiology Magazine: “Did comet impacts jump-start life on Earth?” 

    Astrobiology Magazine

    From Astrobiology Magazine

    Oct 18, 2019

    Comets screaming through the atmosphere of early Earth at tens of thousands of miles per hour likely contained measurable amounts of protein-forming amino acids. Upon impact, these amino acids self-assembled into significantly larger nitrogen-containing aromatic structures that are likely constituents of polymeric biomaterials.

    Cometary impacts can produce complex carbon-rich prebiotic materials from simple organic precursors such as the amino acid glycine. Image by Liam Kraus/LLNL

    That is the conclusion of a new study by Lawrence Livermore National Laboratory (LLNL) researchers who explored the idea that the extremely high pressures and temperatures induced by shock impact can cause small biomolecules to condense into larger life-building compounds. The research appears in the journal Chemical Science and will be highlighted on the back cover of an upcoming issue.

    Glycine is the simplest protein-forming amino acid and has been detected in cometary dust samples and other astrophysical icy materials. However, the role that extraterrestrial glycine played in the origins of life is largely unknown, in part because little is known about its survivability and reactivity during impact with a planetary surface.

    To address this question, the LLNL team used quantum simulations to model water-glycine mixtures at impact conditions reaching 480,000 atmospheres of pressure and more than 4,000 degrees Fahrenheit (approximating probable pressures and temperatures of a planetary impact). The intense heat and pressure caused the glycine molecules to condense into carbon-rich clusters that tended to exhibit a diamond-like, three-dimensional geometry.
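For readers more used to SI units, the quoted impact conditions convert as follows (conversions only; the simulation methodology is described in the paper):

```python
# Convert the reported shock conditions to SI units.
ATM_TO_PA = 101_325.0
pressure_gpa = 480_000 * ATM_TO_PA / 1e9             # about 48.6 GPa

temp_f = 4_000.0
temp_k = (temp_f - 32.0) * 5.0 / 9.0 + 273.15        # about 2,478 K

print(f"{pressure_gpa:.1f} GPa, {temp_k:.0f} K")
```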

    Upon expanding and cooling to ambient conditions, these clusters chemically rearranged as they unfolded into a number of large, planar molecules. Many of these molecules were nitrogen-containing polycyclic aromatic hydrocarbons (NPAHs), which can be larger and more chemically complex than those formed in other prebiotic synthesis scenarios. A number of the predicted products had different functional groups and embedded bonded regions akin to chains of amino acids (also called oligo-peptides). Other small organic molecules with prebiotic relevance also were predicted to form, including known metabolic products, such as guanidine, urea and carbamic acid.

    “NPAHs are important prebiotic precursors in the synthesis of nucleobases and could constitute significant aerosol intermediates in the atmosphere of Titan (the largest moon of Saturn),” said LLNL scientist Matthew Kroonblawd, lead author of the study. “The recovery products predicted by our study could have been a first step in creating biologically relevant materials with increased complexity, such as polypeptides and nucleic acids upon exposure to the harsh conditions likely present on ancient Earth and other rocky planets and moons.”

    “We used a high-throughput quantum molecular dynamics approach to ascertain the dominant chemical trends of simple life-building precursors like amino acids in impacting astrophysical icy mixtures,” said LLNL scientist Nir Goldman, a co-author of the study. “Our work presents a novel synthetic route for large molecules like NPAHs and highlights the importance of both the thermodynamic path and local chemical self-assembly in forming prebiotic species during shock synthesis.”

    “Beyond the broader scientific impact of this research, our work also emphasizes the importance of generating statistically meaningful data when studying such complicated phenomena,” said LLNL scientist Rebecca Lindsey, also a co-author of the study.

    The work was funded by the NASA Astrobiology: Exobiology and Evolutionary Biology Program and LLNL’s Laboratory Directed Research and Development Program.

    See the full article here .


  • richardmitnick 1:22 pm on October 18, 2019 Permalink | Reply
    Tags: "Unexamined lunar rocks indicate early bombardment", Astronomy, , , ,   

    From Lawrence Livermore National Laboratory: “Unexamined lunar rocks indicate early bombardment” 

    From Lawrence Livermore National Laboratory


    Anne M Stark

    Astronaut and geologist Harrison Schmitt landed on the moon with Eugene Cernan on Dec. 11, 1972, during the Apollo 17 mission, and began collecting rocks. Lawrence Livermore is studying rocks from the earlier Apollo 16 mission. Photo courtesy of NASA.

    A team of Lawrence Livermore National Laboratory (LLNL) scientists has challenged the long-standing theory that the moon experienced a period of intense meteorite bombardment about 3.8 billion years ago, when the first forms of life appeared on Earth.

    This theory is known as the Late Heavy Bombardment and is thought to have resulted from disturbance of the asteroid belt due to the outward migration of the giant planets. The Late Heavy Bombardment hypothesis was predicated on evidence from numerous impacts on the lunar surface around 3.8 billion years ago and suggested that there were essentially no prior impacts.

    However, Lawrence Livermore cosmochemists have examined a rock collected during the Apollo 16 mission in 1972 and found that a very large impact occurred around 4.3 billion years ago, thus challenging the Late Heavy Bombardment hypothesis. The research appears in the Journal of Geophysical Research.

    By looking at a previously unstudied Apollo 16 lunar rock, the cosmochemists discovered that the sample came from deep in the lunar crust, solidifying at a depth greater than 20 kilometers.

    Earth’s moon is believed to have formed when a Mars-sized body slammed into Earth, ejecting material that separated from the planet and coalesced into the moon. The young moon was initially covered by a magma ocean; eventually the magma in its outer layers cooled and solidified into the crust.

    Harrison Schmitt observes a split lunar boulder during the third Apollo 17 extravehicular activity at the Taurus-Littrow landing site. Photo courtesy of NASA.

    The Livermore team applied numerous dating techniques to the samples, based on the natural decay of long-lived isotopes such as potassium-40 to argon, rubidium-87 to strontium and samarium-147 to neodymium. Each of these geologic clocks records the time when a sample cooled through a specific temperature between 300 and 850 degrees Celsius.
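The principle behind all of these decay clocks can be sketched in a few lines. The decay constant below is the standard value for rubidium-87; the measured ratio is a hypothetical illustration, not the paper's data:

```python
import math

# Decay-clock principle: for a parent isotope with decay constant lam,
# the closure age is t = ln(1 + D/P) / lam, where D is the radiogenic
# daughter abundance and P is the remaining parent abundance.
def age_years(daughter_over_parent, lam_per_year):
    return math.log(1.0 + daughter_over_parent) / lam_per_year

LAM_RB87 = 1.42e-11   # 87Rb -> 87Sr decay constant, per year (standard value)
t = age_years(0.063, LAM_RB87)   # hypothetical radiogenic 87Sr/87Rb ratio
print(f"inferred age: {t / 1e9:.2f} billion years")
```

Agreement among several independent parent-daughter systems, each with its own decay constant and closure temperature, is what makes the 4.3-billion-year result below robust.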

    “The interesting observation is that the same 4.3-billion-year age is recorded by all isotopic systems,” said LLNL cosmochemist Naomi Marks, lead author of the paper. “This implies that large basin-forming impact events occurred on the moon 4.3 billion years ago, and that these types of events did not occur only at 3.8 billion years ago during the Late Heavy Bombardment.”

    Using a scaling algorithm, the group estimated that the sample was brought to the surface 4.3 billion years ago by an impact that produced a crater on the surface at least 700 kilometers across.

    LLNL partnered with a University of New Mexico team to complete this work. Other LLNL authors include Lars Borg and Bill Cassata. The work was funded by both NASA cosmochemistry grants and LLNL’s Laboratory Directed Research and Development program.

    See the full article here .



    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration
    Lawrence Livermore National Laboratory (LLNL) is an American federal research facility in Livermore, California, United States, founded by the University of California, Berkeley in 1952. A Federally Funded Research and Development Center (FFRDC), it is primarily funded by the U.S. Department of Energy (DOE) and managed and operated by Lawrence Livermore National Security, LLC (LLNS), a partnership of the University of California, Bechtel, BWX Technologies, AECOM, and Battelle Memorial Institute in affiliation with the Texas A&M University System. In 2012, the laboratory had the synthetic chemical element livermorium named after it.
    LLNL is self-described as “a premier research and development institution for science and technology applied to national security.” Its principal responsibility is ensuring the safety, security and reliability of the nation’s nuclear weapons through the application of advanced science, engineering and technology. The Laboratory also applies its special expertise and multidisciplinary capabilities to preventing the proliferation and use of weapons of mass destruction, bolstering homeland security and solving other nationally important problems, including energy and environmental security, basic science and economic competitiveness.

    The Laboratory is located on a one-square-mile (2.6 km2) site at the eastern edge of Livermore. It also operates a 7,000 acres (28 km2) remote experimental test site, called Site 300, situated about 15 miles (24 km) southeast of the main lab site. LLNL has an annual budget of about $1.5 billion and a staff of roughly 5,800 employees.

    LLNL was established in 1952 as the University of California Radiation Laboratory at Livermore, an offshoot of the existing UC Radiation Laboratory at Berkeley. It was intended to spur innovation and provide competition to the nuclear weapon design laboratory at Los Alamos in New Mexico, home of the Manhattan Project that developed the first atomic weapons. Edward Teller and Ernest Lawrence,[2] director of the Radiation Laboratory at Berkeley, are regarded as the co-founders of the Livermore facility.

    The new laboratory was sited at a former naval air station of World War II. It was already home to several UC Radiation Laboratory projects that were too large for its location in the Berkeley Hills above the UC campus, including one of the first experiments in the magnetic approach to confined thermonuclear reactions (i.e. fusion). About half an hour southeast of Berkeley, the Livermore site provided much greater security for classified projects than an urban university campus.

    Lawrence tapped 32-year-old Herbert York, a former graduate student of his, to run Livermore. Under York, the Lab had four main programs: Project Sherwood (the magnetic-fusion program), Project Whitney (the weapons-design program), diagnostic weapon experiments (both for the Los Alamos and Livermore laboratories), and a basic physics program. York and the new lab embraced the Lawrence “big science” approach, tackling challenging projects with physicists, chemists, engineers, and computational scientists working together in multidisciplinary teams. Lawrence died in August 1958 and shortly after, the university’s board of regents named both laboratories for him, as the Lawrence Radiation Laboratory.

    Historically, the Berkeley and Livermore laboratories have had very close relationships on research projects, business operations, and staff. The Livermore Lab was established initially as a branch of the Berkeley laboratory. The Livermore lab was not officially severed administratively from the Berkeley lab until 1971. To this day, in official planning documents and records, Lawrence Berkeley National Laboratory is designated as Site 100, Lawrence Livermore National Lab as Site 200, and LLNL’s remote test location as Site 300.[3]

    The laboratory was renamed Lawrence Livermore Laboratory (LLL) in 1971. On October 1, 2007 LLNS assumed management of LLNL from the University of California, which had exclusively managed and operated the Laboratory since its inception 55 years before. The laboratory was honored in 2012 by having the synthetic chemical element livermorium named after it. The LLNS takeover of the laboratory has been controversial. In May 2013, an Alameda County jury awarded over $2.7 million to five former laboratory employees who were among 430 employees LLNS laid off during 2008.[4] The jury found that LLNS breached a contractual obligation to terminate the employees only for “reasonable cause.”[5] The five plaintiffs also have pending age discrimination claims against LLNS, which will be heard by a different jury in a separate trial.[6] There are 125 co-plaintiffs awaiting trial on similar claims against LLNS.[7] The May 2008 layoff was the first layoff at the laboratory in nearly 40 years.[6]

    On March 14, 2011, the City of Livermore officially expanded the city’s boundaries to annex LLNL and move it within the city limits. The unanimous vote by the Livermore city council expanded Livermore’s southeastern boundaries to cover 15 land parcels covering 1,057 acres (4.28 km2) that comprise the LLNL site. The site was formerly an unincorporated area of Alameda County. The LLNL campus continues to be owned by the federal government.


    DOE Seal

  • richardmitnick 12:39 pm on October 18, 2019 Permalink | Reply
    Tags: "Ancient stars shed light on Earth’s similarities to other planets", Astronomy, , , ,   

    From UCLA: “Ancient stars shed light on Earth’s similarities to other planets” 

    UCLA bloc

    From UCLA

    October 17, 2019
    Stuart Wolpert

    An artist’s rendering shows a white dwarf star with a planet in the upper right. Mark Garlick

    Earth-like planets may be common in the universe, a new UCLA study implies. The team of astrophysicists and geochemists presents new evidence that the Earth is not unique. The study was published in the journal Science on Oct. 18.

    “We have just raised the probability that many rocky planets are like the Earth, and there’s a very large number of rocky planets in the universe,” said co-author Edward Young, UCLA professor of geochemistry and cosmochemistry.

    The scientists, led by Alexandra Doyle, a UCLA graduate student of geochemistry and astrochemistry, developed a new method to analyze in detail the geochemistry of planets outside of our solar system. Doyle did so by analyzing the elements in rocks from asteroids or rocky planet fragments that orbited six white dwarf stars.

    “We’re studying geochemistry in rocks from other stars, which is almost unheard of,” Young said.

    “Learning the composition of planets outside our solar system is very difficult,” said co-author Hilke Schlichting, UCLA associate professor of astrophysics and planetary science. “We used the only method possible — a method we pioneered — to determine the geochemistry of rocks outside of the solar system.”

    White dwarf stars are dense, burned-out remnants of normal stars. Their strong gravitational pull causes heavy elements like carbon, oxygen and nitrogen to sink rapidly into their interiors, where the heavy elements cannot be detected by telescopes. The closest white dwarf star Doyle studied is about 200 light-years from Earth and the farthest is 665 light-years away.

    “By observing these white dwarfs and the elements present in their atmosphere, we are observing the elements that are in the body that orbited the white dwarf,” Doyle said. The white dwarf’s large gravitational pull shreds the asteroid or planet fragment that is orbiting it, and the material falls onto the white dwarf, she said. “Observing a white dwarf is like doing an autopsy on the contents of what it has gobbled in its solar system.”

    The data Doyle analyzed had been collected previously by space scientists for other purposes, using telescopes mostly at the W.M. Keck Observatory in Hawaii.

    Keck Observatory, operated by Caltech and the University of California, Maunakea Hawaii USA, 4,207 m (13,802 ft)

    “If I were to just look at a white dwarf star, I would expect to see hydrogen and helium,” Doyle said. “But in these data, I also see other materials, such as silicon, magnesium, carbon and oxygen — material that accreted onto the white dwarfs from bodies that were orbiting them.”

    When iron is oxidized, it shares its electrons with oxygen, forming a chemical bond between them, Young said. “This is called oxidation, and you can see it when metal turns into rust,” he said. “Oxygen steals electrons from iron, producing iron oxide rather than iron metal. We measured the amount of iron that got oxidized in these rocks that hit the white dwarf. We studied how much the metal rusts.”

    Rocks from the Earth, Mars and elsewhere in our solar system are similar in their chemical composition and contain a surprisingly high level of oxidized iron, Young said.

    The sun is made mostly of hydrogen, which does the opposite of oxidizing — hydrogen adds electrons.

    The researchers said the oxidation of a rocky planet has a significant effect on its atmosphere, its core and the kind of rocks it makes on its surface. “All the chemistry that happens on the surface of the Earth can ultimately be traced back to the oxidation state of the planet,” Young said. “The fact that we have oceans and all the ingredients necessary for life can be traced back to the planet being oxidized as it is. The rocks control the chemistry.”

    Until now, scientists have not known in any detail whether the chemistry of rocky exoplanets is similar to or very different from that of the Earth.

    How similar are the rocks the UCLA team analyzed to rocks from the Earth and Mars?

    “Very similar,” Doyle said. “They are Earth-like and Mars-like in terms of their oxidized iron. We’re finding that rocks are rocks everywhere, with very similar geophysics and geochemistry.”

    “It’s always been a mystery why the rocks in our solar system are so oxidized,” Young said. “It’s not what you expect. A question was whether this would also be true around other stars. Our study says yes. That bodes really well for looking for Earth-like planets in the universe.”

    White dwarf stars provide a rare environment in which scientists can analyze the rocky material of other planetary systems.

    The researchers studied the six most common elements in rock: iron, oxygen, silicon, magnesium, calcium and aluminum. They used mathematical calculations and formulas because scientists are unable to study actual rocks from white dwarfs. “We can determine the geochemistry of these rocks mathematically and compare these calculations with rocks that we do have from Earth and Mars,” said Doyle, whose background is in geology and mathematics. “Understanding the rocks is crucial because they reveal the geochemistry and geophysics of the planet.”
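The kind of oxygen bookkeeping behind such calculations can be sketched as follows. The element list matches the article; the allocation scheme and the abundances are illustrative assumptions, not the study's actual method or data:

```python
# Sketch: assign oxygen atoms to the major rock-forming oxides first;
# whatever oxygen is left after Mg, Si, Ca, and Al are satisfied
# constrains how much of the iron can be present as oxidized FeO.
def oxidized_iron_fraction(atoms):
    """atoms: dict of relative atomic abundances for O, Fe, Mg, Si, Ca, Al."""
    # Oxygen needed per cation: MgO = 1, SiO2 = 2, CaO = 1, Al2O3 = 1.5
    o_for_others = (atoms["Mg"] * 1 + atoms["Si"] * 2
                    + atoms["Ca"] * 1 + atoms["Al"] * 1.5)
    o_left = atoms["O"] - o_for_others
    fe_oxidized = max(0.0, min(atoms["Fe"], o_left))  # FeO takes 1 O per Fe
    return fe_oxidized / atoms["Fe"]

# Hypothetical, roughly chondrite-like relative abundances:
sample = {"O": 3.7, "Fe": 0.9, "Mg": 1.0, "Si": 1.0, "Ca": 0.06, "Al": 0.08}
print(f"fraction of Fe oxidized: {oxidized_iron_fraction(sample):.2f}")
```

In this spirit, comparing the inferred oxidized-iron fraction of white dwarf debris against the values measured in Earth and Mars rocks is what allows the "rocks are rocks everywhere" comparison quoted below.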

    “If extraterrestrial rocks have a similar quantity of oxidation as the Earth has, then you can conclude the planet has similar plate tectonics and similar potential for magnetic fields as the Earth, which are widely believed to be key ingredients for life,” Schlichting said. “This study is a leap forward in being able to make these inferences for bodies outside our own solar system and indicates it’s very likely there are truly Earth analogs.”

    Young said his department has both astrophysicists and geochemists working together.

    “The result,” he said, “is we are doing real geochemistry on rocks from outside our solar system. Most astrophysicists wouldn’t think to do this, and most geochemists wouldn’t think to ever apply this to a white dwarf.”

    Co-authors are Benjamin Zuckerman, a UCLA professor of physics and astronomy, and Beth Klein, a UCLA astronomy researcher.

    The research was funded by NASA.

    See the full article here .



    UC LA Campus

    For nearly 100 years, UCLA has been a pioneer, persevering through impossibility, turning the futile into the attainable.

    We doubt the critics, reject the status quo and see opportunity in dissatisfaction. Our campus, faculty and students are driven by optimism. It is not naïve; it is essential. And it has fueled every accomplishment, allowing us to redefine what’s possible, time after time.

    This can-do perspective has brought us 12 Nobel Prizes, 12 Rhodes Scholarships, more NCAA titles than any university and more Olympic medals than most nations. Our faculty and alumni helped create the Internet and pioneered reverse osmosis. And more than 100 companies have been created based on technology developed at UCLA.

  • richardmitnick 5:08 pm on October 17, 2019 Permalink | Reply
    Tags: "The Clumpy and Lumpy Death of a Star", Astronomy, , , , , The Tycho supernova remnant   

    From NASA Chandra: “The Clumpy and Lumpy Death of a Star” 

    NASA Chandra Banner

    NASA/Chandra Telescope

    From NASA Chandra

    October 17, 2019




    A new image of the Tycho supernova remnant from Chandra shows a pattern of bright clumps and fainter holes in the X-ray data.

    Scientists are trying to determine if this ‘clumpiness’ was caused by the supernova explosion itself or something in its aftermath.

    By comparing Chandra data to computer simulations, researchers found evidence that the explosion was likely the source of this lumpy distribution.

    The original supernova was first seen by skywatchers in 1572, including the Danish astronomer Tycho Brahe, after whom the object was eventually named.

    In 1572, Danish astronomer Tycho Brahe was among those who noticed a new bright object in the constellation Cassiopeia. Adding fuel to the intellectual fire that Copernicus started, Tycho showed this “new star” was far beyond the Moon, and that it was possible for the Universe beyond the Sun and planets to change.

    Astronomers now know that Tycho’s new star was not new at all. Rather it signaled the death of a star in a supernova, an explosion so bright that it can outshine the light from an entire galaxy. This particular supernova was a Type Ia, which occurs when a white dwarf star pulls material from, or merges with, a nearby companion star until a violent explosion is triggered. The white dwarf star is obliterated, sending its debris hurtling into space.

    As with many supernova remnants, the Tycho supernova remnant, as it’s known today (or “Tycho,” for short), glows brightly in X-ray light because shock waves — similar to sonic booms from supersonic aircraft — generated by the stellar explosion heat the stellar debris up to millions of degrees. In its two decades of operation, NASA’s Chandra X-ray Observatory has captured unparalleled X-ray images of many supernova remnants.

    Chandra reveals an intriguing pattern of bright clumps and fainter areas in Tycho. What caused this thicket of knots in the aftermath of this explosion? Did the explosion itself cause this clumpiness, or was it something that happened afterward?

    This latest image of Tycho from Chandra is providing clues. To emphasize the clumps in the image and the three-dimensional nature of Tycho, scientists selected two narrow ranges of X-ray energies to isolate material moving away from Earth (silicon, colored red) and material moving toward us (also silicon, colored blue). The other colors in the image (yellow, green, blue-green, orange and purple) show a broad range of different energies and elements, and a mixture of directions of motion. In this new composite image, Chandra’s X-ray data have been combined with an optical image of the stars in the same field of view from the Digitized Sky Survey.

    By comparing the Chandra image of Tycho to two different computer simulations, researchers were able to test their ideas against actual data. One of the simulations began with clumpy debris from the explosion. The other started with smooth debris from the explosion and then the clumpiness appeared afterwards as the supernova remnant evolved and tiny irregularities were magnified.

    The researchers then applied a statistical technique sensitive to the number and size of clumps and holes in images. Comparing results for the Chandra and simulated images, they found that the Tycho supernova remnant strongly resembles the scenario in which the clumps came from the explosion itself. While scientists are not sure how, one possibility is that the star’s explosion had multiple ignition points, like dynamite sticks being set off simultaneously in different locations.
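A toy version of a clump-counting statistic, far simpler than the technique the paper used, thresholds an image and counts connected bright regions:

```python
import numpy as np

# Toy clump statistic: threshold an image and count connected bright
# regions with a simple flood fill. At fixed total flux, more and smaller
# clumps indicates a "clumpier" remnant.
def count_clumps(img, threshold):
    mask = img > threshold
    seen = np.zeros_like(mask, dtype=bool)
    clumps = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                clumps += 1
                stack = [(i, j)]
                while stack:                       # flood-fill one clump
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and not seen[y, x]):
                        seen[y, x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return clumps

img = np.zeros((8, 8))
img[1, 1] = img[5, 5] = img[5, 6] = 1.0            # two separate bright clumps
print(count_clumps(img, 0.5))
```

Running the same statistic on the real image and on the two simulated images is what lets the comparison above distinguish clumpy initial debris from clumpiness that grew later.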

    Understanding the details of how these stars explode is important because it may improve the reliability of using Type Ia supernovas as “standard candles” — that is, objects with known inherent brightness that scientists can use to determine their distance. This is very important for studying the expansion of the universe. These supernovae also sprinkle elements such as iron and silicon that are essential for life as we know it into the next generation of stars and planets.
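The "standard candle" arithmetic itself is simple. The distance-modulus relation is standard; the magnitudes below are illustrative assumptions, not values from the article:

```python
# Distance from the distance modulus m - M = 5 log10(d / 10 pc),
# where m is the apparent and M the absolute magnitude.
def distance_pc(m_apparent, m_absolute):
    """Distance in parsecs implied by apparent and absolute magnitudes."""
    return 10.0 ** ((m_apparent - m_absolute + 5.0) / 5.0)

M_TYPE_IA = -19.3    # typical peak absolute magnitude of a Type Ia supernova
d_pc = distance_pc(14.7, M_TYPE_IA)   # hypothetical measured peak magnitude
print(f"distance: {d_pc / 1e6:.0f} Mpc")
```

Any scatter in the true peak brightness of Type Ia events translates directly into distance errors through this relation, which is why the explosion physics matters for cosmology.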

    A paper describing these results appeared in the July 10th, 2019 issue of The Astrophysical Journal. The authors are Toshiki Sato (RIKEN in Saitama, Japan, and NASA’s Goddard Space Flight Center in Greenbelt, Maryland), John (Jack) Hughes (Rutgers University in Piscataway, New Jersey), Brian Williams, (NASA’s Goddard Space Flight Center), and Mikio Morii (The Institute of Statistical Mathematics in Tokyo, Japan).

    3D printed model of Tycho’s Supernova Remnant

    Another team of astronomers, led by Gilles Ferrand of RIKEN in Saitama, Japan, has constructed their own three-dimensional computer models of a Type Ia supernova remnant as it changes with time. Their work shows that initial asymmetries in the simulated supernova explosion are required so that the model of the ensuing supernova remnant closely resembles the Chandra image of Tycho, at a similar age. This conclusion is similar to that made by Sato and his team.

    A paper describing the results by Ferrand and co-authors appeared in the June 1st, 2019 issue of The Astrophysical Journal.

    See the full article here .



    NASA’s Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for NASA’s Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory controls Chandra’s science and flight operations from Cambridge, Mass.
