Updates from June, 2019

  • richardmitnick 2:29 pm on June 25, 2019
    Tags: "Galaxy Clusters Caught in a First Kiss", Giant Metrewave Radio Telescope, JAXA/Suzaku satellite, SDSS Telescope at Apache Point Observatory, SKA LOFAR core near Exloo Netherlands

    From NASA Chandra: “Galaxy Clusters Caught in a First Kiss” 

    NASA Chandra Banner

    NASA/Chandra Telescope


    From NASA Chandra

    June 25, 2019
    Media contacts:
    Megan Watzke
    Chandra X-ray Center, Cambridge, Mass.
    617-496-7998
    mwatzke@cfa.harvard.edu

    Image panels: Composite, X-ray, Optical, Radio.

    Credit: X-ray: NASA/CXC/RIKEN/L. Gu et al; Radio: NCRA/TIFR/GMRT; Optical: SDSS
    Press Image, Caption, and Videos

    Giant Metrewave Radio Telescope, an array of thirty telescopes, located near Pune in India

    SDSS Telescope at Apache Point Observatory, near Sunspot NM, USA, Altitude 2,788 meters (9,147 ft)

    For the first time, astronomers have found two giant clusters of galaxies that are just about to collide. This observation can be seen as a missing ‘piece of the puzzle’ in our understanding of the formation of structure in the Universe, since large-scale structures—such as galaxies and clusters of galaxies—are thought to grow by collisions and mergers. The result was published in Nature Astronomy on June 24th, 2019 and used data from NASA’s Chandra X-ray Observatory and other X-ray missions.

    Clusters of galaxies are the largest known bound objects and consist of hundreds of galaxies that each contain hundreds of billions of stars. Ever since the Big Bang, these objects have been growing by colliding and merging with each other. Due to their large size, with diameters of a few million light years, these collisions can take about a billion years to complete. Eventually the two colliding clusters will have merged into one bigger cluster.

    Because the merging process takes much longer than a human lifetime, we only see snapshots of the various stages of these collisions. The challenge is to find colliding clusters that are just at the stage of first touching each other.

    3

    In theory, this stage has a relatively short duration and is therefore hard to find. It is like finding a raindrop that just touches the water surface in a photograph of a pond during a rain shower. Obviously, such a picture would show a lot of falling droplets and ripples on the water surface, but only a few droplets in the process of merging with the pond. Similarly, astronomers have found a lot of single clusters and merged clusters with outgoing ripples indicating a past collision, but until now no two clusters that are just about to touch each other.

    An international team of astronomers now announced the discovery of two clusters on the verge of colliding. This enabled astronomers to test their computer simulations, which show that in the first moments a shock wave, analogous to the sonic boom produced by supersonic motion of an airplane, is created in between the clusters and travels out perpendicular to the merging axis. “These clusters show the first clear evidence for this type of merger shock,” says first author Liyi Gu from RIKEN national science institute in Japan and SRON Netherlands Institute for Space Research. “The shock created a hot belt region of 100-million-degree gas between the clusters, which is expected to extend up to, or even go beyond the boundary of the giant clusters. Therefore, the observed shock has a huge impact on the evolution of galaxy clusters and large scale structures.”

    Astronomers are planning to collect more ‘snapshots’ to ultimately build up a continuous model describing the evolution of cluster mergers. SRON-researcher Hiroki Akamatsu: “More merger clusters like this one will be found by eROSITA, an X-ray all-sky survey mission that will be launched this year.

    eRosita DLR MPG

    Two other upcoming X-ray missions, XRISM and Athena, will help us understand the role of these colossal merger shocks in the structure formation history.”

    JAXA XRISM spacecraft schematic

    ESA Athena

    Liyi Gu and his collaborators studied the colliding pair during an observation campaign, carried out with three X-ray satellites (ESA’s XMM-Newton satellite, NASA’s Chandra, and JAXA’s Suzaku satellite) and two radio telescopes (the Low-Frequency Array, a European project led by the Netherlands, and the Giant Metrewave Radio Telescope operated by the National Centre for Radio Astrophysics of India).

    ESA/XMM Newton

    JAXA/Suzaku satellite

    SKA LOFAR core (“superterp”) near Exloo, Netherlands

    Other materials about the findings are available at:
    http://chandra.si.edu

    For more Chandra images, multimedia and related materials, visit:
    http://www.nasa.gov/chandra

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NASA’s Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for NASA’s Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory controls Chandra’s science and flight operations from Cambridge, Mass.

     
  • richardmitnick 1:38 pm on June 25, 2019
    Tags: "Large earthquake on Japan’s west coast points to a profound shortcoming in the national seismic hazard model"

    From temblor: “Large earthquake on Japan’s west coast points to a profound shortcoming in the national seismic hazard model” 


    From temblor

    June 24, 2019
    Sara E. Pratt, M.A., M.S.
    @Geosciencesara

    On Tuesday, June 18, 2019, a magnitude-6.4 quake struck the west coast of Honshu along the eastern Sea of Japan. The quake was shallow — 12 kilometers (7.5 miles) deep — and only 6 kilometers (3.7 miles) offshore, according to the U.S. Geological Survey. Its proximity to the cities of Tsuruoka and Sakata, both of which have populations of about 100,000, meant many were exposed to shaking. No one was killed, 21 people were injured, and despite the shallow depth, infrastructure damage was minimal. But the quake was a reminder that this region has experienced several large inland quakes over the last 15 years, and could again. In fact, two magnitude-6.8 earthquakes struck near the hypocenter of this week’s quake in Niigata in 2004 and 2007. The 2004 Niigata-Chuetsu quake killed 40 people, injured 3,000 and damaged more than 6,000 homes, and the 2007 Niigata quake killed seven people, injured more than 830 and destroyed 500 houses.

    In the hours that followed the June 18 Tsuruoka quake, aftershocks ranging from magnitude-2.7 to magnitude-4.1 were recorded around Yamagata and Niigata. Credit: HI-Net/NIED

    “The tectonic situation, epicenter offshore near the coast, and the size of the quakes are quite similar,” says Prof. Shinji Toda, a geophysicist at the International Research Institute of Disaster Science at Tohoku University who studies inland quakes.

    Crucially, the hazard of large earthquakes striking off the coasts of Yamagata and Niigata prefectures is being underestimated by Japan’s national earthquake hazard models, according to some seismologists.

    “The government is underestimating the probability of magnitude-7.5 to -7.8 events along the eastern Sea of Japan,” says Prof. Toda. “It misleads the general public [that] we will not have any large events near the coast of Yamagata and Niigata.”

    The June 18 thrust fault rupture (where the crust is being compressed horizontally) occurred on the eastern margin of the Sea of Japan in a seismic zone where numerous active faults accommodate the strain of east-west crustal shortening transmitted from the subduction of the Pacific Plate, says Prof. Toda.

    During the past 5-25 million years (the Miocene epoch), this region underwent ‘backarc’ extension (stretching), opening what is now the eastern Sea of Japan. Those tensional faults have now been reactivated, with their sense of slip reversed, as thrust faults. Thus, “the hazard of inland large quakes is always high,” Prof. Toda says.

    Although the country’s east coast, where the Pacific Plate subducts beneath the North American and Eurasian plates in the Japan Trench, is more prone to large thrust quakes like the March 2011 magnitude-9 Tohoku megathrust quake, the west coast of Japan also is quite seismically active, a fact that is not being adequately accounted for in Japan’s earthquake hazard model, says geophysicist and Temblor CEO Ross Stein.

    When compared to Japan’s national earthquake model, the GEAR model indicates a higher rate of earthquake activity on the eastern margin of the Sea of Japan, with a significant lifetime likelihood of experiencing a magnitude-7 or -7.5 quake.

    Japan’s earthquake hazard models are released by the Japan Seismic Hazard Information Station (J-SHIS). The J-SHIS model uses inputs based on known faults and historical quakes, and assumes fairly regular recurrence intervals. It has been criticized for underestimating the hazard of the 2011 Tohoku quake, whose tsunami killed more than 18,000 people.

    Scientists and officials in “Japan have done their very best to create a model that they think reflects future earthquake occurrence based on the expectation of regularity in the size and recurrence behavior of earthquakes. They have also built in the expectation that the longer it’s been since the last large earthquake, the more likely the next one is,” Stein says.

    The J-SHIS model thus anticipates a strong likelihood that the next megaquake will occur in the Nankai Trough, off the southeast coast of Honshu, where two deadly magnitude-8.1 quakes struck in the 1940s. The 1944 Tōnankai and the 1946 Nankaidō quakes both triggered tsunamis and killed more than 1,200 and 1,400 people, respectively. “The Japanese model is putting all of its weight on this area, southeast of Tokyo and Nagoya,” Stein says.

    Another model, the Global Earthquake Activity Rate (GEAR) forecast, which was developed by a team from UCLA, the University of Nevada, Reno, and Temblor and is used in the Temblor app, indicates that quakes on the west coast of Honshu could reach magnitude-7 or magnitude-7.5 within a typical resident’s lifetime.

    Unlike traditional earthquake hazard models, GEAR does not include active faults or historical earthquakes, which are not uniformly available around the globe. Instead, GEAR takes a global approach that uses only two factors: the stress that drives quakes (measured by GPS) and the events that release that stress, represented in the model by a complete global record of all quakes greater than magnitude-5.7 that have occurred over the past 40 years (from the Global CMT catalog).
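
    To make that description concrete, here is a minimal, purely illustrative sketch (assuming NumPy is available) of the general idea of blending a geodetic strain-rate forecast with a smoothed-seismicity forecast cell by cell. The toy numbers and the simple geometric-mean blend are assumptions for illustration only; the actual GEAR1 construction and calibration are described in Bird et al. (2015), cited below.

```python
import numpy as np

# Toy inputs for a handful of grid cells (values invented for illustration):
# a rate implied by geodetic strain (GPS) and a rate from smoothed past
# seismicity, both expressed as expected M>=5.7 quakes per cell per century.
geodetic_rate   = np.array([0.02, 0.10, 0.50, 1.20])
seismicity_rate = np.array([0.05, 0.05, 0.80, 0.60])

# One simple way to combine the two parents is a multiplicative blend, which
# rewards cells where both the strain signal and the quake history are high.
# GEAR1 uses a calibrated log-linear blend; the 0.5 exponent is an assumption.
d = 0.5
hybrid_rate = geodetic_rate**d * seismicity_rate**(1.0 - d)

for g, s, h in zip(geodetic_rate, seismicity_rate, hybrid_rate):
    print(f"geodetic={g:5.2f}  seismicity={s:5.2f}  ->  hybrid={h:5.2f} events/century")
```

    Cells where both the GPS-derived strain and the historical seismicity are high end up with the highest forecast rates, which is the qualitative behavior the article describes for the eastern margin of the Sea of Japan.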

    “What the GEAR model says is that the Tohoku coast is a lot more likely to produce a large earthquake than the Japan Sea side, but the Japan Sea side is still quite active,” Stein says. “It should produce large earthquakes and has.”

    Significant historical earthquakes in the shear zone along the eastern Sea of Japan include the 1964 magnitude-7.5 Niigata earthquake, the 1983 magnitude-7.7 Nihonkai-Chubu earthquake and the 1993 magnitude-7.8 Hokkaido-Nansei-Oki earthquake.

    References

    USGS Event Pages – https://earthquake.usgs.gov/earthquakes/eventpage/us600042fx/executive

    https://earthquake.usgs.gov/earthquakes/eventpage/us600042fx/pager

    Bird, P., D. D. Jackson, Y. Y. Kagan, C. Kreemer, and R. S. Stein (2015). GEAR1: A global earthquake activity rate model constructed from geodetic strain rates and smoothed seismicity, Bull. Seismol. Soc. Am. 105, no. 5, 2538–2554.

    Toda, S., and B. Enescu (2011). Rate/state Coulomb stress transfer model for the CSEP Japan seismicity forecast, Earth, Planets and Space 63. https://doi.org/10.5047/eps.2011.01.004 https://link.springer.com/article/10.5047/eps.2011.01.004

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Earthquake Alert

    Earthquake Network project

    Earthquake Network is a research project which aims at developing and maintaining a crowdsourced smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect the earthquake waves using the on-board accelerometers. When an earthquake is detected, an earthquake warning is issued in order to alert the population not yet reached by the damaging waves of the earthquake.

    The project started on January 1, 2013 with the release of the homonymous Android application Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network

    QCN bloc

    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide a better understanding of earthquakes and give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).
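
    To give a flavor of what such a “trigger” can look like, below is a minimal sketch of a classic short-term-average over long-term-average (STA/LTA) detector applied to accelerometer samples, assuming NumPy is available. This is a generic textbook technique shown for illustration; it is not QCN’s actual trigger code, and the window lengths, threshold and synthetic data are assumed values.

```python
import numpy as np

def sta_lta_trigger(accel, sta_len=50, lta_len=1000, threshold=4.0):
    """Return sample indices where the short-term average of |acceleration|
    exceeds `threshold` times the long-term average (a simple trigger)."""
    mag = np.abs(accel)
    sta = np.convolve(mag, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(mag, np.ones(lta_len) / lta_len, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)   # avoid division by zero
    return np.flatnonzero(ratio > threshold)

# Synthetic example: quiet sensor noise with a burst of strong shaking.
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.002, 20000)              # background noise (invented units)
signal[12000:12400] += rng.normal(0, 0.05, 400)   # sudden strong motion
hits = sta_lta_trigger(signal)
print("first trigger at sample:", hits[0] if hits.size else "none")
```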

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC is a leader in the fields of Distributed Computing, Grid Computing and Citizen Cyberscience. BOINC is more properly the Berkeley Open Infrastructure for Network Computing, developed at UC Berkeley.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. The Quake-Catcher Network (QCN) links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    Below, the QCN Quake-Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U. S. Geological Survey (USGS) along with a coalition of State and university partners is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long term funding must be secured before the system can begin sending general public notifications, however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the arrival of the S-wave, which brings the strong shaking that usually causes most of the damage.
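
    As a rough, back-of-the-envelope illustration of where that warning time comes from, the sketch below compares P-wave and S-wave travel times for a few distances. The wave speeds and the processing delay are assumed, typical crustal values, not ShakeAlert system parameters; a negative result corresponds to the “blind zone” close to the epicenter, where shaking arrives before an alert can.

```python
# Approximate crustal wave speeds (km/s); actual values vary with local geology.
VP, VS = 6.0, 3.5

def warning_time_s(epicentral_distance_km, processing_delay_s=5.0):
    """Seconds between an alert (issued after P-wave detection plus an assumed
    processing/telemetry delay) and the arrival of the damaging S waves."""
    t_p = epicentral_distance_km / VP
    t_s = epicentral_distance_km / VS
    return t_s - (t_p + processing_delay_s)

for d in (20, 60, 100, 200):
    print(f"{d:4d} km from the epicenter: ~{warning_time_s(d):5.1f} s of warning")
```

    With these assumed numbers, sites tens to a couple of hundred kilometers away get a few seconds to a few tens of seconds of warning, consistent with the studies described below.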

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California, joined by Oregon and Washington in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers, and allows for automatic fail-over if connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan

     
  • richardmitnick 1:06 pm on June 25, 2019
    Tags: "The Low Density of Some Exoplanets is Confirmed", Kepler-9 and its planets Kepler-9b and Kepler-9c

    From Harvard-Smithsonian Center for Astrophysics: “The Low Density of Some Exoplanets is Confirmed” 

    Harvard Smithsonian Center for Astrophysics


    From Harvard-Smithsonian Center for Astrophysics

    June 21, 2019

    The Kepler mission and its extension, called K2, discovered thousands of exoplanets.

    NASA/Kepler Telescope, and K2 March 7, 2009 until November 15, 2018

    It detected them using the transit technique, measuring the dip in light intensity whenever an orbiting planet moved across the face of its host star as viewed from Earth.

    Planet transit. NASA/Ames

    Transits can not only measure the orbital period, they often can determine the size of the exoplanet from the detailed depth and shape of its transit curve and the host star’s properties. The transit method, however, does not measure the mass of the planet. The radial velocity method, by contrast, which measures the wobble of a host star under the gravitational pull of an orbiting exoplanet, allows for the measurement of its mass. Knowing a planet’s radius and mass allows for the determination of its average density, and hence clues to its composition.
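
    As a simple worked example of that last step, the sketch below turns a radius (from a transit) and a mass (from radial velocities or transit-timing variations) into a bulk density. The planet values used are hypothetical placeholders, not the measured Kepler-9 numbers.

```python
import math

EARTH_RADIUS_CM = 6.371e8   # cm
EARTH_MASS_G    = 5.972e27  # g

def bulk_density(radius_earth_radii, mass_earth_masses):
    """Average density in g/cm^3 from a planet's radius and mass."""
    r = radius_earth_radii * EARTH_RADIUS_CM
    m = mass_earth_masses * EARTH_MASS_G
    volume = (4.0 / 3.0) * math.pi * r**3
    return m / volume

# Hypothetical "puffy" Saturn-like planet: large radius, modest mass.
print(f"{bulk_density(8.0, 45.0):.2f} g/cm^3")   # well under water's 1.0 g/cm^3
# Earth itself, as a sanity check:
print(f"{bulk_density(1.0, 1.0):.2f} g/cm^3")    # ~5.5 g/cm^3
```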

    Radial Velocity Method-Las Cumbres Observatory

    About fifteen years ago, CfA astronomers and others realized that in planetary systems with multiple planets, the periodic gravitational tug of one planet on another will alter their orbital parameters. Although the transit method cannot directly measure exoplanet masses, it can detect these orbital variations and these can be modeled to infer masses. Kepler has identified hundreds of exoplanet systems with transit-timing variations, and dozens have been successfully modeled. Surprisingly, this procedure seemed to find a prevalence of exoplanets with very low densities. The Kepler-9 system, for example, appears to have two planets with densities respectively of 0.42 and 0.31 grams per cubic centimeter. (For comparison, the rocky Earth’s average density is 5.51 grams per cubic centimeter, water is, by definition, 1.0 grams per cubic centimeter, and the gas giant Saturn is 0.69 grams per cubic centimeter.) The striking results cast some doubt on one or more parts of the transit timing variation methodology and created a long-standing concern.

    CfA astronomers David Charbonneau, David Latham, Mercedes Lopez-Morales, and David Phillips, and their colleagues tested the reliability of the method by measuring the densities of the Kepler-9 planets using the radial velocity method, its two Saturn-like planets being among a small group of exoplanets whose masses can be measured (if just barely) with either technique.

    An artist’s depiction of Kepler-9 and its planets Kepler-9b and Kepler-9c. NASA

    They used the HARPS-N spectrometer on the Telescopio Nazionale Galileo in La Palma in sixteen observing epochs; HARPS-N can typically measure velocity variations with an error as tiny as about twenty miles an hour. Their results confirm the very low densities obtained by the transit-timing method, and verify the power of the transit-timing variation method.

    Harps North at Telescopio Nazionale Galileo –

    Telescopio Nazionale Galileo, a 3.58-meter Italian telescope located at the Roque de los Muchachos Observatory on the island of La Palma in the Canary Islands, Spain. Altitude 2,396 m (7,861 ft)

    Science paper:
    HARPS-N Radial Velocities Confirm the Low Densities of the Kepler-9 Planets
    MNRAS

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Center for Astrophysics combines the resources and research facilities of the Harvard College Observatory and the Smithsonian Astrophysical Observatory under a single director to pursue studies of those basic physical processes that determine the nature and evolution of the universe. The Smithsonian Astrophysical Observatory (SAO) is a bureau of the Smithsonian Institution, founded in 1890. The Harvard College Observatory (HCO), founded in 1839, is a research institution of the Faculty of Arts and Sciences, Harvard University, and provides facilities and substantial other support for teaching activities of the Department of Astronomy.

     
  • richardmitnick 12:40 pm on June 25, 2019
    Tags: "NASA Technology Missions Launch on SpaceX Falcon Heavy"

    From JPL-Caltech: “NASA Technology Missions Launch on SpaceX Falcon Heavy” 

    NASA JPL Banner

    From JPL-Caltech

    June 25, 2019

    Arielle Samuelson
    Jet Propulsion Laboratory, Pasadena, Calif.
    818-354-0307
    arielle.a.samuelson@jpl.nasa.gov

    Clare Skelly
    Headquarters, Washington
    202-358-4273
    clare.a.skelly@nasa.gov

    Karen Fox
    Goddard Space Flight Center, Greenbelt, Md.
    301-286-6284
    karen.c.fox@nasa.gov

    A SpaceX Falcon Heavy rocket is ready for launch on the pad at Launch Complex 39A at NASA’s Kennedy Space Center in Florida on June 24, 2019. SpaceX and the U.S. Department of Defense will launch two dozen satellites to space, including four NASA payloads that are part of the Space Test Program-2, managed by the U.S. Air Force Space and Missile Systems Center. Photo Credit: NASA/Kim Shiflett

    NASA technology demonstrations, which one day could help the agency get astronauts to Mars, and science missions, which will look at the space environment around Earth and how it affects us, have launched into space on a Falcon Heavy rocket.

    The NASA missions – including the Deep Space Atomic Clock and two instruments from NASA’s Jet Propulsion Laboratory in Pasadena, California – lifted off at 11:30 p.m. PDT (2:30 a.m. EDT) Tuesday from NASA’s Kennedy Space Center in Florida, as part of the Department of Defense’s Space Test Program-2 (STP-2) launch.

    “This launch was a true partnership across government and industry, and it marked an incredible first for the U.S. Air Force Space and Missile Systems Center,” said Jim Reuter, associate administrator for NASA’s Space Technology Mission Directorate. “The NASA missions aboard the Falcon Heavy also benefited from strong collaborations with industry, academia and other government organizations.”

    The missions, each with a unique set of objectives, will aid in smarter spacecraft design and benefit the agency’s Moon to Mars exploration plans by providing greater insight into the effects of radiation in space and testing an atomic clock that could change how spacecraft navigate.

    With launch and deployments complete, the missions will start to power on, communicate with Earth and collect data. They each will operate for about a year, providing enough time to mature the technologies and collect valuable science data. Below is more information about each mission, including notional timelines for key milestones.

    Enhanced Tandem Beacon Experiment

    Two NASA CubeSats making up the Enhanced Tandem Beacon Experiment (E-TBEx) deployed at 12:08 and 12:13 a.m. PDT (3:08 and 3:13 a.m. EDT). Working in tandem with NOAA’s COSMIC-2 mission – six satellites that each carry a radio occultation (GPS) receiver developed at JPL – E-TBEx will explore bubbles in the electrically-charged layers of Earth’s upper atmosphere, which can disrupt communications and GPS signals that we rely on every day. The CubeSats will send signals in several frequencies down to receiving stations on Earth. Scientists will measure any disruptions in these signals to determine how they’re being affected by the upper atmosphere.

    One to three weeks after launch: E-TBEx operators “check out” the CubeSats to make sure power, navigation/guidance and data systems are working in space as expected.
    Approximately three weeks after launch: Science beacons that send signals to antennas on Earth power up and begin transmitting to ground stations.
    About one year after launch: The E-TBEx mission ends.

    Deep Space Atomic Clock

    NASA’s Deep Space Atomic Clock is a toaster oven-size instrument traveling aboard a commercial satellite that was released into low-Earth orbit at 12:54 a.m. PDT (3:54 a.m. EDT). The unique atomic clock will test a new way for spacecraft to navigate in deep space. The technology could make GPS-like navigation possible at the Moon and Mars.
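
    The reason clock quality matters for navigation comes down to simple arithmetic: ranging signals travel at the speed of light, so every nanosecond of timing error corresponds to roughly 30 centimeters of range error. The quick illustration below uses arbitrary example error values, not Deep Space Atomic Clock specifications.

```python
C_KM_PER_S = 299_792.458  # speed of light

def range_error_m(clock_error_seconds):
    """Ranging error, in meters, produced by a given one-way timing error."""
    return clock_error_seconds * C_KM_PER_S * 1000.0

for err in (1e-9, 1e-6, 1e-3):   # 1 ns, 1 microsecond, 1 ms of clock drift
    print(f"clock error {err:.0e} s  ->  range error ~{range_error_m(err):,.1f} m")
```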

    NASA Deep Space Atomic Clock

    Two to four weeks after launch: The ultra-stable oscillator, part of the Deep Space Atomic Clock that keeps precise time, powers on to warm up in space.
    Four to seven weeks after launch: The full Deep Space Atomic Clock powers on.
    Three to four months after launch: Preliminary clock performance results are expected.
    One year after full power on: The Deep Space Atomic Clock mission ends, final data analysis begins.

    Green Propellant Infusion Mission

    The Green Propellant Infusion Mission (GPIM) deployed at 12:57 a.m. PDT (3:57 a.m. EDT) and immediately began to power on. GPIM will test a new propulsion system that runs on a high-performance and non-toxic spacecraft fuel. This technology could help propel constellations of small satellites in and beyond low-Earth orbit.

    Within a day of launch: Mission operators check out the small spacecraft.
    One to three weeks after launch: Mission operators ensure the propulsion system heaters and thrusters are operating as expected.
    During the first three months after launch: To demonstrate the performance of the spacecraft’s thrusters, GPIM performs three lowering burns that place it in an elliptical orbit; each time GPIM gets closer to Earth at one particular point in its orbit.
    Throughout the mission: Secondary instruments aboard GPIM measure space weather and test a system that continuously reports the spacecraft’s position and velocity.
    About 12 months after launch: Mission operators command a final thruster burn to deplete the fuel tank, a technical requirement for the end of mission.
    About 13 months after launch: The GPIM mission ends.

    Space Environment Testbeds

    The U.S. Air Force Research Laboratory’s Demonstration and Science Experiments (DSX) was the last spacecraft to be released from STP-2, at 3:04 a.m. PDT (6:04 a.m. EDT). Onboard is an instrument designed by JPL to measure spacecraft vibrations, and four NASA experiments that make up the Space Environment Testbeds (SET). SET will study how to better protect satellites from space radiation by analyzing the harsh environment of space near Earth and testing various strategies to mitigate the impacts. This information can be used to improve spacecraft design, engineering and operations in order to protect spacecraft from harmful radiation driven by the Sun.

    Three weeks after launch: SET turns on for check out and testing of all four experiments.
    Eight weeks after launch: Anticipated start of science data collection.
    About 12 months after check-out: The SET mission ends.

    In all, STP-2 delivered about two dozen satellites into three separate orbits around Earth. Kennedy Space Center engineers mentored Florida high school students who developed and built a CubeSat that also launched on STP-2.

    “It was gratifying to see 24 satellites launch as one,” said Nicola Fox, director of the Heliophysics Division in NASA’s Science Mission Directorate. “The space weather instruments and science CubeSats will teach us how to better protect our valuable hardware and astronauts in space, insights useful for the upcoming Artemis program and more.”

    GPIM and the Deep Space Atomic Clock are both part of the Technology Demonstration Missions program within NASA’s Space Technology Mission Directorate. The Space Communications and Navigation program within NASA’s Human Exploration and Operations Mission Directorate also provided funding for the atomic clock. SET and E-TBEx were both funded by NASA’s Science Mission Directorate.

    Learn more about NASA technology:

    https://www.nasa.gov/spacetech

    Find out how NASA is sending astronauts back to the Moon and on to Mars at:

    https://www.nasa.gov/topics/moon-to-mars

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    NASA JPL Campus

    Jet Propulsion Laboratory (JPL) is a federally funded research and development center and NASA field center located in the San Gabriel Valley area of Los Angeles County, California, United States. Although the facility has a Pasadena postal address, it is actually headquartered in the city of La Cañada Flintridge, on the northwest border of Pasadena. JPL is managed by the nearby California Institute of Technology (Caltech) for the National Aeronautics and Space Administration. The Laboratory’s primary function is the construction and operation of robotic planetary spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA’s Deep Space Network.

    Caltech Logo

    NASA image

     
  • richardmitnick 12:19 pm on June 25, 2019
    Tags: "The future of particle accelerators may be autonomous", Fermilab | FAST Facility, FNAL PIP-II Injector Test (PIP2IT) facility, In December 2018 operators at LCLS at SLAC successfully tested an algorithm trained on simulations and actual data from the machine to tune the beam.

    From Symmetry: “The future of particle accelerators may be autonomous” 

    Symmetry Mag
    From Symmetry

    06/25/19
    Caitlyn Buongiorno

    Illustration by Sandbox Studio, Chicago with Ana Kova

    Particle accelerators are some of the most complicated machines in science. Scientists are working on ways to run them with a diminishing amount of direction from humans.

    In 2015, operators at the Linac Coherent Light Source particle accelerator looked into how they were spending their time managing the machine.

    SLAC/LCLS

    They tracked the hours they spent on tasks like investigating problems and orchestrating new configurations of the particle beam for different experiments.

    They discovered that, if they could automate the process of tuning the beam—tweaking the magnets that keep the LCLS particle beam on its course through the machine—it would free up a few hundred hours each year.

    Scientists have been working to automate different aspects of the operation of accelerators since the 1980s. In today’s more autonomous era of self-driving cars and vacuuming robots, efforts are still going strong, and the next generation of particle accelerators promises to be more automated than ever. Scientists are using machine learning to optimize beamlines more efficiently, detect problems more effectively and create the simulations they need in real-time.

    Quicker fixes

    With any machine, there is a chance that a part might malfunction or break. In the case of an accelerator, that part might be one of the many magnets that direct the particle beam.

    If one magnet stops working, there are ways to circumvent the problem using the magnets around it. But it’s not easy. A particle accelerator is a nonlinear system; when an operator makes a change to it, all of the possible downstream effects of that change can be difficult to predict.

    “The human brain isn’t good at that kind of optimization,” says Dan Ratner, the leader of the strategic initiative for machine learning at the US Department of Energy’s SLAC National Accelerator Laboratory in California.

    An operator can find the solution by trial and error, but that can take some time. With machine learning, an autonomous accelerator could potentially do the same task many times faster.
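
    As a heavily simplified illustration of what automated tuning can mean in practice, the sketch below hands a made-up, nonlinear “beam response” function to a generic numerical optimizer (assuming NumPy and SciPy are available) and lets it search for good magnet settings. This is not the algorithm SLAC tested; the response function, the number of magnets and the optimizer choice are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def beam_loss(magnet_settings):
    """Stand-in for a measurement of beam quality: lower is better.
    In reality this would come from the accelerator's diagnostics; here it is
    an invented nonlinear function with couplings between the 'magnets'."""
    x = np.asarray(magnet_settings)
    return np.sum((x - 0.3) ** 2) + 0.5 * np.sin(4 * x[0]) * x[1] ** 2

start = np.zeros(4)                       # four magnet strengths, arbitrary units
result = minimize(beam_loss, start, method="Nelder-Mead")
print("suggested settings:", np.round(result.x, 3))
print("predicted loss    :", round(result.fun, 4))
```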

    In December 2018, operators at LCLS at SLAC successfully tested an algorithm trained on simulations and actual data from the machine to tune the beam.

    Ratner doesn’t expect either LCLS or its upgrade, LCLS-II, scheduled to come online in 2021, to run without human operators, but he’s hoping to give operators a new tool. “Ultimately, we’re trying to free up operators for tasks that really need a human,” he says.

    SLAC/LCLS II projected view

    Practical predictions

    At Fermi National Accelerator Laboratory in Illinois, physicist Jean-Paul Carneiro is working on an upgrade to the lab’s accelerator complex in the hopes that it will one day run with little to no human intervention.

    He was recently awarded a two-year grant for the project through the University of Chicago’s FACCTS program—France And Chicago Collaborating in The Sciences. He is integrating a code developed by scientist Didier Uriot at France’s Saclay Nuclear Research Center into the lab’s PIP-II Injector Test (PIP2IT) facility.

    FNAL PIP-II Injector Test (PIP2IT) facility

    PIP2IT is the proving ground for technologies intended for PIP-II, the upgrade to Fermilab’s accelerator complex that will supply the world’s most intense beams of neutrinos for the international Deep Underground Neutrino Experiment.

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA

    Carneiro says autonomous accelerator operation would increase the usability of the beam for experiments by drastically reducing the accelerator’s downtime. On average, accelerators can currently expect to run at about 90% availability, he says. “If you want to achieve a 98 or 99% availability, the only way to do it is with a computer code.”
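
    To put those percentages in perspective, a rough calculation (assuming round-the-clock operation over a full year) shows how much downtime each availability level implies:

```python
HOURS_PER_YEAR = 365 * 24   # 8760

for availability in (0.90, 0.98, 0.99):
    downtime = (1.0 - availability) * HOURS_PER_YEAR
    print(f"{availability:.0%} availability -> ~{downtime:,.0f} hours of downtime per year")
```

    Going from 90% to 99% availability, in this simple accounting, is the difference between roughly 876 and roughly 88 hours of lost beam time per year.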

    Beyond quickly fixing tuning problems, another way to increase the availability of beam is to detect potential complications before they happen.

    Even in relatively stable areas, the Earth is constantly shifting under our feet—and shifting underground particle accelerators as well. People don’t feel these movements, but an accelerator beam certainly does. Over the course of a few days, these shifts can cause the beam to begin creeping away from its intended course. An autonomous accelerator could correct the beam’s path before a human would even notice the problem.

    Lia Merminga, PIP-II project director at Fermilab, says she thinks the joint project with CEA Saclay is a fantastic opportunity for the laboratory. “Part of our laboratory’s mission is to advance the science and technology of particle accelerators. These advancements will free up accelerator physicists to focus their talent more on developing new ideas and concepts, while providing users with higher reliability and more efficient beam delivery, ultimately increasing the scientific output.”

    Speedy simulations

    Accelerator operators don’t spend all of their time trouble-shooting; they also make changes to the beam to optimize it for specific experiments. Scientists can apply for time on an accelerator to conduct a study. The parameters they originally wanted sometimes change as they begin to conduct their experiment. Finding ways to automate this process would save operators and experimental physicists countless hours.

    Auralee Edelen, a research associate at SLAC, is doing just that by exploring how scientists can improve their models of different beam configurations and how to best achieve them.

    To map the many parameters of an entire beam line from start to end, scientists have thus far needed to use thousands of hours on a supercomputer—not always ideal for online adjustments or finding the best way to obtain a particular beam configuration. A machine learning model, on the other hand, could be trained to simulate what would happen if variables were changed, in under a second.
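
    A minimal sketch of that surrogate-model idea, assuming scikit-learn and NumPy are available: train a small neural network on a batch of (settings, beam property) pairs that an expensive simulation produced offline, then query it almost instantly. The training data below are synthetic stand-ins, not accelerator simulations, and the network architecture is an arbitrary choice.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Pretend these came from a slow physics simulation: inputs are a few beamline
# settings, the output is some beam property (e.g., a spot size); all invented.
X = rng.uniform(-1, 1, size=(2000, 3))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 - 0.3 * X[:, 0] * X[:, 2]

# A small neural-network surrogate: slow to train once, nearly instant to query.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X, y)

new_settings = np.array([[0.2, -0.4, 0.7]])
print("surrogate prediction:", surrogate.predict(new_settings)[0])
```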

    “This is one of the new capabilities of machine learning that we want to leverage,” Edelen says. “We’re just now getting to a point where we can integrate these models into the control system for operators to use.”

    In 2016 a neural network—a machine learning algorithm designed to recognize patterns—put this idea to the test at the Fermilab Accelerator Science and Technology facility [FAST].

    Fermilab | FAST Facility

    It completed what had been a 20-minute process to compare a few different simulations in under a millisecond. Edelen is expanding on her FAST research at LCLS, pushing the limits of what is currently possible.

    Simulations also come in handy when it isn’t possible for a scientist to take a measurement they want, because doing so would interfere with the beam. To get around this, scientists can use an algorithm to correlate the measurement with others that don’t affect the beam and infer what the desired measurement would have shown.

    Initial studies at FAST demonstrated that a neural network could use this technique to predict measurements. Now, SLAC’s Facility for Advanced Accelerator and Experimental Tests, or FACET, and its successor, FACET-II, are leading SLAC’s effort to refine this technique for the scientists who use their beam line.

    SLAC FACET

    FACET-II Design, Parameters and Capabilities

    “It’s an exciting time,” says Merminga. “Any one of these improvements would help advance the field of accelerator physics. I am delighted that PIP2IT is being used to test new concepts in accelerator operation.”

    Who knows—within the next few decades, autonomous accelerators may seem as mundane as roaming robotic vacuums.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 11:17 am on June 25, 2019
    Tags: "Is This The Most Massive Star In The Universe?"

    From Ethan Siegel: “Is This The Most Massive Star In The Universe?” 

    From Ethan Siegel

    June 24, 2019

    The largest group of newborn stars in our Local Group of galaxies, cluster R136, contains the most massive stars we’ve ever discovered: over 250 times the mass of our Sun for the largest. The brightest of the stars found here are more than 8,000,000 times as luminous as our Sun. And yet, there are still likely even more massive ones out there. (NASA, ESA, AND F. PARESCE, INAF-IASF, BOLOGNA, R. O’CONNELL, UNIVERSITY OF VIRGINIA, CHARLOTTESVILLE, AND THE WIDE FIELD CAMERA 3 SCIENCE OVERSIGHT COMMITTEE)

    At the core of the largest star-forming region of the Local Group sits the biggest star we know of.

    Mass is the single most important astronomical property in determining the lives of stars.
    The (modern) Morgan–Keenan spectral classification system, with the temperature range of each star class shown above it, in kelvin. Our Sun is a G-class star, producing light with an effective temperature of around 5800 K and a brightness of 1 solar luminosity. Stars can be as low in mass as 8% the mass of our Sun, where they’ll burn with ~0.01% our Sun’s brightness and live for more than 1000 times as long, but they can also rise to hundreds of times our Sun’s mass, with millions of times our Sun’s luminosity. (WIKIMEDIA COMMONS USER LUCASVB, ADDITIONS BY E. SIEGEL)

    Greater masses generally lead to higher temperatures, greater brightnesses, and shorter lifetimes.
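
    A rough quantitative version of that statement uses the commonly quoted main-sequence scaling relations, L ∝ M^3.5 and lifetime ∝ M/L ∝ M^-2.5. These are approximations that break down for the most extreme stars, but they capture the trend:

```python
def relative_luminosity(mass_in_suns, exponent=3.5):
    """Approximate main-sequence luminosity relative to the Sun, L ~ M**3.5."""
    return mass_in_suns ** exponent

def relative_lifetime(mass_in_suns):
    """Fuel scales with M, burn rate with L, so lifetime ~ M / L ~ M**-2.5."""
    return mass_in_suns / relative_luminosity(mass_in_suns)

for m in (0.5, 1.0, 10.0, 100.0):
    L = relative_luminosity(m)
    t = relative_lifetime(m)
    print(f"M = {m:6.1f} Msun  ->  L ~ {L:.3g} Lsun,  lifetime ~ {t:.3g} of the Sun's")
```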

    The active star-forming region, NGC 2363, is located in a nearby galaxy just 10 million light-years away. The brightest star visible here is NGC 2363-V1, visible as the isolated, bright star in the dark void at left. Despite being 6,300,000 times as bright as our Sun, it’s only 20 times as massive, having likely brightened recently as the result of an outburst. (LAURENT DRISSEN, JEAN-RENE ROY AND CARMELLE ROBERT (DEPARTMENT DE PHYSIQUE AND OBSERVATOIRE DU MONT MEGANTIC, UNIVERSITE LAVAL) AND NASA)

    Since massive stars burn through their fuel so quickly, the record holders are found in actively star-forming regions.

    The ‘supernova impostor’ of the 19th century precipitated a gigantic eruption, spewing many Suns’ worth of material into the interstellar medium from Eta Carinae. High mass stars like this within metal-rich galaxies, like our own, eject large fractions of mass in a way that stars within smaller, lower-metallicity galaxies do not. Eta Carinae might be over 100 times the mass of our Sun and is found in the Carina Nebula, but it is not among the most massive stars in the Universe. (NATHAN SMITH (UNIVERSITY OF CALIFORNIA, BERKELEY), AND NASA)

    Luminosity isn’t enough, as short-lived outbursts can cause exceptional, temporary brightening in typically massive stars.

    The star cluster NGC 3603 is located a little over 20,000 light-years away in our own Milky Way galaxy. The most massive star inside it is NGC 3603-B, a Wolf-Rayet star located at the centre of the HD 97950 cluster, which is contained within the large, overall star-forming region. (NASA, ESA AND WOLFGANG BRANDNER (MPIA), BOYKE ROCHAU (MPIA) AND ANDREA STOLTE (UNIVERSITY OF COLOGNE))

    Within our own Milky Way, massive star-forming regions, like NGC 3603, house many stars over 100 times our Sun’s mass.

    The star at the center of the Heart Nebula (IC 1805) is known as HD 15558, which is a massive O-class star that is also a member of a binary system. With a directly-measured mass of 152 solar masses, it is the most massive star we know of whose value is determined directly, rather than through evolutionary inferences. (S58Y / FLICKR)

    As a member of a binary system, HD 15558 A is the most massive star with a definitive value: 152 solar masses.

    The Large Magellanic Cloud, the fourth largest galaxy in our local group, with the giant star-forming region of the Tarantula Nebula (30 Doradus) just to the right and below the main galaxy. It is the largest star-forming region contained within our Local Group. (NASA, FROM WIKIMEDIA COMMONS USER ALFA PYXISDIS)

    However, all stellar mass records originate from the star forming region 30 Doradus in the Large Magellanic Cloud.

    A large section of the Tarantula Nebula, the largest star-forming region in the Local Group, imaged by the Ciel Austral team. At top, you can see the presence of hydrogen, sulfur, and oxygen, which reveals the rich gas and plasma structure of the LMC, while the lower view shows an RGB color composite, revealing reflection and emission nebulae. (CIEL AUSTRAL: JEAN CLAUDE CANONNE, PHILIPPE BERNHARD, DIDIER CHAPLAIN, NICOLAS OUTTERS AND LAURENT BOURGON)

    Known as the Tarantula Nebula, it has a mass of ~450,000 Suns and contains over 10,000 stars.

    The star forming region 30 Doradus, in the Tarantula Nebula in one of the Milky Way’s satellite galaxies, contains the largest, highest-mass stars known to humanity. The largest collection of bright, blue stars shown here is the ultra-dense star cluster R136, which contains nearly 100 stars that are approximately 100 solar masses or greater. Many of them have brightnesses that exceed a million solar luminosities. (NASA, ESA, AND E. SABBI (ESA/STSCI); ACKNOWLEDGMENT: R. O’CONNELL (UNIVERSITY OF VIRGINIA) AND THE WIDE FIELD CAMERA 3 SCIENCE OVERSIGHT COMMITTEE)

    The central star cluster, R136, contains 72 of the brightest, most massive classes of star.

    The cluster RMC 136 (R136) in the Tarantula Nebula in the Large Magellanic Cloud, is home to the most massive stars known. R136a1, the greatest of them all, is over 250 times the mass of the Sun. While professional telescopes are ideal for teasing out high-resolution details such as these stars in the Tarantula Nebula, wide-field views are better with the types of long-exposure times only available to amateurs. (EUROPEAN SOUTHERN OBSERVATORY/P. CROWTHER/C.J. EVANS)

    The record-holder is R136a1, some 260 times our Sun’s mass and 8,700,000 times as bright.

    An ultraviolet image and a spectrographic pseudo-image of the hottest, bluest stars at the core of R136. In this small component of the Tarantula Nebula alone, nine stars over 100 solar masses and dozens over 50 are identified through these measurements. The most massive star of all in here, R136a1, exceeds 250 solar masses, and is a candidate, later in its life, for photodisintegration. (ESA/HUBBLE, NASA, K.A. BOSTROEM (STSCI/UC DAVIS))

    Stars such as this cannot be individually resolved beyond our Local Group.

    An illustration of the first stars turning on in the Universe. Without metals to cool down the stars, only the largest clumps within a large-mass cloud can become stars. Until enough time has passed for gravity to affect larger scales, only the small scales can form structure early on. Without heavy elements to facilitate cooling, stars are expected to routinely exceed the mass thresholds of the most massive ones known today. (NASA)

    With NASA’s upcoming James Webb Space Telescope, we may discover Population III stars, which could reach thousands of solar masses.

    NASA/ESA/CSA Webb Telescope annotated


    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    “Starts With A Bang! is a blog/video blog about cosmology, physics, astronomy, and anything else I find interesting enough to write about. I am a firm believer that the highest good in life is learning, and the greatest evil is willful ignorance. The goal of everything on this site is to help inform you about our world, how we came to be here, and to understand how it all works. As I write these pages for you, I hope to not only explain to you what we know, think, and believe, but how we know it, and why we draw the conclusions we do. It is my hope that you find this interesting, informative, and accessible,” says Ethan

     
  • richardmitnick 8:58 am on June 25, 2019
    Tags: Formeric spin-off

    From Science and Technology Facilities Council: “UK start-up meets manufacturers’ need for speed in new product innovation” 


    From Science and Technology Facilities Council

    24 June 2019

    Wendy Ellison
    STFC Communications
    Daresbury Laboratory
    Tel: 01925 603232

    For chemical manufacturing companies, speed to market for developing, testing and improving product formulations is critical against a tough, highly competitive market environment. Access to high performance computing can drastically speed up time-to-market, but is a complex process and can be a daunting task without an in-house specialist.

    Formeric Infographic. (Credit: Formeric)

    Eco-friendly cleaning products and fuels, more sustainable crop protection products and breakthrough personal care products – these are just some of the consumer and industrial goods that will benefit from this capability. Company needs can be very different, but they all have in common the need to understand the ingredients they use as quickly and efficiently as possible.

    Now, UK start-up company Formeric is meeting this need for speed with a revolutionary cloud-based app that puts supercomputing into the hands of manufacturers to develop new products, with no supercomputer specialist required.

    Formeric is a spin-out of the world-leading expertise and supercomputing technologies of the Hartree Centre, part of the Science and Technology Facilities Council (STFC).


    Located at STFC’s Daresbury Laboratory, at Sci-Tech Daresbury in the Liverpool City Region, the Hartree Centre’s key mission is to transform UK industry through high performance computing, data analytics and artificial intelligence technologies.

    Daresbury Laboratory at Sci-Tech Daresbury in the Liverpool City Region

    Formeric’s platform application, which connects to the Hartree Centre, enables manufacturers and materials scientists to use the latest high performance and cloud computing technologies to accurately predict the behaviour and structure of different concentrations of liquid compounds. It will also show how they will interact with each other, both in the packaging, throughout shelf-life and in use. It means that a single simulation can be requested in seconds, helping researchers to plan fewer and more focussed experiments, reducing time to market.

    STFC’s Dr Rick Anderson, a founder of Formeric, said: “STFC, through its Scientific Computing Department and Hartree Centre, is well known for its expertise in modelling and simulation that can be used to benefit UK companies competing on an international scale. Formeric has been a few years in the planning since concept, so I’m thrilled that our cloud-based app is now ready to speed up design processes and reduce manufacturing costs. The resulting advances in materials chemistry will bring significant benefits to consumers, the environment and the wider economy.”

    Dr Elizabeth Kirby, Director of Innovation at STFC, said: “Manufacturing companies are seeking to embrace digital technologies more and more in their efforts to deliver increasingly efficient and profitable products in a global market. Formeric can now provide these companies with valuable access to supercomputing capabilities, without the need for the specialist skills, in their efforts to embrace digital transformation. I’m excited that we have harnessed the commercial potential for digital transformation from our innovative research by creating this new business.”

    Daresbury Laboratory is part of the Science and Technology Facilities Council. Further information at the website.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    STFC Hartree Centre

    Helping build a globally competitive, knowledge-based UK economy

    We are a world-leading multi-disciplinary science organisation, and our goal is to deliver economic, societal, scientific and international benefits to the UK and its people – and more broadly to the world. Our strength comes from our distinct but interrelated functions:

    Universities: we support university-based research, innovation and skills development in astronomy, particle physics, nuclear physics, and space science
    Scientific Facilities: we provide access to world-leading, large-scale facilities across a range of physical and life sciences, enabling research, innovation and skills training in these areas
    National Campuses: we work with partners to build National Science and Innovation Campuses based around our National Laboratories to promote academic and industrial collaboration and translation of our research to market through direct interaction with industry
    Inspiring and Involving: we help ensure a future pipeline of skilled and enthusiastic young people by using the excitement of our sciences to encourage wider take-up of STEM subjects in school and future life (science, technology, engineering and mathematics)

    We support an academic community of around 1,700 in particle physics, nuclear physics, and astronomy including space science, who work at more than 50 universities and research institutes in the UK, Europe, Japan and the United States, including a rolling cohort of more than 900 PhD students.

    STFC-funded universities produce physics postgraduates with outstanding high-end scientific, analytic and technical skills who on graduation enjoy almost full employment. Roughly half of our PhD students continue in research, sustaining national capability and creating the bedrock of the UK’s scientific excellence. The remainder – much valued for their numerical, problem solving and project management skills – choose equally important industrial, commercial or government careers.

    Our large-scale scientific facilities in the UK and Europe are used by more than 3,500 users each year, carrying out more than 2,000 experiments and generating around 900 publications. The facilities provide a range of research techniques using neutrons, muons, lasers and x-rays, and high performance computing and complex analysis of large data sets.

    They are used by scientists across a huge variety of science disciplines ranging from the physical and heritage sciences to medicine, biosciences, the environment, energy, and more. These facilities provide a massive productivity boost for UK science, as well as unique capabilities for UK industry.

    Our two Campuses are based around our Rutherford Appleton Laboratory at Harwell in Oxfordshire, and our Daresbury Laboratory in Cheshire – each of which offers a different cluster of technological expertise that underpins and ties together diverse research fields.

    The combination of access to world-class research facilities and scientists, office and laboratory space, business support, and an environment which encourages innovation has proven a compelling combination, attracting start-ups, SMEs and large blue chips such as IBM and Unilever.

    We think our science is awesome – and we know students, teachers and parents think so too. That’s why we run an extensive Public Engagement and science communication programme, ranging from loans to schools of Moon Rocks, funding support for academics to inspire more young people, embedding public engagement in our funded grant programme, and running a series of lectures, travelling exhibitions and visits to our sites across the year.

    Ninety per cent of physics undergraduates say that they were attracted to the course by our sciences, and applications for physics courses are up – despite an overall decline in university enrolment.

     
  • richardmitnick 8:34 am on June 25, 2019 Permalink | Reply
    Tags: Applying the latest techniques in machine learning and artificial intelligence to address important science and exploration research challenges, NASA’s Frontier Development Lab,   

    From SETI Institute: “NASA Frontier Development Lab Returns to Silicon Valley to Solve New Challenges with AI” 

    SETI Logo new
    From SETI Institute

    1

    Next week, NASA’s Frontier Development Lab, the SETI Institute and FDL’s private sector and space agency partners will kick off the fourth annual summer research accelerator, applying the latest techniques in machine learning and artificial intelligence to address important science and exploration research challenges. This year, 24 early-career Ph.D.s in AI and interdisciplinary natural science domains will work in six teams on challenge questions in the areas of space weather, lunar resources, Earth observation and astronaut health.

    “Since its inception, FDL has proven the efficacy of interdisciplinary research and the power of public-private partnership,” said Bill Diamond, president and CEO of the SETI Institute. “We are building on the extraordinary accomplishments of the researchers and mentors from the first three years and are excited to welcome another international group of amazing young scientists for this year’s program. We are also extremely grateful to all our private sector partners and especially to Google Cloud for their leadership role.”

    Partner organizations support FDL by providing funding, hardware, AI/ML algorithms, datasets, software and cloud-compute resources. They also support the working teams with mentors and subject matter experts and host key events, such as the first-week AI boot camp and the final public team presentations. This year, FDL is pleased to welcome back partners Google Cloud, Intel, IBM, KX, Lockheed Martin, Luxembourg Space Agency, and NVIDIA. We are also pleased to welcome our new partners the Canadian Space Agency, HPE and Element AI.

    For the past three years, FDL has demonstrated the potential of applied AI to deliver important results to the space program in a very intense sprint, when supported in this way by a consortium of motivated partners. This approach has proven critical in unlocking meaningful progress on complex and often systemic AI problems.

    “NASA has been at the forefront of machine learning – e.g. robotics,” said Madhulika Guhathakurta, program scientist and heliophysicist on detail at NASA’s Ames Research Center in Silicon Valley. “But we’re now witnessing an inflection point, where AI promises to become a tool for discovery – where the ability to process vast amounts of heterogeneous data, as well as massive amounts of data collected over decades, allows us to revisit the physics-based models of the past – to better understand underlying principles and radically improve time to insight.”

    Each team is composed of two Ph.D. or postdoc researchers from the space sciences and two data scientists, supported by mentors in each area. This year’s participants come from 13 countries and will be working on these challenges:

    Disaster prevention, progress and response (floods)
    Lunar resource mapping/super resolution
    Expanding the capabilities of NASA’s solar dynamics observatory
    Super-resolution maps of the solar magnetic field covering 40 years of space weather events
    Enhanced Predictability of GNSS Disturbances
    Generation of simulated biosensor data

    Additionally, three teams in Europe will be addressing disaster prevention, progress and response (floods), ground station pass scheduling and assessing the changing nature of atmospheric phenomena, in partnership with the European Space Agency (ESA).

    FDL 2019 kicks off next week at NVIDIA headquarters in Santa Clara, California, where teams will participate in a one-week intensive boot camp. The program concludes on August 15 at Google in Mountain View, California where teams will present the results of their work. Throughout the summer, teams will be working at the SETI Institute and NASA’s Ames Research Center near Mountain View.

    About the NASA Frontier Development Lab (FDL)

    Hosted in Silicon Valley by the SETI Institute, NASA FDL is an applied artificial intelligence research accelerator developed in partnership with NASA’s Ames Research Center. Founded in 2016, the NASA FDL aims to apply AI technologies to challenges in space exploration by pairing machine learning expertise with space science and exploration researchers from academia and industry. These interdisciplinary teams address tightly defined problems and the format encourages rapid iteration and prototyping to create outputs with meaningful application to the space program and humanity.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    SETI Institute – 189 Bernardo Ave., Suite 100
    Mountain View, CA 94043
    Phone 650.961.6633 – Fax 650-961-7099
    Privacy Policy | Questions and Comments

     
  • richardmitnick 8:16 am on June 25, 2019 Permalink | Reply
    Tags: "The highest-energy photons ever seen hail from the Crab Nebula", , , , , , , , , The Tibet AS-gamma experiment, When a high-energy photon hits Earth’s atmosphere it creates a shower of other subatomic particles that can be detected on the ground.   

    From Science News: “The highest-energy photons ever seen hail from the Crab Nebula” 

    From Science News

    June 24, 2019
    Emily Conover

    Some of the supernova remnant’s gamma rays have more than 100 trillion electron volts of energy.

    CRAB FISHING Scientists hunting for high-energy photons raining down on Earth from space have found the most energetic light yet detected. It’s from the Crab Nebula, a remnant of an exploded star (shown in an image combining light seen by multiple telescopes).

    Physicists have spotted the highest-energy light ever seen. It emanated from the roiling remains left behind when a star exploded.

    This light made its way to Earth from the Crab Nebula, a remnant of a stellar explosion, or supernova, about 6,500 light-years away in the Milky Way. The Tibet AS-gamma experiment caught multiple particles of light — or photons — from the nebula with energies higher than 100 trillion electron volts, researchers report in a study accepted in Physical Review Letters. Visible light, for comparison, has just a few electron volts of energy.

    Tibet AS-gamma Experiment

    “This energy regime has not been accessible before,” says astrophysicist Petra Huentemeyer of Michigan Technological University in Houghton, who was not involved with the research. For physicists who study this high-energy light, known as gamma rays, “it’s an exciting time,” she says.

    In space, supernova remnants and other cosmic accelerators can boost subatomic particles such as electrons, photons and protons to extreme energies, much higher than those achieved in the most powerful earthly particle accelerators (SN: 10/1/05, p. 213). Protons in the Large Hadron Collider in Geneva, for example, reach a comparatively wimpy 6.5 trillion electron volts. Somehow, the cosmic accelerators vastly outperform humankind’s most advanced machines.

    “The question is: How does nature do it?” says physicist David Hanna of McGill University in Montreal.

    In the Crab Nebula, the initial explosion set up the conditions for acceleration, with magnetic fields and shock waves plowing through space, giving an energy boost to charged particles such as electrons. Low-energy photons in the vicinity get kicked to high energies when they collide with the speedy electrons, and ultimately, some of those photons make their way to Earth.
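
    As a rough illustration of that energy “kick,” the back-of-envelope sketch below estimates the electron energy needed to upscatter a low-energy seed photon (here a cosmic microwave background photon, one possible seed population) to 100 TeV. It uses the simple Thomson-limit boost factor of roughly (4/3)γ²; the real 100 TeV emission from the Crab falls in the Klein-Nishina regime, so this is an order-of-magnitude illustration only, not the collaboration’s modeling.

    # Back-of-envelope inverse-Compton estimate (Thomson limit only):
    # an electron with Lorentz factor gamma boosts a seed photon of
    # energy E_seed to roughly E_out ~ (4/3) * gamma**2 * E_seed.
    import math

    E_out_eV  = 100e12      # target gamma-ray energy: 100 TeV
    E_seed_eV = 6.3e-4      # typical CMB photon energy (~2.7 K)
    m_e_c2_eV = 0.511e6     # electron rest energy

    gamma = math.sqrt(E_out_eV / (4.0 / 3.0 * E_seed_eV))
    electron_energy_TeV = gamma * m_e_c2_eV / 1e12

    print(f"required Lorentz factor ~ {gamma:.1e}")
    print(f"electron energy         ~ {electron_energy_TeV:.0f} TeV")
    # Sanity check on the Thomson assumption (should be well below 1):
    print(f"gamma*E_seed / m_e*c^2  ~ {gamma * E_seed_eV / m_e_c2_eV:.2f}")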

    When a high-energy photon hits Earth’s atmosphere, it creates a shower of other subatomic particles that can be detected on the ground. To capture that resulting deluge, Tibet AS-gamma uses nearly 600 particle detectors spread across an area of more than 65,000 square meters in Tibet. From the information recorded by the detectors, researchers can calculate the energy of the initial photon.
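
    Schematically, that calculation turns the pattern of particle counts across the array into a single primary-energy estimate. The toy estimator below sums the particle density over the detectors and applies a power-law calibration; the constants and hit pattern are invented for illustration and are not the Tibet AS-gamma collaboration’s actual reconstruction, which is calibrated against detailed shower simulations.

    # Toy air-shower energy estimator: sum the particle density recorded by a
    # ground array and convert it to a primary energy with a power-law
    # calibration. All constants here are made up for illustration.

    def estimate_primary_energy_TeV(counts_per_detector, area_m2=0.5,
                                    norm_TeV=1.0e-2, index=1.0):
        """Return a rough primary energy from the summed particle density."""
        total_density = sum(c / area_m2 for c in counts_per_detector)  # particles / m^2
        return norm_TeV * total_density ** index

    # Example: 600 detectors, most quiet, a few near the shower core very busy.
    hits = [0] * 580 + [5] * 15 + [200, 400, 800, 1500, 2500]
    print(f"estimated primary energy ~ {estimate_primary_energy_TeV(hits):.0f} TeV")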

    But other kinds of spacefaring particles known as cosmic rays create particle showers that are much more plentiful. To select photons, the researchers need to weed out cosmic rays, which are mainly composed of protons and atomic nuclei. So they used underground detectors to look for muons — heavier relatives of electrons that are created in cosmic ray showers, but not in showers created by photons.

    Previous experiments have glimpsed photons with nearly 100 TeV, or trillion electron volts. Now, after about three years of gathering data, the researchers found 24 seemingly photon-initiated showers above 100 TeV, and some with energies as high as 450 TeV. Because the weeding out process isn’t perfect, the researchers estimate that around six of those showers could have come from cosmic rays mimicking photons, but the rest are the real deal.
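
    Expressed as a selection rule, the logic is simply “keep showers that are muon-poor, then account for the cosmic rays that still leak through.” In the sketch below, the muon threshold and the example events are invented for illustration; only the headline bookkeeping (24 candidates above 100 TeV, with roughly 6 expected from cosmic-ray leakage) comes from the reported result.

    # Sketch of a muon-veto photon selection plus the contamination bookkeeping
    # described in the text. The threshold and event list are invented; the
    # 24-candidate / ~6-background numbers are the ones quoted for the result.

    def is_photon_like(event, muon_ratio_cut=0.1):
        """Photon showers are muon-poor: cut on muons per unit shower size."""
        return event["n_muons"] / max(event["shower_size"], 1) < muon_ratio_cut

    events = [
        {"energy_TeV": 120, "shower_size": 9000,  "n_muons": 40},    # photon-like
        {"energy_TeV": 250, "shower_size": 20000, "n_muons": 3000},  # cosmic-ray-like
        {"energy_TeV": 450, "shower_size": 35000, "n_muons": 120},   # photon-like
    ]
    candidates = [e for e in events if e["energy_TeV"] > 100 and is_photon_like(e)]
    print(f"photon candidates above 100 TeV in this toy sample: {len(candidates)}")

    # Reported bookkeeping: 24 candidates, about 6 expected from leakage.
    n_candidates, n_expected_background = 24, 6
    print(f"estimated true photons: ~{n_candidates - n_expected_background}")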

    Researchers with Tibet AS-gamma declined to comment for this story, as the study has not yet been published.

    Looking for photons of ever higher energies could help scientists nail down the details of how the particles are accelerated. “There has to be a limit to how high the energy of the photons can go,” Hanna says. If scientists can pinpoint that maximum energy, that could help distinguish between various theoretical tweaks to how the particles get their oomph.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 4:40 pm on June 24, 2019 Permalink | Reply
    Tags: "The Interiors of Exoplanets May Well Hold the Key to Their Habitability", , , “The heart of habitability is in planetary interiors” concluded Carnegie geochemist George Cody, , Cosmochemistry, , Deep Carbon Observatory’s Biology Meets Subduction project, Findings from the Curiosity rover that high levels of the gas methane had recently been detected on Mars., , , PREM-Preliminary Reference Earth Model, This idea that subsurface life on distant planets could be identified by their byproducts in the atmosphere has just taken on a new immediacy, We’ve only understood the Earth’s structure for the past hundred years.   

    From Many Worlds: “The Interiors of Exoplanets May Well Hold the Key to Their Habitability” 

    NASA NExSS bloc

    NASA NExSS

    Many Worlds icon

    From Many Worlds

    June 23, 2019
    Marc Kaufman

    Scientists have had a working — and evolving — understanding of the interior of the Earth for only a century or so. But determining whether a distant planet is truly habitable may require an understanding of its inner dynamics — which will for sure be a challenge to achieve. (Harvard-Smithsonian Center for Astrophysics)

    The quest to find habitable — and perhaps inhabited — planets and moons beyond Earth focuses largely on their location in a solar system and the nature of its host star, the eccentricity of its orbit, its size and rockiness, and the chemical composition of its atmosphere, assuming that it has one.

    Astronomy, astrophysics, cosmochemistry and many other disciplines have made significant progress in characterizing at least some of the billions of exoplanets out there, although measuring the chemical makeup of atmospheres remains an immature field.

    But what if these basic characteristics aren’t sufficient to answer necessary questions about whether a planet is habitable? What if more information — and even more difficult to collect information — is needed?

    That’s the position of many planetary scientists, who argue that the dynamics of a planet’s interior are essential to understanding its habitability.

    With our existing capabilities, observing an exoplanet’s atmospheric composition will clearly be the first way to search for signatures of life elsewhere. But four scientists at the Carnegie Institution of Science — Anat Shahar, Peter Driscoll, Alycia Weinberger, and George Cody — argued in a recent perspective article in Science that a true picture of planetary habitability must consider how a planet’s atmosphere is linked to and shaped by what’s happening in its interior.

    They argue that on Earth, for instance, plate tectonics are crucial for maintaining a surface climate where life can fill every niche. And without the cycling of material between the planet’s surface and interior, the convection that drives the Earth’s magnetic field would not be possible; without a magnetic field, we would be bombarded by cosmic radiation.

    What makes a planet potentially habitable, and what are the signs that it is not? This graphic from the Carnegie paper illustrates the differences. (Shahar et al.)

    “The perspective was our way to remind people that the only exoplanet observable right now is the atmosphere, but that the atmospheric composition is very much linked to planetary interiors and their evolution,” said lead author Shahar, who is trained in geological sciences. “If there is a hope to one day look for a biosignature, it is crucial we understand all the ways that interiors can influence the atmospheric composition so that the observations can then be better understood.”

    “We need a better understanding of how a planet’s composition and interior influence its habitability, starting with Earth,” she said. “This can be used to guide the search for exoplanets and star systems where life could thrive, signatures of which could be detected by telescopes.”

    It all starts with the formation process. Planets are born from the rotating ring of dust and gas that surrounds a young star.

    The elemental building blocks from which rocky planets form (silicon, magnesium, oxygen, carbon, iron, and hydrogen) are universal. But their abundances, and the heating and cooling they experience in their youth, will affect their interior chemistry and, in turn, defining factors such as ocean volume and atmospheric composition.

    “One of the big questions we need to ask is whether the geologic and dynamic features that make our home planet habitable can be produced on planets with different compositions,” Carnegie planetary scientist Peter Driscoll explained in a release.

    In the next decade, as a new generation of telescopes comes online, scientists will begin to search in earnest for biosignatures in the atmospheres of rocky exoplanets. But the Carnegie authors say that these observations must be put in the context of a larger understanding of how a planet’s total makeup and interior geochemistry determine the evolution of a stable and temperate surface where life could perhaps arise and thrive.

    “The heart of habitability is in planetary interiors,” concluded Carnegie geochemist George Cody.

    Our knowledge of the Earth’s interior starts with these basic contours: it has a thin outer crust, a thick mantle, and a core the size of Mars. A basic question that can be asked and to some extent answered now is whether this structure is universal for small rocky planets. Will these three layers be present in some form in many other rocky planets as well?

    Earlier preliminary research published in The Astrophysical Journal suggests that the answer is yes – they will have interiors very similar to Earth’s.

    “We wanted to see how Earth-like these rocky planets are. It turns out they are very Earth-like,” said lead author Li Zeng of the Harvard-Smithsonian Center for Astrophysics (CfA).

    To reach this conclusion Zeng and his co-authors applied a computer model known as the Preliminary Reference Earth Model (PREM), which is the standard model for Earth’s interior. They adjusted it to accommodate different masses and compositions, and applied it to six known rocky exoplanets with well-measured masses and physical sizes.

    They found that the other planets, despite their differences from Earth, all should have a nickel/iron core containing about 30 percent of the planet’s mass. In comparison, about a third of the Earth’s mass is in its core. The remainder of each planet would be mantle and crust, just as with Earth.
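
    A heavily simplified way to see what a roughly 30 percent iron core implies for a planet’s structure is a two-layer model with constant densities for the core and the rocky mantle. The sketch below is not PREM, which uses realistic depth-dependent density and pressure profiles; the densities are round numbers chosen only to illustrate how core mass fraction translates into core size.

    # Toy two-layer planet: a constant-density iron core inside a
    # constant-density silicate mantle. Not PREM; the densities below are
    # rough, compressed-material values used purely for illustration.
    import math

    M_EARTH = 5.972e24        # kg
    RHO_CORE = 11000.0        # kg/m^3, compressed iron alloy (rough)
    RHO_MANTLE = 4800.0       # kg/m^3, compressed silicate (rough)

    def radii_for(mass_kg, core_mass_fraction):
        """Return (core radius, planet radius) in km for a two-layer planet."""
        m_core = core_mass_fraction * mass_kg
        v_core = m_core / RHO_CORE
        v_total = v_core + (mass_kg - m_core) / RHO_MANTLE
        r_core = (3.0 * v_core / (4.0 * math.pi)) ** (1.0 / 3.0)
        r_planet = (3.0 * v_total / (4.0 * math.pi)) ** (1.0 / 3.0)
        return r_core / 1e3, r_planet / 1e3

    r_core_km, r_planet_km = radii_for(M_EARTH, core_mass_fraction=0.3)
    print(f"core radius   ~ {r_core_km:.0f} km   (Earth's core: ~3,480 km)")
    print(f"planet radius ~ {r_planet_km:.0f} km   (Earth: 6,371 km)")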

    “We’ve only understood the Earth’s structure for the past hundred years. Now we can calculate the structures of planets orbiting other stars, even though we can’t visit them,” adds Zeng.

    The model assumes that distant exoplanets have chemical compositions similar to Earth. This is reasonable based on the relevant abundances of key chemical elements like iron, magnesium, silicon, and oxygen in nearby systems. However, planets forming in more or less metal-rich regions of the galaxy could show different interior structures.

    While thinking about exoplanetary interiors — and some day finding ways to investigate them — is intriguing and important, it’s also apparent that there’s a lot more to learn about the role of the Earth’s interior in making the planet habitable.

    In 2017, for instance, an interdisciplinary group of early-career scientists visited Costa Rica’s subduction zone (where the ocean floor sinks beneath the continent) to find out whether subterranean microbes can affect geological processes that move carbon from Earth’s surface into the deep interior.

    Donato Giovannelli and Karen Lloyd collect samples from the crater lake in Poás Volcano in Costa Rica. (Katie Pratt)

    According to their new study in Nature, the answer is yes: microbes consume and trap a small but measurable amount of the carbon sinking into the trench off Costa Rica’s Pacific coast. The microbes may also be involved in chemical processes that pull out even more carbon, leaving cement-like veins of calcite in the crust.

    In all, microbes and calcite precipitation combine to trap about 94 percent of the carbon squeezed out from the edge of the oceanic plate as it sinks into the mantle during subduction. This carbon remains naturally sequestered in the crust, rather than escaping back to the surface through nearby volcanoes, as much subducted carbon ultimately does.

    These unexpected findings have important implications for how much carbon moves from Earth’s surface into the interior, especially over geological timescales. The research is part of the Deep Carbon Observatory’s Biology Meets Subduction project.

    Overall, the study shows that biology has the power to affect carbon recycling and thereby deep Earth geology.

    “We already knew that microbes altered geological processes when they first began producing oxygen from photosynthesis,” said Donato Giovannelli of the University of Naples, Italy (whom I knew from time spent at the Earth-Life Science Institute in Tokyo). He is a specialist in extreme environments and researches what they can tell us about early Earth and possibly other planets.

    “I think there are probably even more ways that biology has had an outsized impact on geology; we just haven’t discovered them yet.”

    The findings also show, Giovannelli told me, that subsurface microbes might have a similarly outsized effect on the composition and balance of atmospheres, “hinting at the possibility of detecting the indirect effect of subsurface life through atmosphere measurements of exoplanets,” he said.

    The 2003 finding by Michael Mumma and Geronimo Villanueva of NASA Goddard Space Flight Center showing signs of major plumes of methane on Mars. While some limited and seasonally varying concentrations of methane have been detected since, there has been nothing to compare with the earlier high methane readings on Mars — until just last week. (NASA/M. Mumma et al)

    This idea that subsurface life on distant planets could be identified by their byproducts in the atmosphere has just taken on a new immediacy with findings from the Curiosity rover that high levels of the gas methane had recently been detected on Mars. Earlier research had suggested that Mars had some subsurface methane, but the amount appeared to be quite minimal — except as detected once back in 2003 by NASA scientists.

    None of the researchers now or in the past have claimed that they know the origin of the methane — whether it is produced biologically or through other planetary processes. But on Earth, some 90 percent of methane comes from biology — bacteria, plants, animals.

    Could, then, these methane plumes be a sign that life exists (or existed) below the surface of Mars? It’s possible, and highlights the great importance of what goes on below the surface of planets and moons.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About Many Worlds
    There are many worlds out there waiting to fire your imagination.

    Marc Kaufman is an experienced journalist, having spent three decades at The Washington Post and The Philadelphia Inquirer, and is the author of two books on searching for life and planetary habitability. While the “Many Worlds” column is supported by the Lunar Planetary Institute/USRA and informed by NASA’s NExSS initiative, any opinions expressed are the author’s alone.

    This site is for everyone interested in the burgeoning field of exoplanet detection and research, from the general public to scientists in the field. It will present columns, news stories and in-depth features, as well as the work of guest writers.

    About NExSS

    The Nexus for Exoplanet System Science (NExSS) is a NASA research coordination network dedicated to the study of planetary habitability. The goals of NExSS are to investigate the diversity of exoplanets and to learn how their history, geology, and climate interact to create the conditions for life. NExSS investigators also strive to put planets into an architectural context — as solar systems built over the eons through dynamical processes and sculpted by stars. Based on our understanding of our own solar system and habitable planet Earth, researchers in the network aim to identify where habitable niches are most likely to occur and which planets are most likely to be habitable. Leveraging current NASA investments in research and missions, NExSS will accelerate the discovery and characterization of other potentially life-bearing worlds in the galaxy, using a systems science approach.

    The National Aeronautics and Space Administration (NASA) is the agency of the United States government that is responsible for the nation’s civilian space program and for aeronautics and aerospace research.

    President Dwight D. Eisenhower established the National Aeronautics and Space Administration (NASA) in 1958 with a distinctly civilian (rather than military) orientation encouraging peaceful applications in space science. The National Aeronautics and Space Act was passed on July 29, 1958, disestablishing NASA’s predecessor, the National Advisory Committee for Aeronautics (NACA). The new agency became operational on October 1, 1958.

    Since that time, most U.S. space exploration efforts have been led by NASA, including the Apollo moon-landing missions, the Skylab space station, and later the Space Shuttle. Currently, NASA is supporting the International Space Station and is overseeing the development of the Orion Multi-Purpose Crew Vehicle and Commercial Crew vehicles. The agency is also responsible for the Launch Services Program (LSP) which provides oversight of launch operations and countdown management for unmanned NASA launches. Most recently, NASA announced a new Space Launch System that it said would take the agency’s astronauts farther into space than ever before and lay the cornerstone for future human space exploration efforts by the U.S.

    NASA science is focused on better understanding Earth through the Earth Observing System, advancing heliophysics through the efforts of the Science Mission Directorate’s Heliophysics Research Program, exploring bodies throughout the Solar System with advanced robotic missions such as New Horizons, and researching astrophysics topics, such as the Big Bang, through the Great Observatories (Hubble, Chandra, Spitzer) and associated programs. NASA shares data with various national and international organizations, such as the JAXA Greenhouse Gases Observing Satellite.

     