
  • richardmitnick 11:24 am on September 22, 2018

    From U Tokyo via ScienceAlert: “Scientists Just Created a Magnetic Field That Takes Us Closer Than Ever Before to Harnessing Nuclear Fusion” 

    From University of Tokyo



    (Zoltan Tasi/Unsplash)

    22 SEP 2018

    They were able to control it without destroying any equipment this time.

    Inexpensive clean energy sounds like a pipe dream. Scientists have long thought that nuclear fusion, the type of reaction that powers stars like the Sun, could be one way to make it happen, but the reaction has been too difficult to maintain.

    Now, we’re closer than ever before to making it happen — physicists from the University of Tokyo (UTokyo) say they’ve produced the strongest-ever controllable magnetic field.

    “One way to produce fusion power is to confine plasma — a sea of charged particles — in a large ring called a tokamak in order to extract energy from it,” said lead researcher Shojiro Takeyama in a press release.

    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    September 18, 2018

    Physicists from the Institute for Solid State Physics at the University of Tokyo have generated the strongest controllable magnetic field ever produced. The field was sustained for longer than any previous field of a similar strength. This research could lead to powerful investigative tools for material scientists and may have applications in fusion power generation.

    Magnetic fields are everywhere. From particle smashers to the humble compass, our capacity to understand and control these fields crafted much of the modern world. The ability to create stronger fields advances many areas of science and engineering. UTokyo physicist Shojiro Takeyama and his team created a large sophisticated device in a purpose-built lab, capable of producing the strongest controllable magnetic field ever using a method known as electromagnetic flux compression.

    “Decades of work, dozens of iterations and a long line of researchers who came before me all contributed towards our achievement,” said Professor Takeyama. “I felt humbled when I was personally congratulated by directors of magnetic field research institutions around the world.”


    The megagauss generator just before it’s switched on. Some parts for the device are exceedingly rare and very few companies around the world are capable of producing them. Image: ©2018 Shojiro Takeyama

    Sparks fly at the moment of activation. Four million amps of current feed the megagauss generator system, hundreds of times the current of a typical lightning bolt. Image: ©2018 Shojiro Takeyama

    But what is so interesting about this particular magnetic field?

    At 1,200 teslas – not the brand of electric cars, but the unit of magnetic field strength – the generated field dwarfs almost any artificial magnetic field ever recorded; however, it’s not the strongest overall. In 2001, physicists in Russia produced a field of 2,800 teslas, but their explosive method literally blew up their equipment and the uncontrollable field could not be tamed. Lasers can also create powerful magnetic fields, but in experiments they only last a matter of nanoseconds.

    The magnetic field created by Takeyama’s team lasts thousands of times longer, around 100 microseconds, about one-thousandth of the time it takes to blink. It’s possible to create longer-lasting fields, but those are only in the region of hundreds of teslas. The goal to surpass 1,000 teslas was not just a race for its own sake; that figure represents a significant milestone.

    Earth’s own magnetic field is 25 to 65 microteslas. The megagauss generator system creates a field of 1,200 teslas, about 20 million to 50 million times stronger. Image: ©2018 Shojiro Takeyama
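The "20 million to 50 million times stronger" comparison can be checked with a quick back-of-the-envelope calculation, using only the field strengths quoted above:

```python
# Compare the 1,200 T generated field with Earth's 25-65 microtesla field.
generated_field_t = 1200.0                 # teslas, from the press release
earth_field_t = (25e-6, 65e-6)             # teslas (25 and 65 microteslas)

ratios = [generated_field_t / b for b in earth_field_t]
print([f"{r:.2e}" for r in ratios])        # about 48 million and 18 million
```

The same arithmetic gives the refrigerator-magnet figure mentioned later in the article: against a roughly 0.01 T fridge magnet, 1,200 T is about 120,000 times stronger.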

    “With magnetic fields above 1,000 Teslas, you open up some interesting possibilities,” says Takeyama. “You can observe the motion of electrons outside the material environments they are normally within. So we can study them in a whole new light and explore new kinds of electronic devices. This research could also be useful to those working on fusion power generation.”

    This is an important point, as many believe fusion power is the most promising way to provide clean energy for future generations. “One way to produce fusion power is to confine plasma – a sea of charged particles – in a large ring called a tokamak in order to extract energy from it,” explains Takeyama. “This requires a strong magnetic field in the order of thousands of teslas for a duration of several microseconds. This is tantalizingly similar to what our device can produce.”


    To generate the magnetic field, the UTokyo researchers built a sophisticated device capable of electromagnetic flux-compression (EMFC), a method of magnetic field generation well-suited for indoor operations.

    They describe the work in a new paper published Monday in the Review of Scientific Instruments.

    Using the device, they were able to produce a magnetic field of 1,200 teslas — about 120,000 times as strong as a magnet that sticks to your refrigerator.

    Though not the strongest field ever created, the physicists were able to sustain it for 100 microseconds, thousands of times longer than previous attempts.

    They could also control the magnetic field, so it didn’t destroy their equipment like some past attempts to create powerful fields.

    As Takeyama noted in the press release, that means his team’s device can generate close to the minimum magnetic field strength and duration needed for stable nuclear fusion — and it puts us all one step closer to the unlimited clean energy we’ve been dreaming about for nearly a century.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The University of Tokyo aims to be a world-class platform for research and education, contributing to human knowledge in partnership with other leading global universities. The University of Tokyo aims to nurture global leaders with a strong sense of public responsibility and a pioneering spirit, possessing both deep specialism and broad knowledge. The University of Tokyo aims to expand the boundaries of human knowledge in partnership with society. Details about how the University is carrying out this mission can be found in the University of Tokyo Charter and the Action Plans.

  • richardmitnick 9:33 am on September 22, 2018
    Tags: NASA/MIT TESS finds 1st two exoplanet candidates during first science orbit, TESS in excellent health, Transit method

    From NASA Spaceflight: “TESS in excellent health, finds 1st two exoplanet candidates during first science orbit” 

    NASA Spaceflight

    From NASA Spaceflight

    September 20, 2018
    Chris Gebhardt

    The joint NASA / Massachusetts Institute of Technology (MIT) Transiting Exoplanet Survey Satellite, or TESS, has completed its first science orbit following launch and orbital activation/checkout. Unsurprisingly, given TESS’s wide field of view, a team of scientists has already identified the planet-hunting telescope’s first two exoplanet candidates.

    The yet-to-be-confirmed exoplanets are located 59.5 light years from Earth in the Pi Mensae system and 49 light years away in the LHS 3844 system.

    TESS’s overall health:

    Following a successful launch on 18 April 2018 aboard a SpaceX Falcon 9 rocket from SLC-40 at Cape Canaveral Air Force Station, Florida, TESS was injected into an orbit aligned for a gravity assist maneuver with the Moon one month later, which sent the telescope into its operational 13.65-day orbit of Earth.

    TESS’s orbit is unusual: the trajectory is designed so the telescope is in a 2:1 resonance with the Moon, at a 90° phase offset at apogee. This keeps the telescope far enough from the Moon that the lunar gravity field doesn’t perturb TESS’s orbit while still keeping the orbit stable, allowing the spacecraft to use as little of its maneuvering fuel as possible and achieve a hoped-for 20-year life.

    At the time of launch, mission scientists and operators noted that first light images were expected from TESS in June 2018 following a 60-day commissioning phase.

    While it is not entirely clear what happened after launch, what is known is that the commissioning phase lasted 27 days longer than expected, stretching to the end of July. TESS’ first science and observational campaign began not in June but on 25 July 2018.

    By 7 August, the halfway point in the first science observation period, TESS took what NASA considers to be the ceremonial “first light” images of the telescope’s scientific ventures.

    TESS acquired the image using all four cameras during a 30-minute period on Tuesday, 7 August. The images include parts of a dozen constellations from Capricornus to Pictor, both the Large and Small Magellanic Clouds, and the galaxies nearest to our own.

    Ceremonial first light image captured by TESS on 7 August 2018 showing the full Sector 1 image (center) and close-ups of each of the four camera groups (left and right) Credit NASA/MIT/TESS

    “In a sea of stars brimming with new worlds, TESS is casting a wide net and will haul in a bounty of promising planets for further study,” said Paul Hertz, astrophysics division director at NASA Headquarters. “This first light science image shows the capabilities of TESS’ cameras and shows that the mission will realize its incredible potential in our search for another Earth.”

    George Ricker, TESS’ principal investigator at the Massachusetts Institute of Technology’s Kavli Institute for Astrophysics and Space Research, added, “This swath of the sky’s southern hemisphere includes more than a dozen stars we know have transiting planets based on previous studies from ground observatories.”

    While TESS orbits Earth every 13.65 days, its data collection phase for each of its 26 planned observation sectors of near-Earth sky lasts for two orbits, so the telescope can collect light data from each sector for a total of 27.4 days.
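The orbit figures quoted in the article hang together arithmetically; a minimal check, bringing in the standard lunar sidereal period of about 27.32 days (a value assumed here, not stated in the article):

```python
sidereal_month_days = 27.32   # lunar sidereal orbital period (standard value)
tess_orbit_days = 13.65       # TESS orbital period, from the article

# 2:1 resonance means TESS completes two orbits per lunar orbit.
print(sidereal_month_days / 2)    # 13.66, matching TESS's 13.65-day orbit
# Each observation sector spans two TESS orbits.
print(2 * tess_orbit_days)        # 27.3, close to the quoted ~27.4-day sector
```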

    With science operations formally commencing on 25 July, the first observational campaign stretched to 22 August.

    Unlike some missions which only transmit data back to Earth after observational campaigns end, TESS transmits its data both in the middle and at the end of each campaign when the telescope swings past its perigee (closest orbital approach to Earth).

    On 22 August, after TESS completed its first observation campaign of a section of the Southern Hemisphere sky, the telescope transmitted the second batch of light data to Earth through the Deep Space Network.

    From there, the information was processed and analyzed at NASA’s Science Processing and Operations Center at the Ames Research Center in California – which provided calibrated images and refined light curves for scientists to analyze and find promising exoplanet transit candidates.

    NASA and MIT then made that data available to scientists as they search for the more than 22,000 exoplanets (most of those within a 300 light-year radius of Earth) that TESS is expected to find during the course of its two-year primary mission.

    First TESS exoplanet candidate:

    Given the sheer number of exoplanets TESS is expected to find in the near-Earth neighborhood, it is not surprising that the first observation campaign has already returned potential exoplanet candidates – the first of which was confirmed by NASA via a tweet on Wednesday, 19 September.

    TESS’ first exoplanet candidate is Pi Mensae c – a super-Earth with an orbital period of 6.27 days. According to a draft of the paper announcing the discovery, several methods were used to eliminate the possibility of this being a false detection or the detection of a previously unknown companion star.

    The Pi Mensae system is located 59.5 light years from Earth, and the new exoplanet – if confirmed – would be officially classified Pi Mensae c, the second known exoplanet of the system.

    Exoplanets’ official designations derive from the name of the star they orbit, followed by a lowercase letter indicating the order in which they were discovered in a particular system.

    The order in which exoplanets are discovered does not necessarily match the order (distance from closest to farthest) in which they orbit their parent star.

    Moreover, the lowercase letter designation begins with the letter “b”, not the letter “a”. Thus, the first discovered exoplanet in a particular system will bear the name of its parent star followed by a lowercase “b”.

    Subsequent exoplanets orbiting the same star or stars (as the case may be), regardless of whether they orbit closer to or farther from the parent star than the first discovered exoplanet, will then bear the letters c, d, e, etc.

    NASA/Ames – Wendy Stenzel

    Therefore, confirmation of the new exoplanet candidate in the Pi Mensae system would make the planet Pi Mensae c.
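The designation rules described above are simple enough to sketch as a small helper function (a hypothetical illustration, not any official naming tool):

```python
def designation(star_name: str, discovery_index: int) -> str:
    """Return the exoplanet designation for the Nth planet discovered
    around a star (1-based). Letters start at 'b', not 'a', and follow
    discovery order, not orbital distance."""
    if discovery_index < 1:
        raise ValueError("discovery_index is 1-based")
    letter = chr(ord("b") + discovery_index - 1)
    return f"{star_name} {letter}"

print(designation("Pi Mensae", 1))  # Pi Mensae b (the 2001 radial-velocity find)
print(designation("Pi Mensae", 2))  # Pi Mensae c (the new TESS candidate)
```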

    Pi Mensae b, a superjovian, was discovered on 15 October 2001 using the radial-velocity method of detection via the Anglo-Australian Telescope operated by the Australian Astronomical Observatory at Siding Spring Observatory.

    In the search for exoplanets, two general methods of detection are used. The first is direct observation of a transiting exoplanet that passes between its star and the observation point on or near Earth (the method employed by TESS). The second is the radial-velocity, or Doppler spectroscopy, method, which measures the wobble, the gravitational tug on a parent star caused by an orbiting planet, and works even when the planet does not pass between the star and the observation point.

    Overall, roughly 30% of the total number of known exoplanets have been discovered via the radial-velocity method, with the other 70% being discovered via the transiting method of detection.
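For the transit method, the measurable signal is the fractional dip in starlight while the planet crosses the stellar disk. A minimal sketch of that calculation, using the standard depth ≈ (R_planet / R_star)² approximation (the formula and the example radii are standard values, not from the article):

```python
def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    """Fractional drop in stellar flux during a transit:
    depth ~= (R_planet / R_star)**2, ignoring limb darkening."""
    return (planet_radius_km / star_radius_km) ** 2

# Example: a super-Earth of ~2 Earth radii around a Sun-like star.
R_EARTH_KM, R_SUN_KM = 6371.0, 695_700.0
print(f"{transit_depth(2 * R_EARTH_KM, R_SUN_KM):.2e}")  # ~3.4e-04
```

Dips this small (a few hundredths of a percent) are why bright, nearby host stars like Pi Mensae are such valuable targets: the photometric signal-to-noise is far better.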

    Radial Velocity Method-Las Cumbres Observatory

    Radial velocity image via SuperWASP

    Planet transit. NASA/Ames

    Upon Pi Mensae b’s discovery in 2001, the planet was found to be in a highly eccentric 5.89 Earth-year (2,151-day) orbit, coming as close as 1.21 AU and passing as far as 5.54 AU from its star.

    Artist’s depiction of a super-Jupiter orbiting its host star

    With a 1.21 AU periastron, Pi Mensae b passes through its parent star’s habitable zone before arcing out to apastron (which lies farther out than Jupiter’s orbit of our Sun).

    Given the extreme eccentricity and the fact that the planet passes through the habitable zone during each orbit, it would likely have disrupted the orbit of any potentially Earth-like planet in that zone due to its extreme mass of more than 10 times that of Jupiter.

    As for Pi Mensae itself, the star is a 3.4 billion year old (roughly 730 million years younger than the Sun) yellow dwarf that is 1.11 times the mass of the Sun, 1.15 times the Sun’s radius, and 1.5 times the Sun’s luminosity.

    Due to its proximity to Earth and its high luminosity, the star has an apparent magnitude of 5.67 and is visible to the naked eye in dark, clear skies.
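The magnitude scale is logarithmic, so what magnitude 5.67 means in brightness terms can be made concrete with the standard Pogson relation (the relation and the ~6.0 naked-eye limit are textbook values, assumed here rather than taken from the article):

```python
def flux_ratio(m1: float, m2: float) -> float:
    """Pogson relation: how much brighter a magnitude-m1 source is
    than a magnitude-m2 source (each 5 magnitudes = factor of 100)."""
    return 10 ** (0.4 * (m2 - m1))

print(f"{flux_ratio(5.67, 6.0):.2f}")  # Pi Mensae vs. the ~6.0 naked-eye limit
print(f"{flux_ratio(0.0, 5.67):.0f}")  # Vega (mag 0) is ~185x brighter
```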

    The star’s brightness – unsurprisingly – gives a potential instant “win” for the TESS team, whose stated pre-mission goal was to find near-Earth transiting exoplanets around exceptionally bright stars.

    Pi Mensae is currently the second brightest star to host a confirmed transiting exoplanet, Pi Mensae b.

    As an even greater testament to TESS’ power, just hours before publication of this article, the TESS team confirmed a second exoplanet candidate from the first observation campaign.

    The second exoplanet candidate is LHS 3844 b. It orbits its parent star – an M dwarf – every 11 hours and is located 49 light years from Earth.

    The exoplanet candidate is described by NASA and the TESS team as a “hot Earth.”

    Given the wealth of light data for scientists to pore through from the now-completed first two of 26 observation sectors, it is highly likely that hundreds if not thousands of exoplanet candidates will be identified in the coming months and years, with tens of thousands of candidate planets to follow in the remaining 24 sectors of sky to be searched.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    NASASpaceflight.com, now in its eighth year of operations, is already the leading online news resource for everyone interested in spaceflight-specific news, supplying our readership with the latest news, around the clock, with editors covering all the leading spacefaring nations.

    Breaking more exclusive spaceflight-related news stories than any other site in its field, the site is dedicated to expanding the public’s awareness of and respect for the spaceflight industry, which in turn is reflected in the many thousands of space industry visitors to the site, ranging from NASA to Lockheed Martin, Boeing, United Space Alliance and the commercial spaceflight arena.

    With a monthly readership of 500,000 visitors and growing, the site’s expansion has already seen articles being referenced and linked by major news networks such as MSNBC, CBS, The New York Times and Popular Science, to name but a few.

  • richardmitnick 2:56 pm on September 21, 2018
    Tags: GeMS “SERVS” Up Sharp Views of Young Galaxies in Early Universe, GeMS-Gemini Multi-Conjugate Adaptive Optics System, GSAOI-Gemini South Adaptive Optics Imager

    From Gemini Observatory: “GeMS “SERVS” Up Sharp Views of Young Galaxies in Early Universe” 


    Gemini Observatory
    From Gemini Observatory

    GeMS/GSAOI K-band image of one of the three fields targeted from the Spitzer Extragalactic Representative Volume Survey. The insets show detailed views of several distant galaxies in this field.

    Multi-conjugate adaptive optics technology at Gemini South reveals that young galaxies with large amounts of star formation and actively growing central black holes were relatively compact in the early Universe.

    Gemini South Adaptive Optics laser guide star

    A team of astronomers led by Dr. Mark Lacy (National Radio Astronomy Observatory, USA) used advanced adaptive optics on the Gemini South telescope in Chile to obtain high-resolution near-infrared images of three fields from the Spitzer Extragalactic Representative Volume Survey (SERVS). Their sample includes several ultra-luminous infrared galaxies (ULIRGs) which the Herschel Space Observatory found to be undergoing large bursts of star formation within the first few billion years after the Big Bang.

    ESA/Herschel spacecraft active from 2009 to 2013

    Such galaxies have hundreds of times the infrared luminosity of a normal galaxy such as the Milky Way.

    The high-resolution GeMS images reveal that the ULIRGs have messy, irregular structures indicating that they are the product of recent galactic interactions and mergers. Lacy explains, “The fact that the disturbed morphologies of these galaxies persist into the infrared suggests that their appearance is not dominated by clumpy extinction from dust, but reflects the irregular distribution of stellar light.” Dust is highly effective at obscuring ultraviolet and blue light, but affects red and infrared light less. “These GeMS observations help reveal the physical mechanisms by which massive galaxies evolve into the objects we see today,” added Lacy.

    The team used the Gemini South Adaptive Optics Imager (GSAOI) with the Gemini Multi-Conjugate Adaptive Optics System (GeMS) to obtain K-band observations. Lacy’s team combined these Gemini data with other multiwavelength data at optical, far-infrared, and radio wavelengths to study the masses, morphologies, and star formation rates of the galaxies.

    The results of the GeMS study support previous results using the Hubble Space Telescope indicating that massive compact galaxies were more common in the early Universe than they are today.

    NASA/ESA Hubble Telescope

    The fraction of galaxies with compact structures is even higher in the GeMS data, but it is unclear whether this is due to improved resolution with GeMS or a tendency to miss the more diffuse galaxies in ground-based images that must contend with the infrared glow from the Earth’s atmosphere.

    Some of the galaxies in the study also harbor active galactic nuclei (AGN), luminous central engines powered by supermassive black holes that are actively accreting mass. The researchers found that star-forming galaxies with AGN tend to have more compact structures than ULIRGs that lack active nuclei. The team also found what appears to be a rare triple AGN system, a close grouping of three galaxies with actively growing supermassive black holes that may be headed for an imminent collision.

    “Among the sources that we examined, we have one close pair, one candidate triple black hole, and other objects with disturbed morphologies that might be late-stage mergers,” said Lacy. “Observations such as these can therefore significantly improve the constraints on galaxy and supermassive black hole merger rates.”

    In the future, the James Webb Space Telescope (JWST) will enable studies at similar angular resolution and very high sensitivity at near-infrared wavelengths of large numbers of galaxies at these early cosmic times. However, the recent delays, and anticipated high demand for JWST, means that ground-based Multi-Conjugate Adaptive Optics systems like GeMS will continue to play an important role in targeted studies of rare infrared-bright objects in the early Universe.

    This work is published in

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Gemini/North telescope at Maunakea, Hawaii, USA, 4,207 m (13,802 ft) above sea level

    Gemini South telescope, Cerro Tololo Inter-American Observatory (CTIO) campus near La Serena, Chile, at an altitude of 7200 feet

    AURA Icon

    Gemini’s mission is to advance our knowledge of the Universe by providing the international Gemini Community with forefront access to the entire sky.

    The Gemini Observatory is an international collaboration with two identical 8-meter telescopes. The Frederick C. Gillett Gemini Telescope is located on Mauna Kea, Hawai’i (Gemini North) and the other telescope on Cerro Pachón in central Chile (Gemini South); together the twin telescopes provide full coverage over both hemispheres of the sky. The telescopes incorporate technologies that allow large, relatively thin mirrors, under active control, to collect and focus both visible and infrared radiation from space.

    The Gemini Observatory provides the astronomical communities in six partner countries with state-of-the-art astronomical facilities that allocate observing time in proportion to each country’s contribution. In addition to financial support, each country also contributes significant scientific and technical resources. The national research agencies that form the Gemini partnership include: the US National Science Foundation (NSF), the Canadian National Research Council (NRC), the Chilean Comisión Nacional de Investigación Cientifica y Tecnológica (CONICYT), the Australian Research Council (ARC), the Argentinean Ministerio de Ciencia, Tecnología e Innovación Productiva, and the Brazilian Ministério da Ciência, Tecnologia e Inovação. The observatory is managed by the Association of Universities for Research in Astronomy, Inc. (AURA) under a cooperative agreement with the NSF. The NSF also serves as the executive agency for the international partnership.

  • richardmitnick 2:23 pm on September 21, 2018
    Tags: The Rise of Astrotourism in Chile

    From ESOblog: “The Rise of Astrotourism in Chile” 

    ESO 50 Large

    From ESOblog

    21 September 2018


    For the ultimate stargazing experience, Chile is an unmissable destination. The skies above the Atacama Desert are clear for about 300 nights per year, so this high, dry and dark environment offers the perfect window to the Universe. Hundreds of thousands of tourists flock to Chile each year to take advantage of the incredible stargazing conditions, and to visit the scientific observatories — including ESO’s own — that use these skies as a natural astronomical laboratory. But one challenge now affecting Chile’s world-renowned dark skies is that of light pollution.

    The intense Sun beats down on the tourists’ cars as they climb the dusty desert road up Cerro Paranal. The 130-kilometre journey from the closest city of Antofagasta will be worth it because waiting at the top is ESO’s Paranal Observatory.

    ESO VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft), seen from above. The four Unit Telescopes are:
    • ANTU (UT1; “The Sun”),
    • KUEYEN (UT2; “The Moon”),
    • MELIPAL (UT3; “The Southern Cross”), and
    • YEPUN (UT4; “Venus”, as evening star).
    Credit: J.L. Dauvergne & G. Hüdepohl/atacama photo

    The tourists have been eagerly awaiting their tour of this incredible site since they booked it a month ago. Every Saturday, two of ESO’s Chile-based observatories — Paranal and La Silla — open their doors for organised tours led by ESO’s education and Public Outreach Department on behalf of the ESO Representation Office.

    ESO/Cerro LaSilla, 600 km north of Santiago de Chile at an altitude of 2400 metres.

    Tourists come from far and wide to find out about the technology behind ESO’s world-class telescopes — how they are built and operated, and how astronomers use them to make groundbreaking discoveries. Each tour begins at the visitor centres, which are currently being upgraded with new content designed for the ESO Supernova Planetarium & Visitor Centre, before the guests are taken to see what they really came for: the telescopes.

    ESO Supernova Planetarium, Garching Germany

    Visits to Paranal are centred around ESO’s Very Large Telescope, the world’s most advanced optical instrument and the flagship facility of European optical astronomy. Visitors also see the control room where astronomers work, and the Paranal Residencia — the astronomers’ “home away from home” when they are observing in Chile.

    ESO Paranal Residencia exterior

    ESO Paranal Residencia inside near the swimming pool

    ESO Paranal Residencia dining room

    At La Silla, on the other hand, visitors spend time at the ESO 3.6-metre telescope and the New Technology Telescope before ending the day at the Swedish–ESO Submillimetre Telescope.

    ESO 3.6m telescope & HARPS at Cerro LaSilla, Chile, 600 km north of Santiago de Chile at an altitude of 2400 metres.

    ESO/NTT at Cerro La Silla, Chile, at an altitude of 2400 metres

    ESO Swedish Submillimetre Telescope at La Silla at 2400 meters

    Astronomy enthusiasts can also visit the Operational Support Facility for the impressive Atacama Large Millimeter/submillimeter Array (ALMA).

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    The word “alma” means “soul” in Spanish, and there is definitely something spiritual about this extraordinary location. With its 66 antennas spreading across the desert, ALMA is a hugely popular observatory to visit — tourists book at least two months in advance for an eye-opening tour of the control room, laboratories, and antennas under maintenance.

    The tours at each of these three sites are led by a team of enthusiastic guides. Most are local students who love to share their passion for astronomy. Gonzalo Aravena, a guide at Paranal, thinks that “being a small part of the great astrotourism that exists in Chile today is something to be proud of”, and Jermy Barraza, a La Silla guide, believes that guiding visitors is “a great support to our country’s culture, and encourages awareness of the natural resources that should be protected”.

    Tourists visiting ESO’s Paranal Observatory pose for a snapshot in front of two of the VLT Unit Telescopes.
    Credit: ESO

    With almost 10,000 visitors a year to Paranal and 4000 to La Silla, these ESO observatories are the most popular Chilean sites for astrotourists, especially those who want to visit scientific facilities. Francisco Rodríguez, ESO’s Press Officer in Chile, explains, “Astrotourists are increasingly enthusiastic about experiencing dark skies and impressive astronomical observatories, and ESO sees this reflected in the growing number of visitors that arrive each year — over the last four years we’ve seen the numbers double”. This value is especially impressive considering how difficult the observatories are to get to.

    ESO avoids organising tours and events at night, leaving astronomers undisturbed and able to focus on their scientific research. Daytime tours are usually the only way to visit an ESO observatory; however, the doors are often opened for special events, for example Mercury’s transit of the Sun in 2003 and the partial solar eclipse in 2010. Visitors come to ESO to see the impressive technology and to understand how a professional observatory works, which often leads them to make nighttime visits to other stargazing locations.

    “Chile is an amazing country for astrotourism,” says Rodríguez. “Visitors can combine day visits to the most impressive telescopes in the world, with nighttime views of the stars at tourism observatories across the country.”

    Observatories such as the Collowara Tourism Observatory are popping up specifically for amateur stargazers, and many hotels provide telescopes for their guests to enjoy the beautiful skies. Elqui Domos Hotel has gone even further — dome-shaped rooms feature removable ceilings that open onto the sky, and guests can sleep in observatory cabins with glass roofs. Various astronomical museums have also been opened, including the San Pedro Meteorite Museum, which also conducts stargazing tours.

    Recently, ESO actively collaborated with other governmental, academic, and scientific groups to support a governmental initiative called Astroturismo Chile. Its aim is to “transform Chile into an astrotouristic destination of excellence, to be admired and recognised throughout the world for its attractiveness, quality, variety and sustainability”. Fernando Comerón, the former ESO representative in Astroturismo Chile, elaborates that the strategy “aims to improve the quality and competitiveness of existing astrotourism activities, in addition to preparing the Chilean astrotourism roadmap for 2016–2025”.

    But Chile’s dark skies are facing a growing challenge. La Serena, the closest major city to La Silla Observatory, is expanding rapidly; the region’s population has swelled to over 700 000, growing by more than 200 000 people in the last 20 years. Although some of these people are astronomers and dark sky lovers, increased development can mean increased light pollution if not carefully handled.

    Light pollution is artificial light that shines where it is neither wanted nor needed, arising from poorly designed, incorrectly directed light fixtures. Light that shines into the sky is scattered by air molecules, moisture and aerosols in the atmosphere, causing the night sky to light up. This phenomenon is known as skyglow. Solutions include power limits for public lighting; shielding street lamps, neon signs, and plasma screens; and stricter guidelines for sport and recreational facilities.

    The arch of the Milky Way emerges from the Cerro Paranal on the left, and sinks into the bright lights of Antofagasta, the closest city to Paranal Observatory.
    Credit: Bruno Gilli/ESO

    Dark skies are incredibly important to ESO Photo Ambassador, Petr Horálek, who reflects, “I remember a law called Norma Lumínica was signed in 1999 requiring that lighting in the three astronomically-sensitive regions of Chile be directed downwards instead of into the sky… Of course, there are no lamps along the roads close to the observatories”.

    The Norma Lumínica, which establishes protocols for lighting regulations in Chile, was recently updated in 2013 to adapt to new technologies.

    The spectacularly clear skies over the ESO 3.6-metre telescope at La Silla show the Milky Way and its galactic bulge.
    Credit: Y. Beletsky (LCO)/ESO

    Chile is also working with international observatories to encourage UNESCO to add major astronomy sites such as Paranal Observatory to its World Heritage List.
    “By promoting the preservation of natural conditions, particularly the dark skies, astronomy contributes to the formation of an environmentally-aware society”, says Comerón.

    Over the next ten years, Chile plans to invest in many new observatories.


    LSST Camera, built at SLAC

    LSST telescope, currently under construction on the El Peñón peak at Cerro Pachón Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    Giant Magellan Telescope, to be built at the Carnegie Institution for Science’s Las Campanas Observatory, some 115 km (71 mi) north-northeast of La Serena, Chile, at over 2,500 m (8,200 ft)

    Currently, more than 50% of the world’s large telescopes are located in Chile, and the Chilean government believes that by 2020 that figure could rise to more than 70%. IndexMundi, a data portal that gathers statistics from around the world, suggests the annual number of visitors to Chile has more than quadrupled in the past 15 years. In 2017, 6.45 million visitors arrived in Chile, many of whom were enticed by the incredible night skies, and reports from the Astroturismo Chile initiative estimate that the number of astrotourists visiting Chile will triple over the next decade.

    Chile has its work cut out to limit the impact of light pollution on its magnificent skies, but if successful the country will benefit greatly — as will the visitors who continue to flock there. As La Silla guide Yilin Kong says, “Astrotourism helps teach people about the importance of astronomy, and to encourage the next generations to participate in it”.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition


    ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 16 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom, along with the host state of Chile. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is a major partner in ALMA, the largest astronomical project in existence. And on Cerro Armazones, close to Paranal, ESO is building the 39-metre European Extremely Large Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

    ESO LaSilla
    ESO/Cerro LaSilla 600 km north of Santiago de Chile at an altitude of 2400 metres.

    VLT at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level.

    ESO Vista Telescope
    ESO/Vista Telescope at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level.

    ESO/NTT at Cerro LaSilla 600 km north of Santiago de Chile at an altitude of 2400 metres.

    ESO VLT Survey telescope
    VLT Survey Telescope at Cerro Paranal with an elevation of 2,635 metres (8,645 ft) above sea level.

    ALMA Array
    ALMA on the Chajnantor plateau at 5,000 metres.

    ESO/E-ELT to be built at Cerro Armazones at 3,060 m.

    APEX Atacama Pathfinder 5,100 meters above sea level, at the Llano de Chajnantor Observatory in the Atacama desert.

    Leiden MASCARA instrument, La Silla, located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft)

    Leiden MASCARA cabinet at ESO Cerro la Silla located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft)

    ESO Next Generation Transit Survey at Cerro Paranal, 2,635 metres (8,645 ft) above sea level

    SPECULOOS four 1m-diameter robotic telescopes 2016 in the ESO Paranal Observatory, 2,635 metres (8,645 ft) above sea level

    ESO TAROT telescope at Paranal, 2,635 metres (8,645 ft) above sea level

    ESO ExTrA telescopes at Cerro LaSilla at an altitude of 2400 metres

  • richardmitnick 12:17 pm on September 21, 2018 Permalink | Reply
    Tags: LLNL/LBNL team named as Gordon Bell Award finalists for work on modeling neutron lifespans

    From Lawrence Livermore National Laboratory: “LLNL/LBNL team named as Gordon Bell Award finalists for work on modeling neutron lifespans” 

    From Lawrence Livermore National Laboratory

    Sept. 20, 2018
    Jeremy Thomas

    Beta decay, the decay of a neutron (n) to a proton (p) with the emission of an electron (e) and an electron-anti-neutrino (ν). In the figure gA is depicted as the white node on the red line. The square grid indicates the lattice. Image by Evan Berkowitz/Forschungszentrum Jülich/Institut für Kernphysik/Institute for Advanced Simulation

    A team of scientists and physicists headed by the Lawrence Livermore and Lawrence Berkeley national laboratories has been named as one of six finalists for the prestigious 2018 Gordon Bell Award, one of the world’s top honors in supercomputing.

    Using the Department of Energy’s newest supercomputers, LLNL’s Sierra and Oak Ridge’s Summit, a team led by computational theoretical physicists Pavlos Vranas of LLNL and André Walker-Loud of LBNL developed an improved algorithm and code that can more precisely determine the lifetime of a neutron, an achievement that could lead to discovering new, previously unknown physics, researchers said.

    LLNL SIERRA IBM supercomputer

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    The team’s approach involves simulating the fundamental theory of quantum chromodynamics (QCD) on a fine grid of space-time points called the lattice. QCD theory describes how particles like quarks and gluons make up protons and neutrons.

    A free neutron decays after about 15 minutes on average, and its lifetime is important because it has a profound effect on the mass composition of the universe, Vranas explained. Using previous-generation supercomputers at ORNL and LLNL, the team was the first to calculate the nucleon axial coupling, a quantity (denoted gA) directly related to the neutron lifetime, at 1 percent precision. Two different real-world experiments have measured the neutron lifetime with results that disagree even though each is accurate to about 0.1 percent, a discrepancy researchers believe may be related to new physics affecting each experiment.
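    For context, the link between gA and the lifetime is the standard textbook Standard Model beta-decay rate (this relation is not quoted in the article):

```latex
\frac{1}{\tau_n} \;=\; \frac{G_F^{2}\,|V_{ud}|^{2}\,m_e^{5}}{2\pi^{3}}
\left(1 + 3 g_A^{2}\right) f \left(1 + \delta_{RC}\right)
```

    where G_F is the Fermi constant, V_ud a CKM matrix element, m_e the electron mass, f ≈ 1.69 the phase-space factor and δ_RC the radiative corrections. Since the rate scales with (1 + 3gA²), a percent-level lattice determination of gA translates into a comparably precise prediction for τ_n, which is what makes the comparison with experiment sensitive to new physics.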

    To resolve this discrepancy, Vranas and his team have advanced their calculation onto the new generation supercomputers Sierra and Summit, aiming to improve their precision to less than 1 percent and get closer to the experimental results. The team has fully optimized their codes on the new CPU (Central Processing Unit)/GPU (Graphics Processing Unit) architectures of the two supercomputers, which involved developing an algorithm that exponentially speeds up calculations, a method for optimally distributing GPU resources and a job manager that allows CPU and GPU jobs to be interleaved.
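    A toy illustration of why interleaving helps (hypothetical numbers; this is not the team’s METAQ/MPI_JM code): CPU-only analysis work can fill the time that GPU jobs would otherwise leave the CPUs idle.

```python
# Minutes of work of each kind (made-up numbers for illustration).
cpu_jobs = [3, 2, 4]   # CPU-only tasks, e.g. data analysis
gpu_jobs = [5, 5]      # GPU tasks, e.g. lattice QCD solves

# Running one kind of job at a time wastes the idle resource;
# interleaving lets both resources work simultaneously.
serial = sum(cpu_jobs) + sum(gpu_jobs)           # 19 minutes total
interleaved = max(sum(cpu_jobs), sum(gpu_jobs))  # 10 minutes total
print(serial, interleaved)
```

    The real schedulers are far more sophisticated, but the payoff has the same shape: total wall time drops toward the longer of the two workloads rather than their sum.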

    “New machines like Sierra and Summit are disruptively fast and require the ability to manage and process more tasks, amounting to about a factor of 10 increase. As we move toward exascale, job management is becoming a huge factor for success. With Sierra and Summit, we will be able to run hundreds of thousands of jobs and generate several petabytes of data in a few days — a volume that is too much for the current standard management methods,” said LBNL’s Walker-Loud. “The fact that we have an extremely fast GPU code (QUDA) and were able to wrap our entire lattice QCD scientific application with new job managers we wrote (METAQ and MPI_JM) got us to the Gordon Bell finalist stage, I believe.”

    The resulting axial coupling calculation, Vranas said, will provide the neutron lifetime that the fundamental theory of QCD predicts. Any deviations from the theory may be signs of new physics beyond current understanding of nature and the reach of the Large Hadron Collider.

    “We’ve demonstrated that we can use this next generation of computers efficiently, at about 15-20 percent of peak speed,” Vranas said. “This research takes us further, and now with these computers we can move forward with precision better than one percent, in an attempt to find new physics. This is an exciting time.”

    On Sierra and Summit, the team was able to reach sustained performance of about 20 petaFLOPS (FLOPS: floating-point operations per second), or roughly 15 percent of the peak performance for Sierra. The team discovered that the number of calculations they could do on the new machines keeps rising linearly, a solid indication that using more of the GPUs in the machines will result in even faster calculations. In turn, this will produce more data and therefore improve the precision of the neutron lifetime calculation, researchers said.

    “Every time a new supercomputer comes along it just amazes you,” Vranas said. “These systems are significantly different than their predecessors, and it was quite an effort on the code side to make this happen. This is important science and Sierra and Summit will accelerate it in a meaningful and impactful way.”

    LLNL postdoctoral researcher Arjun Gambhir contributed to the research. Co-authors include Evan Berkowitz (Institute for Advanced Simulation, Jülich Supercomputing Centre), M.A. Clark (NVIDIA), Ken McElvain (LBNL and University of California, Berkeley), Amy Nicholson (University of North Carolina), Enrico Rinaldi (RIKEN-Brookhaven National Laboratory), Chia Cheng Chang (LBNL), Bálint Joó (Thomas Jefferson National Accelerator Facility), Thorsten Kurth (NERSC/LBNL) and Kostas Orginos (College of William and Mary).

    The Gordon Bell Prize is awarded each year to recognize outstanding achievements in high performance computing, with an emphasis on rewarding innovations in science applications, engineering and large-scale data analytics.

    Other finalists include an LBNL-led collaboration using exascale deep learning on Summit to identify extreme weather patterns; a team from ORNL that developed a genomics application on Summit capable of determining the genetic architectures for chronic pain and opioid addiction at up to five orders of magnitude beyond the current state-of-the-art; an ORNL team that used an artificial intelligence system to automatically develop a deep learning network on Summit capable of identifying information from raw electron microscopy data; a team from the University of Tokyo that applied artificial intelligence and trans-precision arithmetic to accelerate simulations of earthquakes in cities; and a team led by China’s Tsinghua University that developed a framework for efficiently utilizing an entire petascale system to process multi-trillion edge graphs in seconds.

    The Gordon Bell winner will be announced at the 2018 International Conference for High Performance Computing, Networking, Storage and Analysis (SC18) in Dallas this November.

    See the full article here.



    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration
    Lawrence Livermore National Laboratory (LLNL) is an American federal research facility in Livermore, California, United States, founded by the University of California, Berkeley in 1952. A Federally Funded Research and Development Center (FFRDC), it is primarily funded by the U.S. Department of Energy (DOE) and managed and operated by Lawrence Livermore National Security, LLC (LLNS), a partnership of the University of California, Bechtel, BWX Technologies, AECOM, and Battelle Memorial Institute in affiliation with the Texas A&M University System. In 2012, the laboratory had the synthetic chemical element livermorium named after it.

    LLNL is self-described as “a premier research and development institution for science and technology applied to national security.” Its principal responsibility is ensuring the safety, security and reliability of the nation’s nuclear weapons through the application of advanced science, engineering and technology. The Laboratory also applies its special expertise and multidisciplinary capabilities to preventing the proliferation and use of weapons of mass destruction, bolstering homeland security and solving other nationally important problems, including energy and environmental security, basic science and economic competitiveness.

    The Laboratory is located on a one-square-mile (2.6 km2) site at the eastern edge of Livermore. It also operates a 7,000 acres (28 km2) remote experimental test site, called Site 300, situated about 15 miles (24 km) southeast of the main lab site. LLNL has an annual budget of about $1.5 billion and a staff of roughly 5,800 employees.

    LLNL was established in 1952 as the University of California Radiation Laboratory at Livermore, an offshoot of the existing UC Radiation Laboratory at Berkeley. It was intended to spur innovation and provide competition to the nuclear weapon design laboratory at Los Alamos in New Mexico, home of the Manhattan Project that developed the first atomic weapons. Edward Teller and Ernest Lawrence,[2] director of the Radiation Laboratory at Berkeley, are regarded as the co-founders of the Livermore facility.

    The new laboratory was sited at a former naval air station of World War II. It was already home to several UC Radiation Laboratory projects that were too large for its location in the Berkeley Hills above the UC campus, including one of the first experiments in the magnetic approach to confined thermonuclear reactions (i.e. fusion). About half an hour southeast of Berkeley, the Livermore site provided much greater security for classified projects than an urban university campus.

    Lawrence tapped 32-year-old Herbert York, a former graduate student of his, to run Livermore. Under York, the Lab had four main programs: Project Sherwood (the magnetic-fusion program), Project Whitney (the weapons-design program), diagnostic weapon experiments (both for the Los Alamos and Livermore laboratories), and a basic physics program. York and the new lab embraced the Lawrence “big science” approach, tackling challenging projects with physicists, chemists, engineers, and computational scientists working together in multidisciplinary teams. Lawrence died in August 1958 and shortly after, the university’s board of regents named both laboratories for him, as the Lawrence Radiation Laboratory.

    Historically, the Berkeley and Livermore laboratories have had very close relationships on research projects, business operations, and staff. The Livermore Lab was established initially as a branch of the Berkeley laboratory. The Livermore lab was not officially severed administratively from the Berkeley lab until 1971. To this day, in official planning documents and records, Lawrence Berkeley National Laboratory is designated as Site 100, Lawrence Livermore National Lab as Site 200, and LLNL’s remote test location as Site 300.[3]

    The laboratory was renamed Lawrence Livermore Laboratory (LLL) in 1971. On October 1, 2007, LLNS assumed management of LLNL from the University of California, which had exclusively managed and operated the Laboratory since its inception 55 years before. The LLNS takeover of the laboratory has been controversial. In May 2013, an Alameda County jury awarded over $2.7 million to five former laboratory employees who were among 430 employees LLNS laid off during 2008.[4] The jury found that LLNS breached a contractual obligation to terminate the employees only for “reasonable cause.”[5] The five plaintiffs also have pending age discrimination claims against LLNS, which will be heard by a different jury in a separate trial.[6] There are 125 co-plaintiffs awaiting trial on similar claims against LLNS.[7] The May 2008 layoff was the first layoff at the laboratory in nearly 40 years.[6]

    On March 14, 2011, the City of Livermore officially expanded the city’s boundaries to annex LLNL and move it within the city limits. The unanimous vote by the Livermore city council expanded Livermore’s southeastern boundaries to cover 15 land parcels covering 1,057 acres (4.28 km2) that comprise the LLNL site. The site was formerly an unincorporated area of Alameda County. The LLNL campus continues to be owned by the federal government.


    DOE Seal

  • richardmitnick 11:55 am on September 21, 2018 Permalink | Reply
    Tags: Astronomers Uncover New Clues to the Star that Wouldn't Die, Hubble Paints Picture of the Evolving Universe

    From NASA/ESA Hubble Telescope: “Hubble Paints Picture of the Evolving Universe” and “Astronomers Uncover New Clues to the Star that Wouldn’t Die” 

    NASA Hubble Banner

    NASA/ESA Hubble Telescope

    From NASA/ESA Hubble Telescope

    Hubble Paints Picture of the Evolving Universe
    Aug 16, 2018

    Ann Jenkins
    Space Telescope Science Institute, Baltimore, Maryland

    Ray Villard
    Space Telescope Science Institute, Baltimore, Maryland

    Pascal Oesch
    University of Geneva, Geneva, Switzerland

    Mireia Montes
    University of New South Wales, Sydney, Australia

    Astronomers using the ultraviolet vision of NASA’s Hubble Space Telescope have captured one of the largest panoramic views of the fire and fury of star birth in the distant universe. The field features approximately 15,000 galaxies, about 12,000 of which are forming stars. Hubble’s ultraviolet vision opens a new window on the evolving universe, tracking the birth of stars over the last 11 billion years back to the cosmos’ busiest star-forming period, which happened about 3 billion years after the big bang.

    Ultraviolet light has been the missing piece to the cosmic puzzle. Now, combined with infrared and visible-light data from Hubble and other space and ground-based telescopes, astronomers have assembled one of the most comprehensive portraits yet of the universe’s evolutionary history.

    The image straddles the gap between the very distant galaxies, which can only be viewed in infrared light, and closer galaxies, which can be seen across a broad spectrum. The light from distant star-forming regions in remote galaxies started out as ultraviolet. However, the expansion of the universe has shifted the light into infrared wavelengths. By comparing images of star formation in the distant and nearby universe, astronomers glean a better understanding of how nearby galaxies grew from small clumps of hot, young stars long ago.
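    The wavelength stretch described above is simple to compute: cosmological redshift lengthens an emitted wavelength by a factor of (1 + z), where z is the galaxy’s redshift. A minimal sketch with illustrative numbers (not from the article):

```python
def observed_wavelength(emitted_nm: float, z: float) -> float:
    """Cosmological redshift stretches wavelengths by a factor (1 + z)."""
    return emitted_nm * (1.0 + z)

# Far-ultraviolet light emitted at 150 nm by a galaxy at redshift z = 6
# arrives stretched into the near-infrared:
print(observed_wavelength(150.0, 6.0))  # 1050.0 nm
```

    This is why the most distant star-forming galaxies can only be viewed in the infrared, while the same ultraviolet emission from nearby galaxies is still observable in the ultraviolet.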

    Because Earth’s atmosphere filters most ultraviolet light, Hubble can provide some of the most sensitive space-based ultraviolet observations possible.

    The program, called the Hubble Deep UV (HDUV) Legacy Survey, extends and builds on the previous Hubble multi-wavelength data in the CANDELS-Deep (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey) fields within the central part of the GOODS (The Great Observatories Origins Deep Survey) fields. This mosaic is 14 times the area of the Hubble Ultraviolet Ultra Deep Field released in 2014.

    This image is a portion of the GOODS-North field, which is located in the northern constellation Ursa Major.

    Science paper:
    HDUV: The Hubble Deep UV Legacy Survey
    The Astrophysical Journal

    See the full article here.

    Astronomers Uncover New Clues to the Star that Wouldn’t Die

    Aug 2, 2018

    Donna Weaver
    Space Telescope Science Institute, Baltimore, Maryland
    410-338-4493 / 410-338-4514

    Ray Villard
    Space Telescope Science Institute, Baltimore, Maryland

    Nathan Smith
    University of Arizona, Tucson

    Armin Rest
    Space Telescope Science Institute, Baltimore, Maryland

    Brawl Among Three Rowdy Stellar Siblings May Have Triggered Eruption

    What happens when a star behaves like it exploded, but it’s still there?

    About 170 years ago, astronomers witnessed a major outburst by Eta Carinae, one of the brightest known stars in the Milky Way galaxy. The blast unleashed almost as much energy as a standard supernova explosion.

    Yet Eta Carinae survived.

    An explanation for the eruption has eluded astrophysicists. They can’t take a time machine back to the mid-1800s to observe the outburst with modern technology.

    However, astronomers can use nature’s own “time machine,” courtesy of the fact that light travels at a finite speed through space. Rather than heading straight toward Earth, some of the light from the outburst rebounded, or “echoed,” off interstellar dust and is just now arriving at Earth. This effect is called a light echo. The light behaves like a postcard that got lost in the mail and is only arriving 170 years later.
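    The echo’s delay follows from simple geometry: the bounced light travels the two legs source-to-dust and dust-to-observer instead of the direct path. A minimal sketch with made-up coordinates (in light-years, so c = 1 light-year per year and distance equals travel time):

```python
import math

def echo_delay_years(source, dust, observer):
    """Extra travel time, in years, for light that bounces off a dust
    cloud compared with light that travels straight to the observer.
    Coordinates are in light-years, so c = 1."""
    return (math.dist(source, dust) + math.dist(dust, observer)
            - math.dist(source, observer))

# Illustrative numbers only: dust 100 light-years off the direct line,
# midway along a 7,500 light-year path (Eta Carinae's distance).
delay = echo_delay_years((0.0, 0.0), (3750.0, 100.0), (7500.0, 0.0))
print(round(delay, 2))  # 2.67 years
```

    Dust clouds at different offsets produce different delays, which is how light from a single 1840s event can still be arriving, echo by echo, more than 170 years later.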

    By performing modern astronomical forensics of the delayed light with ground-based telescopes, astronomers uncovered a surprise. The new measurements of the 1840s eruption reveal material expanding with record-breaking speeds up to 20 times faster than astronomers expected. The observed velocities are more like the fastest material ejected by the blast wave in a supernova explosion, rather than the relatively slow and gentle winds expected from massive stars before they die.

    Based on this data, researchers suggest that the eruption may have been triggered by a prolonged stellar brawl among three rowdy sibling stars, which destroyed one star and left the other two in a binary system. This tussle may have culminated with a violent explosion when Eta Carinae devoured one of its two companions, rocketing more than 10 times the mass of our Sun into space. The ejected mass created gigantic bipolar lobes resembling the dumbbell shape seen in present-day images.

    The results are reported in a pair of papers by a team led by Nathan Smith of the University of Arizona in Tucson, Arizona, and Armin Rest of the Space Telescope Science Institute in Baltimore, Maryland.

    The light echoes were detected in visible-light images obtained since 2003 with moderate-sized telescopes at the Cerro Tololo Inter-American Observatory in Chile. Using larger Magellan telescopes at the Carnegie Institution for Science’s Las Campanas Observatory and the Gemini South Observatory, both also located in Chile, the team then used spectroscopy to dissect the light, allowing them to measure the ejecta’s expansion speeds. They clocked material zipping along at more than 20 million miles per hour (fast enough to travel from Earth to Pluto in a few days).

    The observations offer new clues to the mystery surrounding the titanic convulsion that, at the time, made Eta Carinae the second-brightest nighttime star seen in the sky from Earth between 1837 and 1858. The data hint at how it may have come to be the most luminous and massive star in the Milky Way galaxy.

    “We see these really high velocities in a star that seems to have had a powerful explosion, but somehow the star survived,” Smith explained. “The easiest way to do this is with a shock wave that exits the star and accelerates material to very high speeds.”

    Massive stars normally meet their final demise in shock-driven events when their cores collapse to make a neutron star or black hole. Astronomers see this phenomenon in supernova explosions where the star is obliterated. So how do you have a star explode with a shock-driven event, but it isn’t enough to completely blow itself apart? Some violent event must have dumped just the right amount of energy onto the star, causing it to eject its outer layers. But the energy wasn’t enough to completely annihilate the star.

    One possibility for just such an event is a merger between two stars, but it has been hard to find a scenario that could work and match all the data on Eta Carinae.

    The researchers suggest that the most straightforward way to explain a wide range of observed facts surrounding the eruption is with an interaction of three stars, where the objects exchange mass.

    If that’s the case, then the present-day remnant binary system must have started out as a triple system. “The reason why we suggest that members of a crazy triple system interact with each other is because this is the best explanation for how the present-day companion quickly lost its outer layers before its more massive sibling,” Smith said.

    In the team’s proposed scenario, two hefty stars are orbiting closely and a third companion is orbiting farther away. When the most massive of the close binary stars nears the end of its life, it begins to expand and dumps most of its material onto its slightly smaller sibling.

    The sibling has now bulked up to about 100 times the mass of our Sun and is extremely bright. The donor star, now only about 30 solar masses, has been stripped of its hydrogen layers, exposing its hot helium core.

    Hot helium core stars are known to represent an advanced stage of evolution in the lives of massive stars. “From stellar evolution, there’s a pretty firm understanding that more massive stars live their lives more quickly and less massive stars have longer lifetimes,” Rest explained. “So the hot companion star seems to be further along in its evolution, even though it is now a much less massive star than the one it is orbiting. That doesn’t make sense without a transfer of mass.”

    The mass transfer alters the gravitational balance of the system, and the helium-core star moves farther away from its monster sibling. The star travels so far away that it gravitationally interacts with the outermost third star, kicking it inward. After making a few close passes, the star merges with its heavyweight partner, producing an outflow of material.

    In the merger’s initial stages, the ejecta is dense and expanding relatively slowly as the two stars spiral closer and closer. Later, an explosive event occurs when the two inner stars finally join together, blasting off material moving 100 times faster. This material eventually catches up with the slow ejecta and rams into it like a snowplow, heating the material and making it glow. This glowing material is the light source of the main historical eruption seen by astronomers a century and a half ago.

    Meanwhile, the smaller helium-core star settles into an elliptical orbit, passing through the giant star’s outer layers every 5.5 years. This interaction generates X-ray emitting shock waves.

    A better understanding of the physics of Eta Carinae’s eruption may help to shed light on the complicated interactions of binary and multiple stars, which are critical for understanding the evolution and death of massive stars.

    The Eta Carinae system resides 7,500 light-years away inside the Carina nebula, a vast star-forming region seen in the southern sky.

    The team published its findings in papers titled Exceptionally Fast Ejecta Seen in Light Echoes of Eta Carinae’s Great Eruption and Light Echoes From the Plateau in Eta Carinae’s Great Eruption Reveal a Two-Stage Shock-Powered Event, which appear online Aug. 2 in the Monthly Notices of the Royal Astronomical Society.

    See the full article here.



    The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA’s Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI), a free-standing science center located on the campus of The Johns Hopkins University and operated by the Association of Universities for Research in Astronomy (AURA) for NASA, conducts Hubble science operations.


  • richardmitnick 11:29 am on September 21, 2018 Permalink | Reply
    Tags: Andrew Peterson, Brown awarded $3.5M to speed up atomic-scale computer simulations, Computational power is growing rapidly which lets us perform larger and more realistic simulations, Different simulations often have the same sets of calculations underlying them so finding what can be re-used saves a lot of time and money

    From Brown University: “Brown awarded $3.5M to speed up atomic-scale computer simulations” 

    Brown University
    From Brown University

    September 20, 2018
    Kevin Stacey

    Andrew Peterson. No photo credit.

    With a new grant from the U.S. Department of Energy, a Brown University-led research team will use machine learning to speed up atom-level simulations of chemical reactions and the properties of materials.

    “Simulations provide insights into materials and chemical processes that we can’t readily get from experiments,” said Andrew Peterson, an associate professor in Brown’s School of Engineering who will lead the work.

    “Computational power is growing rapidly, which lets us perform larger and more realistic simulations. But as the size of the simulations grows, the time involved in running them can grow exponentially. This paradox means that even with the growth in computational power, our field still cannot perform truly large-scale simulations. Our goal is to speed those simulations up dramatically — ideally by orders of magnitude — using machine learning.”

    The grant provides $3.5 million for the work over four years. Peterson will work with two Brown colleagues — Franklin Goldsmith, assistant professor of engineering, and Brenda Rubenstein, assistant professor of chemistry — as well as researchers from Carnegie Mellon, Georgia Tech and MIT.

    The idea behind the work is that different simulations often have the same sets of calculations underlying them. Peterson and his colleagues aim to use machine learning to find those underlying similarities and fast-forward through them.

    “What we’re doing is taking the results of calculations from prior simulations and using them to predict the outcome of calculations that haven’t been done yet,” Peterson said. “If we can eliminate the need to do similar calculations over and over again, we can speed things up dramatically, potentially by orders of magnitude.”
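
    The reuse idea Peterson describes can be sketched in a few lines. This is a toy illustration of the general approach, not the AMP code: configurations are reduced to a single hypothetical feature (a bond length), and a surrogate reuses a cached result whenever a sufficiently similar calculation has already been done.

```python
import math

def expensive_energy(r):
    """Stand-in for a costly electronic-structure calculation:
    a Morse-like potential (illustrative only)."""
    return (1.0 - math.exp(-(r - 1.0))) ** 2

class Surrogate:
    """Caches prior results and reuses them for similar configurations."""
    def __init__(self, tol=0.05):
        self.cache = {}   # feature value -> stored energy
        self.tol = tol    # how close two configurations must be to reuse

    def energy(self, r):
        for r0, e0 in self.cache.items():
            if abs(r - r0) < self.tol:
                return e0          # similar calculation already done: reuse it
        e = expensive_energy(r)    # otherwise fall back to the full calculation
        self.cache[r] = e
        return e

s = Surrogate()
first = s.energy(1.20)    # full calculation, result cached
second = s.energy(1.21)   # within tolerance of 1.20: reused, no new calculation
```

    Real machine-learning potentials interpolate over rich atomic descriptors rather than looking up a single scalar, but the payoff is the same: calculations that resemble earlier ones are answered from learned experience instead of being redone.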

    The team will focus their work initially on simulations of electrocatalysis — the kinds of chemical reactions that are important in devices like fuel cells and batteries. These are complex, often multi-step reactions that are fertile ground for simulation-driven research, Peterson says.

    Atomic-scale simulations have demonstrated usefulness in Peterson’s own work in the design of new catalysts. In a recent example, Peterson worked with Brown chemist Shouheng Sun on a gold nanoparticle catalyst that can perform a reaction necessary for converting carbon dioxide into useful forms of carbon. Peterson’s simulations showed it was the sharp edges of the oddly shaped catalyst that were particularly active for the desired reaction.

    “That led us to change the geometry of the catalyst to a nanowire — something that’s basically all edges — to maximize its reactivity,” Peterson said. “We might have eventually tried a nanowire by trial and error, but because of the computational insights we were able to get there much more quickly.”

    The researchers will use a software package that Peterson’s research group developed previously as a starting point. The software, called AMP (Atomistic Machine-learning Package), is open-source and already widely used in the simulation community, Peterson says.

    The Department of Energy grant will bring atomic-scale simulations — and the insights they produce — to bear on ever larger and more complex simulations. And while the work under the grant will focus on electrocatalysis, the tools the team develops should be widely applicable to other types of material and chemical simulations.

    Peterson is hopeful that the investment that the federal government is making in machine learning will be repaid by making better use of valuable computing resources.

    “Modern supercomputers cost millions of dollars to build, and simulation time on them is precious,” Peterson said. “If we’re able to free up time on those machines for additional simulations to be run, that translates into vastly increased return-on-investment for those machines. It’s real money.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Brown

    Brown U Robinson Hall
    Located in historic Providence, Rhode Island, and founded in 1764, Brown University is the seventh-oldest college in the United States. Brown is an independent, coeducational Ivy League institution comprising undergraduate and graduate programs, plus the Alpert Medical School, School of Public Health, School of Engineering, and the School of Professional Studies.

    With its talented and motivated student body and accomplished faculty, Brown is a leading research university that maintains a particular commitment to exceptional undergraduate instruction.

    Brown’s vibrant, diverse community consists of 6,000 undergraduates, 2,000 graduate students, 400 medical school students, more than 5,000 summer, visiting and online students, and nearly 700 faculty members. Brown students come from all 50 states and more than 100 countries.

    Undergraduates pursue bachelor’s degrees in more than 70 concentrations, ranging from Egyptology to cognitive neuroscience. Anything’s possible at Brown—the university’s commitment to undergraduate freedom means students must take responsibility as architects of their courses of study.

  • richardmitnick 7:32 am on September 21, 2018 Permalink | Reply
    Tags: A tectonic squeeze may be loading three thrust faults beneath central Los Angeles, , , , ,   

    From temblor: “A tectonic squeeze may be loading three thrust faults beneath central Los Angeles” 


    From temblor

    September 17, 2018
    Chris Rollins

    Thrust-faulting earthquakes are a fact of life in Los Angeles and a threat to it. Three such earthquakes in the second half of the 20th century painfully etched this ongoing threat to life, limb and infrastructure into the memories and the backs of the minds of many who call this growing metropolis home. The first struck 40 seconds after 6:00 AM on a February morning in 1971 when a section of a thrust fault beneath the western San Gabriel Mountains ruptured in a magnitude 6.7 tremor. The earthquake killed 60 people, including 49 in the catastrophic collapse of the Veterans Administration Hospital in Sylmar, the closest town to the event (which is often referred to as the Sylmar earthquake). Among other structures hit hard were the newly built Newhall Pass interchange at the junction of Interstate 5 and California State Route 14, of which multiple sections collapsed, and the Van Norman Dam, which narrowly avoided failure in what could have been a cruel deja vu for a city that had been through deadly dam disasters in 1928 and 1963.

    Devastation at the Veterans Administration Hospital in the 1971 Sylmar earthquake. Photo courtesy of Los Angeles Times.

    Sixteen years later, a section of the Puente Hills thrust fault ruptured in the magnitude 5.9 Whittier Narrows earthquake, killing eight people in East Los Angeles and bringing attention to a class of thrust faults that do not break the surface, called “blind” thrust faults, which will go on to form a key part of this story. Then early on another winter morning in 1994, an even more deeply buried blind thrust fault ruptured beneath the San Fernando Valley in the magnitude 6.7 Northridge earthquake, causing tens of billions of dollars in damage and taking 57 lives. One of the fatalities was Los Angeles police officer Clarence Wayne Dean, who died on his motorcycle when a span of the Newhall Pass interchange that had been rebuilt following the 1971 Sylmar earthquake collapsed again as he was riding across it in the predawn darkness.

    Collapse of the Newhall Pass (I-5/CA-14) interchange in the 1994 Northridge earthquake. Officer Dean died on the downed section of overpass at right. The interchange has since been renamed the Clarence Wayne Dean Memorial Interchange in his memory. Photo courtesy of CNN.

    LA’s problem: The squeeze

    Thrust earthquakes like these, in which the top side of the fault is thrust up and over the bottom side, will likely strike Los Angeles again in the 21st century. They may in fact pose a greater hazard to the city than earthquakes on the nearby San Andreas Fault because they can occur directly beneath the central metropolitan area. This means that a city that has found so much of its identity and place in history from being improvised as it went, and from being a cultural and economic melting pot, now faces the unwieldy task of readying its diverse infrastructure and populace for the strong shaking these kinds of earthquakes can produce.

    One way that the earthquake science community has been assessing the seismic hazard in LA is by using geodesy – long-term, high-precision monitoring of the deformation of the Earth’s surface – to locate sections of faults that are stuck, or locked, causing the Earth’s crust to deform around them. It is this bending of the crust, or accumulated strain, that is violently released in earthquakes; therefore the locations where this bending is taking place might indicate where future earthquakes will occur, and perhaps how large and frequent they could be. Several decades of geodetic monitoring have shown that the greater Los Angeles area is being squeezed from north to south at roughly 8-9 millimeters per year (⅓ inch per year), about one-fourth the rate at which human fingernails grow. Thrust faults, such as those on which the Sylmar, Whittier Narrows and Northridge earthquakes struck, are ultimately driven by this compression.

    Geodetic data, tectonics and material properties relevant to the problem. Dark blue arrows show the north-south tectonic compression inferred by Argus et al. [2005] after removing deformation caused by aquifer and oil use. Black lines are faults, dashed where blind. Background shading is a measure of material stiffness at the surface based on the Community Velocity Model [Shaw et al., 2015]. “Beach balls” show the locations and senses of slip of the 1971 Sylmar, 1987 Whittier Narrows and 1994 Northridge earthquakes. Figure simplified from Rollins et al. [2018].

    Why the science is still very much ongoing

    The task of linking the north-south tectonic squeeze to specific faults encounters several unique challenges in Los Angeles. First, the city sits atop not only active faults but also several aquifers and oil fields that have long provided part of its livelihood and continue to be used today; this use deforms the crust around them. Geodetic data are affected by this anthropogenic deformation, to the extent that a recent study used these data to observe Los Angeles “breathing” water from year to year and even to resolve key hydrological properties of particular sections of aquifers. This spectacular deformation, which furnishes science that can be used in resource management around the world, has the unfortunate effect of obscuring the more gradual north-south tectonic shortening in Los Angeles in these data.

    Animation from Riel et al. [2018] showing long-term subsidence of the Earth’s surface due to use of the Los Angeles and Santa Ana aquifers.

    Second, the faults are a complex jumble. The crust underlying Los Angeles is cut by thrust faults, strike-slip faults like the San Andreas Fault and subparallel to it, and other strike-slip faults nearly perpendicular to it. Although these faults all take part in accommodating the gradual north-south squeeze, the relative contributions of the thrust and strike-slip faults have been the subject of debate. The problem of estimating strain accumulation on subsurface faults is also generally at the mercy of uncertainties as to how faults behave at depth in the Earth’s crust and how they intersect and link up.

    Third, Los Angeles sits atop a deep sedimentary basin, created when a previous episode of extension created a “hole” in the crust that was gradually filled by sediments eroded off the surrounding mountain ranges. These sedimentary layers are more easily deformed than the stiffer rocks in the mountains around the basin, complicating the problem of estimating strain accumulation at depth from the way the surface is deforming. Finally, as in the case of the Puente Hills Fault, some of the major thrust faults in Los Angeles do not break the surface but are “blind.” This means that the bending of the crust around locked sections of these faults is buried and more difficult to detect at the surface.

    Basin sediments affect the relationship between fault slip and deformation at the surface by up to 50% for the cases of the Puente Hills Fault (left) and Compton Fault (right). For the same fault slip, the basin is more compliant and so the Earth’s surface is displaced more (red arrows) than if it were absent (blue arrows). Figure simplified from Rollins et al. [2018].

    Three thrust faults may be doing a lot of the work

    Several important advances over the past two decades have paved pathways towards overcoming these challenges. The signal of deformation due to water and oil management can be subtracted from the geodetic data to yield a clearer picture of the tectonic shortening. The geometries of faults at depth have also come into focus, as earth scientists at the Southern California Earthquake Center and Harvard University have compiled decades of oil well logs and seismic reflection data to build the Community Fault Model, a detailed 3D picture of these complex geometries. A parallel effort has yielded the Community Velocity Model, a 3D model of the structure and composition of the Southern California crust that is internally consistent with the fault geometries.

    A cross section of faults and earthquakes across central Los Angeles from Rollins et al. [2018]. Red lines are faults, dashed where uncertain; pairs of arrows along the thrust faults show their long-term sense of slip. White circles are earthquakes. Basin structure is from the Community Velocity Model.

    Recently, a team of researchers from Caltech, JPL and USC (with contributions from many other earthquake scientists) has begun to put these pieces together. Their approaches and findings were published in the Journal of Geophysical Research (JGR) this summer. On the challenge presented by the complex array of faults, the study found that the strike-slip faults probably accommodate at most 20% of the total shortening, leaving the rest to be explained by thrust faulting or other processes. Three thrust faults, the Sierra Madre, Puente Hills and Compton faults, stand out in particular as good candidates. All three appear to span the Los Angeles basin from west to east, and the Puente Hills and Sierra Madre faults have generated moderate earthquakes in the last three decades, including the Whittier Narrows shock and a magnitude 5.8 tremor in 1991. Paleoseismology (the study of prehistoric earthquakes) has also revealed that these three faults have each generated multiple earthquakes in the past 15,000 years whose magnitudes may have exceeded 7.0.

    Alternative models of how quickly strain is accumulating on the Compton, Puente Hills and Sierra Madre Faults, assuming that the transition between completely locked (stuck) and freely slipping patches of fault is gradual (left) or sharp (right), simplified from Rollins et al. [2018]. Gray lines are major highways.

    How fast is stress building up on these faults?

    Exploring a wide range of assumptions (such as whether the transitions between stuck and unstuck sections of faults may be gradual or abrupt), the team inferred that the Sierra Madre, Puente Hills and Compton faults appear to be partially or fully locked and building up stress on their upper (shallowest) sections. The estimated total rate of strain accumulation on the three faults is equivalent to a magnitude 6.7-6.8 earthquake like the Sylmar earthquake once every 100 years, or a magnitude 7.0 shock every 250 years. These back-of-the-envelope calculations, however, belie the fact that this strain is likely released by earthquakes across a wide range of magnitudes. The team is currently working to assess just how wide this range of magnitudes practically needs to be: whether the strain can be released as fast as it is accruing without needing to invoke earthquakes larger than Sylmar and Northridge, for example, or whether the M>7 thrust earthquakes inferred from paleoseismology are indeed a likely part of the picture over the long term.
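
    The equivalence in those back-of-the-envelope numbers can be checked with the standard Hanks-Kanamori moment-magnitude relation, log10(M0) = 1.5 Mw + 9.05 with M0 in newton-meters (a sketch of the arithmetic, not the paper’s actual calculation):

```python
# Check that "one M6.7 per 100 years" and "one M7.0 per 250 years" describe
# roughly the same moment-accumulation rate, using the standard relation
# log10(M0) = 1.5*Mw + 9.05 (M0 in newton-meters).

def seismic_moment(mw):
    """Seismic moment in newton-meters for moment magnitude mw."""
    return 10 ** (1.5 * mw + 9.05)

rate_m67 = seismic_moment(6.7) / 100.0   # one Sylmar-sized quake per century
rate_m70 = seismic_moment(7.0) / 250.0   # one M7.0 per 250 years
# Both rates come out near 1.3e17 N*m per year, so the two statements are
# two ways of expressing approximately the same strain-accumulation budget.
```

    The two rates agree to within about 10 percent, which is well inside the precision such back-of-the-envelope estimates carry.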

    This picture of strain accumulation will sharpen as the methods used to build it are improved, as community models of faults and structure continue to be refined, and especially as more high-resolution data, such as that used to observe LA “breathing” water, is brought to bear on the estimation problem. The tolls of the Sylmar, Whittier Narrows and Northridge earthquakes in lives and livelihoods are a reminder that we should work as fast as possible to understand the menace that lies beneath the City of Angels.


    Argus, D. F., Heflin, M. B., Peltzer, G., Crampé, F., & Webb, F. H. (2005). Interseismic strain accumulation and anthropogenic motion in metropolitan Los Angeles. Journal of Geophysical Research: Solid Earth 110(B4).

    Riel, B. V., Simons, M., Ponti, D., Agram, P., & Jolivet, R. (2018). Quantifying ground deformation in the Los Angeles and Santa Ana coastal basins due to groundwater withdrawal. Water Resources Research 54(5), 3557-3582.

    Rollins, C., Avouac, J.-P., Landry, W., Argus, D. F., & Barbot, S. D. (2018). Interseismic strain accumulation on faults beneath Los Angeles, California. Journal of Geophysical Research: Solid Earth 123, doi: 10.1029/2017JB015387.

    Shaw, J. H., Plesch, A., Tape, C., Suess, M. P., Jordan, T. H., Ely, G., Hauksson, E., Tromp, J., Tanimoto, T., & Graves, R. (2015). Unified structural representation of the southern California crust and upper mantle. Earth and Planetary Science Letters 415: 1-15.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Earthquake Alert


    Earthquake Alert

    Earthquake Network project

    Earthquake Network is a research project that aims to develop and maintain a crowdsourced, smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect earthquake waves with their on-board accelerometers. When an earthquake is detected, a warning is issued to alert people not yet reached by its damaging waves.

    The project started on January 1, 2013 with the release of the Android application of the same name, Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network

    QCN bloc

    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide a better understanding of earthquakes and give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer-hosted computers into a real-time motion-sensing network. QCN is one of many scientific computing projects that run on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals and determine which ones represent earthquakes and which ones represent cultural noise (like doors slamming or trucks driving by).
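
    One common way to turn raw accelerometer samples into “triggers” is a short-term-average over long-term-average (STA/LTA) detector. The sketch below illustrates that generic idea; the text above does not spell out QCN’s actual algorithm, so treat this as an assumption about how such software typically works.

```python
# Generic STA/LTA trigger: flag samples where the recent (short-term)
# average amplitude greatly exceeds the background (long-term) average,
# i.e., where strong new motion has begun.

def sta_lta_trigger(samples, short=5, long=50, threshold=4.0):
    """Return sample indices where the short-term average amplitude
    exceeds `threshold` times the long-term average."""
    triggers = []
    for i in range(long, len(samples)):
        sta = sum(abs(x) for x in samples[i - short:i]) / short
        lta = sum(abs(x) for x in samples[i - long:i]) / long
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

quiet = [0.01] * 100            # background noise only: produces no triggers
shaking = quiet + [0.5] * 20    # strong motion starts at sample 100
```

    Running `sta_lta_trigger(shaking)` flags samples just after index 100, while the quiet trace produces none; a central server can then compare triggers arriving from many hosts at once to separate real earthquakes from slammed doors and passing trucks.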

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC, more properly the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. QCN links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    Below, the QCN Quake Catcher Network map
    QCN Quake Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the arrival of the S-wave, which brings the strong shaking that usually causes most of the damage.

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.
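
    The seconds of warning come directly from the speed difference between the two wave types: the damaging S wave travels more slowly than the P wave, so the gap between their arrivals grows with distance. A minimal sketch, assuming typical crustal wave speeds (the exact values vary with local geology):

```python
# Warning time from the P/S speed difference. The wave speeds below are
# assumed typical crustal values, used here only for illustration.
VP = 6.0   # P-wave speed, km/s (assumed)
VS = 3.5   # S-wave speed, km/s (assumed)

def warning_time(distance_km):
    """Seconds between P-wave and S-wave arrival at a site, ignoring
    detection, processing and alert-delivery delays."""
    return distance_km / VS - distance_km / VP
```

    At 50 km from the epicenter this gives roughly 6 seconds; at 200 km, roughly 24 — consistent with the “few seconds to a few tens of seconds” range the studies above report.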

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California; Oregon and Washington joined in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers and allows for automatic fail-over if a connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.


    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan

  • richardmitnick 5:14 pm on September 20, 2018 Permalink | Reply
    Tags: An ultrasensitive microphone for dark matter, , Dark Matter hunt, , Searching for much lighter dark matter candidates, , SuperCDMS experiment, , The predecessor of SuperCDMS SNOLAB—the SuperCDMS Soudan experiment housed in the Soudan mine in Minnesota—required the charge from 70 electron-hole pairs to make a detection. SuperCDMS SNOLAB wil,   

    From Symmetry: “Dark matter vibes” 

    Symmetry Mag
    From Symmetry

    Manuel Gnida

    Dawn Harmer, SLAC

    SuperCDMS physicists are testing a way to amp up dark matter vibrations to help them search for lighter particles.

    A dark matter experiment scheduled to go online at the Canadian underground laboratory SNOLAB in the early 2020s will conduct one of the most sensitive searches ever for hypothetical particles known as weakly interacting massive particles, or WIMPs.

    SNOLAB, a Canadian underground physics laboratory at a depth of 2 km in Vale’s Creighton nickel mine in Sudbury, Ontario

    Scientists consider WIMPs strong dark matter candidates. But what if dark matter turns out to be something else? After all, despite an intense hunt with increasingly sophisticated detectors, scientists have yet to directly detect dark matter.

    That’s why researchers on the SuperCDMS dark matter experiment at SNOLAB are looking for ways to broaden their search. And they found one: They have tested a prototype detector that would allow their experiment to search for much lighter dark matter candidates as well.

    SLAC SuperCDMS, at SNOLAB (Vale Inco Mine, Sudbury, Canada)

    LBNL Super CDMS, at SNOLAB (Vale Inco Mine, Sudbury, Canada)

    “This development is exciting because it gives us access to a new sector of particle masses where alternatives to WIMPs could be hiding,” says Priscilla Cushman from the University of Minnesota, spokesperson for the SuperCDMS collaboration. “It also demonstrates the flexibility of our detector technology, now reaching energy thresholds and resolutions that weren’t possible a few years ago.”

    The collaboration published the results of the first low-mass dark matter search with the new technology in Physical Review Letters. Some scientists on the team also described the prototype in an earlier paper in Applied Physics Letters.

    An ultrasensitive mic for dark matter

    The core of the SuperCDMS experiment is made of very sensitive detectors on the top and bottom of hockey-puck-shaped silicon and germanium crystals. The detectors are able to observe very small vibrations caused by dark matter particles rushing through the crystals. The challenge in using this technology to find light dark matter particles is that the lighter the particle, the smaller the vibrations.

    “To pick those vibrations up, you need an extraordinary ‘microphone’,” says Matt Pyle from the University of California, who contributed to both papers. “Our goal is to build microphones—detectors—that are sensitive enough to detect signals of very light particles. Our technology is at the leading edge of what’s currently possible.”

    The vibrations caused by a dark matter interaction can also dislodge negatively charged electrons in the crystal. This leaves positively charged spots, or holes, at the locations where the electrons once were. If an electric field is applied, the pairs of electrons and holes traverse the crystal in opposite directions, and the detector can measure their charge.

    One way of making the experiment more sensitive is to increase the efficiency with which it measures the charge of the electron-hole pairs. This approach has been the major factor in improving sensitivity until now. The predecessor of SuperCDMS SNOLAB—the SuperCDMS Soudan experiment, housed in the Soudan mine in Minnesota—required the charge from 70 electron-hole pairs to make a detection. SuperCDMS SNOLAB will require just half as much.

    “But that’s not the type of improvement we did here,” says Roger Romani, a recent undergraduate student in Blas Cabrera’s group at Stanford University and lead author of the Applied Physics Letters paper. The team found a different way to make the experiment even more sensitive.

    “In our approach, we counted the number of electron-hole pairs by looking at the vibrations they caused when traveling through our detector crystal,” he says.

    To do so, Cabrera’s team, joined by Pyle and Santa Clara University’s Betty Young, applied a high voltage that pushed the electron-hole pairs through the crystal. The acceleration led to the production of more vibrations, on top of those created without voltage.

    “As a result, our prototype is sensitive to a single electron-hole pair,” says Francisco Ponce, a postdoctoral researcher on Cabrera’s team. “Being able to measure a smaller charge gives us a higher resolution in our experiment and lets us detect particles with smaller mass.”
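The voltage-assisted amplification Romani and Ponce describe can be summarized with a simple energy budget: each electron-hole pair drifted across a potential difference of V volts deposits an additional e·V of phonon energy (1 eV per volt per pair) on top of the recoil's own vibrations. A minimal sketch of that budget; all numbers are illustrative assumptions, not values from the papers:

```python
# Sketch of the phonon-amplification scheme described above: drifting
# electron-hole pairs across a biased crystal converts electrical
# potential energy into extra phonons ("vibrations"), on top of those
# produced by the original recoil. Illustrative numbers only.

E_PER_PAIR_PER_VOLT_EV = 1.0  # one elementary charge gains 1 eV per volt

def total_phonon_energy_ev(recoil_ev, n_pairs, bias_v):
    """Recoil phonon energy plus amplification phonons from drifting
    n_pairs electron-hole pairs across a bias of bias_v volts."""
    return recoil_ev + n_pairs * E_PER_PAIR_PER_VOLT_EV * bias_v

# A single pair drifted across a hypothetical 100 V bias adds 100 eV of
# phonon signal -- far larger than a few-eV recoil, which is why the
# detector can resolve individual electron-hole pairs.
print(total_phonon_energy_ev(recoil_ev=3.0, n_pairs=1, bias_v=100.0))  # → 103.0
```

Because the amplified signal scales with the pair count rather than the tiny recoil energy, counting pairs one at a time becomes a matter of resolving well-separated energy steps.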

    This refrigeration unit in the Cabrera lab at Stanford keeps the experiment’s detector crystals at nearly absolute zero temperature. Credit: Dawn Harmer/SLAC

    First search for light dark matter

    The SuperCDMS collaboration has used the prototype detector for a first light dark matter search, and the outcome is promising.

    “The experiment demonstrates that we’re sensitive to a mass range in which we had no sensitivity at all before,” says Cabrera, former SuperCDMS SNOLAB project director from the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of the Department of Energy’s SLAC National Accelerator Laboratory and Stanford.

    Noah Kurinsky, a recent PhD student in Cabrera’s group, says, “Although the technology is in the early stages of its development, we’re able to set limits on the properties of light dark matter and are already competitive with other experiments that operate in the same mass range.”

    The result is even more compelling considering the experimental circumstances: Located in Cabrera’s lab in a basement at Stanford, the experiment wasn’t shielded from the unwanted cosmic-ray background (SuperCDMS SNOLAB will operate 6800 feet underground); it used a very small prototype crystal, limiting the size of the signal (SuperCDMS Soudan’s crystals were 1500 times heavier); and it ran for a relatively short time, limiting the amount of data for the analysis (XENON10 had 20,000 times more exposure).

    Eventually, the researchers want to scale up the size of their crystal and use it in a future generation of SuperCDMS SNOLAB. However, much more R&D work needs to be done before that can happen.

    At the moment, they’re working on improving the quality of the crystal and on better understanding its fundamental physics: for instance, how to deal with a quantum mechanical effect that randomly creates electron-hole pairs for no apparent reason and can cause a background signal that looks exactly like a signal from dark matter.

    The team is hopeful that their efforts will lead to new detector designs that continue to make SuperCDMS SNOLAB more powerful, Pyle says: “Then, we’ll have an even better shot at studying unknown dark matter territory.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.

  • richardmitnick 4:24 pm on September 20, 2018 Permalink | Reply
    Tags: , Evidence suggests that subducting slabs of the earth's crust may generate unusual features spotted near the core, Experiment used a diamond anvil cell which is essentially a tiny chamber located between two diamonds, ULVZs consist of chunks of a magnesium/iron oxide mineral called magnesiowüstite, ULVZs-ultra-low velocity zones   

    From Caltech: “Experiments using Diamond Anvils Yield New Insight into the Deep Earth” 

    From Caltech

    Robert Perkins
    (626) 395-1862

    The diamond anvil in which samples of magnesiowüstite were placed under extreme pressure and studied. Credit: Jennifer Jackson/Caltech

    Cross-section illustration shows slabs of the earth’s crust descending through the mantle and aligning magnesiowüstite in ultra-low velocity zones.

    Evidence suggests that subducting slabs of the earth’s crust may generate unusual features spotted near the core.

    Nearly 1,800 miles below the earth’s surface, there are large odd structures lurking at the base of the mantle, sitting just above the core. The mantle is a thick layer of hot, mostly plastic rock that surrounds the core; atop the mantle is the thin shell of the earth’s crust. On geologic time scales, the mantle behaves like a viscous liquid, with solid elements sinking and rising through its depths.

    The aforementioned odd structures, known as ultra-low velocity zones (ULVZs), were first discovered in 1995 by Caltech’s Don Helmberger. ULVZs can be studied by measuring how they alter the seismic waves that pass through them. But observing is not necessarily understanding. Indeed, no one is really sure what these structures are.

    ULVZs are so named because they significantly slow down the speeds of seismic waves; for example, they slow down shear waves (oscillating seismic waves capable of moving through solid bodies) by as much as 30 percent. ULVZs are several miles thick and can be hundreds of miles across. Several are scattered near the earth’s core roughly beneath the Pacific Rim. Others are clustered underneath North America, Europe, and Africa.
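The 30 percent figure has a concrete seismological consequence: a shear wave crossing a ULVZ arrives measurably late compared with one that misses it. A minimal sketch of that delay, using an assumed zone thickness and background shear speed (hypothetical values, not figures from the article):

```python
# Back-of-the-envelope travel-time delay from a 30 percent shear-wave
# slowdown. Zone thickness and background shear speed are assumed
# values for illustration only.

def traversal_delay_s(thickness_km, v_shear_km_s, slowdown_frac):
    """Extra one-way travel time for a shear wave crossing a slow zone."""
    v_slow = v_shear_km_s * (1.0 - slowdown_frac)
    return thickness_km / v_slow - thickness_km / v_shear_km_s

# A 10 km thick slow zone with a ~7 km/s background shear speed and a
# 30% slowdown delays the wave by roughly 0.6 seconds -- small, but
# well within what seismometers can resolve.
print(round(traversal_delay_s(10.0, 7.0, 0.30), 3))  # → 0.612
```

Delays of this order, mapped across many crossing wave paths, are what let seismologists outline the size and shape of individual ULVZs.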

    “ULVZs exist so deep in the inner earth that they are impossible to study directly, which poses a significant challenge when trying to determine what exactly they are,” says Helmberger, Smits Family Professor of Geophysics, Emeritus.

    Earth scientists at Caltech now say they know not just what ULVZs are made of, but where they come from. Using experimental methods at high pressures, the researchers, led by Professor of Mineral Physics Jennifer Jackson, have found that ULVZs consist of chunks of a magnesium/iron oxide mineral called magnesiowüstite that could have precipitated out of a magma ocean that is thought to have existed at the base of the mantle millions of years ago.

    The other leading theory of ULVZ formation had suggested that they consist of melted material, some of it possibly leaking up from the core.

    Jackson and her colleagues, who reported on their work in a recent paper in the Journal of Geophysical Research: Solid Earth, found evidence supporting the magnesiowüstite theory by studying the mineral’s elastic (or seismic) anisotropy; elastic anisotropy is a variation in the speed at which seismic waves pass through a mineral depending on their direction of travel.

    One particularly unusual characteristic of the region where ULVZs exist—the core-mantle boundary (CMB)—is that it is highly heterogeneous (nonuniform in character) as well as anisotropic. As a result, the speed at which seismic waves travel through the CMB varies based not only on the region that the waves are passing through but also on the direction in which those waves are moving. The propagation direction, in fact, can alter the speed of the waves by a factor of three.

    “Previously, scientists explained the anisotropy as the result of seismic waves passing through a dense silicate material. What we’re suggesting is that in some regions, it is largely due to the alignment of magnesiowüstite within ULVZs,” says Jackson.

    At the pressures and temperatures experienced at the earth’s surface, magnesiowüstite exhibits little anisotropy. However, Jackson and her team found that the mineral becomes strongly anisotropic when subjected to pressures comparable to those found in the lower mantle.

    Jackson and her colleagues discovered this by placing a single crystal of magnesiowüstite in a diamond anvil cell, which is essentially a tiny chamber located between two diamonds. When the rigid diamonds are compressed against one another, the pressure inside the chamber rises. Jackson and her colleagues then bombarded the sample with x-rays. The interaction of the x-rays with the sample acts as a proxy for how seismic waves will travel through the material. At a pressure of 40 gigapascals—comparable to pressures in the lower mantle—magnesiowüstite exhibited anisotropy even stronger than that inferred from seismic observations of ULVZs.

    To create features as large and as strongly anisotropic as ULVZs, only a small fraction of the magnesiowüstite crystals would need to be aligned in one specific direction, probably by pressure from a strong outside force. This could be explained by a subducting slab of the earth’s crust pushing its way to the CMB, Jackson says. (Subduction occurs at certain boundaries between earth’s tectonic plates, where one plate dives below another, triggering volcanism and earthquakes.)

    “Scientists are still in the process of discovering what happens to the crust when it’s subducted into the mantle,” Jackson says. “One possibility, which our research now seems to support, is that these slabs push all the way down to the core-mantle boundary and help to shape ULVZs.”

    Next, Jackson plans to explore the interaction of subducting slabs, ULVZs, and their seismic signatures. Interpreting these features will help place constraints on processes that happened early in Earth’s history, she says.

    The study is titled “Strongly Anisotropic Magnesiowüstite in Earth’s Lower Mantle.” Jackson collaborated with former Caltech postdoctoral researcher Gregory Finkelstein, now at the University of Hawai’i, who was the lead author of this study. Other colleagues include Wolfgang Sturhahn, visitor in geophysics at Caltech; as well as Ayman Said, Ahmet Alatas, Bogdan Leu, and Thomas Toellner of the Argonne National Laboratory in Illinois. This research was funded by the National Science Foundation and the W. M. Keck Institute for Space Studies.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

    Caltech campus
