Tagged: Caltech Toggle Comment Threads | Keyboard Shortcuts

  • richardmitnick 3:48 pm on October 17, 2019 Permalink | Reply
    Tags: A dense network of seismometers observed the seismic waves that radiated from the earthquake., Caltech, Magnitude 6.4 foreshock on July 4 2019, Magnitude 7.1 mainshock July 5 2019, NASA Find Web of Ruptures in Ridgecrest Quake, Ridgecrest Earthquake Sequence, Satellites observed the surface ruptures and associated ground deformation extending out over 60 miles (100 kilometers) in every direction from the rupture., Southern California's largest earthquake sequence in two decades, The complexity of the event is only clear because of the multiple types of scientific instruments used to study it., The event illustrates how little we still understand about earthquakes., The Ridgecrest ruptures ended just a few miles shy of the Garlock Fault a major east-west fault running more than 185 miles (300 kilometers) from the San Andreas Fault to Death Valley., The Ridgecrest sequence involved about 20 previously undiscovered smaller faults crisscrossing in a geometrically complex and geologically young fault zone.

    From NASA JPL-Caltech: “Caltech, NASA Find Web of Ruptures in Ridgecrest Quake”

    NASA JPL Banner

    From NASA JPL-Caltech

    October 17, 2019

    Esprit Smith
    Jet Propulsion Laboratory, Pasadena, Calif.
    818-354-4269
    esprit.smith@jpl.nasa.gov

    Robert Perkins
    Caltech, Pasadena, Calif.
    626-395-1862
    rperkins@caltech.edu

    A USGS Earthquake Science Center Mobile Laser Scanning truck scans the surface rupture near the zone of maximum surface displacement of the magnitude 7.1 earthquake that struck the Ridgecrest area. Credit: USGS / Ben Brooks

    A new study of Southern California’s largest earthquake sequence in two decades provides new evidence that large earthquakes can occur in a more complex fashion than commonly assumed. The analysis by geophysicists from Caltech and NASA’s Jet Propulsion Laboratory, both in Pasadena, California, documents a series of ruptures in a web of interconnected faults, with rupturing faults triggering other faults.

    The domino-like sequence of ruptures also increased strain on a nearby major fault, according to the study, which was published today in the journal Science.

    The Ridgecrest Earthquake Sequence began with a magnitude 6.4 foreshock on July 4, 2019, followed the next day by a magnitude 7.1 mainshock and more than 100,000 aftershocks. The sequence rattled most of Southern California, but the strongest shaking occurred about 120 miles (190 kilometers) north of Los Angeles, near the town of Ridgecrest.
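    For scale, the moment-magnitude scale is logarithmic. Under the standard energy-magnitude relation (not stated in the article, but widely used in seismology), radiated energy grows roughly as 10 to the power of 1.5 times the magnitude, so a quick calculation shows what the 0.7-unit gap between the two shocks means:

```python
# Radiated seismic energy scales roughly as 10**(1.5 * M)
# (standard energy-magnitude relation; an approximation).
mainshock, foreshock = 7.1, 6.4
energy_ratio = 10 ** (1.5 * (mainshock - foreshock))
print(round(energy_ratio))  # ≈ 11
```

    That is, the M7.1 mainshock released roughly 11 times the energy of the M6.4 foreshock.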

    “This ended up being one of the best-documented earthquake sequences in history,” said Zachary Ross, assistant professor of geophysics at Caltech and lead author of the Science paper. Ross developed an automated computer analysis of seismometer data that detected the enormous number of aftershocks with highly precise location information, and the JPL team members analyzed data from international radar satellites ALOS-2 (from the Japan Aerospace Exploration Agency, or JAXA) and Sentinel-1A/B (operated by the European Space Agency, or ESA) to map fault ruptures at Earth’s surface.

    JAXA ALOS-2 satellite, aka DAICHI-2

    ESA Sentinel-1B

    “I was surprised to see how much complexity there was and the number of faults that ruptured,” said JPL co-author Eric Fielding.

    The satellite and seismometer data together depict an earthquake sequence that is far more complex than those found in the models of many previous large seismic events. Major earthquakes are commonly thought to be caused by the rupture of a single long fault, such as the more than 800-mile-long (1,300-kilometer-long) San Andreas fault, with the maximum possible magnitude dictated primarily by the length of the fault. After a large 1992 earthquake in Landers, California, ruptured several faults, seismologists began rethinking that model.

    The Ridgecrest sequence involved about 20 previously undiscovered, smaller faults crisscrossing in a geometrically complex and geologically young fault zone.

    “We actually see that the magnitude 6.4 quake simultaneously broke faults at right angles to each other, which is surprising because standard models of rock friction view this as unlikely,” Ross said.

    All earthquakes of magnitude 2.5 and greater in the Ridgecrest area July 4 to Aug. 15, 2019, are shown as gray circles. Red stars mark the two largest. The Garlock Fault south of the cluster of earthquakes has slipped almost an inch since July. Credit: USGS

    The complexity of the event is only clear because of the multiple types of scientific instruments used to study it. Satellites observed the surface ruptures and associated ground deformation extending out over 60 miles (100 kilometers) in every direction from the rupture, while a dense network of seismometers observed the seismic waves that radiated from the earthquake. Together, these data allowed scientists to develop a model of how the faults slipped below the surface and the relationship between the major slipping faults and the significant number of small earthquakes occurring before, between and after the two largest shocks.

    The Ridgecrest ruptures ended just a few miles shy of the Garlock Fault, a major east-west fault running more than 185 miles (300 kilometers) from the San Andreas Fault to Death Valley. The fault has been relatively quiet for the past 500 years, but the strain placed on the Garlock Fault by July’s earthquake activity triggered it to start slowly moving, a process called fault creep. The fault has slipped 0.8 inches (2 centimeters) at the surface since July, the scientists said.

    The event illustrates how little we still understand about earthquakes. “It’s going to force people to think hard about how we quantify seismic hazard and whether our approach to defining faults needs to change,” Ross said. “We can’t just assume that the largest faults dominate the seismic hazard if many smaller faults can link up to create these major quakes.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    NASA JPL Campus

    Jet Propulsion Laboratory (JPL) is a federally funded research and development center and NASA field center located in the San Gabriel Valley area of Los Angeles County, California, United States. Although the facility has a Pasadena postal address, it is actually headquartered in the city of La Cañada Flintridge, on the northwest border of Pasadena. JPL is managed by the nearby California Institute of Technology (Caltech) for the National Aeronautics and Space Administration. The Laboratory’s primary function is the construction and operation of robotic planetary spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA’s Deep Space Network.

    Caltech Logo

    NASA image

     
  • richardmitnick 9:53 am on October 17, 2019 Permalink | Reply
    Tags: As baffling as the concept of two entangled particles may be the situation becomes even more complex when more particles are involved., At Caltech researchers are focusing their studies on many-body entangled systems., Caltech, Entanglement Passes Tests with Flying Colors, In 1935 Albert Einstein Boris Podolsky and Nathan Rosen published a paper on the theoretical concept of quantum entanglement which Einstein called “spooky action at a distance.”, The perplexing phenomenon of quantum entanglement is central to quantum computing; quantum networking; and the fabric of space and time., The phenomenon of entanglement was first proposed by Albert Einstein and colleagues in the 1930s.

    From Caltech: “Untangling Quantum Entanglement” 

    Caltech Logo

    From Caltech

    Caltech Magazine Fall 2019
    Whitney Clavin

    In Erwin Schrödinger’s famous thought experiment, a cat is trapped in a box with a bit of poison, the release of which is controlled by a quantum process. The cat therefore exists in a quantum state of being both dead and alive until somebody opens the box and finds the cat either dead or alive.

    The perplexing phenomenon of quantum entanglement is central to quantum computing, quantum networking, and the fabric of space and time.

    The famous “Jim twins,” separated soon after birth in the 1940s, seemed to live parallel lives even though they grew up miles apart in completely different families. When they were reunited at the age of 39, they discovered many similarities between their life stories, including the names of their sons, wives, and childhood pets, as well as their preferences for Chevrolet cars, carpentry, and more.

    A similar kind of parallelism happens at a quantum level, too. The electrons, photons, and other particles that make up our universe can become inextricably linked, such that the state observed in one particle will be identical for the other. That connection, known as entanglement, remains strong even across vast distances.

    “When particles are entangled, it’s as if they are born that way, like twins,” says Xie Chen, associate professor of theoretical physics at Caltech. “Even though they might be separated right after birth, [they’ll] still look the same. And they grow up having a lot of personality traits that are similar to each other.”

    The phenomenon of entanglement was first proposed by Albert Einstein and colleagues in the 1930s. At that time, many questioned the validity of entanglement, including Einstein himself. Over the years and in various experiments, however, researchers have generated entangled particles that have supported the theory. In these experiments, researchers first entangle two particles and then send them to different locations miles apart. The researchers then measure the state of one particle: for instance, the polarization (or direction of vibration) of a photon. If that entangled photon displays a horizontal polarization, then so too will its faithful partner.

    “It may be tempting to think that the particles are somehow communicating with each other across these great distances, but that is not the case,” says Thomas Vidick, a professor of computing and mathematical sciences at Caltech. “There can be correlation without communication.” Instead, he explains, entangled particles are so closely connected that there is no need for communication; they “can be thought of as one object.”

    As baffling as the concept of two entangled particles may be, the situation becomes even more complex when more particles are involved. In natural settings such as the human body, for example, not two but hundreds of molecules or even more become entangled, as they also do in various metals and magnets, making up an interwoven community. In these many-body entangled systems, the whole is greater than the sum of its parts.

    “The particles act together like a single object whose identity lies not with the individual components but in a higher plane. It becomes something larger than itself,” says Spyridon (Spiros) Michalakis, outreach manager of Caltech’s Institute for Quantum Information and Matter (IQIM) and a staff researcher. “Entanglement is like a thread that goes through every single one of the individual particles, telling them how to be connected to one another.”

    Associate Professor of Theoretical Physics Xie Chen specializes in the fields of condensed matter physics and quantum information.

    At Caltech, researchers are focusing their studies on many-body entangled systems, which they believe are critical to the development of future technologies and perhaps to cracking fundamental physics mysteries. Scientists around the world have made significant progress applying the principles of many-body entanglement to fields such as quantum computing, quantum cryptography, and quantum networks (collectively known as quantum information); condensed-matter physics; chemistry; and fundamental physics. Although the most practical applications, such as quantum computers, may still be decades off, according to John Preskill, the Richard P. Feynman Professor of Theoretical Physics at Caltech and the Allen V.C. Davis and Lenabelle Davis Leadership Chair of the Institute of Quantum Science and Technology (IQST), “entanglement is a very important part of Caltech’s future.”

    Entanglement Passes Tests with Flying Colors

    In 1935, Albert Einstein, Boris Podolsky, and Nathan Rosen published a paper on the theoretical concept of quantum entanglement, which Einstein called “spooky action at a distance.” The physicists described the idea, then argued that it posed a problem for quantum mechanics, rendering the theory incomplete. Einstein did not believe two particles could remain connected to each other over great distances; doing so, he said, would require them to communicate faster than the speed of light, something he had previously shown to be impossible.

    Today, experimental work leaves no doubt that entanglement is real. Physicists have demonstrated its peculiar effects across hundreds of kilometers; in fact, in 2017, a Chinese satellite named Micius sent entangled photons to three different ground stations, each separated by more than 1,200 kilometers, and broke the distance record for entangled particles.

    Entanglement goes hand in hand with another quantum phenomenon known as superposition, in which particles exist in two different states simultaneously. Photons, for example, can display simultaneously both horizontal and vertical states of polarization.

    Or, to simplify, consider two “entangled” quarters, each hidden under a cup. If two people, Bob and Alice, were each to take one of those quarters to a different room, the quarters would remain both heads and tails until one person lifted the cup and observed his or her quarter; at that point, it would randomly become either heads or tails. If Alice were to lift her cup first and her quarter was tails, then when Bob observed his quarter, it would also be tails. If the experiment were repeated and the coins covered once more, they would return to a state of superposition; Alice might lift her cup again and find her quarter heads this time, and Bob would then also find heads. Whether the first quarter is found to be heads or tails is entirely random.
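    The coin analogy above can be sketched as a toy Python model. To be clear, this is a purely classical stand-in of my own construction: a pre-agreed shared outcome (a "local hidden variable") reproduces the perfect correlation and the intrinsic randomness described in the text, but it cannot reproduce the stronger correlations real entangled particles show in Bell-test experiments.

```python
import random

def measure_entangled_quarters():
    """Toy model: neither quarter has a definite face until one is
    observed; the first observation fixes a random, shared outcome."""
    outcome = random.choice(["heads", "tails"])  # intrinsically random
    return outcome, outcome  # Alice's and Bob's results always agree

for _ in range(5):
    alice, bob = measure_entangled_quarters()
    print(alice, bob)  # always matching, but heads/tails varies run to run
```

    Each run the pair agrees, yet no single observation is predictable in advance, which is the combination of correlation and randomness the article describes.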

    Similarly, when a researcher entangles two photons and then sends each one in different directions under carefully controlled conditions, they will continue to be in a state of superposition, both horizontally and vertically polarized. Only when one of the photons is measured do both randomly adopt just one of the two possible polarization states.

    “Quantum correlations are deeply different than ordinary correlations,” says Preskill. “And randomness is the key. This spooky intrinsic randomness is actually what bothered Einstein. But it is essential to how the quantum world works.”

    “Scientists often use the word correlation to explain what is happening between these particles,” adds Oskar Painter, the John G Braun Professor of Applied Physics and Physics at Caltech. “But, actually, entanglement is the perfect word.”

    Entanglement to the Nth Degree

    Untangling the relationship between two entangled particles may be difficult, but the real challenge is to understand how hundreds of particles, if not more, can be similarly interconnected.

    According to Manuel Endres, an assistant professor of physics at Caltech, one of the first steps toward understanding many-body entanglement is to create and control it in the lab. To do this, Endres and his team use a brute force approach: they design and build laboratory experiments with the goal of creating a system of 100 entangled atoms.

    “This is fundamentally extremely difficult to do,” says Endres. In fact, he notes, it would be difficult even at a much smaller scale. “If I create a system where I generate, for instance, 20 entangled particles, and I send 10 one way and 10 another way, then I have to measure whether each one of those first 10 particles is entangled with each of the other set of 10. There are many different ways of looking at the correlations.”

    While the task of describing those correlations is difficult, describing a system of 100 entangled atoms with classical computer bits would be unimaginably hard. For instance, a complete classical description of all the quantum correlations among as many as 300 entangled particles would require more bits than the number of atoms in the visible universe. “But that’s the whole point and the reason we are doing this,” Endres says. “Things get so entangled that you need a huge amount of space to describe the information. It’s a complicated beast, but it’s useful.”
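    The exponential blow-up described above is easy to verify: a full classical description of n entangled qubits takes 2**n complex amplitudes, so 300 particles already outstrip the commonly cited rough estimate of 10^80 atoms in the visible universe.

```python
n = 300
amplitudes = 2 ** n               # state space doubles with each particle
atoms_in_universe = 10 ** 80      # commonly cited rough estimate

print(amplitudes > atoms_in_universe)  # True
print(len(str(amplitudes)) - 1)        # 2**300 is a ~10^90 number
```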

    “Generally, the number of parameters you need to describe the system is going to scale up exponentially,” says Vidick, who is working on mathematical and computational tools to describe entanglement. “It blows up very quickly, which, in general, is why it’s hard to make predictions or simulations, because you can’t even represent these systems in your laptop’s memory.”

    To solve that problem, Vidick and his group are working on coming up with computational representations of entangled materials that are simpler and more succinct than models that currently exist.

    “Quantum mechanics and the ideas behind quantum computing are forcing us to think outside the box,” he says.

    A Fragile Ecosystem

    Another factor in creating and controlling quantum systems has to do with their delicate nature. Like Mimosa pudica, a member of the pea family also known as the “sensitive plant,” which droops when its leaves are touched, entangled states can easily disappear, or collapse, when the environment changes even slightly. For example, the act of observing a quantum state destroys it. “You don’t want to even look at your experiment, or breathe on it,” jokes Painter. Adds Preskill, “Don’t turn on the light, and don’t even dare walk into the room.”

    The problem is that entangled particles become entangled with the environment around them quickly, in a matter of microseconds or faster. This then destroys the original entangled state a researcher might attempt to study or use. Even one stray photon flying through an experiment can render the whole thing useless.

    “You need to be able to create a system that is entangled only with itself, not with your apparatus,” says Endres. “We want the particles to talk to one another in a controlled fashion. But we don’t want them to talk to anything in the outside world.”

    In the field of quantum computing, this fragility is problematic because it can lead to computational errors. Quantum computers hold the promise of solving problems that classical computers cannot, including those in cryptography, chemistry, financial modeling, and more. Where classical computers use binary bits (either a “1” or a “0”) to carry information, quantum computers use “qubits,” which exist in states of “1” and “0” at the same time. As Preskill explains, the qubits in this mixed state, or superposition, would be both dead and alive, a reference to the famous thought experiment proposed by Erwin Schrödinger in 1935, in which a cat in a box is both dead and alive until the box is opened, and the cat is observed to be one or the other. What’s more, those qubits are all entangled. If the qubits somehow become disentangled from one another, the quantum computer would be unable to execute its computations.

    To address these issues, Preskill and Alexei Kitaev (Caltech’s Ronald and Maxine Linde Professor of Theoretical Physics and Mathematics and recipient of a 2012 Breakthrough Prize in Fundamental Physics), along with other theorists at Caltech, have devised a concept to hide the quantum information within a global entangled state, such that none of the individual bits have the answer. This approach is akin to distributing a code among hundreds of people living in different cities. No one person would have the whole code, so the code would be much less vulnerable to discovery.

    Manuel Endres, assistant professor of physics, here pictured with Adam Shaw (left) and Ivaylo Madjarov (right), uses laser-based techniques in his lab to create many-body entanglement.

    “The key to correcting errors in entangled systems is, in fact, entanglement,” says Preskill. “If you want to protect information from damage due to the extreme instability of superpositions, you have to hide the information in a form that’s very hard to get at,” he says. “And the way you do that is by encoding it in a highly entangled state.”

    Spreading the Entanglement

    At Caltech, this work on the development of quantum-computing systems is conducted alongside research into quantum networks, in which each quantum computer acts as a separate node, or connection point, for the whole system. Painter refers to this as “breaking a quantum computer into little chunks” and then connecting them together to create a distributed network. In this approach, the chunks would behave as if they were not separated. “The network would be an example of many-body entanglement, in which the bodies are the different nodes in the network,” says Painter.

    Quantum networks would enhance the power of quantum computers, notes Preskill.

    “We’d like to build bigger and bigger quantum computers to solve harder and harder problems. And it’s hard to build one piece of hardware that can handle a million qubits,” he says. “It’s easier to make modular components with 100 qubits each or something like that. But then, if you want to solve harder problems, you’ve got to get these different little quantum computers to communicate with one another. And that would be done through a quantum network.”

    Quantum networks could also be used for cryptography purposes, to make it safer to send sensitive information; they would also be a means by which to distribute and share quantum information in the same way that the World Wide Web works for conventional computers. Another future use might be in astronomy. Today’s telescopes are limited. They cannot yet see any detail on, for instance, the surface of distant exoplanets, where astronomers might want to look for signs of life or civilization. If scientists could combine telescopes into a quantum network, it “would allow us to use the whole Earth as one big telescope with a much-improved resolution,” says Preskill.

    “Up until about 20 years ago, the best way to explore entanglement was to look at what nature gave us and try to study the exotic states that emerged,” notes Painter. “Now our goal is to try to synthesize these systems and go beyond what nature has given us.”

    At the Root of Everything

    While entanglement is the key to advances in quantum-information sciences, it is also a concept of interest to theoretical physicists, some of whom believe that space and time itself are the result of an underlying network of quantum connections.

    “It is quite incredible that any two points in space-time, no matter how far apart, are actually entangled. Points in space-time that we consider closer to each other are just more entangled than those further apart,” says Michalakis.

    The link between entanglement and space-time may even help solve one of the biggest challenges in physics: establishing a unifying theory to connect the macroscopic laws of general relativity (which describe gravity) with the microscopic laws of quantum physics (which describe how subatomic particles behave).

    The quantum error-correcting schemes that Preskill and others study may play a role in this quest. With quantum computers, error correction ensures that the computers are sufficiently robust and stable. Something similar may occur with space-time. “The robustness of space may come from a geometry where you can perturb the system, but it isn’t affected much by the noise, which is the same thing that happens in stable quantum-computing schemes,” says Preskill.

    “Essentially, entanglement holds space together. It’s the glue that makes the different pieces of space hook up with one another,” he adds.

    At Caltech, the concept of entanglement connects various labs and buildings across campus. Theorists and experimentalists in computer science, quantum-information science, condensed-matter physics, and other fields regularly work across disciplines and weave together their ideas.

    “We bring our ideas from condensed-matter physics to quantum-information folks, and we say, ‘Hey, I have a material you can use for quantum computation,’” says Chen. “Sometimes we borrow ideas from them. Many of us from different fields have realized that we have to deal with entanglement head-on.”

    Preskill echoes this sentiment and is convinced entanglement is an essential part of Caltech’s future: “We are making investments and betting on entanglement as being one of the most important themes of 21st-century science.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

    Caltech campus

     
  • richardmitnick 1:37 pm on October 4, 2019 Permalink | Reply
    Tags: Caltech, KAGRA joins the hunt

    From Caltech: “KAGRA to Join LIGO and Virgo in Hunt for Gravitational Waves” 

    Caltech Logo

    From Caltech

    October 04, 2019
    Whitney Clavin
    wclavin@caltech.edu

    KAGRA gravitational wave detector, Kamioka mine in Kamioka-cho, Hida-city, Gifu-prefecture, Japan

    Japan’s Kamioka Gravitational-wave Detector (KAGRA) will soon team up with the National Science Foundation’s Laser Interferometer Gravitational-wave Observatory (LIGO) and Europe’s Virgo in the search for subtle shakings of space and time known as gravitational waves. Representatives for the three observatories signed a memorandum of agreement (MOA) about their collaborative efforts today, October 4. The agreement includes plans for joint observations and data sharing.

    “This is a great example of international scientific cooperation,” says Caltech’s David Reitze, executive director of the LIGO Laboratory. “Having KAGRA join our network of gravitational-wave observatories will significantly enhance the science in the coming decade.”

    “At present, KAGRA is in the commissioning phase, after the completion of its detector construction this spring. We are looking forward to joining the network of gravitational-wave observations later this year,” says Takaaki Kajita, principal investigator of the KAGRA project and co-winner of the 2015 Nobel Prize in Physics.

    In 2015, the twin detectors of LIGO, one in Washington and the other in Louisiana, made history by making the first direct detection of gravitational waves, a discovery that earned three of the project’s founders—Caltech’s Barry Barish, Ronald and Maxine Linde Professor of Physics, Emeritus, and Kip Thorne, Richard P. Feynman Professor of Theoretical Physics, Emeritus; and MIT’s Rainer Weiss, professor of physics, emeritus—the 2017 Nobel Prize in Physics. Since then, LIGO and its partner Virgo have identified more than 30 likely detections of gravitational waves, mostly from colliding black holes.

    “The more detectors we have in the global gravitational-wave network, the more accurately we can localize the gravitational-wave signals on the sky, and the better we can determine the underlying nature of cataclysmic events that produced the signals,” says Reitze.

    For instance, in 2017, Virgo and the two LIGO detectors were able together to localize a merger of two neutron stars to a patch of sky about 30 square degrees in size, or less than 0.1 percent of the sky. This was a small enough patch to enable ground-based and space telescopes to pinpoint the galaxy that hosted the collision and observe its explosive aftermath in light.
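    The quoted figures are straightforward to check: the whole sky spans about 41,253 square degrees, so a 30-square-degree patch is indeed under 0.1 percent of it.

```python
import math

# Whole-sky solid angle: 4*pi steradians, converted to square degrees
full_sky_deg2 = 4 * math.pi * (180 / math.pi) ** 2  # ≈ 41,253 deg^2
patch_deg2 = 30.0                                   # the 2017 localization
fraction = patch_deg2 / full_sky_deg2

print(f"{fraction:.3%}")  # 0.073% -- less than 0.1% of the sky
```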

    “These findings amounted to the first time a cosmic event had been observed in both gravitational waves and light and gave astronomers a first-of-its-kind look at the spectacular smashup of neutron stars,” says Virgo Collaboration spokesperson Jo van den Brand of Nikhef (the Dutch National Institute for Subatomic Physics) and Maastricht University in the Netherlands.

    With KAGRA joining the network, these gravitational-wave events will eventually be narrowed down to patches of sky that are only about 10 square degrees, greatly enhancing the ability of light-based telescopes to carry out follow-up observations. For its initial run, KAGRA will operate at sensitivities that are likely too low to detect gravitational waves, but with time, as the performance of the instrumentation is improved, it will reach sensitivities high enough to join the hunt.

    Having a fourth detector will also increase the overall detection rate, helping scientists to probe and understand some of the most energetic events in the universe.

    KAGRA is expected to come online for the first time in December of this year, joining the third observing run of LIGO and Virgo, which began on April 1, 2019.

    Caltech/MIT Advanced LIGO


    VIRGO Gravitational Wave interferometer, near Pisa, Italy

    The Japanese detector will pioneer two new approaches to gravitational-wave searches. It will be the first kilometer-scale gravitational-wave observatory to operate underground, which will dampen unwanted noise from winds and seismic activity; and it will be the first to use cryogenically chilled mirrors, a technique that cuts down on thermal noise.

    “These features could supply a very important direction for the future of gravitational-wave detectors with much higher sensitivities. Therefore, we should make every effort, for the global gravitational-wave community, to prove that the underground site and the cryogenic mirrors are useful,” says Kajita.

    The new MOA also includes the German-British GEO600 detector. Although GEO600 is not sensitive enough to detect gravitational-wave signals from distant black hole and neutron star collisions, it has been important for testing new technologies that will be key for improving future detectors. In addition, LIGO India is expected to join the network of observatories in 2025, signifying the beginning of a truly global effort to catch ripples in the fabric of space and time.

    Additional information about the gravitational-wave observatories:

    LIGO is funded by NSF and operated by Caltech and MIT, which conceived of LIGO and lead the project. Financial support for the Advanced LIGO project was led by the NSF with Germany (Max Planck Society), the U.K. (Science and Technology Facilities Council) and Australia (Australian Research Council-OzGrav) making significant commitments and contributions to the project. Approximately 1,300 scientists from around the world participate in the effort through the LIGO Scientific Collaboration, which includes the GEO Collaboration. A list of additional partners is available at https://my.ligo.org/census.php.

    The Virgo Collaboration is currently composed of approximately 480 scientists, engineers, and technicians from about 96 institutes from Belgium, France, Germany, Hungary, Italy, the Netherlands, Poland, and Spain. The European Gravitational Observatory (EGO) hosts the Virgo detector near Pisa in Italy, and is funded by Centre National de la Recherche Scientifique (CNRS) in France, the Istituto Nazionale di Fisica Nucleare (INFN) in Italy, and Nikhef in the Netherlands. A list of the Virgo Collaboration members can be found at http://public.virgo-gw.eu/the-virgo-collaboration/. More information is available on the Virgo website at http://www.virgo-gw.eu.

    The KAGRA project is supported by MEXT (Ministry of Education, Culture, Sports, Science, and Technology-Japan). KAGRA is hosted by the Institute for Cosmic Ray Research (ICRR), the University of Tokyo, and co-hosted by High Energy Accelerator Research Organization (KEK) and the National Astronomical Observatory of Japan (NAOJ). The KAGRA collaboration is composed of more than 360 individuals from more than 100 institutions from 15 countries/regions. The list of collaborators’ affiliations is available at http://gwwiki.icrr.u-tokyo.ac.jp/JGWwiki/KAGRA/KSC#KAGRAcollaborators. More information is available on the KAGRA website at https://gwcenter.icrr.u-tokyo.ac.jp/en/.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    The California Institute of Technology (commonly referred to as Caltech) is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with strong emphases on science and engineering. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. “The mission of the California Institute of Technology is to expand human knowledge and benefit society through research integrated with education. We investigate the most challenging, fundamental problems in science and technology in a singularly collegial, interdisciplinary atmosphere, while educating outstanding students to become creative members of society.”

    Caltech campus

     
  • richardmitnick 11:21 am on September 12, 2019 Permalink | Reply
    Tags: Architected metamaterials, Caltech

    From Caltech: “New Metamaterial Morphs Into New Shapes, Taking on New Properties” 

    Caltech Logo

    From Caltech

    September 11, 2019

    Robert Perkins
    (626) 395‑1862
    rperkins@caltech.edu

    1

    A newly developed type of architected metamaterial has the ability to change shape in a tunable fashion.

    While most reconfigurable materials can toggle between two distinct states, the way a switch toggles on or off, the new material’s shape can be finely tuned, adjusting its physical properties as desired. The material, which has potential applications in next-generation energy storage and bio-implantable micro-devices, was developed by a joint Caltech-Georgia Tech-ETH Zürich team in the lab of Julia R. Greer.

    Greer, the Ruben F. and Donna Mettler Professor of Materials Science, Mechanics and Medical Engineering in Caltech’s Division of Engineering and Applied Science, creates materials out of micro- and nanoscale building blocks that are arranged into sophisticated architectures that can be periodic, like a lattice, or non-periodic in a tailor-made fashion, giving them unusual physical properties.

    Most materials that are designed to change shape require a persistent external stimulus to change from one shape to another and stay that way: for example, they may be one shape when wet and a different shape when dry—like a sponge that swells as it absorbs water.

    By contrast, the new nanomaterial deforms through an electrochemically driven silicon-lithium alloying reaction, meaning that it can be finely controlled to attain any “in-between” states, remain in these configurations even upon the removal of the stimulus, and be easily reversed. Apply a little current, and a resulting chemical reaction changes the shape by a controlled, small degree. Apply a lot of current, and the shape changes substantially. Remove the electrical control, and the configuration is retained—just like tying off a balloon. A description of the new type of material was published online by the journal Nature on September 11.

    Defects and imperfections exist in all materials, and can often determine a material’s properties. In this case, the team chose to take advantage of that fact and build in defects to imbue the material with the properties they wanted.

    “The most intriguing part of this work to me is the critical role of defects in such dynamically responsive architected materials,” says Xiaoxing Xia, a graduate student at Caltech and lead author of the Nature paper.

    For the Nature paper, the team designed a silicon-coated lattice with microscale straight beams that bend into curves under electrochemical stimulation, taking on unique mechanical and vibrational properties. Greer’s team created these materials using an ultra-high-resolution 3D printing process called two-photon lithography. Using this novel fabrication method, they were able to build in defects in the architected material system, based on a pre-arranged design. In a test of the system, the team fabricated a sheet of the material that, under electrical control, reveals a Caltech icon.


    “This just further shows that materials are just like people, it’s the imperfections that make them interesting. I have always had a particular liking for defects, and this time Xiaoxing managed to first uncover the effect of different types of defects on these metamaterials and then use them to program a particular pattern that would emerge in response to electrochemical stimulus,” says Greer.

    A material with such a finely controllable ability to change shape has potential in future energy storage systems because it provides a pathway to create adaptive energy storage systems that would enable batteries, for example, to be significantly lighter, safer, and to have substantially longer lives, Greer says. Some battery materials expand when storing energy, creating a mechanical degradation due to stress from the repeated expanding and contracting. Architected materials like this one can be designed to handle such structural transformations.

    “Electrochemically active metamaterials provide a novel pathway for development of next generation smart batteries with both increased capacity and novel functionalities. At Georgia Tech, we are developing the computational tools to predict this complex coupled electro-chemo-mechanical behavior,” says Claudio V. Di Leo, assistant professor of aerospace engineering at the Georgia Institute of Technology.

    See the full article here .



     
  • richardmitnick 9:58 am on September 5, 2019 Permalink | Reply
    Tags: "Planetary Collisions Can Drop the Internal Pressures in Planets", A new study from Caltech shows that giant impacts can dramatically lower the internal pressure of planets- a finding that could significantly change the current model of planetary formation., After each impact the metal absorbed small amounts of other elements from the mantle and then sank to the core – dragging those elements with it., , As such the chemical composition of the mantle today records the mantle pressure during the planet’s formation., As the proto-Earth grew each object that collided with it delivered metal into the mantle., , , , Caltech, , If true the finding could help reconcile a long-standing contradiction between the geochemistry of the earth's mantle and physical models of planet formation., Planetary systems typically begin as a disk of dust that slowly accretes into rocky bodies., Researchers present a new paradigm for understanding how pressures in planets evolve., The amount of each element that dissolved into the metal was determined in part by the earth's internal pressures., The end of the main stage of this process is characterized by high-energy collisions between planet-sized bodies as they coalesce to form the final planets., The impacts could cause random fluctuations in core and mantle pressures that would explain some puzzling geochemical signatures in Earth’s mantle., The pressure right after an impact like the one that is thought to have formed the moon could have been half of that of present-day Earth., The shock energy of these impacts can vaporize significant portions of a planet., We have no direct observations of the growth of Earth-like planets.   

    From Caltech: “Planetary Collisions Can Drop the Internal Pressures in Planets” 

    Caltech Logo

    From Caltech

    September 04, 2019

    Robert Perkins
    (626) 395‑1862
    rperkins@caltech.edu

    1
    Credit: California Institute of Technology

    Researchers present a new paradigm for understanding how pressures in planets evolve.

    A new study from Caltech shows that giant impacts can dramatically lower the internal pressure of planets, a finding that could significantly change the current model of planetary formation.

    The impacts, such as the one that is thought to have caused the formation of the earth’s moon roughly 4.5 billion years ago, could cause random fluctuations in core and mantle pressures that would explain some puzzling geochemical signatures in Earth’s mantle.

    “Previous studies have incorrectly assumed that a planet’s internal pressure is simply a function of the mass of the planet, and so it increases continuously as the planet grows. What we’ve shown is that the pressure can temporarily change after a major impact, followed by a longer term increase in pressure as the post-impact body recovers. This finding has major implications for the planet’s chemical structure and subsequent evolution,” says Simon Lock, postdoctoral researcher at Caltech and lead author of a paper explaining the new model that was published by Science Advances on September 4.

    Lock authored the paper with colleague Sarah Stewart (PhD ’02), professor of planetary science at the University of California, Davis, a 2018 MacArthur Fellow, and an alumna of the Caltech Division of Geological and Planetary Sciences.

    Planetary systems typically begin as a disk of dust that slowly accretes into rocky bodies. The end of the main stage of this process is characterized by high-energy collisions between planet-sized bodies as they coalesce to form the final planets.

    The shock energy of these impacts can vaporize significant portions of a planet and even, as is thought to have happened with the impact that formed the moon, temporarily turn the two colliding bodies into a rotating donut of planetary material known as a “synestia,” which later cools back into one or more spherical bodies.

    Lock and Stewart used computational models of giant impacts and planetary structures to simulate collisions that formed bodies with masses of between 0.9 and 1.1 Earth masses and found that, immediately after a collision, their internal pressures were much lower than had been expected. They found that the decrease in pressure was due to a combination of factors: the rapid rotation imparted by the collision, which generated a centrifugal force that acted against gravity, in essence pushing material away from the spin axis; and the low density of the hot, partially vaporized body.

    “We have no direct observations of the growth of Earth-like planets. It turns out that the physical properties of a planet can vary wildly during their growth by collisions. Our new view of planet formation is much more variable and energetic than previous models which opens the door for new explanations of previous data,” Stewart says.

    The final result is that major impacts can lower a planet’s internal pressure significantly. The pressure right after an impact like the one that is thought to have formed the moon could have been half of that of present-day Earth.

    If true, the finding could help reconcile a long-standing contradiction between the geochemistry of the earth’s mantle and physical models of planet formation.

    As the proto-Earth grew, each object that collided with it delivered metal into the mantle. After each impact, the metal absorbed small amounts of other elements from the mantle, and then sank to the core – dragging those elements with it. The amount of each element that dissolved into the metal was determined, in part, by the earth’s internal pressures. As such, the chemical composition of the mantle today records the mantle pressure during the planet’s formation.

    Studies of the metals in the earth’s mantle today indicate that this absorption process occurred at pressures found in the middle of the mantle today. However, giant impact models show that such impacts melt most of the mantle, and so the mantle should have recorded a much higher pressure – equivalent to what we now see just above the core. This anomaly between the geochemical observation and physical models is one that scientists have long sought to explain.

    By showing that the pressures after giant impacts were lower than previously thought, Lock and Stewart may have found the physical mechanism to solve this conundrum.

    Next, Lock and Stewart plan to use their results to calculate how stochastic changes in pressure during formation affect the chemical structure of planets. Lock says that they will also continue to study how planets recover from the trauma of giant impacts. “We have shown that the pressures in planets can increase dramatically as a planet recovers, but what effect does that have on how the mantle solidifies or how Earth’s first crust formed? This is a whole new area that has yet to be explored,” he says.

    Funding for this research came from the NASA Earth and Space Science Fellowship, the Department of Energy National Nuclear Security Administration, Harvard University’s Earth and Planetary Sciences Department and Caltech’s Division of Geological and Planetary Sciences.

    See the full article here .



     
  • richardmitnick 2:08 pm on August 28, 2019 Permalink | Reply
    Tags: "Newly Discovered Giant Planet Slingshots Around Its Star", Caltech, HR 5183 b

    From Caltech: “Newly Discovered Giant Planet Slingshots Around Its Star” 

    Caltech Logo

    From Caltech

    August 27, 2019
    Written by Elise Cutts

    Contact
    Whitney Clavin
    (626) 395‑1856
    wclavin@caltech.edu

    Astronomers have discovered a planet three times the mass of Jupiter that travels on a long, egg-shaped path around its star. If this planet were somehow placed into our own solar system, it would swing from within our asteroid belt to out beyond Neptune. Other giant planets with highly elliptical orbits have been found around other stars, but none of those worlds were located at the very outer reaches of their star systems like this one.

    1
    This illustration compares the eccentric orbit of HR 5183 b to the more circular orbits of the planets in our own solar system. Credit: W. M. Keck Observatory/Adam Makarenko

    “This planet is unlike the planets in our solar system, but more than that, it is unlike any other exoplanets we have discovered so far,” says Sarah Blunt, a Caltech graduate student and first author on the new study publishing in The Astronomical Journal. “Other planets detected far away from their stars tend to have very low eccentricities, meaning that their orbits are more circular. The fact that this planet has such a high eccentricity speaks to some difference in the way that it either formed or evolved relative to the other planets.”

    The planet was discovered using the radial velocity method, a workhorse of exoplanet discovery that detects new worlds by tracking how their parent stars “wobble” in response to gravitational tugs from those planets.

    Radial Velocity Method-Las Cumbres Observatory

    Radial velocity image via SuperWASP: http://www.superwasp.org/exoplanets.htm

    However, analyses of these data usually require observations taken over a planet’s entire orbital period. For planets orbiting far from their stars, this can be difficult: a full orbit can take tens or even hundreds of years.

    The California Planet Search, led by Caltech Professor of Astronomy Andrew W. Howard, is one of the few groups that watches stars over the decades-long timescales necessary to detect long-period exoplanets using radial velocity. The data needed to make the discovery of the new planet were provided by the two observatories used by the California Planet Search—the Lick Observatory in Northern California and the W. M. Keck Observatory in Hawaii—and by the McDonald Observatory in Texas.

    UCSC Lick Observatory, Mt Hamilton, in San Jose, California, Altitude 1,283 m (4,209 ft)

    Keck Observatory, operated by Caltech and the University of California, Maunakea Hawaii USA, 4,207 m (13,802 ft)

    U Texas at Austin McDonald Observatory, Altitude 2,070 m (6,790 ft)

    The astronomers have been watching the planet’s star, called HR 5183, since the 1990s, but do not have data corresponding to one full orbit of the planet, called HR 5183 b, because it circles its star roughly every 45 to 100 years. The team instead found the planet because of its strange orbit.
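    That decades-long period follows directly from Kepler’s third law: in solar units, P² = a³/M. A minimal sketch of the arithmetic, assuming a roughly solar-mass star for HR 5183 (an assumption for illustration, not a figure from the article):

    ```python
    import math

    def orbital_period_years(a_au: float, m_star_solar: float = 1.0) -> float:
        """Kepler's third law in solar units: P^2 = a^3 / M, with a in AU,
        M in solar masses, and P in years."""
        return math.sqrt(a_au**3 / m_star_solar)

    # The study's title places HR 5183 b near 18 AU; for a roughly
    # solar-mass star that implies a period of about 76 years.
    print(round(orbital_period_years(18.0), 1))  # 76.4
    ```

    That estimate lands squarely inside the 45-to-100-year range quoted above; the spread in the published range reflects the remaining uncertainty in the orbit and stellar mass.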

    “This planet spends most of its time loitering in the outer part of its star’s planetary system in this highly eccentric orbit, then it starts to accelerate in and does a slingshot around its star,” explains Howard. “We detected this slingshot motion. We saw the planet come in and now it’s on its way out. That creates such a distinctive signature that we can be sure that this is a real planet, even though we haven’t seen a complete orbit.”

    The new findings show that it is possible to use the radial velocity method to make detections of other far-flung planets without waiting decades. And, the researchers suggest, looking for more planets like this one could illuminate the role of giant planets in shaping their solar systems.

    Planets take shape out of disks of material left over after stars form. That means that planets should start off in flat, circular orbits. For the newly detected planet to be on such an eccentric orbit, it must have gotten a gravitational kick from some other object. The most plausible scenario, the researchers propose, is that the planet once had a neighbor of similar size. When the two planets got close enough to each other, one pushed the other out of the solar system, forcing HR 5183 b into a highly eccentric orbit.

    “This newfound planet basically would have come in like a wrecking ball,” says Howard, “knocking anything in its way out of the system.”

    This discovery demonstrates that our understanding of planets beyond our solar system is still evolving. Researchers continue to find worlds that are unlike anything in our solar system or in solar systems we have already discovered.

    “Copernicus taught us that Earth is not the center of the solar system, and as we expanded into discovering other solar systems of exoplanets, we expected them to be carbon copies of our own solar system,” Howard explains, “But it’s just been one surprise after another in this field. This newfound planet is another example of a system that is not the image of our solar system but has remarkable features that make our universe incredibly rich in its diversity.”

    The study, titled “Radial Velocity of an Eccentric Jovian World Orbiting at 18 AU,” was funded by the National Science Foundation, NASA, Tennessee State University and the State of Tennessee, the Beatrice Watson Parrent Fellowship, the Trottier Family Foundation, and Caltech. Other Caltech authors include: BJ Fulton, a staff scientist at IPAC; former postdoctoral scholar Sean Mills (BS ’12); Erik Petigura, a former postdoctoral scholar now based at UCLA; and Arpita Roy, R.A. & G.B. Millikan Postdoctoral Scholar in Astronomy.

    See the full article here .



     
  • richardmitnick 12:42 pm on August 6, 2019 Permalink | Reply
    Tags: "Ghosts of Ancient Explosions Live on in Stars Today", Caltech

    From Caltech: “Ghosts of Ancient Explosions Live on in Stars Today” 

    Caltech Logo

    From Caltech

    August 05, 2019

    Contact
    Lori Dajose
    (626) 395‑1217
    ldajose@caltech.edu

    The chemical composition of certain stars gives clues about their predecessors, stars that have long since exploded and faded.

    1
    Image of a Type Ia supernova. Credit: Zwicky Transient Facility

    Zwicky Transient Facility (ZTF) instrument installed on the 1.2m diameter Samuel Oschin Telescope at Palomar Observatory in California. Courtesy Caltech Optical Observatories

    Edwin Hubble at the Caltech Palomar Samuel Oschin 48-inch Telescope (credit: Emilio Segre Visual Archives/AIP/SPL)

    Caltech Palomar Intermediate Palomar Transient Factory telescope at the Samuel Oschin Telescope at Palomar Observatory,located in San Diego County, California, United States, altitude 1,712 m (5,617 ft)

    Caltech Palomar Samuel Oschin 48 inch Telescope, located in San Diego County, California, United States, altitude 1,712 m (5,617 ft)

    When small, dense stars called white dwarfs explode, they produce bright, short-lived flares called Type Ia supernovae. These supernovae are informative cosmological markers for astronomers—for example, they were used to prove that the universe is accelerating in its expansion.

    White dwarfs are not all the same, ranging from half of the mass of our sun to almost 50 percent more massive than our sun. Some explode in Type Ia supernovae; others simply die quietly. Now, by studying the “fossils” of long-exploded white dwarfs, Caltech astronomers have found that early on in the universe, white dwarfs often exploded at lower masses than they do today. This discovery indicates that a white dwarf could explode from a variety of causes, and does not necessarily have to reach a critical mass before exploding.

    A paper about the research, led by Evan Kirby, assistant professor of astronomy, appears in The Astrophysical Journal.

    Near the end of their lives, a majority of stars like our sun dwindle down into dim, dense white dwarfs, with all their mass packed into a space about the size of Earth. Sometimes, white dwarfs explode in what’s called a Type Ia (pronounced one-A) supernova.

    It is uncertain why some white dwarfs explode while others do not. In the early 1930s, the astrophysicist Subrahmanyan Chandrasekhar calculated that if a white dwarf had more than 1.4 times the mass of our sun, it would explode in a Type Ia supernova. This mass was dubbed the Chandrasekhar mass. Though Chandrasekhar’s calculations gave one explanation for why more massive white dwarfs explode, they did not explain why other white dwarfs of less than 1.4 solar masses also explode.

    Studying Type Ia supernovae is a time-sensitive process; they flare into existence and fade back into darkness all within a few months. To study long-gone supernovae and the white dwarfs that produced them, Kirby and his team use a technique colloquially called galactic archaeology.

    Galactic archaeology is the process of looking for chemical signatures of long-past explosions in other stars. When a white dwarf explodes in a Type Ia supernova, it pollutes its galactic environment with elements forged in the explosion—heavy elements like nickel and iron. The more massive a star is when it explodes, the more heavy elements will be formed in the supernova. Then, those elements become incorporated into any newly forming stars in that region. Just as fossils today give clues about animals that have long ceased to exist, the amounts of nickel in stars illustrates how massive their long-exploded predecessors must have been.

    Using the Keck II telescope, Kirby and his team first looked at certain ancient galaxies, those that ran out of material to form stars in the first billion years of the universe’s life.

    Keck 2 telescope Maunakea Hawaii USA, 4,207 m (13,802 ft)

    Most of the stars in these galaxies, the team found, had relatively low nickel content. This meant that the exploded white dwarfs that gave them that nickel must have been relatively low mass—about as massive as the sun, lower than the Chandrasekhar mass.

    Yet, the researchers found that the nickel content was higher in more recently formed galaxies, meaning that as more time went by since the Big Bang, white dwarfs had begun to explode at higher masses.

    “We found that, in the early universe, white dwarfs were exploding at lower masses than later in the universe’s lifetime,” says Kirby. “It’s still unclear what has driven this change.”

    Understanding the processes that result in Type Ia supernovae is important because the explosions themselves are useful tools for making measurements of the universe. Regardless of how they exploded, most Type Ia supernovae follow a well-characterized relationship between their luminosity and the time it takes for them to fade.

    “We call Type Ia supernovae ‘standardizable candles.’

    Standard candles used to measure the age and distance of the universe from supernovae. Credit: NASA

    If you look at a candle at a distance, it will look dimmer than when it’s up close. If you know how bright it is supposed to be up close, and you measure how bright it is at a distance, you can calculate that distance,” says Kirby. “Type Ia supernovae have been very useful in calculating things like the rate of expansion of the universe. We use them all the time in cosmology. So, it’s important to understand where they come from and characterize the white dwarfs that generate these explosions.”
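    Kirby’s candle analogy is just the inverse-square law: the measured flux is F = L/(4πd²), so knowing the intrinsic luminosity L and measuring F gives the distance d = √(L/(4πF)). A minimal illustration with made-up household numbers (not astronomical values):

    ```python
    import math

    def distance(luminosity_watts: float, flux_w_per_m2: float) -> float:
        """Invert the inverse-square law F = L / (4*pi*d^2) for distance d in meters."""
        return math.sqrt(luminosity_watts / (4 * math.pi * flux_w_per_m2))

    # A 100 W bulb whose light arrives at the flux produced 10 m away
    # must be 10 m away.
    d = distance(100.0, 100.0 / (4 * math.pi * 10.0**2))
    print(round(d, 6))  # 10.0
    ```

    For supernovae the same inversion is done in astronomical units (absolute vs. apparent magnitude), but the reasoning is identical: a known intrinsic brightness plus a measured apparent brightness yields a distance.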

    The next steps are to study elements other than nickel, in particular, manganese. Manganese production is very sensitive to the mass of the supernova that produces it, and therefore gives a precise way to validate the conclusions drawn by the nickel content.

    The paper is titled Evidence for Sub-Chandrasekhar Type Ia Supernovae from Stellar Abundances in Dwarf Galaxies. In addition to Kirby, co-authors are Justin L. Xie and Rachel Guo of Harvard University, Caltech graduate student Mithi A. C. de los Reyes, Maria Bergemann and Mikhail Kovalev of the Max Planck Institute for Astronomy, Ken J. Shen of University of California Berkeley, and Anthony L. Piro and Andrew McWilliam of the Observatories of the Carnegie Institution for Science. Funding was provided by the National Science Foundation, a Cottrell Scholar award from the Research Corporation for Science Advancement, and Caltech.

    See the full article here .



     
  • richardmitnick 3:41 pm on July 24, 2019 Permalink | Reply
    Tags: "Seismologists Monitor Ridgecrest Aftershocks Using Novel Fiber Optic Network", Caltech, Ridgecrest earthquakes   

    From Caltech: “Seismologists Monitor Ridgecrest Aftershocks Using Novel Fiber Optic Network” 

    Caltech Logo

    From Caltech


    July 24, 2019

    Written by
    Robert Perkins

    Contact
    Emily Velasco
    626‑395‑6487
    evelasco@caltech.edu

    Seismologists from Caltech are using fiber optic cables to monitor and record the aftershocks from the 2019 Ridgecrest earthquake sequence in greater detail than previously possible. Thousands of tiny aftershocks are occurring throughout the region each day, an unprecedented number of which can now be tracked and studied.

    The nascent technique involves shooting a beam of light down a “dark,” or unused, fiber optic cable. When the beam hits tiny imperfections in the cable, a minuscule portion of the light is reflected back and recorded.

    In this manner, each imperfection acts as a trackable waypoint along the fiber optic cable, which is typically buried several feet beneath the earth’s surface. Seismic waves moving through the ground cause the cable to expand and contract minutely, which changes the travel time of light to and from these waypoints. By monitoring these changes, seismologists can observe the motion of seismic waves.

    “These imperfections occur frequently enough that every few meters of fiber act like an individual seismometer. For the 50 kilometers of fiber optic cable in three different locations we’ve tapped into for the project, it’s roughly akin to deploying over 6,000 seismometers in the area,” says Zhongwen Zhan, assistant professor of geophysics, who is leading the effort.
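    The sensor density Zhan quotes can be sanity-checked with simple arithmetic: 50 kilometers of fiber behaving like roughly 6,000 seismometers implies one sensing channel every 8 meters or so of cable, consistent with “every few meters of fiber act like an individual seismometer.”

    ```python
    # Back-of-the-envelope check of the quoted sensor density:
    # 50 km of fiber acting as ~6,000 seismometers works out to
    # one sensing channel roughly every 8 meters of cable.
    fiber_length_m = 50_000
    channel_count = 6_000
    spacing_m = fiber_length_m / channel_count
    print(round(spacing_m, 1))  # 8.3
    ```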

    The project was launched just days after the two large earthquakes struck the Ridgecrest area. Zhan called around, searching for unused fiber optic cable that would be long enough and close enough to the seismically active region to be useful. Eventually, the manager of the Inyokern Airport, Scott Seymour (who had also offered the use of the fiber network around the airport), connected Zhan with Michael Ort, the chief executive officer of the California Broadband Cooperative’s Digital 395 project. The project aims to build a new 583-mile fiber network that mainly follows U.S. Route 395, which runs north-south along the eastern side of the Sierra Nevada, passing near Ridgecrest.

    Digital 395 has offered to let Zhan use three segments of its fiber optic cable: 10 kilometers from Ridgecrest to the west, and two sections both to the north and south of Olancha, near which there was intense seismic activity triggered by the M7.1 quake. “The July 4–5 earthquakes created a 50-kilometer-long rupture that has triggered aftershocks in separate regions that we’ll be able to study,” Zhan says. Meanwhile, the sensing instruments that Zhan connected to the fiber optic cable for the project were provided by manufacturers OptaSense and Silixa.

    Zhan’s team also deployed farther south at the Goldstone Deep Space Communications Complex, a NASA facility run by JPL. (JPL is managed by Caltech for NASA.) The site, which is close to Fort Irwin, hosts a dark fiber network that Zhan previously used to test the fiber optic seismic monitoring technology. “It’s only about 70 kilometers from Ridgecrest, so this would be a good time to go back,” he says.

    Immediately after the earthquake, the USGS also deployed temporary seismometers around Ridgecrest to monitor aftershocks. Zhan says his fiber optic system will complement rather than duplicate that effort. The temporary seismometers are able to cover a wider area but produce sparser readings, he says. They also tend to be battery operated and nonnetworked, meaning that the data they record will not be available until the instruments are retrieved once their batteries run down, while a portion of the data from Zhan’s fiber optic system will be available immediately.

    Though the Ridgecrest seismic monitoring will be temporary, Zhan and his colleagues hope to establish similar systems permanently in key cities throughout Southern California. This work began with a pilot project in 2018 involving the Caltech Seismological Laboratory and the City of Pasadena to use a portion of the city’s dark fiber to monitor temblors in the area.

    “The combination of the Pasadena fiber array and the Ridgecrest deployments has provided two important science firsts: the Pasadena array is the first example of a permanent earthquake monitoring system using fiber optics, while the Ridgecrest deployments, also a first in earthquake monitoring, give us a glimpse of what we could see if we were able to continuously light up dark fiber throughout Southern California,” says Michael Gurnis, John E. and Hazel S. Smits Professor of Geophysics and director of the Caltech Seismological Laboratory. “They allow us to observe and understand how seismic waves reverberate through our complex mountains and basins following a major temblor.”

    The information collected from the Ridgecrest fiber network will help seismologists learn more about how earthquake sequences decay and migrate through the earth, and will offer more details about how seismic waves move throughout the region around Ridgecrest.

    Processing the large volume of data that the fiber optic system is gathering will take months, even using automated systems, says Zhan, who estimates that his team will receive on the order of 10 to 20 terabytes of data over the next few months.

    “This will keep us busy for a while, but in the end, we’ll have a clearer picture of how this sequence evolved than would otherwise be possible,” Zhan says.

    This research is funded by a National Science Foundation CAREER Award, Caltech trustee Li Lu, and the Caltech-JPL President’s and Director’s Research and Development Fund.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition


     
  • richardmitnick 11:33 am on July 19, 2019 Permalink | Reply
    Tags: Caltech, Dr. Jennifer Andrews, , , , , The Seismo Lab at Caltech,   

    From Caltech: Women in STEM “What is it Like to be a Caltech Seismologist During a Big Quake?” Dr. Jennifer Andrews 

    Caltech Logo

    From Caltech

    July 18, 2019
    Robert Perkins
    (626) 395‑1862
    rperkins@caltech.edu

    When an earthquake strikes, seismologists at Caltech’s Seismological Laboratory spring into action.

    Dr. Jennifer Andrews

    An arm of Caltech’s Division of Geological and Planetary Sciences (GPS), the Seismo Lab is home to dozens of seismologists who collaborate with the United States Geological Survey (USGS) to operate one of the largest seismic networks in the nation. Together, they analyze data to provide the public with information about where a quake occurred and how big it was. That information not only helps first responders but also feeds into the scientific understanding of earthquakes and of when and where the next big quakes are likely to strike.

    After the two largest Ridgecrest earthquakes on July 4 and 5 (Magnitude 6.4 and 7.1, respectively), Caltech staff seismologist Jen Andrews was part of the Seismo Lab team that rushed to respond. Recently, she described that experience.

    Where were you when the earthquakes hit?

    For Thursday’s quake, I was at home in my shower. I didn’t even realize at the time that it was a quake. But when I got out and looked at my computer, I saw the report. Then the phone rang, and it was Egill [Hauksson, research professor of geophysics at Caltech], saying it was time to go to work. It was all hands on deck.

    For Friday’s quake, I was at the ballet at the Dorothy Chandler Pavilion in Downtown Los Angeles. They’d just finished act 1 and were in intermission, so fortunately no dancers were on stage to be knocked off their feet. I was in the balcony, so the movement I felt was probably amplified by the height (and also the soft sediment beneath Downtown). The chandeliers were swaying, but no one panicked. As soon as I felt it shake, I started counting. We felt it as a roll, so I knew the epicenter wasn’t right beneath us. Once I reached 20 seconds, I knew this was a big earthquake, even bigger than the first one. I immediately got in a taxi and headed straight to campus.

    What did you do next?

    Here at the Seismo Lab, it’s our responsibility to verify that all of the info we’re putting out about earthquakes—the locations and magnitudes, for example—is correct. We’re responsible for getting info about the origin out within two minutes of the shaking, so we have fully automated systems that send updates to the National Earthquake Information Center right away. All of that happens without anyone touching anything, before we can even get to our desks. But once we get there, we look at the waveforms and make sure that we’re correctly identifying the P and S waves. [During an earthquake, several types of seismic waves radiate out from the quake’s epicenter, including compressional waves (or P-waves), transverse waves (or S-waves), and surface waves.] We also know the speed at which seismic waves should travel, so we can use that to make sure that we’re correctly identifying where the quake originated. It turns out that the automatic systems did a brilliant job of getting most of the information correct.
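    The distance check Andrews describes relies on the gap between P- and S-wave arrivals. A minimal sketch, using generic crustal wave speeds (the values below are illustrative assumptions, not the Seismo Lab’s velocity model):

```python
# Convert a measured S-minus-P arrival-time difference into an epicentral
# distance. Because both waves travel the same path at different speeds,
# sp = d/vs - d/vp, which can be solved for d.

VP_KM_S = 6.0   # assumed P-wave speed in the shallow crust
VS_KM_S = 3.5   # assumed S-wave speed

def distance_from_sp_time(sp_seconds: float) -> float:
    """Distance (km) implied by the S-P arrival-time difference."""
    return sp_seconds / (1.0 / VS_KM_S - 1.0 / VP_KM_S)

print(round(distance_from_sp_time(10.0), 1))  # 84.0 km with these speeds
```

This is also why counting seconds during the initial “roll,” as Andrews did at the ballet, gives a seismologist a rough sense of how far away the epicenter is.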

    What is it like to be in the Seismo Lab after a big earthquake?

    It’s very busy. There’s a lot of people: seismologists, news reporters, even curious students and people who are on campus who just want to know what’s going on. Meanwhile, we have a lot of issues to deal with: we have seismologists on the phone with state representatives and others speaking to members of the press, while still others are trying to process data coming in from seismometers. Within a few hours of a quake, the USGS tries to figure out who’s going out to the location of the earthquake, and what equipment they’ll be taking. For the Ridgecrest quakes, they did flyovers in a helicopter looking for ruptures, and then sent people on the ground to measure the rupture. They then deployed additional seismometers so that we could get an even clearer picture of any aftershocks.

    How long after the earthquake will things stay busy for you?

    The media attention relaxes after a few hours or days, but I’m going to be looking at the data we gathered from these quakes for a long time. I was here every day over the holiday weekend and the following week working on it. It could take months or even years for our group to process all the data.

    Do you learn more from big earthquakes like these than you do from little ones?

    You learn different things. The data will be incorporated into earthquake hazard models, though likely will not make big changes. But these quakes in particular were interesting, as two perpendicular faults were involved. We can study the rupture dynamics, which you can’t resolve in smaller quakes. Also, having two strong quakes caused variations in fault slip and ground motion that will be important to study and understand.

    See the full article here.

    Earthquake Alert


    Earthquake Network is a research project that aims to develop and maintain a crowdsourced, smartphone-based earthquake warning system at a global level. Smartphones made available by the population detect earthquake waves using their on-board accelerometers. When an earthquake is detected, a warning is issued to alert people not yet reached by the quake’s damaging waves.

    The project started on January 1, 2013 with the release of the homonymous Android application Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network

    QCN bloc

    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide a better understanding of earthquakes and give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).
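    QCN’s actual trigger code isn’t described here, but a standard seismological technique for separating sudden shaking from background noise is the short-term-average / long-term-average (STA/LTA) ratio. A toy sketch (illustrative only, not QCN’s implementation):

```python
# Toy STA/LTA trigger: compare the average |amplitude| over a short recent
# window (STA) to a long background window (LTA). A sudden jump in the ratio
# suggests an impulsive event rather than steady background noise.

def sta_lta_trigger(samples, sta_n=5, lta_n=50, threshold=4.0):
    """Return indices where the STA/LTA ratio exceeds the threshold."""
    triggers = []
    for i in range(lta_n, len(samples)):
        sta = sum(abs(x) for x in samples[i - sta_n:i]) / sta_n
        lta = sum(abs(x) for x in samples[i - lta_n:i]) / lta_n
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Quiet background followed by a sudden strong motion:
trace = [0.1] * 100 + [5.0] * 10
print(sta_lta_trigger(trace)[:1])  # [101] -- fires just after the jump
```

Distinguishing real earthquakes from cultural noise (slammed doors, passing trucks) then comes from comparing triggers across many sensors: only a quake produces a coherent pattern of triggers moving across the network.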

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC, the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing, and citizen cyberscience.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. QCN links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    Below, the QCN Quake Catcher Network map
    QCN Quake Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated, and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.
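    The available warning at a given site is essentially the S-minus-P travel-time gap, less the time needed to detect and characterize the quake. A numerical sketch (wave speeds and processing latency below are illustrative assumptions, not ShakeAlert’s actual parameters):

```python
# The P wave outruns the S wave; the gap between their arrivals, minus the
# system's processing delay, is the usable warning time at a given distance.

VP_KM_S = 6.0    # assumed P-wave speed
VS_KM_S = 3.5    # assumed S-wave speed

def warning_time(distance_km: float, latency_s: float = 5.0) -> float:
    """Seconds of warning before strong S-wave shaking arrives."""
    gap = distance_km * (1.0 / VS_KM_S - 1.0 / VP_KM_S)
    return max(0.0, gap - latency_s)

for d in (20, 100, 300):
    print(d, round(warning_time(d), 1))
# Nearby sites get little or no warning; distant sites get tens of seconds,
# consistent with the "few seconds to a few tens of seconds" range below.
```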

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California, joined by Oregon and Washington in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers and allows for automatic failover if a connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan



     