Tagged: AI and Machine Learning

  • richardmitnick, 9:29 am on June 12, 2021
    Tags: "A Tectonic Shift in Analytics and Computing Is Coming", "Destination Earth", "Speech Understanding Research", "tensor processing units", AI and Machine Learning, , , Computing clusters, , GANs: generative adversarial networks, , , , , Seafloor bathymetry, SML: supervised machine learning, UML: Unsupervised Machine Learning   

    From Eos: “A Tectonic Shift in Analytics and Computing Is Coming” 

    From AGU

    4 June 2021
    Gabriele Morra
    morra@louisiana.edu
    Ebru Bozdag
    Matt Knepley
    Ludovic Räss
    Velimir Vesselinov

    Artificial intelligence combined with high-performance computing could trigger a fundamental change in how geoscientists extract knowledge from large volumes of data.

    A Cartesian representation of a global adjoint tomography model, which uses high-performance computing capabilities to simulate seismic wave propagation, is shown here. Blue and red colorations represent regions of high and low seismic velocities, respectively. Credit: David Pugmire, DOE’s Oak Ridge National Laboratory (US).

    More than 50 years ago, a fundamental scientific revolution occurred, sparked by the concurrent emergence of a huge amount of new data on seafloor bathymetry and profound intellectual insights from researchers rethinking conventional wisdom. Data and insight combined to produce the paradigm of plate tectonics. Similarly, in the coming decade, a new revolution in data analytics may rapidly overhaul how we derive knowledge from data in the geosciences. Two interrelated elements will be central in this process: artificial intelligence (AI, including machine learning methods as a subset) and high-performance computing (HPC).

    Already today, geoscientists must understand modern tools of data analytics and the hardware on which they work. Now AI and HPC, along with cloud computing and interactive programming languages, are becoming essential tools for geoscientists. Here we discuss the current state of AI and HPC in Earth science and anticipate future trends that will shape applications of these developing technologies in the field. We also propose that it is time to rethink graduate and professional education to account for and capitalize on these quickly emerging tools.

    Work in Progress

    Great strides in AI capabilities, including speech and facial recognition, have been made over the past decade, but the origins of these capabilities date back much further. In 1971, the Defense Advanced Research Projects Agency (US) substantially funded a project called Speech Understanding Research [Journal of the Acoustical Society of America], and it was generally believed at the time that artificial speech recognition was just around the corner. We know now that this was not the case, as today’s speech and writing recognition capabilities emerged only as a result of both vastly increased computing power and conceptual breakthroughs such as the use of multilayered neural networks, which mimic the biological structure of the brain.

    Recently, AI has gained the ability to create images of artificial faces that humans cannot distinguish from real ones by using generative adversarial networks (GANs). These networks combine two neural networks, one that produces a model and a second one that tries to discriminate the generated model from the real one. Scientists have now started to use GANs to generate artificial geoscientific data sets.
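    To make the adversarial setup concrete, here is a minimal PyTorch sketch of the two-network arrangement. Everything in it is an illustrative assumption rather than any published geoscience model: the 128-sample 1-D "trace", the layer sizes, the training constants, and the random tensor standing in for real data.

```python
# Minimal GAN sketch (PyTorch): a generator learns to produce 1-D
# "seismic-like" traces while a discriminator learns to tell them
# from real ones. All shapes and constants are illustrative.
import torch
import torch.nn as nn

LATENT, TRACE_LEN = 16, 128

generator = nn.Sequential(           # noise -> synthetic trace
    nn.Linear(LATENT, 64), nn.ReLU(),
    nn.Linear(64, TRACE_LEN), nn.Tanh(),
)
discriminator = nn.Sequential(       # trace -> P(real)
    nn.Linear(TRACE_LEN, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.randn(32, TRACE_LEN)    # stand-in for a batch of real traces

for step in range(200):
    # 1) Train the discriminator to separate real from generated traces.
    fake = generator(torch.randn(32, LATENT)).detach()
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(32, LATENT))
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

    The design point is the alternation: as the discriminator improves at spotting fakes, the generator is forced to produce ever more realistic traces.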

    These and other advances are striking, yet AI and many other artificial computing tools are still in their infancy. We cannot predict what AI will be able to do 20–30 years from now, but a recent survey of existing AI applications showed that computing power is the key factor for practical applications today. The fact that AI is still in its early stages has important implications for HPC in the geosciences. Currently, geoscientific HPC studies have been dominated by large-scale time-dependent numerical simulations that use physical observations to generate models [Morra et al., 2021a*]. In the future, however, we may work in the other direction: Earth, ocean, and atmospheric simulations may feed large AI systems that in turn produce artificial data sets that allow geoscientific investigations, such as Destination Earth, for which collected data are insufficient.

    *All citations are included in the References below.

    Data-Centric Geosciences

    Development of AI capabilities is well underway in certain geoscience disciplines. For a decade now [Ma et al., 2019], remote sensing operations have been using convolutional neural networks (CNNs), a kind of neural network that adaptively learns which features to look at in a data set. In seismology (Figure 1), pattern recognition is the most common application of machine learning (ML), and recently, CNNs have been trained to find patterns in seismic data [Kong et al., 2019], leading to discoveries such as previously unrecognized seismic events [Bergen et al., 2019].
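    As a rough sketch of what such a network looks like, the PyTorch snippet below defines a small 1-D CNN for fixed-length seismic windows. The three input channels (one per component of motion), the 400-sample window length, and the two output classes ("noise" vs. "event") are hypothetical choices, not the architecture of any study cited here.

```python
# Sketch of a 1-D CNN for classifying fixed-length seismic windows.
# Assumes 3-component input traces of 400 samples; all sizes illustrative.
import torch.nn as nn

seismic_cnn = nn.Sequential(
    nn.Conv1d(in_channels=3, out_channels=16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.MaxPool1d(4),                  # 400 samples -> 100
    nn.Conv1d(16, 32, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.MaxPool1d(4),                  # 100 samples -> 25
    nn.Flatten(),
    nn.Linear(32 * 25, 2),            # two classes: noise vs. event
)
```

    The convolutional layers are the part that "adaptively learns which features to look at": their filters are fit to the data rather than designed by hand.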

    Fig. 1. Example of a workflow used to produce an interactive “visulation” system, in which graphic visualization and computer simulation occur simultaneously, for analysis of seismic data. Credit: Ben Kadlec.

    New AI applications and technologies are also emerging; these involve, for example, the self-ordering of seismic waveforms to detect structural anomalies in the deep mantle [Kim et al., 2020]. Recently, deep generative models, which are based on neural networks, have shown impressive capabilities in modeling complex natural signals, with the most promising applications in autoencoders and GANs (e.g., for generating images from data).

    CNNs are a form of supervised machine learning (SML), meaning that before they are applied for their intended use, they are first trained to find prespecified patterns in labeled data sets and to check their accuracy against an answer key. Training a neural network using SML requires large, well-labeled data sets as well as massive computing power. Massive computing power, in turn, requires massive amounts of electricity, such that the energy demand of modern AI models is doubling every 3.4 months and causing a large and growing carbon footprint.
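    The supervised recipe itself is short. In the hedged sketch below, random tensors stand in for a curated, labeled training set, and the seismic_cnn defined above is fit against its "answer key" with cross-entropy loss; real applications differ mainly in the scale of the data and the hardware.

```python
# Supervised training sketch: labeled windows in, loss against the
# "answer key" out. Random tensors stand in for real labeled data.
import torch
import torch.nn as nn

X = torch.randn(256, 3, 400)          # 256 labeled 3-component windows
y = torch.randint(0, 2, (256,))       # labels: 0 = noise, 1 = event

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(seismic_cnn.parameters(), lr=1e-3)

for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(seismic_cnn(X), y)  # compare predictions to labels
    loss.backward()
    opt.step()
```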

    In the future, the trend in geoscientific applications of AI might shift from using bigger CNNs to using more scalable algorithms that can improve performance with less training data and fewer computing resources. Alternative strategies will likely involve less energy-intensive neural networks, such as spiking neural networks, which reduce data inputs by analyzing discrete events rather than continuous data streams.

    Unsupervised ML (UML), in which an algorithm identifies patterns on its own rather than searching for a user-specified pattern, is another alternative to data-hungry SML. One type of UML identifies unique features in a data set to allow users to discover anomalies of interest (e.g., evidence of hidden geothermal resources in seismic data) and to distinguish trends of interest (e.g., rapidly versus slowly declining production from oil and gas wells based on production rate transients) [Vesselinov et al., 2019].
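    As a small illustration of the unsupervised idea, the scikit-learn sketch below uses non-negative matrix factorization, a matrix analogue of the non-negative tensor factorization of Vesselinov et al. [2019], to recover a few latent "signatures" from unlabeled synthetic data; the data sizes and the number of components are arbitrary assumptions.

```python
# Unsupervised sketch: non-negative matrix factorization separates
# unlabeled observations into a few latent "signatures" with no
# user-specified pattern to search for.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Hypothetical data: 200 observations x 50 attributes, secretly
# mixing 3 non-negative sources.
sources = rng.random((3, 50))
mixing = rng.random((200, 3))
X = mixing @ sources

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)     # per-observation source weights
H = model.components_          # recovered source signatures
print(W.shape, H.shape)        # (200, 3) (3, 50)
```

    Observations whose weights fit none of the recovered signatures well are the anomalies a user might then inspect.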

    AI is also starting to improve the efficiency of geophysical sensors. Data storage limitations require instruments such as seismic stations, acoustic sensors, infrared cameras, and remote sensors to record and save data sets that are much smaller than the total amount of data they measure. Some sensors use AI to detect when “interesting” data are recorded, and these data are selectively stored. Sensor-based AI algorithms also help minimize energy consumption by and prolong the life of sensors located in remote regions, which are difficult to service and often powered by a single solar panel. These techniques include quantized CNN (using 8-bit variables) running on minimal hardware, such as Raspberry Pi [Wilkes et al., 2017].
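    The classical version of such a trigger, the short-term-average/long-term-average (STA/LTA) detector, conveys the selective-storage idea in a few lines of NumPy. This sketch is illustrative only and far simpler than the quantized CNNs cited above; the window lengths and threshold are made-up values.

```python
# STA/LTA trigger sketch: keep only data where the short-term average
# power jumps relative to the long-term average.
import numpy as np

def sta_lta_trigger(trace, sta=50, lta=1000, threshold=4.0):
    """Return sample indices where the STA/LTA ratio exceeds `threshold`."""
    power = trace.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(power)))
    sta_avg = (csum[lta:] - csum[lta - sta:-sta]) / sta   # trailing short window
    lta_avg = (csum[lta:] - csum[:-lta]) / lta            # trailing long window
    ratio = sta_avg / np.maximum(lta_avg, 1e-12)
    return np.flatnonzero(ratio > threshold) + lta

# Hypothetical stream: background noise with one burst worth keeping.
rng = np.random.default_rng(1)
trace = rng.normal(0, 1, 20_000)
trace[12_000:12_200] += rng.normal(0, 10, 200)
hits = sta_lta_trigger(trace)
print("store" if hits.size else "discard", hits[:3])
```

    Only the triggered windows, plus some surrounding context, need to leave the sensor's limited storage.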

    Advances in Computing Architectures

    Powerful, efficient algorithms and software represent only one part of the data revolution; the hardware and networks that we use to process and store data have evolved significantly as well.

    Since about 2004, when the frequencies at which processors operate stalled at about 3 gigahertz (the breakdown of Dennard scaling, often described as the end of Moore’s law), computing power has been augmented by increasing the number of cores per CPU and by the parallel work of cores in multiple CPUs, as in computing clusters.
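    The practical consequence for scientific code is that speedups now have to be programmed explicitly. A minimal Python sketch of the idea, with a placeholder work function rather than a real simulation:

```python
# Parallelism sketch: with per-core clock speeds stalled, throughput
# comes from spreading independent work across cores.
from multiprocessing import Pool

def simulate_chunk(seed):
    """Stand-in for one independent piece of a larger model run."""
    total = 0.0
    for i in range(1, 200_000):
        total += (seed * i) % 7
    return total

if __name__ == "__main__":
    with Pool() as pool:               # one worker per core by default
        results = pool.map(simulate_chunk, range(8))
    print(sum(results))
```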

    Accelerators such as graphics processing units (GPUs), once used mostly for video games, are now routinely used for AI applications and are at the heart of all major ML facilities (as well as the DOE’s Exascale Computing Project (US), a part of the National Strategic Computing Initiative – NSF (US)). For example, Summit and Sierra, the two fastest supercomputers in the United States, are based on a hierarchical CPU-GPU architecture.

    Meanwhile, emerging tensor processing units, which were developed specifically for matrix-based operations, excel at the most demanding tasks of most neural network algorithms. In the future, computers will likely become increasingly heterogeneous, with a single system combining several types of processors, including specialized ML coprocessors (e.g., Cerebras) and quantum computing processors.

    Computational systems that are physically distributed across remote locations and used on demand, usually called cloud computing, are also becoming more common, although these systems impose limitations on the code that can be run on them. For example, cloud infrastructures, in contrast to centralized HPC clusters and supercomputers, are not designed for performing large-scale parallel simulations. Cloud infrastructures face limitations on high-throughput interconnectivity, and the synchronization needed to help multiple computing nodes coordinate tasks is substantially more difficult to achieve for physically remote clusters. Although several cloud-based computing providers are now investing in high-throughput interconnectivity, the problem of synchronization will likely remain for the foreseeable future.

    Boosting 3D Simulations

    Artificial intelligence has proven invaluable in discovering and analyzing patterns in large, real-world data sets. It could also become a source of realistic artificial data sets, generated through models and simulations. Artificial data sets enable geophysicists to examine problems that are unwieldy or intractable using real-world data—because these data may be too costly or technically demanding to obtain—and to explore what-if scenarios or interconnected physical phenomena in isolation. For example, simulations could generate artificial data to help study seismic wave propagation; large-scale geodynamics; or flows of water, oil, and carbon dioxide through rock formations to assist in energy extraction and storage.

    HPC and cloud computing will help produce and run 3D models, not only assisting in improved visualization of natural processes but also allowing for investigation of processes that can’t be adequately studied with 2D modeling. In geodynamics, for example, using 2D modeling makes it difficult to calculate 3D phenomena like toroidal flow and vorticity because flow patterns are radically different in 3D. Meanwhile, phenomena like crustal porosity waves [Geophysical Research Letters] (waves of high porosity in rocks; Figure 2) and corridors of fast-moving ice in glaciers require extremely high spatial and temporal resolutions in 3D to capture [Räss et al., 2020].

    Fig. 2. A 3D modeling run with 16 billion degrees of freedom simulates flow focusing in porous media and identifies a pulsed behavior phenomenon called porosity waves. Credit: Räss et al. [2018], CC BY 4.0.

    Adding an additional dimension to a model can require a significant increase in the amount of data processed. For example, in exploration seismology, going from a 2D to a 3D simulation involves a transition from requiring three-dimensional data (i.e., source, receiver, time) to five-dimensional data (source x, source y, receiver x, receiver y, and time) [e.g., Witte et al., 2020]. AI can help with this transition. At the global scale, for example, the assimilation of 3D simulations in iterative full-waveform inversions for seismic imaging was performed recently with limited real-world data sets, employing AI techniques to maximize the amount of information extracted from seismic traces while maintaining the high quality of the data [Lei et al., 2020].
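    A back-of-envelope calculation shows why the jump described above is so demanding; the survey sizes here are made-up round numbers, not those of any real acquisition.

```python
# Storage growth going from 2-D to 3-D exploration-seismology data.
n_src, n_rec, n_t = 1_000, 1_000, 4_000       # hypothetical counts

d2 = n_src * n_rec * n_t                      # (source, receiver, time)
d3 = n_src**2 * n_rec**2 * n_t                # (src x, src y, rec x, rec y, time)
print(f"2-D: {d2:.1e} samples, 3-D: {d3:.1e} samples ({d3 // d2:,}x larger)")
```

    With these counts the 3-D volume is a million times larger, which is why maximizing the information extracted from limited real data matters so much.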

    Emerging Methods and Enhancing Education

    As far as we’ve come in developing AI for uses in geoscientific research, there is plenty of room for growth in the algorithms and computing infrastructure already mentioned, as well as in other developing technologies. For example, interactive programming, in which the programmer develops new code while a program is active, and language-agnostic programming environments that can run code in a variety of languages are young techniques that will facilitate introducing computing to geoscientists.

    Programming languages such as Python and Julia, which are now being taught to Earth science students, will accompany the transition to these new methods and will be used in interactive environments such as the Jupyter Notebook. Julia was recently shown to perform as well as compiled code for machine learning algorithms in its most recent implementations, such as those using differentiable programming, which reduces computational resource and energy requirements.

    Quantum computing, which encodes information in quantum states rather than in streams of electrons, is another promising development that is still in its infancy but that may lead to the next major scientific revolution. It is forecast that by the end of this decade, quantum computers will be applied to many scientific problems, including those related to wave propagation, crustal stresses, atmospheric simulations, and other topics in the geosciences. With competition from China in developing quantum technologies and AI, quantum computing and quantum information applications may become darlings of major funding opportunities, offering the means for ambitious geophysicists to pursue fundamental research.

    Taking advantage of these new capabilities will, of course, require geoscientists who know how to use them. Today, many geoscientists face enormous pressure to requalify themselves for a rapidly changing job market and to keep pace with the growing complexity of computational technologies. Academia, meanwhile, faces the demanding task of designing innovative training to help students and others adapt to market conditions, although finding professionals who can teach these courses is challenging because they are in high demand in the private sector. However, such teaching opportunities could provide a point of entry for young scientists specializing in computer science or part-time positions for professionals retired from industry or national labs [Morra et al., 2021b].

    The coming decade will see a rapid revolution in data analytics that will significantly affect the processing and flow of information in the geosciences. Artificial intelligence and high-performance computing are the two central elements shaping this new landscape. Students and professionals in the geosciences will need new forms of education enabling them to rapidly learn the modern tools of data analytics and predictive modeling. If done well, the concurrence of these new tools and a workforce primed to capitalize on them could lead to new paradigm-shifting insights that, much as the plate tectonic revolution did, help us address major geoscientific questions in the future.

    Acknowledgments:

    The listed authors thank Peter Gerstoft, Scripps Institution of Oceanography (US), University of California, San Diego; Henry M. Tufo, University of Colorado-Boulder (US); and David A. Yuen, Columbia University (US) and Ocean University of China [中國海洋大學](CN), Qingdao, who contributed equally to the writing of this article.

    References:

    Bergen, K. J., et al. (2019), Machine learning for data-driven discovery in solid Earth geoscience, Science, 363(6433), eaau0323, https://doi.org/10.1126/science.aau0323.

    Kim, D., et al. (2020), Sequencing seismograms: A panoptic view of scattering in the core-mantle boundary region, Science, 368(6496), 1,223–1,228, https://doi.org/10.1126/science.aba8972.

    Kong, Q., et al. (2019), Machine learning in seismology: Turning data into insights, Seismol. Res. Lett., 90(1), 3–14, https://doi.org/10.1785/0220180259.

    Lei, W., et al. (2020), Global adjoint tomography—Model GLAD-M25, Geophys. J. Int., 223(1), 1–21, https://doi.org/10.1093/gji/ggaa253.

    Ma, L., et al. (2019), Deep learning in remote sensing applications: A meta-analysis and review, ISPRS J. Photogramm. Remote Sens., 152, 166–177, https://doi.org/10.1016/j.isprsjprs.2019.04.015.

    Morra, G., et al. (2021a), Fresh outlook on numerical methods for geodynamics. Part 1: Introduction and modeling, in Encyclopedia of Geology, 2nd ed., edited by D. Alderton and S. A. Elias, pp. 826–840, Academic, Cambridge, Mass., https://doi.org/10.1016/B978-0-08-102908-4.00110-7.

    Morra, G., et al. (2021b), Fresh outlook on numerical methods for geodynamics. Part 2: Big data, HPC, education, in Encyclopedia of Geology, 2nd ed., edited by D. Alderton and S. A. Elias, pp. 841–855, Academic, Cambridge, Mass., https://doi.org/10.1016/B978-0-08-102908-4.00111-9.

    Räss, L., N. S. C. Simon, and Y. Y. Podladchikov (2018), Spontaneous formation of fluid escape pipes from subsurface reservoirs, Sci. Rep., 8, 11116, https://doi.org/10.1038/s41598-018-29485-5.

    Räss, L., et al. (2020), Modelling thermomechanical ice deformation using an implicit pseudo-transient method (FastICE v1.0) based on graphical processing units (GPUs), Geosci. Model Dev., 13, 955–976, https://doi.org/10.5194/gmd-13-955-2020.

    Vesselinov, V. V., et al. (2019), Unsupervised machine learning based on non-negative tensor factorization for analyzing reactive-mixing, J. Comput. Phys., 395, 85–104, https://doi.org/10.1016/j.jcp.2019.05.039.

    Wilkes, T. C., et al. (2017), A low-cost smartphone sensor-based UV camera for volcanic SO2 emission measurements, Remote Sens., 9(1), 27, https://doi.org/10.3390/rs9010027.

    Witte, P. A., et al. (2020), An event-driven approach to serverless seismic imaging in the cloud, IEEE Trans. Parallel Distrib. Syst., 31, 2,032–2,049, https://doi.org/10.1109/TPDS.2020.2982626.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

     
  • richardmitnick, 8:27 am on September 26, 2019
    Tags: 5G, AI and Machine Learning

    From European Space Agency: “How ESA helps connect industry and spark 5G innovation” 

    Space’s part in the 5G revolution

    25 September 2019

    Connecting people and machines to everything, everywhere and at all times through 5G networks promises to transform society. People will be able to access information and services developed to meet their immediate needs but, for this to happen seamlessly, satellite networks are needed alongside terrestrial ones.

    The European Space Agency is working with companies keen to develop and use space-enabled seamless 5G connectivity to develop ubiquitous services. At the UK Space Conference, held from 24 to 26 September in Newport, ESA is showcasing its work with several British-based companies, supported by the UK Space Agency.

    The companies are working on applications that range from autonomous ships to connected cars and drone delivery, from cargo logistics to emergency services, from media and broadcast to financial services.

    Spire is a satellite-powered data company that provides predictive analysis for global maritime, aviation and weather forecasting. It uses automatic identification systems aboard ships to track their whereabouts on the oceans.

    Spire’s network of 80 nanosatellites picks up the identity, position, course and speed of each vessel.

    Thanks to intelligent machine-learning algorithms, it can predict vessel locations and the ship’s estimated time of arrival at port, enabling port authorities to manage busy docks and market traders to price the goods carried aboard.

    Peter Platzer, chief executive of Spire, said: “ESA recognised the value of smaller, more nimble satellites and was looking for a provider that could bring satellites more rapidly and cheaper to orbit. That really was the start of our collaboration. ESA was instrumental in the fact that Spire’s largest office today is in the UK and most of its workforce is in Europe.”

    Integrating the ubiquity and unprecedented performance of satellites with terrestrial 5G networks is fundamental to the future success of Darwin, a project to develop connected cars in a partnership between ESA, Telefónica O2, a satellite operator, the universities of Oxford and Glasgow, and several UK-based start-up companies.

    Connected cars need to switch seamlessly between terrestrial and satellite networks, so that people and goods can move across the country without any glitches.

    Darwin relies on a terminal that will allow seamless switching between the networks.

    Daniela Petrovic of Telefonica O2, who founded Darwin, said: “There is a really nice ecosystem of players delivering innovation. ESA provided the opportunities to start discussions with satellite operators and helped us create this partnership.

    “There is a good body of knowledge within ESA on innovation and science hubs and this gave us the opportunity to see what other start-ups are doing. Through ESA, we are getting exposure to 22 member state countries which can see the opportunity and maybe get involved.”

    Magali Vaissiere, Director of Telecommunications and Integrated Applications at ESA, said: “We are very excited to see the response of industry to our Space for 5G initiative, which aims to bring together the cellular and satellite telecommunications world and provide the connectivity fabric to enable the digital transformation of industry and society.

    “The showcase of flagship 5G projects today confirms the strategic importance of our Space for 5G initiative, which will be a significant strategic part of the upcoming ESA Conference of Ministers to be held in November.”

    Other companies that formed part of the showcase include: Cranfield University, which as part of its Digital Aviation Research and Technology Centre is set to spearhead the UK’s research into digital aviation technology; HiSky, a satellite virtual network operator that offers global low-cost voice, data and internet of things communications using existing telecommunications satellites; Inmarsat, a global satellite operator that is showcasing a range of new maritime services enabled by the seamless integration of 5G cellular and satellite connectivity; Open Cosmos, a small satellite manufacturer based at Harwell in Oxfordshire, which is investigating how to deliver 5G by satellite; and Sky and Space Global based in London that plans a constellation of 200 nanosatellites in equatorial low Earth orbit for narrowband communications.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    The European Space Agency (ESA), established in 1975, is an intergovernmental organization dedicated to the exploration of space, currently with 22 member states. Headquartered in Paris, ESA has a staff of more than 2,000. ESA’s space flight program includes human spaceflight, mainly through participation in the International Space Station program; the launch and operation of uncrewed exploration missions to other planets and the Moon; Earth observation; science; telecommunication; maintaining a major spaceport, the Guiana Space Centre at Kourou, French Guiana; and designing launch vehicles. ESA science missions are based at ESTEC in Noordwijk, the Netherlands; Earth observation missions at ESRIN in Frascati, Italy; ESA Mission Control (ESOC) is in Darmstadt, Germany; the European Astronaut Centre (EAC), which trains astronauts for future missions, is situated in Cologne, Germany; and the European Space Astronomy Centre is located in Villanueva de la Cañada, Spain.


     
  • richardmitnick, 11:04 am on September 22, 2019
    Tags: AI and Machine Learning

    From LANL via WIRED: “AI Helps Seismologists Predict Earthquakes” 

    From Los Alamos National Laboratory, via WIRED

    Machine learning is bringing seismologists closer to an elusive goal: forecasting quakes well before they strike.

    Remnants of a 2,000-year-old spruce forest on Neskowin Beach, Oregon, one of dozens of “ghost forests” along the Oregon and Washington coast. It’s thought that a mega-earthquake of the Cascadia subduction zone felled the trees, and that the stumps were then buried by tsunami debris. Photograph: Race Jones/Outlive Creative

    In May of last year, after a 13-month slumber, the ground beneath Washington’s Puget Sound rumbled to life. The quake began more than 20 miles below the Olympic mountains and, over the course of a few weeks, drifted northwest, reaching Canada’s Vancouver Island. It then briefly reversed course, migrating back across the US border before going silent again. All told, the monthlong earthquake likely released enough energy to register as a magnitude 6. By the time it was done, the southern tip of Vancouver Island had been thrust a centimeter or so closer to the Pacific Ocean.

    Because the quake was so spread out in time and space, however, it’s likely that no one felt it. These kinds of phantom earthquakes, which occur deeper underground than conventional, fast earthquakes, are known as “slow slips.” They occur roughly once a year in the Pacific Northwest, along a stretch of fault where the Juan de Fuca plate is slowly wedging itself beneath the North American plate. More than a dozen slow slips have been detected by the region’s sprawling network of seismic stations since 2003. And for the past year and a half, these events have been the focus of a new effort at earthquake prediction by the geophysicist Paul Johnson.

    Johnson’s team is among a handful of groups that are using machine learning to try to demystify earthquake physics and tease out the warning signs of impending quakes. Two years ago, using pattern-finding algorithms similar to those behind recent advances in image and speech recognition and other forms of artificial intelligence, he and his collaborators successfully predicted temblors in a model laboratory system—a feat that has since been duplicated by researchers in Europe.

    Now, in a paper posted this week on the scientific preprint site arxiv.org, Johnson and his team report that they’ve tested their algorithm on slow slip quakes in the Pacific Northwest. The paper has yet to undergo peer review, but outside experts say the results are tantalizing. According to Johnson, they indicate that the algorithm can predict the start of a slow slip earthquake to “within a few days—and possibly better.”

    “This is an exciting development,” said Maarten de Hoop, a seismologist at Rice University who was not involved with the work. “For the first time, I think there’s a moment where we’re really making progress” toward earthquake prediction.

    Mostafa Mousavi, a geophysicist at Stanford University, called the new results “interesting and motivating.” He, de Hoop, and others in the field stress that machine learning has a long way to go before it can reliably predict catastrophic earthquakes—and that some hurdles may be difficult, if not impossible, to surmount. Still, in a field where scientists have struggled for decades and seen few glimmers of hope, machine learning may be their best shot.

    Sticks and Slips

    The late seismologist Charles Richter, for whom the Richter magnitude scale is named, noted in 1977 that earthquake prediction can provide “a happy hunting ground for amateurs, cranks, and outright publicity-seeking fakers.” Today, many seismologists will tell you that they’ve seen their fair share of all three.

    But there have also been reputable scientists who concocted theories that, in hindsight, seem woefully misguided, if not downright wacky. There was the University of Athens geophysicist Panayiotis Varotsos, who claimed he could detect impending earthquakes by measuring “seismic electric signals.” There was Brian Brady, the physicist from the US Bureau of Mines who in the early 1980s sounded successive false alarms in Peru, basing them on a tenuous notion that rock bursts in underground mines were telltale signs of coming quakes.

    Paul Johnson is well aware of this checkered history. He knows that the mere phrase “earthquake prediction” is taboo in many quarters. He knows about the six Italian scientists who were convicted of manslaughter in 2012 for downplaying the chances of an earthquake near the central Italian town of L’Aquila, days before the region was devastated by a magnitude 6.3 temblor. (The convictions were later overturned.) He knows about the prominent seismologists who have forcefully declared that “earthquakes cannot be predicted.”

    But Johnson also knows that earthquakes are physical processes, no different in that respect from the collapse of a dying star or the shifting of the winds. And though he stresses that his primary aim is to better understand fault physics, he hasn’t shied away from the prediction problem.

    Paul Johnson, a geophysicist at Los Alamos National Laboratory, photographed in 2008 with a block of acrylic plastic, one of the materials his team uses to simulate earthquakes in the laboratory. Photograph: Los Alamos National Laboratory

    More than a decade ago, Johnson began studying “laboratory earthquakes,” made with sliding blocks separated by thin layers of granular material. Like tectonic plates, the blocks don’t slide smoothly but in fits and starts: They’ll typically stick together for seconds at a time, held in place by friction, until the shear stress grows large enough that they suddenly slip. That slip—the laboratory version of an earthquake—releases the stress, and then the stick-slip cycle begins anew.

    When Johnson and his colleagues recorded the acoustic signal emitted during those stick-slip cycles, they noticed sharp peaks just before each slip. Those precursor events were the laboratory equivalent of the seismic waves produced by foreshocks before an earthquake. But just as seismologists have struggled to translate foreshocks into forecasts of when the main quake will occur, Johnson and his colleagues couldn’t figure out how to turn the precursor events into reliable predictions of laboratory quakes. “We were sort of at a dead end,” Johnson recalled. “I couldn’t see any way to proceed.”

    At a meeting a few years ago in Los Alamos, Johnson explained his dilemma to a group of theoreticians. They suggested he reanalyze his data using machine learning—an approach that was well known by then for its prowess at recognizing patterns in audio data.

    Together, the scientists hatched a plan. They would take the roughly five minutes of audio recorded during each experimental run—encompassing 20 or so stick-slip cycles—and chop it up into many tiny segments. For each segment, the researchers calculated more than 80 statistical features, including the mean signal, the variation about that mean, and information about whether the segment contained a precursor event. Because the researchers were analyzing the data in hindsight, they also knew how much time had elapsed between each sound segment and the subsequent failure of the laboratory fault.

    Armed with this training data, they used what’s known as a “random forest” machine learning algorithm to systematically look for combinations of features that were strongly associated with the amount of time left before failure. After seeing a couple of minutes’ worth of experimental data, the algorithm could begin to predict failure times based on the features of the acoustic emission alone.

    Johnson and his co-workers chose to employ a random forest algorithm to predict the time before the next slip in part because—compared with neural networks and other popular machine learning algorithms—random forests are relatively easy to interpret. The algorithm essentially works like a decision tree in which each branch splits the data set according to some statistical feature. The tree thus preserves a record of which features the algorithm used to make its predictions—and the relative importance of each feature in helping the algorithm arrive at those predictions.

    A polarizing lens shows the buildup of stress as a model tectonic plate slides laterally along a fault line in an experiment at Los Alamos National Laboratory. Photograph: Los Alamos National Laboratory

    When the Los Alamos researchers probed those inner workings of their algorithm, what they learned surprised them. The statistical feature the algorithm leaned on most heavily for its predictions was unrelated to the precursor events just before a laboratory quake. Rather, it was the variance—a measure of how the signal fluctuates about the mean—and it was broadcast throughout the stick-slip cycle, not just in the moments immediately before failure. The variance would start off small and then gradually climb during the run-up to a quake, presumably as the grains between the blocks increasingly jostled one another under the mounting shear stress. Just by knowing this variance, the algorithm could make a decent guess at when a slip would occur; information about precursor events helped refine those guesses.
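    The whole pipeline, from windowing through hand-built statistical features to a random forest and its importance read-out, fits in a short sketch. Everything below is synthetic stand-in data, not the Los Alamos recordings, but it reproduces the qualitative setup the article describes: a signal whose variance climbs toward each failure.

```python
# Sketch of a Los Alamos-style pipeline on synthetic data: window a
# continuous signal, compute per-window statistics, and fit a random
# forest to the time remaining before the next "failure".
import numpy as np
from scipy.stats import kurtosis
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n_cycles, cycle_len, win = 20, 5_000, 250
t = np.arange(cycle_len)

# Stick-slip proxy: noise whose variance grows as failure approaches.
signal = np.concatenate(
    [rng.normal(0.0, 1.0 + 4.0 * t / cycle_len) for _ in range(n_cycles)]
)
time_to_fail = np.tile(cycle_len - 1 - t, n_cycles)

windows = signal.reshape(-1, win)
feats = np.column_stack([
    windows.mean(axis=1),
    windows.var(axis=1),
    kurtosis(windows, axis=1),
])
target = time_to_fail.reshape(-1, win)[:, -1]   # time left at window end

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(feats, target)
for name, imp in zip(["mean", "variance", "kurtosis"],
                     model.feature_importances_):
    print(f"{name:9s} importance: {imp:.2f}")
```

    On data built this way, the variance importance dominates, mirroring what the algorithm's inner workings told the Los Alamos team.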

    The finding had big potential implications. For decades, would-be earthquake prognosticators had keyed in on foreshocks and other isolated seismic events. The Los Alamos result suggested that everyone had been looking in the wrong place—that the key to prediction lay instead in the more subtle information broadcast during the relatively calm periods between the big seismic events.

    To be sure, sliding blocks don’t begin to capture the chemical, thermal and morphological complexity of true geological faults. To show that machine learning could predict real earthquakes, Johnson needed to test it out on a real fault. What better place to do that, he figured, than in the Pacific Northwest?

    Out of the Lab

    Most if not all of the places on Earth that can experience a magnitude 9 earthquake are subduction zones, where one tectonic plate dives beneath another. A subduction zone just east of Japan was responsible for the Tohoku earthquake and the subsequent tsunami that devastated the country’s coastline in 2011. One day, the Cascadia subduction zone, where the Juan de Fuca plate dives beneath the North American plate, will similarly devastate Puget Sound, Vancouver Island and the surrounding Pacific Northwest.


    The Cascadia subduction zone stretches along roughly 1,000 kilometers of the Pacific coastline from Cape Mendocino in Northern California to Vancouver Island. The last time it ruptured, in January 1700, it begot a magnitude 9 temblor and a tsunami that reached the coast of Japan. Geological records suggest that throughout the Holocene, the fault has produced such megaquakes roughly once every half millennium, give or take a few hundred years. Statistically speaking, the next big one is due any century now.

    That’s one reason seismologists have paid such close attention to the region’s slow slip earthquakes. The slow slips in the lower reaches of a subduction-zone fault are thought to transmit small amounts of stress to the brittle crust above, where fast, catastrophic quakes occur. With each slow slip in the Puget Sound-Vancouver Island area, the chances of a Pacific Northwest megaquake ratchet up ever so slightly. Indeed, a slow slip was observed in Japan in the month leading up to the Tohoku quake.

    For Johnson, however, there’s another reason to pay attention to slow slip earthquakes: They produce lots and lots of data. For comparison, there have been no major fast earthquakes on the stretch of fault between Puget Sound and Vancouver Island in the past 12 years. In the same time span, the fault has produced a dozen slow slips, each one recorded in a detailed seismic catalog.

    That seismic catalog is the real-world counterpart to the acoustic recordings from Johnson’s laboratory earthquake experiment. Just as they did with the acoustic recordings, Johnson and his co-workers chopped the seismic data into small segments, characterizing each segment with a suite of statistical features. They then fed that training data, along with information about the timing of past slow slip events, to their machine learning algorithm.

    After being trained on data from 2007 to 2013, the algorithm was able to make predictions about slow slips that occurred between 2013 and 2018, based on the data logged in the months before each event. The key feature was the seismic energy, a quantity closely related to the variance of the acoustic signal in the laboratory experiments. Like the variance, the seismic energy climbed in a characteristic fashion in the run-up to each slow slip.

    The Cascadia forecasts weren’t quite as accurate as the ones for laboratory quakes. The correlation coefficients characterizing how well the predictions fit observations were substantially lower in the new results than they were in the laboratory study. Still, the algorithm was able to predict all but one of the five slow slips that occurred between 2013 and 2018, pinpointing the start times, Johnson says, to within a matter of days. (A slow slip that occurred in August 2019 wasn’t included in the study.)

    For de Hoop, the big takeaway is that “machine learning techniques have given us a corridor, an entry into searching in data to look for things that we have never identified or seen before.” But he cautions that there’s more work to be done. “An important step has been taken—an extremely important step. But it is like a tiny little step in the right direction.”

    Sobering Truths

    The goal of earthquake forecasting has never been to predict slow slips. Rather, it’s to predict sudden, catastrophic quakes that pose danger to life and limb. For the machine learning approach, this presents a seeming paradox: The biggest earthquakes, the ones that seismologists would most like to be able to foretell, are also the rarest. How will a machine learning algorithm ever get enough training data to predict them with confidence?

    The Los Alamos group is betting that their algorithms won’t actually need to train on catastrophic earthquakes to predict them. Recent studies suggest that the seismic patterns before small earthquakes are statistically similar to those of their larger counterparts, and on any given day, dozens of small earthquakes may occur on a single fault. A computer trained on thousands of those small temblors might be versatile enough to predict the big ones. Machine learning algorithms might also be able to train on computer simulations of fast earthquakes that could one day serve as proxies for real data.

    But even so, scientists will confront this sobering truth: Although the physical processes that drive a fault to the brink of an earthquake may be predictable, the actual triggering of a quake—the growth of a small seismic disturbance into full-blown fault rupture—is believed by most scientists to contain at least an element of randomness. Assuming that’s so, no matter how well machines are trained, they may never be able to predict earthquakes as well as scientists predict other natural disasters.

    “We don’t know what forecasting in regards to timing means yet,” Johnson said. “Would it be like a hurricane? No, I don’t think so.”

    In the best-case scenario, predictions of big earthquakes will probably have time bounds of weeks, months or years. Such forecasts probably couldn’t be used, say, to coordinate a mass evacuation on the eve of a temblor. But they could increase public preparedness, help public officials target their efforts to retrofit unsafe buildings, and otherwise mitigate hazards of catastrophic earthquakes.

    Johnson sees that as a goal worth striving for. Ever the realist, however, he knows it will take time. “I’m not saying we’re going to predict earthquakes in my lifetime,” he said, “but … we’re going to make a hell of a lot of progress.”

    See the full article here.

    Earthquake Alert

    Earthquake Network is a research project that aims to develop and maintain a crowdsourced, smartphone-based earthquake warning system at a global level. Smartphones volunteered by the population detect earthquake waves using their onboard accelerometers. When an earthquake is detected, a warning is issued to alert people not yet reached by the damaging waves.

    The project started on January 1, 2013 with the release of the homonymous Android application Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network


    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide a better understanding of earthquakes and give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC, more properly the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing, and citizen cyberscience.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. QCN links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    Below, the QCN Quake-Catcher Network map.

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the West Coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated, and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.
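    A rough worked example of the time budget: the warning window at a site is approximately the S-wave travel time minus the P-wave travel time. The wave speeds below are generic crustal values, not ShakeAlert’s velocity models, and detection and telemetry latency are ignored.

```python
# Warning time ~ distance/Vs - distance/Vp (illustrative crustal speeds).
VP, VS = 6.0, 3.5   # P- and S-wave speeds in km/s, generic assumptions

for distance_km in (20, 50, 100, 200):
    warning_s = distance_km / VS - distance_km / VP
    print(f"{distance_km:3d} km from epicenter: ~{warning_s:4.1f} s of warning")
```

    The few seconds to few tens of seconds this yields match the study results described below.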

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016, the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California; Oregon and Washington joined in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers and allows for automatic failover if a connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan


    Please help promote STEM in your local schools.

    STEM Education Coalition

    Los Alamos National Laboratory’s mission is to solve national security challenges through scientific excellence.

    Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, The Babcock & Wilcox Company, and URS for the Department of Energy’s National Nuclear Security Administration.

     