Tagged: Horizon – The EU Research and Innovation Magazine

  • richardmitnick 4:57 pm on April 26, 2021 Permalink | Reply
    Tags: "What happens below Earth’s surface when the most powerful earthquakes occur", An unusual feature of megathrust quakes is that they are often followed by a series of other very powerful megathrust quakes several years later and with epicentres hundreds of kilometres away., Geologists find a rock made of a mineral with what’s called an inclusion crystal inside it. This inclusion was trapped inside the mineral as two subducting plates squeezed each other at great depth., Horizon - The EU Research and Innovation Magazine, It turns out that we can get a unique window on subduction zones as they were millions of years ago., Megathrust quakes are the result of the subduction of one tectonic plate below another., Science requires a careful examination of seismological and geodetic data at a greater scale than has previously been done., We have very little understanding of the dynamics of the subduction and how it might trigger an instability that leads to another megathrust event a few years later.

    From Horizon The EU Research and Innovation Magazine : “What happens below Earth’s surface when the most powerful earthquakes occur” 


    From Horizon The EU Research and Innovation Magazine

    26 April 2021
    Caleb Davies

    Megathrust earthquakes happen at subduction zones, where one tectonic plate is forced under another. Credit: Marco Reyes / Unsplash.

    At 03:34 local time on 27 February 2010, Chile was struck by one of the most powerful earthquakes in a century. The shock triggered a tsunami, which devastated coastal communities. The combined events killed more than 500 people. So powerful was the shaking that, by one NASA estimate, it shifted Earth’s axis of spin by a full 8 cm.

    Like nearly all of the most powerful earthquakes, this was a megathrust earthquake. These happen at subduction zones, places where one tectonic plate is forced under another. If the plates suddenly slip – wallop, you get a massive earthquake. The 2010 Chile quake was a magnitude 8.8: strong enough to shift buildings off their foundations.

    We understand subduction zones poorly, which is why geophysicist Professor Anne Socquet, based at Grenoble Alps University [Université Grenoble Alpes] (FR), had planned to visit Chile. She wanted to install seismic monitoring instruments to collect data. By coincidence, she arrived just a week after the quake. ‘It was terrifying,’ she said. ‘The apartment we had rented had fissures in the walls that you could put your fist inside.’

    Most people who study megathrust quakes focus on the foreshocks that immediately precede the main quake, Prof. Socquet says. But an unusual feature of megathrust quakes is that they are often followed by a series of other very powerful megathrust quakes several years later and with epicentres hundreds of kilometres away. The 2010 Chile quake, for instance, was followed by other events in 2014, 2015 and 2016 centred on areas up and down the Chile coast. Prof. Socquet wanted to look at these sequences of megathrust earthquakes and investigate the potential links between those great quakes. This requires a careful examination of seismological and geodetic data at a greater scale than has previously been done.

    Megathrust

    We know that megathrust quakes are the result of the subduction of one tectonic plate below another. But beyond that, we have very little understanding of the dynamics of the subduction and how it might trigger an instability that leads to another megathrust event a few years later. There is some evidence that it could be to do with the release and migration of fluids at great depth. Prof. Socquet’s DEEP-trigger project is about filling that gap. ‘This is kind of virgin territory in terms of observations,’ she said.

    The first step of the six-month-old project was supposed to be adding to the network of about 250 GPS instruments that she has contributed to in Chile since 2007 and building a new instrument network in Peru. Currently unable to travel to South America due to the Covid-19 pandemic, she’s been working with local contacts to begin the installation. She’s also working on computational tools to begin analysing legacy data from the region.

    ‘The critical thing will be to have systematic observations of the link between the slow slip and the seismic fractures at large time and space scales. This will be a very big input to science.’

    At the University of Pavia in Italy, mineralogist Professor Matteo Alvaro is also interested in megaquakes – albeit much, much older ones.

    It turns out that we can get a unique window on subduction zones as they were millions of years ago. There are certain places, few and far between, where rocks that have been through subduction zones are forced up to the surface. By analysing these rocks we can deduce the depths and pressures at which the subduction happened and build up a picture of how subduction works – and maybe how megathrust earthquakes are triggered.

    Prof. Alvaro has just demonstrated the first successful application of a combination of x-ray crystallography and a technique called Raman spectroscopy with a sample of a rock from a location known as the Mir pipe in Siberia. Image credit – Vladimir, licensed under CC BY 3.0.

    Crystal

    It usually works like this. Geologists find a rock made of a mineral with what’s called an inclusion crystal inside it. This inclusion was trapped inside the mineral as two subducting plates squeezed each other at great depth, perhaps 100 km or more below the surface. It will have a particular crystal structure – a specific, repeating spatial arrangement of atoms – which depends on the pressure it experienced as it formed. The crystal can reveal the pressure the inclusion was exposed to and hence the depth at which it formed.

    The trouble is, this is an oversimplification. It only holds if the inclusion is cube-shaped – and it almost never is. ‘This whole idea of pressure equals depth – we all know this might be incorrect,’ says Prof. Alvaro. ‘The natural question is, okay, but by how much are we wrong?’ That’s what he decided to find out in his project TRUE DEPTHS.

    The plan was simple in principle. Prof. Alvaro wanted to measure the strain experienced by the crystal while still trapped inside the mineral. If he could understand the tiny displacement of the atoms from their usual positions in a typical, unpressurised crystal structure, that would provide a better measure of the stress applied by the surrounding rock as the crystal was formed and so a more accurate measure of the depth at which it was formed. To study the atomic structure, he uses a combination of x-ray crystallography and a technique called Raman spectroscopy.
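
    As a rough illustration of the underlying principle (and not Prof. Alvaro’s actual TRUE DEPTHS workflow), the toy calculation below converts a measured residual strain of a trapped quartz inclusion into a residual pressure using a single, isotropic bulk modulus. The numbers and the isotropic simplification are assumptions made purely for the sake of the example.

```python
# Toy illustration of inclusion barometry: convert a measured residual
# volumetric strain of a trapped quartz inclusion into a residual pressure.
# This is a simplified, isotropic sketch for illustration only -- the real
# analysis uses full anisotropic elastic tensors and entrapment conditions,
# which are not reproduced here.

K_QUARTZ_GPA = 37.0  # approximate bulk modulus of alpha-quartz (illustrative value)

def residual_pressure(strain_xx: float, strain_yy: float, strain_zz: float) -> float:
    """Residual pressure (GPa) from the three normal strains of the inclusion.

    Negative strains mean the inclusion is compressed relative to a free crystal.
    """
    volumetric_strain = strain_xx + strain_yy + strain_zz
    return -K_QUARTZ_GPA * volumetric_strain  # P = -K * (dV/V), linear elasticity

if __name__ == "__main__":
    # Hypothetical strains, e.g. derived from Raman peak shifts or X-ray data
    p_inc = residual_pressure(-0.004, -0.004, -0.005)
    print(f"Residual inclusion pressure ~ {p_inc:.2f} GPa")
```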

    Prof. Alvaro has just demonstrated the first successful application of his techniques. He looked at a sample of a rock from a location known as the Mir pipe in Siberia. This is a shaft of molten kimberlite rock that rose very fast from huge depths. (We get most of our diamonds from kimberlite pipes like this, and indeed, Mir has been mined extensively.) Prof. Alvaro looked at garnet rocks with tiny quartz inclusions inside that were brought up. ‘The kimberlite is the elevator that brings it to the surface,’ he said.

    Trigger

    By measuring the strain on the inclusions, he could confirm they formed at a pressure of 1.5 gigapascals (about 15,000 times that found at Earth’s surface) and a temperature of 850°C. This isn’t entirely surprising, but it is the first proof that Prof. Alvaro’s technique really works. He is now looking to make more measurements and build a library of examples.

    He also wonders, more speculatively, if it’s possible that the formation and deformation of the inclusions might act as the very first trigger of megathrust earthquakes. The idea would be that these tiny changes set off cracks in larger rocks that eventually lead a fault to slip out of place. Prof. Alvaro is planning to explore this further.

    ‘No one knows what the initial trigger is, the thing that triggers the first slip,’ said Prof. Alvaro. ‘We started thinking – and maybe it’s a completely crazy idea – that maybe it’s these inclusions. A cluster of them, maybe subject to an instantaneous phase change and so a change in volume. Maybe that could be the very first trigger.’

    ________________________________________________________________________________________

    Earthquake Alert

    The Earthquake Network project is a research project which aims to develop and maintain a crowdsourced, smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect the earthquake waves using the on-board accelerometers. When an earthquake is detected, an earthquake warning is issued in order to alert the population not yet reached by the damaging waves of the earthquake.

    The project started on January 1, 2013 with the release of the Android application of the same name, Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide better understanding of earthquakes, give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at CalTech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).
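
    This article does not spell out QCN’s trigger logic, but a common way to flag “strong new motions” in a stream of accelerometer samples is a short-term-average/long-term-average (STA/LTA) ratio. The sketch below is a generic illustration of that idea, with made-up window lengths and threshold, not QCN’s actual code.

```python
# Generic STA/LTA trigger sketch for an accelerometer stream.
# Not QCN's actual algorithm -- just a common way to flag "strong new motion".
from collections import deque

def sta_lta_triggers(samples, sta_len=50, lta_len=1000, threshold=4.0):
    """Yield sample indices where the short-term average of |acceleration|
    exceeds `threshold` times the long-term average."""
    sta, lta = deque(maxlen=sta_len), deque(maxlen=lta_len)
    for i, a in enumerate(samples):
        mag = abs(a)
        sta.append(mag)
        lta.append(mag)
        if len(lta) == lta_len:  # wait until the long window is full
            ratio = (sum(sta) / sta_len) / ((sum(lta) / lta_len) or 1e-12)
            if ratio > threshold:
                yield i  # a "trigger" a client would report to the server

if __name__ == "__main__":
    quiet = [0.001] * 3000          # synthetic background noise
    shaking = [0.05] * 200          # synthetic strong motion
    print(list(sta_lta_triggers(quiet + shaking))[:5])  # first trigger indices
```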

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC is a leader in the fields of distributed computing, grid computing and citizen cyberscience. BOINC is more properly the Berkeley Open Infrastructure for Network Computing, developed at UC Berkeley.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. The Quake-Catcher Network (QCN) links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    QCN Quake-Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U. S. Geological Survey (USGS) along with a coalition of State and university partners is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long term funding must be secured before the system can begin sending general public notifications, however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.
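
    The warning time at a given site essentially comes down to the gap between P-wave and S-wave travel times, minus the time needed to detect the quake and publish the alert. The back-of-the-envelope sketch below assumes typical crustal wave speeds of roughly 6 km/s (P) and 3.5 km/s (S) and an arbitrary 5-second processing delay; it is illustrative only and not how ShakeAlert computes its alerts.

```python
# Back-of-the-envelope warning time: S-wave arrival minus P-wave-based alert.
# Wave speeds and processing delay are illustrative assumptions, not ShakeAlert values.

VP_KM_S = 6.0   # typical crustal P-wave speed
VS_KM_S = 3.5   # typical crustal S-wave speed

def warning_time_s(distance_km: float, processing_delay_s: float = 5.0) -> float:
    """Seconds between the alert (issued after P-wave detection plus processing)
    and the arrival of strong S-wave shaking at `distance_km` from the epicentre."""
    t_p = distance_km / VP_KM_S
    t_s = distance_km / VS_KM_S
    return max(0.0, t_s - t_p - processing_delay_s)

if __name__ == "__main__":
    for d in (20, 50, 100, 200):
        print(f"{d:>4} km: ~{warning_time_s(d):4.1f} s of warning")
```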

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California, joined by Oregon and Washington in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers, and allows for automatic fail-over if a connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan

    QuakeAlertUSA


    About Early Warning Labs, LLC

    Early Warning Labs, LLC (EWL) is an Earthquake Early Warning technology developer and integrator located in Santa Monica, CA. EWL is partnered with industry leading GIS provider ESRI, Inc. and is collaborating with the US Government and university partners.

    EWL is investing millions of dollars over the next 36 months to complete the final integration and delivery of Earthquake Early Warning to individual consumers, government entities, and commercial users.

    EWL’s mission is to improve, expand, and lower the costs of the existing earthquake early warning systems.

    EWL is developing a robust cloud server environment to handle low-cost mass distribution of these warnings. In addition, Early Warning Labs is researching and developing automated response standards and systems that allow public and private users to take pre-defined automated actions to protect lives and assets.

    EWL has an existing beta R&D test system installed at one of the largest studios in Southern California. The goal of this system is to stress test EWL’s hardware, software, and alert signals while improving latency and reliability.

    Earthquake Early Warning Introduction

    The United States Geological Survey (USGS), in collaboration with state agencies, university partners, and private industry, is developing an earthquake early warning system (EEW) for the West Coast of the United States called ShakeAlert. The USGS Earthquake Hazards Program aims to mitigate earthquake losses in the United States. Citizens, first responders, and engineers rely on the USGS for accurate and timely information about where earthquakes occur, the ground shaking intensity in different locations, and the likelihood of future significant ground shaking.

    The ShakeAlert Earthquake Early Warning System recently entered its first phase of operations. The USGS working in partnership with the California Governor’s Office of Emergency Services (Cal OES) is now allowing for the testing of public alerting via apps, Wireless Emergency Alerts, and by other means throughout California.

    ShakeAlert partners in Oregon and Washington are working with the USGS to test public alerting in those states sometime in 2020.

    ShakeAlert has demonstrated the feasibility of earthquake early warning, from event detection to producing USGS-issued ShakeAlerts®, and will continue to undergo testing and improve over time. In particular, robust and reliable alert delivery pathways for automated actions are currently being developed and implemented by private industry partners for use in California, Oregon, and Washington.

    Earthquake Early Warning Background

    The objective of an earthquake early warning system is to rapidly detect the initiation of an earthquake, estimate the level of ground shaking intensity to be expected, and issue a warning before significant ground shaking starts. A network of seismic sensors detects the first energy to radiate from an earthquake, the P-wave energy, and the location and the magnitude of the earthquake is rapidly determined. Then, the anticipated ground shaking across the region to be affected is estimated. The system can provide warning before the S-wave arrives, which brings the strong shaking that usually causes most of the damage. Warnings will be distributed to local and state public emergency response officials, critical infrastructure, private businesses, and the public. EEW systems have been successfully implemented in Japan, Taiwan, Mexico, and other nations with varying degrees of sophistication and coverage.

    Earthquake early warning can provide enough time to:

    Instruct students and employees to take a protective action such as Drop, Cover, and Hold On
    Initiate mass notification procedures
    Open fire-house doors and notify local first responders
    Slow and stop trains and taxiing planes
    Install measures to prevent/limit additional cars from going on bridges, entering tunnels, and being on freeway overpasses before the shaking starts
    Move people away from dangerous machines or chemicals in work environments
    Shut down gas lines, water treatment plants, or nuclear reactors
    Automatically shut down and isolate industrial systems

    However, earthquake warning notifications must be transmitted without requiring human review, and response actions must be automated, as total warning times are short and depend on the distance from the epicenter and on varying soil densities.

    ________________________________________________________________________________________


    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 11:17 am on April 22, 2021 Permalink | Reply
    Tags: "Paris to Berlin in an hour by train? Here’s how it could happen", Horizon - The EU Research and Innovation Magazine, Maglev

    From Horizon-The EU Research and Innovation Magazine : “Paris to Berlin in an hour by train? Here’s how it could happen” 


    From Horizon-The EU Research and Innovation Magazine

    22 April 2021
    Tom Cassauwers

    The hyperloop is ready for a breakthrough, and Zeleros is one of the concepts in the running. The Spanish start-up has created a unique technology thanks to its approach to higher-pressure tubes. Artist’s impression – Zeleros hyperloop.

    The hyperloop is what you get when you take a magnetic levitation train and put it into an airless tube. The lack of resistance allows the train, in theory, to achieve unseen speeds, a concept that is edging closer and closer to reality – and could provide a greener alternative to short-haul air travel.

    In November of 2020 two people were shooting through an airless tube at 160 km/h in the desert outside of Las Vegas. This wasn’t a ride invented by a casino or theme park; it was the first crewed ride of a hyperloop by the company Virgin Hyperloop. The ride only lasted 15 seconds, and the speeds they achieved were a far cry from the 1200 km/h they promise they will one day reach, but it represented a step forward.

    The hyperloop might be the future of transportation for medium-length journeys. It could out-compete high-speed rail, and at the same time operate at speeds comparable to aviation, but at a fraction of its environmental and energy costs. It’s a concept which start-ups and researchers have eagerly adopted, including several teams across Europe.

    Open design

    The idea originated with the US entrepreneur Elon Musk, associated with companies like SpaceX and Tesla. After he mentioned it several times in public, a team of SpaceX and Tesla engineers released an open concept in 2013. This initial idea then spawned a range of companies and even student teams, trying to design their own versions. Among them were several students in the Spanish city of Valencia.

    ‘We started in 2015 after Elon Musk’s announcement, when we were still students’, said Juan Vicén Balaguer, co-founder and chief marketing officer of the hyperloop start-up Zeleros, which today employs more than 50 people and raised around €10 million in funding. ‘We’ve been working on this technology for five years, and it can be a real alternative mode of transportation.’

    Yet the idea behind the hyperloop is older than Elon Musk, and it’s similar to an earlier idea called a vactrain or vacuum tube train. A comparable concept was already proposed by 19th century author Michel Verne, son of Jules, and has since then been periodically brought up by science-fiction writers and technologists. Now, however, the hyperloop seems to be getting ready for a breakthrough, and Zeleros is one of the concepts in the running.

    ‘You need to remove the air from the front of the vehicle. If not, the craft would stop. Which is why we use a compressor system at the front of the vehicle’, explained Juan Vicén Balaguer, co-founder and chief marketing officer of Zeleros. Artist’s impression – Zeleros hyperloop.

    Higher-pressure tube

    What makes their technology unique is their approach to the tube. ‘Each company uses a different level of pressure,’ said Vicén. ‘Some are going for space pressure levels. Which means that the atmosphere in the tube is similar to space. It contains almost zero air.’

    This state would allow for very fast speeds, since the train would face almost no friction. Yet it comes with a range of practical issues. It’s very difficult and expensive to achieve and maintain this level of pressure for long stretches of tube. Safety would also be an issue: if something happens to the hull of the train, passengers would be exposed to dangerous vacuum conditions.

    That’s why Zeleros is aiming for higher-pressure tubes. ‘It would be similar to the pressure seen in aviation,’ said Vicén. The pressure in the tubes proposed by Zeleros would be around 100 millibars. This, in turn, allows them to copy safety systems from aircraft, such as the oxygen masks that drop from overhead cabins. This design choice also makes their tubes cheaper to build, thereby reducing infrastructure costs. Yet it also means their trains face more air friction when they glide through the tube, which they have to compensate for in other ways.

    ‘You need to remove the air from the front of the vehicle,’ said Vicén. ‘If not, the craft would stop. Which is why we use a compressor system at the front of the vehicle. If there was zero pressure, we wouldn’t need this. But it’s a balance between economics and efficiency.’

    At the front of the train is a compressor, which looks like the front of an airliner engine and which sucks in air and lets it out at the rear, providing propulsion for the craft. A so-called linear motor is also located at key parts of the track, like the start, to give the train its initial propulsion. From there it self-propels along the track, with magnets at the top of the vehicle attracting it to the top of the tube and making it levitate. This proposed craft would carry between 50 and 200 passengers, and would reach up to 1000 km/h. By comparison, the cruising speed of a short-haul passenger aircraft is about 800 km/h.
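
    The pressure choice matters because aerodynamic drag scales with air density, which at a fixed temperature scales with pressure. The sketch below compares the drag on a notional capsule at 100 millibars with sea-level conditions; the drag coefficient, frontal area and speed are invented illustrative values, not Zeleros figures.

```python
# Illustrative drag comparison: open air (~1013 mbar) vs a 100 mbar tube.
# Drag coefficient, frontal area and speed are made-up values for illustration.

RHO_SEA_LEVEL = 1.225        # kg/m^3 at ~1013 mbar and 15 C
P_SEA_LEVEL_MBAR = 1013.25

def drag_newtons(speed_kmh: float, pressure_mbar: float,
                 cd: float = 0.25, frontal_area_m2: float = 10.0) -> float:
    """Quadratic drag F = 0.5 * rho * Cd * A * v^2, with density scaled
    linearly with pressure (isothermal assumption)."""
    rho = RHO_SEA_LEVEL * (pressure_mbar / P_SEA_LEVEL_MBAR)
    v = speed_kmh / 3.6
    return 0.5 * rho * cd * frontal_area_m2 * v ** 2

if __name__ == "__main__":
    v = 1000  # km/h, the speed quoted for the Zeleros concept
    print(f"Open air : {drag_newtons(v, 1013.25) / 1000:8.1f} kN")
    print(f"100 mbar : {drag_newtons(v, 100.0) / 1000:8.1f} kN")
```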

    Outcompete air

    But why do we need this in the first place? Shouldn’t we just invest more in our regular, high-speed trains? It’s more complicated than that, says Professor María Luisa Martínez Muneta from the Technical University of Madrid [Universidad Politécnica de Madrid] (ES), where she coordinates the HYPERNEX research project. HYPERNEX connects hyperloop start-ups, like Zeleros, with universities, railway companies and regulators, in order to accelerate the technology’s development in Europe.

    ‘Hyperloops face today’s greatest transportation demands: reduction of travel time and of environmental impact,’ said Prof. Martínez Muneta.

    Because of its limited speed – generally around 300-350 km/h – high-speed rail quickly becomes a bad choice for longer range travel if you want to get somewhere in a hurry. This gap is filled by short and medium-distance air travel, but aircraft emit a high volume of emissions compared to trains and are not always convenient, as airports can be located away from city centres.

    A hyperloop could solve the problem. ‘This mode of transport is focused on covering routes between 400 and 1500 kilometres,’ said Prof. Martínez Muneta. In this way a hyperloop would replace most shorter aeroplane travel, with much less of an environmental impact. ‘The hyperloop produces zero direct emissions as it is 100% electrical, while achieving higher speeds and therefore shorter travel times,’ she said.

    With a speed of 1000 km/h, the hyperloop could be a greener and faster alternative to air travel. Image credit – Horizon.

    Labs and regulation

    Bringing this vision into reality will likely take a decade. Vicén from Zeleros predicts that the first commercial passenger routes will come online around 2030, with hyperloops focused on cargo arriving a few years earlier, around 2025-2027.

    One key issue in this timeframe is regulation. ‘The European Union is the first region that has a committee that promotes regulation and standardisation of hyperloops,’ said Vicén, referring to the 2020 founding of a joint technical committee on hyperloops by the European Committee for Standardization and the European Committee for Electrotechnical Standardization.

    According to Zeleros, this is an important step if hyperloops want to become commercially viable. These craft would operate at hitherto unseen speeds, with new safety characteristics like airless tubes. This would in turn require new regulations and standardisations, for example on what to do if the capsule depressurised.

    The pressure in the tubes proposed by Zeleros would be around 100 millibars, which allows safety systems to be copied from aircraft, such as the oxygen masks that drop from overhead cabins. Artist’s impression – Zeleros hyperloop.

    The technology also remains somewhat untested, although real-world experiments are happening more often. Vicén mentions how they have already tested their technology in computer simulations, where they can model things like aerodynamic conditions and electromagnetic dynamics. They also use so-called physical demonstrators or prototypes that test in laboratory conditions how magnetism is affected by high speeds, for example.

    Nevertheless, they are aching to move from the lab to the field. Right now, they are planning to build a 3-km test track at a still-to-be-determined location in Spain, where by 2023 they hope to demonstrate their technology, and they are working with the Port of Valencia to study the use of hyperloops in transporting freight.

    Hyperloops might still be a few years out, but we’ll likely see more of them in the future.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

     
  • richardmitnick 12:06 pm on April 19, 2021 Permalink | Reply
    Tags: "How scientists are ‘looking’ inside asteroids", Horizon - The EU Research and Innovation Magazine   

    From Horizon-The EU Research and Innovation Magazine : “How scientists are ‘looking’ inside asteroids” 


    From Horizon The EU Research and Innovation Magazine

    19 April 2021
    Tereza Pultarova

    The shape of asteroids such as 243 Ida can reveal information about what they’re made of, which can, in turn, tell us more about the formation of the solar system. Image credit – National Aeronautics and Space Administration(US)/NASA-JPL/Caltech(US)/USGS – U.S. Geological Survey.

    Asteroids can pose a threat to life on Earth but are also a valuable source of resources to make fuel or water to aid deep space exploration. Devoid of geological and atmospheric processes, these space rocks provide a window onto the evolution of the solar system. But to really understand their secrets, scientists must know what’s inside them.

    Only four spacecraft have ever landed on an asteroid – most recently in October 2020 – but none has peered inside one. Yet understanding the internal structures of these cosmic rocks is crucial for answering key questions about, for example, the origins of our own planet.

    ‘Asteroids are the only objects in our solar system that are more or less unchanged since the very beginning of the solar system’s formation,’ said Dr Fabio Ferrari, who studies asteroid dynamics at the University of Bern [Universität Bern] (CH). ‘If we know what’s inside asteroids, we can understand a lot about how planets formed, how everything that we have in our solar system has formed and might evolve in the future.’

    There are also more practical reasons for knowing what’s inside an asteroid, such as mining for materials to facilitate human exploration of other celestial bodies, or defending against an Earth-bound rock.

    NASA’s upcoming Double Asteroid Redirection Test (DART) mission, expected to launch later this year, will crash into Dimorphos, an asteroid moon about 160 m in diameter, in 2022, with the aim of changing its orbit.

    The experiment will demonstrate for the first time whether humans can deflect a potentially dangerous asteroid.

    But scientists have only rough ideas about how Dimorphos will respond to the impact as they know very little about both this asteroid moon, and its parent asteroid, Didymos.
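
    The basic physics of a kinetic impactor is momentum transfer: the spacecraft’s momentum, amplified by a factor (often called beta) to account for ejecta thrown off the surface, is shared with the far heavier moon. The sketch below plugs in rough public figures for DART and Dimorphos (a spacecraft of a few hundred kilograms, an impact speed of about 6.6 km/s and a moon mass of order 5 × 10^9 kg) purely as illustrative assumptions; beta itself is one of the unknowns the experiment is designed to pin down.

```python
# Rough kinetic-impactor momentum-transfer estimate (illustrative numbers only).

def delta_v_mm_s(m_spacecraft_kg: float, v_impact_km_s: float,
                 m_target_kg: float, beta: float = 1.0) -> float:
    """Velocity change of the target in mm/s.
    beta > 1 accounts for extra momentum carried away by ejecta."""
    dv_m_s = beta * m_spacecraft_kg * (v_impact_km_s * 1000.0) / m_target_kg
    return dv_m_s * 1000.0

if __name__ == "__main__":
    # Approximate, publicly quoted orders of magnitude -- not mission-official values.
    for beta in (1.0, 2.0, 4.0):
        dv = delta_v_mm_s(m_spacecraft_kg=570, v_impact_km_s=6.6,
                          m_target_kg=5e9, beta=beta)
        print(f"beta = {beta}: delta-v ~ {dv:.2f} mm/s")
```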

    To better address such questions, scientists are investigating how to remotely tell what’s inside an asteroid and discern its type.

    Types

    There are many types of asteroids. Some are solid blocks of rock, rugged and sturdy, others are conglomerates of pebbles, boulders and sand, products of many orbital collisions, held together only by the power of gravity. There are also rare metallic asteroids, heavy and dense.

    ‘To deflect the denser monolithic asteroids, you would need a bigger spacecraft, you would need to travel faster,’ said Dr Hannah Susorney, a research fellow in planetary science at the University of Bristol (UK). ‘The asteroids that are just bags of material – we call them rubble piles – can, on the other hand, blow apart into thousands of pieces. Those pieces could by themselves become dangerous.’

    Dr Susorney is exploring what surface features of an asteroid can reveal about the structure of its interior as part of a project called EROS.

    This information could be useful for future space mining companies, who would want to know as much as possible about a promising asteroid before investing in a costly prospecting mission, as well as for learning more about potential threats.

    ‘There are thousands of near-Earth asteroids, those whose trajectories could one day intersect with that of the Earth,’ she said. ‘We have only visited a handful of them. We know close to nothing about the vast majority.’

    During the fourth ever landing on an asteroid, Bennu was mapped thanks to a mosaic of images collected by NASA’s OSIRIS-REx spacecraft. Peering inside an asteroid is the next crucial step. Image credit – NASA Goddard Space Flight Center(US)/University of Arizona (US).

    Topography

    Dr Susorney is trying to create detailed topography models of two of the most well-studied asteroids – Itokawa (the target of the 2005 Japanese Hayabusa 1 mission) and Eros (mapped in detail by the NEAR Shoemaker space probe in the late 1990s).

    Itokawa. Credit: Japan Aerospace Exploration Agency (JAXA) [国立研究開発法人宇宙航空研究開発機構] (JP).

    Hayabusa. Credit: Japan Aerospace Exploration Agency (JAXA) [国立研究開発法人宇宙航空研究開発機構] (JP).

    ‘The surface topography can actually tell us a lot,’ Dr Susorney said. ‘If you have a rubble pile asteroid, such as Itokawa, which is essentially just a bag of fluff, you cannot expect very steep slopes there. Sand cannot be held up into an infinite slope unless it’s supported. A solid cliff can. The rocky monolithic asteroids, such as Eros, do tend to have much more pronounced topographical features, much deeper and steeper craters.’

    Susorney wants to take the high-resolution models derived from spacecraft data and find parameters in them that could then be used in the much lower resolution asteroid shape models created from ground-based radar observations.

    ‘The difference in the resolution is quite substantial,’ she admits. ‘Tens to hundreds of metres in the high-res spacecraft models and kilometres from ground-based radar measurements. But we have found that, for example, the slope distribution gives us a hint. How much of the asteroid is flat and how much is steep?’
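
    One way to picture that slope distribution: take a gridded topography model, compute the local slope angle in each cell, and histogram the result. The sketch below does this for a synthetic height grid; it is a generic illustration, not the EROS project’s actual pipeline.

```python
# Slope-distribution sketch for a gridded topography (digital elevation) model.
# Synthetic data and a generic method -- not the EROS project's actual pipeline.
import numpy as np

def slope_distribution(heights_m: np.ndarray, cell_size_m: float, bins: int = 18):
    """Return a histogram of slope angles (degrees) for a 2D height grid."""
    dzdy, dzdx = np.gradient(heights_m, cell_size_m)
    slopes_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    return np.histogram(slopes_deg, bins=bins, range=(0, 90))

if __name__ == "__main__":
    # Synthetic terrain: gentle undulations plus one steep crater-like pit
    x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
    terrain = (30 * np.sin(6 * x) * np.cos(4 * y)
               - 80 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.002))
    counts, edges = slope_distribution(terrain, cell_size_m=10.0)
    for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
        print(f"{lo:4.1f}-{hi:4.1f} deg: {c}")
```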

    Coloured topographical maps from Dr Susorney show Eros (left), a rocky monolithic asteroid, as having steeper craters than Itokawa (right), a rubble pile asteroid. Image credit – Hannah Susorney.

    Dr Ferrari is working with the team preparing the DART mission. As part of a project called GRAINS, he developed a tool that enables modelling of the interior of Dimorphos, the impact target, as well as other rubble pile asteroids.

    ‘We expect that Dimorphos is a rubble pile because we think that it formed from matter ejected by the main asteroid, Didymos, when it was spinning very fast,’ Dr Ferrari said. ‘This ejected matter then re-accreted and formed the moon. But we have no observations of its interior.’

    An aerospace engineer by education, Dr Ferrari borrowed a solution for the asteroid problem from the engineering world, from a discipline called granular dynamics.

    ‘On Earth, this technique can be used to study problems such as sand piling or various industrial processes involving small particles,’ Dr Ferrari said. ‘It’s a numerical tool that allows us to model the interaction between the different particles (components) – in our case, the various boulders and pebbles inside the asteroid.’
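
    Discrete-element (granular dynamics) codes track every particle and the contact forces between touching neighbours. The minimal sketch below shows the basic idea with circular particles and a simple spring-dashpot contact law; the GRAINS tool itself handles irregular, non-spherical fragments and the mutual gravity of the rubble, which are not reproduced here.

```python
# Minimal discrete-element (granular dynamics) sketch: circular particles pulled
# toward a common centre (a crude stand-in for self-gravity) while spring-dashpot
# contact forces stop them from overlapping, so they settle into a loose pile.
# Illustration of the method only, not the GRAINS tool.
import numpy as np

N, R, DT, STEPS = 30, 0.5, 1e-3, 3000
K_SPRING, DAMP, PULL = 5e3, 5.0, 0.5     # contact stiffness, contact damping, central pull

rng = np.random.default_rng(0)
pos = rng.uniform(-4, 4, size=(N, 2))    # 2D positions of particle centres
vel = np.zeros((N, 2))

for _ in range(STEPS):
    forces = -PULL * pos - 0.2 * vel     # weak pull toward the origin plus global drag
    for i in range(N):                   # pairwise contacts, O(N^2) is fine for a demo
        for j in range(i + 1, N):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2 * R - dist
            if overlap > 0 and dist > 1e-9:            # particles are touching
                n = d / dist
                rel_v = float(np.dot(vel[j] - vel[i], n))
                f = (K_SPRING * overlap - DAMP * rel_v) * n  # spring-dashpot normal force
                forces[i] -= f
                forces[j] += f
    vel += forces * DT                   # unit masses for simplicity
    pos += vel * DT

print("pile radius ~", np.max(np.linalg.norm(pos, axis=1)))
```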

    Rubble pile

    The researchers are modelling various shapes and sizes, various compositions of the boulders and pebbles, the gravitational interactions and the friction between them. They can run thousands of such simulations and then compare them with surface data about known asteroids to understand rubble pile asteroids’ behaviour and make-up.

    ‘We can look at the external shape, study various features on the surface, and compare that with our simulations,’ Dr Ferrari said. ‘For example, some asteroids have a prominent equatorial bulge,’ he said, referring to the thickening around the equator that can appear as a result of the asteroid spinning.

    In the simulations, the bulge might appear more prominent for some internal structures than others.

    For the first time, Dr Ferrari added, the tool can work with non-spherical elements, which considerably improves accuracy.

    ‘Spheres behave very differently from angular objects,’ he said.

    The model suggests that in the case of Dimorphos, the DART impact will create a crater and throw up a lot of material from the asteroid’s surface. But there are still many questions, particularly the size of the crater, according to Dr Ferrari.

    ‘The crater might be as small as ten metres but also as wide as a hundred metres, taking up half the size of the asteroid. We don’t really know,’ said Dr Ferrari. ‘Rubble piles are tricky. Because they are so loose, they might well just absorb the impact.’

    No matter what happens on Dimorphos, the experiment will provide a treasure trove of data for refining future simulations and models. We can see whether the asteroid behaves as we expected and learn how to make more accurate predictions for future missions that lives on Earth may very well depend on.

    The solar system’s asteroid belt contains C-type asteroids, which likely consist of clay and silicate rocks, M-type, which are composed mainly of metallic iron, and S-type, which are formed of silicate materials and nickel-iron. Image credit – Horizon.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 9:32 am on January 22, 2021 Permalink | Reply
    Tags: "Unravelling the when where and how of volcanic eruptions", A project called FEVER, Horizon - The EU Research and Innovation Magazine, It’s still difficult to predict when and how these eruptions will happen or how they’ll unfold., Knowing where a volcano will erupt from is one thing but knowing when it will do so is a different matter., The VOLCAPSE project, There are about 1500 potentially active volcanoes worldwide and about 50 eruptions occur each year.

    From Horizon-The EU Research and Innovation Magazine: “Unravelling the when where and how of volcanic eruptions” 

    1

    From Horizon-The EU Research and Innovation Magazine

    20 January 2021
    Sandrine Ceurstemont

    When Villarrica erupted in 2015, the volcano spewed ash and lava 1,000 m into the air. Credit: Warehouse of Images / Shutterstock.

    There are about 1,500 potentially active volcanoes worldwide and about 50 eruptions occur each year. But it’s still difficult to predict when and how these eruptions will happen or how they’ll unfold. Now, new insights into the physical processes inside volcanoes are giving scientists a better understanding of their behaviour, which could help protect the 1 billion people who live close to volcanoes.

    Dome-building volcanoes, which are frequently active, are among the most dangerous types of volcanoes since they are known for their explosive activity. This type of volcano often erupts by first quietly producing a dome-shaped extrusion of thick lava at its summit which is too viscous to flow. When it eventually becomes destabilised, it breaks off and produces fast-moving currents of hot gas, solidified lava pieces and volcanic ash, called pyroclastic clouds, that flow down the sides of the volcano at the speed of a fast train.

    ‘The hazards associated with them can be very spontaneous and hard to predict,’ said Professor Thomas Walter, a professor of volcanology and geohazards at the University of Potsdam in Germany. ‘That’s why it’s so important to understand this phenomenon of lava domes.’

    Little is known about the behaviour of lava domes, partly because there isn’t much data available. Prof. Walter and his colleagues want to better understand how they form, whether they can vary significantly in shape and what their internal structure is like. Over the last five years, through a project called VOLCAPSE, they have been using innovative techniques to monitor lava domes by using high resolution radar data captured by satellites as well as close-up views from cameras set up near volcanoes.

    ‘Pixel by pixel, we could determine how the shape, morphology and structure of these lava domes changed,’ said Prof. Walter. ‘We compared (the webcam images) to satellite radar observations.’

    The VOLCAPSE project monitors a few dome-building volcanoes around the world using various techniques to better understand this explosive type of volcano. Credit: Thomas Walter/VOLCAPSE.

    Time-lapse

    The project focussed on a few dome-building volcanoes such as Colima in Mexico, Mount Merapi in Indonesia, Bezymianny in Russia, and Mount Lascar and Lastarria in Chile. It partly involved visiting them and installing instruments such as time-lapse cameras powered by solar panels that could be controlled remotely. If a lava dome started to form, for example, the team could tweak the settings so that it captured higher resolution images more often.

    Due to high altitudes and harsh weather conditions, setting up the cameras was more challenging than expected. ‘It was a sharp learning curve, but also trial and error, because nobody could tell us what to expect at these volcanoes since it was never done before,’ said Prof. Walter.

    During their visits, the team also used drones. These would fly over a lava dome and capture high resolution images from different perspectives, which could be used to create detailed 3D models. Temperature and gas sensors on the drones provided additional information.

    Prof. Walter and his colleagues used the data to create computer simulations, such as how the growth of lava domes changes from eruption to eruption. They found that new lava domes don’t always form in the same location: a lava dome may form at the summit of a volcano during one eruption while the next time it builds up on one of its flanks. The team was puzzled, since a conduit inside a volcano brings magma to the surface during an eruption, which would mean that it changes its orientation between one eruption and the next. ‘That was very surprising for us,’ said Prof. Walter.

    Stress field

    They were able to explain how this happens by examining the distribution of internal forces – or stress field – in a volcano. When magma is expelled during an eruption, it changes how the forces are distributed inside and causes a reorientation of the conduit.

    The team also found that there was a systematic pattern to how the stress field changed, meaning that by studying the position of lava domes they could estimate where they had formed in the past and where they would appear in the future. This could help determine which areas near a volcano are likely to be most affected by eruptions yet to come.

    ‘This is a very cool result for predictive research if you want to understand where the lava dome is going to extrude (or collapse) from in the future,’ he said.

    Fumaroles are a telltale sign of an active volcano, releasing volcanic gases into the atmosphere. Credit: Thomas Walter/VOLCAPSE.

    Knowing where a volcano will erupt from is one thing, but knowing when it will do so is a different matter and the physical factors that govern this are also not well understood. Although there is a relationship between how often eruptions occur and their size, with big eruptions occurring very rarely compared to smaller ones, a lack of reliable data makes it hard to examine the processes that control eruption frequency and magnitude.

    ‘When you go back in the geological record, (the traces of) many eruptions disappear because of erosion,’ said Professor Luca Caricchi, a professor of petrology and volcanology at the University of Geneva in Switzerland.

    Furthermore, it’s not possible to access these processes directly since they occur deep down beneath a volcano, at depths of 5 to 60 kilometres. Measuring the chemistry and textures of magma expelled during an eruption can provide some clues about the internal processes that led to the event. And magma chambers can sometimes be investigated when they pop up at the surface of the Earth due to tectonic processes. Extracting information from specific time periods is still difficult though since the ‘picture’ you get is like a movie where all the frames are collapsed into a single shot. ‘It’s complicated to retrieve the evolution in time – what really happened during the movie,’ said Prof. Caricchi.

    Prof. Caricchi and his colleagues are using a novel approach to forecast the recurrence rate of eruptions. Previous predictions were typically based on statistical analyses of the geological records of a volcano. But through a project called FEVER the team is aiming to combine this method with physical modelling of the processes responsible for the frequency and size of eruptions. A similar approach has been used to estimate when earthquakes and floods will occur again.

    Using physical models should especially be useful to make predictions for volcanoes where there is little data available. ‘To extrapolate our findings from a place where we know a lot, like in Japan, you need a physical model that tells you why the frequency-magnitude relationship changes,’ said Prof. Caricchi.

    To create their model, the team have incorporated variables that affect pressure in the magma reservoir or the rate of accumulation of magma at depth below the volcano. The viscosity of the crust under the volcano and the size of the magma reservoir, for example, play a role. They have performed over a million simulations using all the possible combinations of values that can occur. The relationship between frequency and magnitude they obtained from their model was similar to what was estimated by using volcanic records so they think they were able to capture the fundamental processes involved.

    ‘It’s sort of a fight between the amount of magma and the properties of the crust,’ said Prof. Caricchi. ‘They are the two big players that fight each other to finally lead to this relationship.’
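
    The flavour of such an exercise can be conveyed with a toy Monte Carlo: sample the ‘two big players’ over plausible ranges, let each combination yield an eruption size through some rule, and read off the resulting frequency-magnitude behaviour. The rule used below is invented purely for illustration and is not the FEVER project’s physical model.

```python
# Toy Monte Carlo over magma-supply and crust-property combinations. The rule that
# turns a parameter combination into an eruption size is invented for illustration
# and is NOT the FEVER project's physical model.
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000

supply = 10 ** rng.uniform(-4, -1, N)     # magma supply rate (arbitrary units, log-uniform)
viscosity = 10 ** rng.uniform(18, 21, N)  # crustal viscosity in Pa*s (log-uniform)
reservoir = 10 ** rng.uniform(0, 3, N)    # magma reservoir size (arbitrary units)

# Invented rule: larger reservoirs under stiffer crust store more magma before
# failure, so they produce rarer but larger eruptions.
erupted_volume = reservoir * (viscosity / 1e19) ** 0.5 * supply ** 0.2
magnitude = np.log10(erupted_volume)

# Cumulative frequency-magnitude curve: how many simulated eruptions reach at
# least each magnitude. Big events come out progressively rarer, as observed.
for m in np.arange(-1.0, 3.5, 0.5):
    print(f"M >= {m:4.1f}: {int(np.sum(magnitude >= m)):>8d} of {N} runs")
```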

    Models that can better predict future eruptions could protect the lives of the 1 billion people who live close to volcanoes. Credit: Thomas Walter/VOLCAPSE.

    Tectonic plates

    However, the team also found that the relationship between the size and frequency of eruptions changes across volcanoes in different regions. Prof. Caricchi thinks this is due to differences in the geometry of tectonic plates in each area.

    The tectonic plates of the world, as mapped in 1996. Credit: USGS.

    “We can see that the rate at which a plate subducts below another, and also the angle of subduction, seem to play an important role in defining the frequency and magnitude of a resulting eruption,” he said. The team is now starting to incorporate this new information into their model.

    Being able to predict the frequency and magnitude of future eruptions using a model could help better assess hazards. In Japan, for example, one of the countries with the most active volcanoes, knowing the probability of future eruptions of various sizes is important when deciding where to build infrastructure such as nuclear power plants.

    It’s also invaluable in densely populated areas, such as in Mexico City, which is surrounded by active volcanoes, including Nevado de Toluca. Prof. Caricchi and his colleagues studied this volcano, which hasn’t erupted for about 3,000 years. They found that once magmatic activity restarts, it would take about 10 years before a large eruption could potentially occur. This knowledge would prevent Mexico City from being evacuated if initial signs of activity are spotted.

    ‘Once the activity restarts, you know you have ten years to follow the evolution of the situation,’ said Prof. Caricchi. ‘(People) will now know a little bit more about what to expect.’

    The research in this article was funded by the EU’s European Research Council.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 12:16 pm on December 3, 2020 Permalink | Reply
    Tags: "Opening the ‘black box’ of artificial intelligence", Horizon - The EU Research and Innovation Magazine

    From Horizon The EU Research and Innovation Magazine: “Opening the ‘black box’ of artificial intelligence” 


    From Horizon The EU Research and Innovation Magazine

    01 December 2020
    Tom Cassauwers

    When decisions are made by artificial intelligence, it can be difficult for the end user to understand the reasoning behind them. Credit: phylevn/Flickr, licensed under CC BY 2.0.

    Artificial intelligence is growing ever more powerful and entering people’s daily lives, yet often we don’t know what goes on inside these systems. Their non-transparency could fuel practical problems, or even racism, which is why researchers increasingly want to open this ‘black box’ and make AI explainable.

    In February of 2013, Eric Loomis was driving around in the small town of La Crosse in Wisconsin, US, when he was stopped by the police. The car he was driving turned out to have been involved in a shooting, and he was arrested. Eventually a court sentenced him to six years in prison.

    This might have been an uneventful case, had it not been for a piece of technology that had aided the judge in making the decision. They used COMPAS, an algorithm that determines the risk of a defendant becoming a recidivist. The court inputs a range of data, like the defendant’s demographic information, into the system, which yields a score of how likely they are to again commit a crime.

    How the algorithm predicts this, however, remains non-transparent. The system, in other words, is a black box – a practice against which Loomis made a 2017 complaint in the US Supreme Court. He claimed COMPAS used gender and racial data to make its decisions, and ranked Afro-Americans as higher recidivism risks. The court eventually rejected his case, claiming the sentence would have been the same even without the algorithm. Yet there have also been a number of revelations which suggest COMPAS doesn’t accurately predict recidivism.

    Adoption

    While algorithmic sentencing systems are already in use in the US, in Europe their adoption has generally been limited. A Dutch AI sentencing system, which ruled on private cases like late payments to companies, was for example shut down in 2018 after critical media coverage. Yet AI has entered into other fields across Europe. It is being rolled out to help European doctors diagnose Covid-19. And start-ups like the British M:QUBE, which uses AI to analyse mortgage applications, are popping up fast.

    These systems run historical data through an algorithm, which then comes up with a prediction or course of action. Yet often we don’t know how such a system reaches its conclusion. It might work correctly, or it might have a technical error inside of it. It might even reproduce some form of bias, like racism, without the designers even realising it.

    This is why researchers want to open this black box, and make AI systems transparent, or ‘explainable’, a movement that is now picking up steam. The EU White Paper on Artificial Intelligence released earlier this year called for explainable AI, major companies like Google and IBM are funding research into it and GDPR even includes a right to explainability for consumers.

    “We are now able to produce AI models that are very efficient in making decisions,” said Fosca Giannotti, senior researcher at the Information Science and Technology Institute of the National Research Council in Pisa, Italy. “But often these models are impossible for the end user to understand, which is why explainable AI is becoming so popular.”

    Diagnosis

    Giannotti leads a research project on explainable AI, called XAI, which aims to make AI systems reveal their internal logic. The project works on automated decision-support systems, such as technology that helps a doctor make a diagnosis or algorithms that recommend whether or not a bank should give someone a loan. The team hopes to develop technical methods, or even new algorithms, that can help make AI explainable.

    “Humans still make the final decisions in these systems,” said Giannotti. “But every human who uses these systems should have a clear understanding of the logic behind the suggestion.”

    Today, hospitals and doctors increasingly experiment with AI systems to support their decisions, but they are often unaware of how a given decision was reached. In these cases the AI analyses large amounts of medical data and yields the likelihood, as a percentage, that a patient has a certain disease.

    For example, a system might be trained on large numbers of photos of human skin, some of which show symptoms of skin cancer. Based on that data, it predicts from new pictures of a skin anomaly whether someone is likely to have skin cancer. These systems are not yet standard practice, but hospitals are increasingly testing them and integrating them into their daily work.

    These systems often use a popular AI method called deep learning, which strings together very large numbers of small sub-decisions. These are grouped into a network whose layers can range from a few dozen to hundreds deep, making it particularly hard to see why the system suggested that someone has skin cancer, for example, or to identify faulty reasoning.
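
    To make that pipeline concrete, the sketch below trains a tiny image classifier of the sort such a system might build on. It is purely illustrative: the miniature network, the randomly generated skin_images tensor and the short training loop are assumptions for the example, not the architecture of any clinical product.

    # Minimal, illustrative sketch of a deep-learning lesion classifier (not a clinical system).
    import torch
    import torch.nn as nn

    class TinyLesionNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, 1)  # assumes 64x64-pixel input images

        def forward(self, x):
            x = self.features(x)                      # stacks of small learned sub-decisions
            x = x.flatten(1)
            return torch.sigmoid(self.classifier(x))  # probability of a 'suspicious' lesion

    # Hypothetical data: 8 RGB images of 64x64 pixels with binary labels.
    skin_images = torch.rand(8, 3, 64, 64)
    labels = torch.randint(0, 2, (8, 1)).float()

    model = TinyLesionNet()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for _ in range(5):                                # a few illustrative training steps
        optimiser.zero_grad()
        loss = loss_fn(model(skin_images), labels)
        loss.backward()
        optimiser.step()

    print(model(skin_images[:1]).item())              # e.g. 0.47 -> a 47% 'likelihood'

    Even in this toy network, the verdict emerges from thousands of learned weights rather than from any single rule a person could read off.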

    “Sometimes even the computer scientist who designed the network cannot really understand the logic,” said Giannotti.

    Natural language

    For Senén Barro, professor of computer science and artificial intelligence at the University of Santiago de Compostela in Spain, AI should not only be able to justify its decisions but do so using human language.

    “Explainable AI should be able to communicate the outcome naturally to humans, but also the reasoning process that justifies the result,” said Prof. Barro.

    He is scientific coordinator of a project called NL4XAI, which is training researchers to make AI systems explainable, with each exploring different sub-areas and specific techniques for achieving explainability.

    He says that the end result could look similar to a chatbot. “Natural language technology can build conversational agents that convey these interactive explanations to humans,” he said.

    Another method to give explanations is for the system to provide a counterfactual. “It might mean that the system gives an example of what someone would need to change to alter the solution,” said Giannotti. In the case of a loan-judging algorithm, a counterfactual might show to someone whose loan was denied what the nearest case would be where they would be approved. It might say that someone’s salary is too low, but if they earned €1,000 more on a yearly basis, they would be eligible.
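
    As an illustration of that counterfactual idea, the short sketch below searches for the smallest yearly salary increase that would flip a loan decision. The scoring rule, the €1,000 step and all the numbers are invented for the example; they are not taken from any real lender’s model.

    # Illustrative counterfactual search for a loan decision (hypothetical scoring rule).
    def approve_loan(salary_eur: float, debt_eur: float) -> bool:
        """Toy stand-in for a lender's model: approve if salary comfortably covers debt."""
        return salary_eur >= 30_000 and salary_eur > 0.8 * debt_eur

    def salary_counterfactual(salary_eur, debt_eur, step=1_000, max_steps=50):
        """Smallest raise (in multiples of 'step') that turns a rejection into an approval."""
        if approve_loan(salary_eur, debt_eur):
            return 0
        for raise_eur in range(step, step * max_steps + 1, step):
            if approve_loan(salary_eur + raise_eur, debt_eur):
                return raise_eur
        return None  # no counterfactual found within the search range

    print(salary_counterfactual(29_000, 20_000))  # -> 1000: "earn €1,000 more per year"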

    White box

    Giannotti says there are two main approaches to explainability. One is to start from black box algorithms, which are not capable of explaining their results themselves, and find ways to uncover their inner logic. Researchers can attach another algorithm to this black box system – an ‘explanator’ – which asks a range of questions of the black box and compares the results with the input it offered. From this process the explanator can reconstruct how the black box system works.
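
    A common way to build such an ‘explanator’ is to fit a simple, readable surrogate model on the black box’s own answers. The sketch below does this with scikit-learn, using a random forest as a stand-in black box and a shallow decision tree as the explanator; both choices, and the synthetic data, are assumptions for the example rather than the XAI project’s own methods.

    # Sketch of a surrogate 'explanator': query a black box, then mimic its answers with a readable model.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1_000, 3))                   # three anonymous input features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # hidden ground-truth rule

    black_box = RandomForestClassifier(n_estimators=200).fit(X, y)  # stand-in black box

    # Ask the black box a range of questions and record its answers...
    probe = rng.normal(size=(5_000, 3))
    answers = black_box.predict(probe)

    # ...then fit a small, readable tree that reproduces those answers.
    explanator = DecisionTreeClassifier(max_depth=2).fit(probe, answers)
    print(export_text(explanator, feature_names=["f0", "f1", "f2"]))
    fidelity = (explanator.predict(probe) == answers).mean()
    print(f"surrogate agrees with the black box on {fidelity:.0%} of probes")

    The printed tree is the reconstructed logic; how faithfully it matches the black box – the ‘fidelity’ score – is exactly the kind of evaluation question researchers are still grappling with.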

    “But another way is just to throw away the black box, and use white box algorithms,” said Giannotti. These are machine learning systems that are explainable by design, yet often less powerful than their black box counterparts.

    “We cannot yet say which approach is better,” cautioned Giannotti. “The choice depends on the data we are working on.” When analysing very large amounts of data, such as a database filled with high-resolution images, a black box system is often needed because it is more powerful. But for lighter tasks, a white box algorithm might work better.

    Finding the right approach to achieving explainability is still a big problem, though. Researchers need technical measures to check whether an explanation actually explains a black-box system well. ‘The biggest challenge is on defining new evaluation protocols to validate the goodness and effectiveness of the generated explanation,’ said Prof. Barro of NL4XAI.

    On top of that, the exact definition of explainability is somewhat unclear, and depends on the situation in which it is applied. An AI researcher who writes an algorithm will need a different kind of explanation compared to a doctor who uses a system to make medical diagnoses.

    “Human evaluation (of the system’s output) is inherently subjective since it depends on the background of the person who interacts with the intelligent machine,” said Dr Jose María Alonso, deputy coordinator of NL4XAI and also a researcher at the University of Santiago de Compostela.

    Yet the drive for explainable AI is moving along step by step, and it promises to improve cooperation between humans and machines. “Humans won’t be replaced by AI,” said Giannotti. “They will be amplified by computers. But explanation is an important precondition for this cooperation.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 11:59 am on December 3, 2020 Permalink | Reply
    Tags: "Q&A- It’s time to rethink the Milky Way", Horizon - The EU Research and Innovation Magazine

    From Horizon The EU Research and Innovation Magazine: “Q&A- It’s time to rethink the Milky Way” 


    From Horizon The EU Research and Innovation Magazine

    03 December 2020
    Kelly Oakes

    How stars and planets form, and how special the Milky Way is, are among the as-yet unanswered questions about our galaxy. Credit: Oliver Griebl/Wikimedia, licenced under CC BY 3.0.

    Milky Way. Credit: NASA/JPL-Caltech/ESO/R. Hurt. The bar is visible in this image.

    The Milky Way might be right on our cosmic doorstep, but a group of astronomers suspect that the way we currently study it is stunting our understanding. Professor Ralf Klessen at Heidelberg University in Germany [Ruprecht-Karls-Universität Heidelberg (DE)] is one of four researchers who have recently begun a six-year project, ECOGAL, to try something new: imagine our home galaxy as one huge galactic ecosystem.

    Prof. Klessen believes that using this lens could answer fundamental questions about how stars and planets form, and how they shape the Milky Way’s future.

    What do you mean by studying the galaxy as an ecosystem?

    Think of it like trying to understand the climate on Earth. To do that you need to understand cloud physics, solar irradiation, the interaction of oceans with the atmosphere, the impact of humans, the carbon cycle, and so on. In order to really understand how all this works (together) you need to come with lots of complementary expertise, and I think the same applies to understanding our Milky Way and how it evolves.

    You’ve described looking at the Milky Way like an ecosystem as a ‘paradigm shift’ for astronomy. What’s wrong with how we study our home galaxy now?

    If you look at an image of the Milky Way, you see these dark patches that block the light in the background: that is a collection of clouds made of gas and dust. These things are large and can gravitationally contract to become more compact and eventually form individual stars, or star clusters.

    Star formation theories so far mostly look at these clouds in isolation. But we know that only a small fraction of the mass of each cloud is converted into stars (and this happens) on a relatively short timescale. That means the environmental conditions they experience do matter.

    So there is a link between what happens in the galaxy on large scales, determining where you can actually see and find these clouds, and then – within the clouds – where you find actual stars, and then – within these star-forming regions – where planetary systems form. There are many complex and intricate feedback loops involved.

    But typically these scales are considered disconnected. This approach really hits limits.

    What are the major outstanding scientific questions about the Milky Way?

    We still do not really fully understand how stars form – in particular, massive stars – and how this connects to their environment. We also have competing theories of how planets form, and how protoplanetary disks – the sites out of which they form – evolve.

    There’s also an interesting question of, ‘Is the Milky Way special or not?’ If you compare the Milky Way with cosmological simulations, there seem to be indications that our history was particularly boring: there was no catastrophic merging event with some neighbouring galaxy, we have apparently fewer satellite systems than you would expect, and so on.

    Could the idea that the Milky Way is especially boring have some bearing on the fact that at least one habitable planet – i.e. Earth – has been able to form within it?

    It’s difficult to make that connection. The stability of planetary systems is something that plays out on scales so much smaller than the Milky Way as a whole. The solar system is a few light-hours in diameter. But light travels from one end of the Milky Way to the other in 100,000 years.

    Yet in interacting galaxies, you tend to form stars in a more clustered environment, and you tend to form them in a more violent fashion. Of course, if you form more stars in a more compact configuration, you have more interactions that can disrupt the protoplanetary disk, which will probably prevent the long-term stability of a planetary system. So, in a dense star cluster we would not expect a solar system like ours to exist.

    So, one can make some relations, but only in a statistical sense.

    How do you hope ECOGAL will change our understanding of the Milky Way?

    We want to build a census of star and planet formation in the Milky Way and how the different environmental conditions influence the process. For example, in the galactic centre, the density is much higher, the radiation field is much more intense, it is much more turbulent than in the solar neighbourhood. If you go further out in the galactic disk, it becomes much more boring – densities are lower, timescales are longer, not that much happens. All these peculiarities influence the properties of stellar birth.

    Another aspect is to really understand these feedback loops: how do radiation and winds from stars influence their environment? How do the highly energetic supernova explosions that mark the death of very massive stars contribute to the gas dynamics on large galactic scales? Or on small scales, how do the properties of the host star determine the birth of planetary systems? The question is, in a sense, how does the galactic habitat influence planet formation? Where can you keep planetary systems stable over long-enough timescales for life to form?

    Finally, we want to build a comprehensive model of how (the Milky Way) evolves and use this as a role model for galaxies in the more distant universe.

    What difference will this more holistic view make to the way you conduct research?

    We will really look at the connection of these scales and these feedback loops and try to understand them, both observationally with real data, and from a theoretical, computational point of view.

    This is very much a team effort. My group works on theoretical models of the Milky Way as a whole and we zoom into individual star-forming regions. Patrick Hennebelle (at CEA Paris-Saclay, France) takes individual clouds and zooms in further into individual star formation sites (to study) protoplanetary accretion disks. Each of these scales is then intimately coupled to observation. Sergio Molinari at INAF Rome (Italy) looks at the large-scale distribution of clouds in the Milky Way and young star-forming regions. Leonardo Testi at the European Southern Observatory (ESO) in Garching (Germany) looks at the distribution of protostellar and protoplanetary disks, making the connection observationally between planet formation, star formation and star cluster formation.

    ESA’s Gaia mission is set to build a 3D map of the Milky Way by analysing one billion stars, and is releasing new data on 3 December. How will Gaia advance our knowledge? Will you use the data in ECOGAL?

    ESA (EU)/GAIA satellite .

    It sounds a bit strange, but getting good distances is extremely difficult in astronomy. For the interstellar medium – the gas between stars – this is so much more difficult (than for stars). We can use the information from Gaia and combine it with distance estimates from modelling of the Milky Way. In that sense Gaia is essential to help build a three-dimensional picture of the gas distribution in the Milky Way.

    But because it is an optical instrument, Gaia cannot really see very deeply into the enshrouded regions where young clusters (of stars) form. For that reason, infrared or submillimetre observations, like those by the Atacama large millimetre/submillimetre array (ALMA) in Chile operated by ESO, and other radio telescopes, are our bread and butter.

    What is the most important thing someone should know about the Milky Way?

    Star formation is not something that happened a long time ago and (then stopped) – the universe is full of dynamics. Stars and new planetary systems are born all the time around us, in our vicinity, in the Milky Way… that is not something that has been all done and decided an eternity ago.

    This dynamic picture of the universe is something that I find extremely fascinating, and I think it is a picture that maybe not so many people are aware of. They think of the heavens as this eternal thing that does not change. That is absolutely not the case.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 10:15 am on March 20, 2020 Permalink | Reply
    Tags: "The prostheses that could alleviate amputees’ phantom limb pain", Horizon - The EU Research and Innovation Magazine

    From Horizon The EU Research and Innovation Magazine: “The prostheses that could alleviate amputees’ phantom limb pain” 


    From Horizon The EU Research and Innovation Magazine

    19 March 2020
    Ian Le Guillou

    When it comes to improving amputees’ quality of life, one of the biggest challenges is making prostheses that feel like a natural part of the user. Image credit – Johns Hopkins University Applied Physics Laboratory/Wikimedia Commons

    New prosthetic technologies that stimulate the nerves could pave the way for prostheses that feel like a natural part of the body and reduce the phantom limb pain commonly endured by amputees.

    Silvestro Micera, a professor of translational neuroengineering at Ecole Polytechnique Fédérale de Lausanne in Switzerland, has spent the past 20 years figuring out how to make better prostheses for people with amputated limbs. He became interested in prostheses as a teenager.

    ‘I loved comics and science fiction movies – things like Doctor Octopus from Spiderman,’ he said. ‘In the beginning, it was the scientific interest of a teenager, but then it became an idea of helping people to get back what they’ve lost.’

    A project that he led, called NEBIAS, developed a robotic hand that provides sensory feedback to the user. The technology behind it was ground-breaking – an implant positioned under the skin that connects to the person’s nerves. It transmits information from sensors in the hand by stimulating the nerves with electrical signals. This allows, for instance, a person to tell if an object they are holding is soft or hard.
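
    The stimulation parameters themselves are not given in this article, so the sketch below is only a schematic of the idea: a pressure reading from a fingertip sensor is mapped onto a bounded pulse amplitude, so that a soft, yielding object produces a weaker signal than a hard one. Every number in it is invented for illustration.

    # Schematic of sensor-to-nerve-stimulation mapping (all values invented for illustration).
    def stimulation_amplitude_ua(pressure_kpa: float,
                                 min_ua: float = 20.0,
                                 max_ua: float = 80.0,
                                 max_pressure_kpa: float = 150.0) -> float:
        """Map fingertip pressure linearly onto a bounded stimulation current (microamps)."""
        fraction = max(0.0, min(pressure_kpa / max_pressure_kpa, 1.0))
        return min_ua + fraction * (max_ua - min_ua)

    # A soft object deforms and spreads the load -> lower peak pressure -> weaker stimulation.
    print(stimulation_amplitude_ua(30.0))   # soft grip   -> 32.0 (µA)
    print(stimulation_amplitude_ua(120.0))  # hard object -> 68.0 (µA)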

    Through a follow-up project called SensAgain, Prof. Micera worked to further develop the technology and take it to market.

    ‘In five to ten years from now, the technology from NEBIAS is going to be provided to patients around the world,’ he said.

    To get the technology into practice as quickly as possible, he switched focus to developing prosthetic legs rather than hands, as leg amputations affect more people. Last year, he published a paper [Science Translational Medicine] showing how sensory feedback in prosthetic legs can help people.

    The users were able to walk better and move around in a less tiring and more confident way, says Prof. Micera. ‘If you’re walking better and faster, then you’re also able to reduce other effects, like back pain and cardiovascular disease,’ he said.

    Evidence

    To scale up this work, he set up a company called SensArs, funded through a project called GOSAFE, with the aim of gathering more evidence of the impact the prosthesis makes on people’s lives.

    ‘We showed that longer-term, like six months or more, the technology works,’ he said. ‘The challenge is to go from a few patients for a few months to many patients for many years.’

    With only a handful of people testing the prosthesis, it is difficult to draw broad conclusions, but the early signs have been encouraging.

    ‘Very quickly, almost immediately, the subjects learn how to use this kind of prosthesis. They are able to exploit it in a very effective way. They get a very good embodiment, where they feel like the prosthesis is part of the body,’ Prof. Micera said.

    Making a prosthesis that feels like part of the person is one of the biggest challenges in improving amputees’ quality of life. Aside from the need for sensory feedback, it is very complex to replicate the muscles that control a limb so that a prosthesis feels natural. The hand alone, for instance, is controlled by more than 30 individual muscles.

    Professor Giovanni Di Pino, from the Campus Bio-Medico University of Rome in Italy, is a neurophysiologist who worked with Prof. Micera previously. He is applying his expertise to study neural interfaces, the devices that connect directly to a person’s nerves.

    ‘I want to try to understand how to develop a hand prosthesis that feels like part of the body,’ said Prof. Di Pino.

    Some commercial prosthetic hands are controlled through electrodes placed on the skin. These can detect the activation of specific muscles, which the person uses to manipulate the prosthesis. However, Prof. Di Pino says that many upper limb amputees are unhappy with their prosthetic hand, as it does not feel like a part of their body.

    Prof. Di Pino is running a project called RESHAPE, which is testing new ways to connect a controllable hand to the body. The ultimate goal is for the amputee to feel complete, and this comes back to how the body is represented in the mind.

    ‘The image of the body is like a map in our brain,’ said Prof. Di Pino. ‘We are going to describe the representation of the hand.’

    Brain scans

    Prof. Di Pino is using brain scans to understand how the neural connections change when the person is trying to move their missing hand.

    These connections and representations could be significant to help reduce the impact of phantom limb pain. This is a phenomenon where amputees can feel, sometimes very intense, pain that appears to come from their missing limb.

    ‘Phantom limb pain is extremely common in upper limb amputees,’ Prof. Di Pino said. ‘The subject is in pain because he cannot feel the hand, but he or she can feel the cortex that used to feel the hand.’

    The connections in the brain that used to control the hand and sense pain are still there, but now they are not receiving the same sensory feedback. In one of the subjects of his study, Prof. Di Pino found that using interface implants helped reduce phantom limb pain.

    ‘In the beginning, he had a lot of pain, and after three months of tests we (got rid of) 70% of his pain,’ he said.

    The advances made through these research projects will still take many years to find their way into everyday prostheses. However, the insights and technology developed could have a much broader impact.

    ‘Many years ago, someone asked me “why do you want to work on a niche like prostheses?”,’ Prof. Micera said. ‘Now I think I can show why, because if the technology works for sensory feedback in the nervous system, then you can use it for many things: blind people, paralysed people, diabetic patients and many others.’

    His team is already developing their technology to restore sight in blind people by stimulating the optic nerve. ‘We are also planning to apply it to restoring motor function in tetraplegic patients with spinal cord injury,’ he said.

    ‘What we did a few years ago for prostheses, I hope to do for patients with optic nerve implants a few years from now.’

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 1:00 pm on March 13, 2020 Permalink | Reply
    Tags: By 2030 a fifth of the fuel that motorists put into the petrol tanks of their cars could be alcohol in the form of ethanol., Horizon - The EU Research and Innovation Magazine   

    From Horizon: “Why raising the alcohol content of Europe’s fuels could reduce carbon emissions” 


    From Horizon The EU Research and Innovation Magazine

    09 March 2020
    Richard Gray

    E20 fuel would double the amount of ethanol in petrol and could reduce the EU’s emissions from gasoline by 8.2%. Image credit – Piqsels, licenced under CC0.

    By 2030, a fifth of the fuel that motorists put into the petrol tanks of their cars could be alcohol, according to research concluding that new petrol and ethanol blends can reduce carbon emissions from Europe’s transport sector with little additional cost to consumers.

    Labels that carry a single letter followed by a number are found on petrol pumps across Europe. Many motorists probably don’t notice these codes, or aren’t aware that when they use a pump which has one, they’re putting alcohol into their cars.

    The alcohol, in the form of ethanol derived from plants, is part of efforts to make the fuels we put in our vehicles more environmentally friendly. Most petrol now sold at pumps in Europe is a blend of 5% bioethanol and 95% gasoline, denoted by an E5 label, while some countries have moved to a new generation of fuel that contains up to 10% bioethanol, known as E10.

    And as the world looks to reduce its impact on climate change by cutting emissions from fossil fuels, motorists in the European Union could soon be putting even more alcohol into their tanks.

    Standards

    The European Committee for Standardization (CEN) commissioned research looking at the costs and benefits of introducing a fuel containing 20% bioethanol, or E20. The results from the project, which concluded towards the end of 2019, will help them develop new quality and specification standards that will be required before it can be sold.

    ‘The conclusion we have reached is that all the vehicles coming onto the market and those since 2011 should be able to handle fuels with up to 20% ethanol,’ said Ortwin Costenoble, a senior standardisation consultant at the Royal Netherlands Standardization Institute (NEN), which led the project. ‘We were working on the basis that in 2030, countries would adopt E20 as the main source of fuel.’

    Under the EU’s renewable energy directive, 10% of the fuel used in transport will need to come from renewable sources such as biofuel by the end of 2020. The 2018 revision of this directive set a target of 14% renewable energy being used in all transport by 2030.

    At present, the majority of EU member states use E5 petrol in their vehicles. Some countries, however, have started moving to E10. In January, Denmark, Hungary, Lithuania and Slovakia became the latest countries to introduce E10 to their forecourts, bringing the total number of EU member states to sell the fuel at the majority of retail stations to 13.

    Renewable

    While bioethanol still produces carbon dioxide when it burns, because it is made from plants rather than fossil fuels that take millions of years to form, it is considered to be a renewable fuel. It is also considered to be greener, partly because as the plants grow, they absorb carbon dioxide from the air and store it before it is converted into fuel and burned. This means they are not releasing additional carbon into the atmosphere as happens when fossil fuels are burned.

    A litre of pure ethanol also produces about two thirds of the carbon emissions of a litre of ordinary petrol. But ethanol contains less energy per litre than petrol, so a non-optimised car will need more alcohol to travel the same distance as it would with fossil fuel. This eats away at the carbon emission savings that are possible from using ethanol. Producing the alcohol in the first place also requires energy, often from fossil fuels, which can further reduce the carbon savings.
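
    To see how those two effects trade off, here is a back-of-envelope calculation. Only the 4% consumption penalty for E20 comes from this article; the 6-litres-per-100-km car, the 2.3 kg of CO2 per litre of petrol, the 2% penalty assumed for E10 and the convention of counting the ethanol share’s biogenic CO2 as zero are all assumptions added for the example.

    # Back-of-envelope check: fossil CO2 per 100 km for E0, E10 and E20 blends.
    PETROL_CO2_PER_L = 2.3      # kg fossil CO2 per litre of petrol (assumed)
    BASE_L_PER_100KM = 6.0      # consumption on pure petrol (assumed)

    def fossil_co2_per_100km(ethanol_share: float, consumption_penalty: float) -> float:
        """Fossil CO2 (kg) per 100 km, counting the ethanol share's biogenic CO2 as zero."""
        litres = BASE_L_PER_100KM * consumption_penalty
        return litres * (1.0 - ethanol_share) * PETROL_CO2_PER_L

    e0 = fossil_co2_per_100km(0.00, 1.00)
    e10 = fossil_co2_per_100km(0.10, 1.02)   # assumed ~2% penalty for E10
    e20 = fossil_co2_per_100km(0.20, 1.04)   # "increases only by 4%", per the text
    print(f"E0 : {e0:.1f} kg/100 km")
    print(f"E10: {e10:.1f} kg/100 km")
    print(f"E20: {e20:.1f} kg/100 km ({1 - e20 / e10:.0%} below E10)")

    On these assumed numbers, E20 comes out roughly 9% below E10, in the same ballpark as the study finding quoted below.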

    But the CEN study found that while fuel consumption would go up if countries switched to using E20 fuel, due to the increased amount of ethanol, carbon dioxide emissions overall would go down 10% compared to all cars using E10.

    ‘If you use a normal octane fuel blend with 20% bioethanol, the fuel consumption increases only by 4%,’ said Costenoble. But with more ethanol, the octane rating of the blend can be allowed to rise, and vehicles running on higher-octane fuels tend to be more efficient.

    The researchers estimated that if all 28 EU countries (the UK was still part of the EU at the time of the study) adopted E20, it could reduce greenhouse gas emissions by the equivalent of 25.4 Mt (megatonnes) of carbon dioxide – about 8.2% of the current emissions from gasoline in the EU.

    They estimated that further savings could be made if the fuel’s petrol component had a higher octane rating of 102 – most fuel on sale today has an octane rating of 95.

    Production

    There are also concerns about how sustainable large-scale bioethanol production can be. Most bioethanol sold in the EU is produced by fermenting sugars contained in primary crops like maize, wheat and sugar beet. This can take up land and resources that could otherwise be used to grow food.

    Efforts, however, are underway to produce a second generation of biofuels that could overcome this problem.

    ‘Some of our members are starting to use agricultural wastes and residues left behind from food crops,’ said Victor Bernabeu, senior technical and regulatory affairs manager at the European Renewable Ethanol Association, also known as ePURE. But justifying investment into such technologies has been difficult because there have been regular changes to the renewable energy policy framework, says Bernabeu.

    Existing policies are one of the roadblocks standing in the way of E20 fuel coming onto the market in the EU. The fuel quality directive, for example, currently allows only 10% of a fuel to be replaced with ethanol, a measure originating from a time when the impact of higher alcohol levels on vehicle emissions was unknown.

    ‘It seems like a logical step to introduce E20 and everyone we spoke to seems to want it, but at the moment it is an illegal fuel,’ said Costenoble. A change in the regulations will be needed before it can be introduced, but he hopes manufacturers and standardisation writers will begin preparing for E20 before that happens.

    Public acceptance

    Another hurdle will be public acceptance. As most vehicles currently on the road are able to run on E10 and can move to E20 with some calibration or inexpensive upgrades costing a few hundred euros, there is unlikely to be much public opposition, according to Bernabeu.

    But if the cost of fuel itself increases because it contains higher levels of ethanol, it is likely to be welcomed far less. The work by Costenoble and his colleagues, however, found that E20 could be produced with current refinery infrastructure, which would need minimal adjustments.

    Making the fuel supply logistics chain compatible with E20 would cost less than one cent per litre, says Costenoble.

    But the cost of fuel to consumers mainly depends on the varying market prices of oil and ethanol, combined with the tax applied by different countries. Currently ethanol costs slightly more than gasoline, but many countries in Europe do not levy tax on the ethanol in fuel. This could help to offset any additional cost to consumers, says Costenoble.

    Bernabeu believes that the reduced environmental impact of shifting to E10 and then E20 fuels could also make them more acceptable to motorists.

    ‘Lots of people are probably not aware they are consuming ethanol in their cars at the moment already,’ said Bernabeu. He points to countries where E10 has been introduced, such as Belgium and France, where he says there have been major public information campaigns. ‘(E10) has been pitched as a greenhouse gas reduction measure, so it has been widely accepted.’

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 2:45 pm on February 7, 2020 Permalink | Reply
    Tags: "Five things we’re going to learn from Europe’s Solar Orbiter mission", Horizon - The EU Research and Innovation Magazine, NASA Parker Solar Probe Plus

    From Horizon The EU Research and Innovation Magazine: “Five things we’re going to learn from Europe’s Solar Orbiter mission” 


    From Horizon The EU Research and Innovation Magazine

    ESA/NASA Solar Orbiter depiction

    07 February 2020
    Jonathan O’Callaghan

    At 23.03 (local time) on Sunday 9 February, Europe’s newest mission to study the sun is set to lift off from Cape Canaveral in Florida, US. Called Solar Orbiter, this European Space Agency (ESA) mission will travel to within the orbit of planet Mercury to study the sun like never before, returning stunning new images of its surface.

    Equipped with instruments and cameras, the decade-long mission is set to provide scientists with key information in their ongoing solar research. We spoke to three solar physicists about what the mission might teach us and the five unanswered questions about the sun it might finally help us solve.

    1. When solar eruptions are heading our way

    Solar Orbiter will reach a minimum distance of 0.28 times the Earth-sun distance over the course of its mission, which could last the rest of the 2020s. No other mission will have come closer to the sun, save for NASA’s ongoing Parker Solar Probe mission, which will reach just 0.04 times the Earth-sun distance.
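
    For a sense of scale, those fractions of the Earth-sun distance (one astronomical unit, about 149.6 million km) convert to the following closest approaches; the snippet is just that arithmetic.

    # Convert the quoted fractions of the Earth-sun distance into kilometres.
    AU_KM = 149_597_870.7          # one astronomical unit in km

    for name, fraction in [("Solar Orbiter (closest)", 0.28),
                           ("Parker Solar Probe", 0.04)]:
        print(f"{name}: {fraction * AU_KM / 1e6:.1f} million km from the sun")
    # -> Solar Orbiter (closest): 41.9 million km; Parker Solar Probe: 6.0 million km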

    NASA Parker Solar Probe Plus named to honor Pioneering Physicist Eugene Parker

    Dr Emilia Kilpua from the University of Helsinki in Finland is the coordinator of a project called SolMAG, which is studying eruptions of plasma from the sun known as coronal mass ejections (CMEs).

    Coronal mass ejections – NASA-Goddard Space Flight Center-SDO

    NASA/SDO

    She says this proximity, and a suite of cameras that Parker Solar Probe lacks, will give Solar Orbiter the chance to gather data significantly better than that from any spacecraft before it, helping us monitor CMEs.

    ‘One of the great things about Solar Orbiter is that it will cover a lot of different distances, so we can really capture these coronal mass ejections when they are evolving from the sun to Earth,’ she said. CMEs can cause space weather events on Earth, interfering with our satellites, so this could give us a better early-warning system for when they are heading our way.

    2. Why the sun blows a supersonic wind

    One of the major unanswered questions about the sun concerns its outer atmosphere, known as its corona. ‘It’s heated to (more than) a million degrees, and we currently don’t know why it’s so hot,’ said Dr Alexis Rouillard from the Institute for Research in Astrophysics and Planetology in Toulouse, France, the coordinator of a project studying solar wind called SLOW_SOURCE. ‘It’s (more than) 200 times the temperature of the surface of the sun.’

    ESA China Double Star mission continuous interaction between particles in the solar wind and Earth’s magnetic shield 2003-2007

    ESA China Smile solar wind and Earth’s magnetic shield – the magnetosphere spacecraft depiction

    Magnetosphere of Earth, original bitmap from NASA. SVG rendering by Aaron Kaase

    A consequence of this hot corona is that the sun’s atmosphere cannot be contained by its gravity, so it has a constant wind of particles blowing out into space, known as solar wind.

    This artist’s rendering shows a solar storm hitting Mars and stripping ions from the planet’s upper atmosphere. Credit: NASA/GSFC

    This wind blows at more than 250km per second, up to speeds of 800km per second, and we currently do not know how that wind is pushed outwards to supersonic speeds.

    Dr Rouillard is hoping to study the slower solar wind using Solar Orbiter, which may help us explain how stars like the sun create supersonic winds. “By getting closer to the sun we collect more (pristine) particles,” he said. “Solar Orbiter will provide unprecedented measurements of the solar wind composition. (And) we will be able to develop models for how the wind (is pushed out) into space.”

    3. What its poles look like

    During the course of its mission, Solar Orbiter will make repeated encounters with the planet Venus. Each time it does, the angle of the spacecraft’s orbit will be slightly raised until it rises above the plane in which the planets orbit. If the mission is extended as hoped to 2030, it will reach an inclination of 33 degrees – giving us our first ever views of the sun’s poles.

    Aside from being fascinating, there will be some important science that can be done here. By measuring the sun’s magnetic fields at the poles, scientists hope to get a better understanding of how and why the sun goes through 11-year cycles of activity, culminating in a flip of its magnetic poles. They are set to flip again in the mid-2020s.

    ‘By understanding how the magnetic fields are distributed and evolve in these polar regions, we gain a new insight on the cycles that the sun is going through,’ said Dr Rouillard. ‘Every 11 years, the sun goes from a minimum activity state to a maximum activity state. By measuring from high latitudes, it will provide us with new insights on the cyclic evolution of (the sun’s) magnetic fields.’

    4. Why it has polar ‘crowns’

    Occasionally the sun erupts huge arm-like loops of material from its surface, which are known as prominences. They extend from its surface into the corona, but their formation is not quite understood. Solar Orbiter, however, will give us our most detailed look at them yet.

    ‘We’re going to have very intricate views of some of these active regions and their associated prominences,’ says Professor Rony Keppens from KU Leuven in Belgium, coordinator of a project called PROMINENT which is studying solar prominences. ‘It’s going to be possible to have more than several images per second. That means some of the dynamics that had not been seen before now are going to be visualised for the first time.’

    Some of the sun’s largest prominences come from near its poles, so by raising its inclination Solar Orbiter will give us a unique look at these phenomena. ‘They’re called polar crown prominences, because they are like crowns on the head of the sun,’ said Prof. Keppens. ‘They encircle the polar regions and they live for very long, weeks or months on end. The fact that Solar Orbiter is going to have first-hand views of the polar regions is going to be exciting, especially for studies of prominences.’

    5. How it controls the solar system

    By studying the sun with Solar Orbiter, scientists hope to better understand how its eruptions travel out into the solar system, creating a bubble of activity around the sun, known as the heliosphere, within our galaxy.

    NASA Heliosphere

    This can of course create space weather here on Earth, so studying it is important for our own planet.

    ‘One of the ideas we have is to take measurements of the solar magnetic field in active regions in the equatorial belt of the sun,’ said Professor Keppens. ‘We’re going to extrapolate that data into the corona, and then use simulations to try and mimic how some of these eruptions happen and progress out into the heliosphere.’

    Thus, Solar Orbiter will not just give us a better understanding of the sun itself, but also of how it affects planets like Earth. Alongside the first-ever images of the poles and the closest-ever images of its surface, Solar Orbiter will give us an unprecedented understanding of how the star we call home really works.

    The research in this article is funded by the European Research Council. Sharing encouraged.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

     
  • richardmitnick 5:33 pm on January 21, 2020 Permalink | Reply
    Tags: "Deep Antarctic drilling will reveal climate secrets trapped in 1.5 million-year-old ice", An international team of researchers are hoping to drill more than 2700 metres below the surface in their search for ice that is up to 1.5 million years old., Drilling into the ice of Antarctica is like going back in time., Horizon - The EU Research and Innovation Magazine, Scientists will have to drill at a depth of nearly 3km to retrieve some of the oldest ice that can tell us about the past and future of climate.

    From Horizon The EU Research and Innovation Magazine via phys.org: “Deep Antarctic drilling will reveal climate secrets trapped in 1.5 million-year-old ice” 


    From Horizon The EU Research and Innovation Magazine

    via


    phys.org

    January 21, 2020
    Richard Gray

    Scientists will have to drill at a depth of nearly 3km to retrieve some of the oldest ice that can tell us about the past and future of climate. Credit: NASA/Michael Studinger

    An ambitious mission to drill into the Antarctic ice sheet to extract some of the oldest ice on the planet will provide vital clues about a mysterious shift in the behaviour of our planet’s climate.

    Drilling into the ice of Antarctica is like going back in time. Frozen within it are relics from long past eras—dust that settled on the surface long ago and bubbles of air trapped by ancient blizzards.

    For scientists hoping to understand how the Earth’s climate has changed in the past, it is a treasure trove. Packed into every metre of ice are thousands of years-worth of these precious artefacts.

    Now an international team of researchers are hoping to drill more than 2,700 metres below the surface in their search for ice that is up to 1.5 million years old.

    Their aim is to extract ice cores that will help them to piece together what happened to our planet’s climate during a crucial and mysterious period of change that occurred around 1 million years ago.

    “Over the last few million years, the Earth’s climate has oscillated between cold glacial periods of time and shorter, warmer interglacial periods,” said Professor Carlo Barbante, an analytical chemist at Ca’ Foscari University of Venice, Italy, and coordinator of the Beyond EPICA project that is hoping to recover the ice cores.

    “We know from information contained within marine sediments that we would have one warm and one cold period every 41,000 years, but then around 1 million years ago, this cycle changed to have a periodicity of about every 100,000 years.

    “We don’t know exactly why this change occurred. Most probably it was due to changes in the carbon dioxide cycle.”

    Orbit

    Much of the glacial cycle on Earth is driven by our planet’s less than perfect orbit around the sun and the fact that it tends to wobble on its axis. But other changes, such as volcanic activity that throws aerosols into the air and the levels of greenhouse gases in the atmosphere, also influence this cycle.
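
    Those 41,000-year and 100,000-year rhythms are found by looking for dominant periods in long proxy records. The sketch below does this for a synthetic record with a simple periodogram; the data are invented for illustration and stand in for real marine-sediment or ice-core measurements.

    # Find the dominant periodicities in a synthetic climate-proxy record with a periodogram.
    import numpy as np

    dt_kyr = 1.0                                    # one sample per 1,000 years
    t = np.arange(0, 2_000, dt_kyr)                 # two million years of synthetic record
    proxy = (np.sin(2 * np.pi * t / 41.0)           # 41,000-year (obliquity-like) cycle
             + 0.8 * np.sin(2 * np.pi * t / 100.0)  # 100,000-year cycle
             + 0.3 * np.random.default_rng(1).normal(size=t.size))  # measurement noise

    spectrum = np.abs(np.fft.rfft(proxy - proxy.mean())) ** 2
    freqs = np.fft.rfftfreq(t.size, d=dt_kyr)       # cycles per 1,000 years

    top = np.argsort(spectrum[1:])[::-1][:2] + 1    # two strongest peaks (skip zero frequency)
    for i in top:
        print(f"dominant period: ~{1 / freqs[i]:.0f} kyr")
    # -> ~41 kyr and ~100 kyr, the two cycles described above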

    By analysing ice dating back to around the time of the change in the glacial cycle, Prof. Barbante and his colleagues hope to find preserved dust, gas and isotopes that can tell them what may have led to this shift in the climate.

    “Understanding this period from the past is important because it can help us to answer questions we have about the sensitivity of our climate today,” said Prof. Barbante.

    While scientists know that increases in greenhouse gases cause the climate of the planet to warm, it is difficult to model the exact impacts of this due to the complexity of the climate system.

    Correlating climate changes in the past to carbon dioxide concentrations can help make better predictions about what our own greenhouse gas emissions will do to the planet.

    “We hope to fill in gaps in our knowledge about how the Earth will respond to changes in greenhouse gas concentrations,” added Prof. Barbante.

    Prof. Barbante and his colleagues are hoping to build on the success of a previous project called the European Project for Ice Coring in Antarctica (EPICA), which drilled to a depth of 3,270 metres at a site called Dome C, close to the French-Italian research facility Concordia Station on the Polar Plateau of East Antarctica. By 2006, after 10 years of drilling, the team had obtained ice that was up to 820,000 years old.

    The Beyond EPICA team have spent the past three years conducting new surveys with the help of radar mounted on aircraft and sledges to identify a site with older ice.

    The key has been to find parts of the ice sheet that are deep enough to contain ice that is over a million years old, but not so deep that the pressure from the ice above causes the bottom to melt.

    “We had two candidate sites—about 40km from the Concordia base in an area known as Little Dome C and another close to a Japanese base called Dome Fuji Station,” says Professor Olaf Eisen, a glaciologist at the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research in Bremerhaven, Germany, who led the search for the new drilling site.

    “We wanted nice layered ice, with continuous accumulation of snow at the surface and bed rock that is not too mountainous so it doesn’t disturb the stratigraphy (layers) of the ice.”

    Topography

    The radar images allowed the team to see distinct boundaries in the ice that they could use to estimate its age and also see the shape of the rock beneath. They eventually settled on the site at Little Dome C. The underlying topography of the land and variations in the amount of ice that has accumulated above it over time mean they will not need to drill as deep as they did with the original EPICA project to reach the oldest ice. They estimate they will find ice more than 1 million years old at a depth of about 2,700 metres.

    “The ice on the plateau there is nicely stratified and undisturbed,” added Prof. Eisen.

    The team are now embarking on the next phase of the project, which is to start drilling and retrieving the ice. At the end of last year, they began building their camp at the site and preparing a trench where they will drill a preparatory hole through the first 20 metres of porous snowpack, or ‘firn’, on top of the ice sheet.

    They aim to drill the first 150 metres into the ice over the next Antarctic summer between November 2020 and January 2021. As they drill down, the team will pump a thick, non-toxic fluid into the resulting borehole to prevent the pressure of the ice sheet from closing it up.

    “We hope to reach the bed rock within three to four seasons, if everything goes well,” said Prof. Barbante. “This precious ice gives us a chance to look back to a time when the world was different. Human beings were not around, at least not as Homo sapiens, and the climate was in a very different mode.

    “It is a fascinating archive of the past, but it can help us also understand the future.”

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

     