Tagged: Horizon – The EU Research and Innovation Magazine

  • richardmitnick 9:31 am on August 27, 2021 Permalink | Reply
    Tags: "New form of carbon tantalizes with prospects for electronics", A newly created form of carbon in a mesh just one atom thick is tantalising scientists., Analysing larger-scale samples could help to show if a biphenylene anode could increase the efficiency of lithium-ion batteries., Biphenylene network ribbons a few atoms wide behave electrically like a metal., Horizon - The EU Research and Innovation Magazine, It makes sense to compare the new material with graphene., The material known as biphenylene network is highly conductive., The new geometric arrangement in two dimensions adds to the list of different carbon structures – allotropes.   

    From Horizon The EU Research and Innovation Magazine : “New form of carbon tantalizes with prospects for electronics” 


    24 August 2021

    Rex Merrifield

    Analysing larger-scale samples could help to show if a biphenylene anode could increase the efficiency of lithium-ion batteries, commonly used in mobile phones and electric vehicles. Credit: Valeria Azovskaya / Aalto University [Aalto-yliopisto] (FI).

    A newly created form of carbon in a mesh just one atom thick is tantalising scientists with hints that it could sharply improve rechargeable batteries and allow wires so small that they can operate at a scale where metals fail.

    The material known as biphenylene network is highly conductive and may prove able to store more electrical energy than even graphene, the astonishing atomic-thickness carbon honeycomb material identified nearly 20 years ago.

    In May, scientists announced [Science] that they had been able to tailor the arrangement of carbon atoms into a mesh that, for the first time, includes hexagons, squares and octagons, while ensuring the material is still only one atom thick.

    The new geometric arrangement in two dimensions adds to the list of different carbon structures – allotropes – such as graphite, diamond and graphene. But scientists have found it has very different electronic properties.

    It makes sense to compare the new material with graphene, where carbon atoms bond in a single layer of hexagons to form a mesh with astonishing electrical and thermal characteristics, as well as outstanding mechanical strength, and yet is highly transparent.

    Laboratory research on the new material at University of Marburg [Philipps-Universität Marburg](DE) in Germany and Aalto University in Finland has found that biphenylene network ribbons a few atoms wide behave electrically like a metal. That gives a hint that the material may be developed to make conducting wires in carbon-based electronic circuits.

    ‘If you take similar-width graphene nanoribbons, then they are typically semiconductors and this biphenylene is more readily a metal,’ said Peter Liljeroth, professor in the department of applied physics at Aalto University. That could make the material useful as a nanoscale conductor in future electronic devices, he added.

    He and his team made their findings using an imaging technique called scanning tunnelling spectroscopy to scrutinise strips of biphenylene network up to 21 atoms wide. Those ribbons were made by Prof. Michael Gottfried’s group in the physical chemistry department at Philipps-Universität Marburg, in Germany.

    The Marburg team developed the synthesis route for this material. They made molecular chains containing carbon in specific arrangements that gather on an ultra-smooth, non-reactive gold surface. And then another step – dubbed ‘HF-zipping’ – meshes the chains together to form the biphenylene network strips.

    Peter Liljeroth and his team made their findings using an imaging technique called scanning tunnelling spectroscopy to scrutinise strips of biphenylene network up to 21 atoms wide. Credit – Mikko Raskinen/Aalto University.

    Electrical potential

    Analysing larger-scale samples could help to show if a biphenylene anode could increase the efficiency of lithium-ion batteries, commonly used in mobile phones and electric vehicles.

    ‘If you have bulk or multilayer biphenylene… then there are theoretical predictions that the lithium storage capacity should be higher, much higher, than for graphene,’ Prof. Liljeroth said.

    If confirmed, that would make the material hugely attractive in rechargeables. But Prof. Liljeroth emphasises there is a very long way to go before such properties could be harnessed in industrial or consumer applications.

    A key challenge in making bulk biphenylene is improving the accuracy of the synthesis so that strips or ribbons of sufficient quality can be zipped together into larger sheets, without parts of the material defaulting to graphene as the carbon atoms aggregate and bond.

    While the Aalto researchers could identify the electrical properties of the material from Marburg, other characteristics of biphenylene network remain unexplored. Research is still needed to nail down its mechanical, thermal and optical qualities. To do that, it would help to have bigger samples.

    Carbon wires

    The confirmed metallic conducting properties already point to the possibility of conducting wires for electronics at the smallest scale.

    Wires made of metals such as copper typically degrade at atomic thicknesses through a process of electromigration – where moving electrons can displace atoms and damage the wires, which become unstable and eventually break.

    A material such as biphenylene network could help to avoid these difficulties in electronic circuits, working like a metal in conducting electrons, but without the drawbacks. That would make for more stable conductors, allowing smaller wires to be used in nanoscale electronics.

    ‘This is one of the problems that has to be overcome or solved, and carbon-based materials are quite good in this respect,’ Prof. Liljeroth said.

    But he added a clear note of caution: ‘There are many, many steps in between now and actually using this in a microprocessor.’

    These properties, and others yet to be identified, could provide rich fields for exploration and development, as could the novel way of producing the biphenylene network itself.

    Prof. Liljeroth emphasised the potential for the HF-zipping approach used by Prof. Gottfried’s team to make any number of other carbon structures.

    Zipping ribbons

    The Marburg team used carbon-based precursor chemicals containing hydrogen and fluorine to “zip” together different atomic carbon chains. Rather than defaulting to graphene – the most basic form on the surface – the extra step involved chemically tailoring the edges of the ribbons that zip together to form the biphenylene network.

    ‘What I hope comes out of this work is that people start to think about this kind of HF-zipping process to make new materials, (so) you can start with the same concept, tweak the precursors and end up with another 2-D carbon network,’ Prof. Liljeroth added.

    Since the material has so far been produced on a gold surface, another challenge is to perfect the transfer of the biphenylene network off the metal. That is a task where the researchers may draw lessons from work done on graphene – a material where ongoing work also offers some other pointers for developing biphenylene network.

    ‘I would say there’s a lot of potential … now that they have shown that these structures are feasible, they are stable, at least under these conditions,’ said Professor Roman Fasel, who heads the nanotech@surfaces Laboratory at the Swiss Federal Laboratories for Materials Science and Technology (EMPA) and was not involved in the research.

    ‘This is going to be really challenging to scale up,’ he said, but added that work on graphene had shown it was possible to progress from the tiniest specks of material to workable scales.

    ‘One direction is to optimize the synthesis to achieve a large area 2-D network, let’s say for electrodes and things like that, but the other would be to find a way to make well-defined nanoribbons – so just the 1-D variant of the material,’ he said.

    One of the main challenges facing biphenylene is identifying properties that make it an obvious choice for future applications – known in computing terms as a ‘killer app’ – where it is vastly better than rivals, as well as easier and cheaper to make.

    After all, people have been working on graphene for nearly two decades and though it exhibits many outstanding properties and has found uses in paints and coatings, microelectronics and transparent conductors – as well as being used in tennis rackets and ink – it has not completely revolutionised any particular field.

    ‘In some cases, a new material opens up something that was simply not possible to do with the existing technology, and then it can break through faster,’ said Prof. Liljeroth. ‘But I don’t know about biphenylene — we’ll have to see about that.’

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 12:06 pm on July 1, 2021 Permalink | Reply
    Tags: "Q&A- How we’re gearing up to deflect asteroids that might cause Earth considerable damage", Dr Naomi Murdoch, Horizon - The EU Research and Innovation Magazine

    From Horizon The EU Research and Innovation Magazine : “Q&A- How we’re gearing up to deflect asteroids that might cause Earth considerable damage” 



    06 April 2021 [Why now!! This just showed up in social media.]
    Natalie Grover

    “Asteroids hold clues about how our solar system formed. Their physical makeup and composition can also help answer the big question of how life emerged,” says Dr Naomi Murdoch, a planetary scientist specialising in the geophysical evolution of asteroids at the French aeronautics and space institute ISAE-SUPAERO (FR). Image credit – Naomi Murdoch.

    Asteroids — the bits and pieces left over from the formation of the inner planets — are a source of great curiosity for those keen to learn about the building blocks of our solar system, and to probe the chemistry of life.

    Humans are also considering mining asteroids for metals, but one of the crucial reasons scientists study this ancient space rubble is planetary defence, given the potential for space debris to cause Earth harm.

    Accordingly, NASA is planning a 2022 planetary defence mission that involves sending a spacecraft to crash into a near-Earth asteroid in an effort to check whether it could be deflected were it on a collision course with Earth.

    Dr Naomi Murdoch — a planetary scientist at the French aeronautics and space institute ISAE-SUPAERO, who specialises in the geophysical evolution of asteroids — is part of a follow-on mission planned by the European Space Agency [Agence spatiale européenne][Europäische Weltraumorganisation](EU).

    She tells Horizon how the mission will characterise the asteroid after impact to obtain data that will inform strategies designed to address any threatening asteroids that might come Earth’s way.

    But are we in any real danger of being wiped out by a big rocky remnant? Not really, but some asteroids can cause considerable damage, which is why we’re shoring up our defences here on Earth, she suggests.

    What makes asteroids interesting?

    Asteroids hold clues about how our solar system formed. Their physical makeup and composition can also help answer the big question of how life emerged.

    How many have we identified – and what are they made of?

    So far, we have identified more than a million asteroids, but there are tens, if not hundreds, of millions out there that we don’t know about. This is because, unlike stars, asteroids don’t emit light of their own; they only reflect sunlight, so many of the smaller ones are difficult to spot.

    What they are made of depends on where they were formed in the solar system. The ones that formed closest to the sun have borne the brunt of the heat, losing material that could have been really interesting to study. But the most common ones are those that formed furthest away from the sun: the C (carbonaceous)-type, likely consisting of clay and silicate rocks, are among the most ancient objects in the solar system but are hard to detect because they are relatively dark in colour.

    Then there are brighter options. The M (metallic)-type, composed mainly of metallic iron, largely inhabit the asteroid belt’s middle section. (The asteroid belt lies roughly between Mars and Jupiter). The S (stony)-type, comprising silicate materials and nickel-iron, are most commonly found in the inner asteroid belt.

    Most meteorites (small pieces of an asteroid or comet that survive the journey through Earth’s atmosphere) found on Earth are either metallic or stony. It is less likely that the carbonaceous type will be found on the ground, unless the asteroids were quite large, because they have to survive our planet’s atmosphere without completely burning up. Basically, the types of meteorites that we find on the ground are not necessarily representative of the types of asteroids that hit our atmosphere.

    So what kind of asteroid are scientists wary of in terms of the danger they pose to our planet?

    Any size of asteroid could, in principle, hit us, but the largest asteroids are easy to detect — we’ve identified the vast majority of them and they’re not risky. There are many, many more small asteroids than there are large ones, and because they’re small, they’re really difficult to detect and difficult to follow. We have to look for them several times in order to pinpoint their orbit and know where they’re going to be in space.

    What we focus on are those (small asteroids) in the 100-to-500-metre size range. This size range is probably the most dangerous because they could still cause a large amount of damage on Earth, for example on a regional and national scale. But we don’t know yet where they all are, which is why this is the key size range for planetary defence, because there’s a risk of discovering one day that one we didn’t know existed is coming towards us.

    Space scientists are trying to improve our ability to detect these smaller asteroids, then assess whether they are threats, and finally, if need be, (we try to) deflect the object.

    As part of the NEO-MAPP project, we are helping prepare for these planetary defence missions by improving space instruments that are linked to measuring properties of the surface, the subsurface and the internal structure of asteroids, because it’s these parameters that will govern whether a deflection mission is successful or not. Another objective is to develop a better understanding of landing on asteroids, of the consequences of their low gravity environment, and how to interpret data recorded during surface interactions.

    Once you’ve detected an asteroid you want to explore, how do you go about landing on one?

    Before the first space missions, many people thought that asteroids were just boring lumps of rock, but we started to realise that they were actually a lot more interesting. They have their own evolutionary history, which is really important to understand the solar system in general.

    The only way to really probe the mechanical and physical properties of an asteroid is to touch and interact directly with it, but we don’t have a good understanding of the actual surface of asteroids, which harbour a low-gravity environment. It’s a really exotic place, typically covered by granular material like sand, rocks, boulders, depending on the type of asteroid and its size. And this granular material, in that low gravity environment, appears to behave much more like a fluid than the same material would behave on Earth.

    As a result, previous missions have had varying degrees of landing success so we are now studying landing behaviour in gravitational conditions similar to those on asteroids.

    You are part of the European Space Agency’s Hera mission, which will follow on from NASA’s DART mission to a binary asteroid system. What are these missions hoping to achieve?

    DART is an upcoming planetary defence mission designed to collide with a smaller asteroid moon, called Dimorphos, orbiting with the near-Earth asteroid Didymos. The idea is to test whether Dimorphos’s orbit can be deflected. In the days following, we’ll know whether the deflection was successful or not. Then, Hera will survey and characterise the asteroid pair and the resulting crater.

    The main Hera spacecraft will not touch the surface, and will perform all of the investigations in orbit around the asteroids. However, mini satellites called cubesats will land on the moon. One, for instance, will orbit and study the asteroid (the main instrument is a radar for looking inside it), and then it will descend to the surface. The landing part of the mission is ‘bonus science’ (not necessary to achieve the mission goals), but extremely interesting in order to characterise the physical properties of the asteroid.

    The idea behind these missions is to test a key deflection method and to understand the target. Although Dimorphos is not a threat to Earth, it is a size that is roughly in line with potentially threatening asteroids. What we want to do is have a well-characterised, large-scale experiment that we can use to extrapolate to any potential asteroid threats. In order to do that we need to learn about our targets, including their form, mass density, the impact crater size and the level of debris generated upon collision.

    By measuring the physical properties and characterising the target in detail we can calibrate our numerical (impact) models. If one day a potentially dangerous asteroid comes our way, we can use these models to predict what may happen if we try to deflect it.

    Another feature of Hera is the plan to take a look inside the moon. I think it’s going to be extremely exciting to see what’s in there, because that’s going to tell us a lot about the history of the asteroid-moon pair.

    So we’re gearing up to tackle any asteroids that might cause Earth some damage. But how likely are we to be wiped out completely by an asteroid?

    Small asteroids, including pieces tiny enough to be called space dust, hit our atmosphere every day — that is what shooting stars are. The probability of an asteroid causing large-scale damage is very small. That 100-to-500-metre size range is the most threatening range — so that’s what scientists are working on at the moment.

    Overall, we can all sleep soundly knowing that it is extremely unlikely that we’re going to be wiped out by an asteroid.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 4:57 pm on April 26, 2021 Permalink | Reply
    Tags: "What happens below Earth’s surface when the most powerful earthquakes occur", An unusual feature of megathrust quakes is that they are often followed by a series of other very powerful megathrust quakes several years later and with epicentres hundreds of kilometres away., Geologists find a rock made of a mineral with what’s called an inclusion crystal inside it. This inclusion was trapped inside the mineral as two subducting plates squeezed each other at great depth., Horizon - The EU Research and Innovation Magazine, It turns out that we can get a unique window on subduction zones as they were millions of years ago., Megathrust quakes are the result of the subduction of one tectonic plate below another., Science requires a careful examination of seismological and geodetic data at a greater scale than has previously been done., We have very little understanding of the dynamics of the subduction and how it might trigger an instability that leads to another megathrust event a few years later.

    From Horizon The EU Research and Innovation Magazine : “What happens below Earth’s surface when the most powerful earthquakes occur” 



    26 April 2021
    Caleb Davies

    Megathrust earthquakes happen at subduction zones, where one tectonic plate is forced under another. Credit: Marco Reyes / Unsplash.

    At 03:34 local time on 27 February 2010, Chile was struck by one of the most powerful earthquakes in a century. The shock triggered a tsunami, which devastated coastal communities. The combined events killed more than 500 people. So powerful was the shaking that, by one NASA estimate, it shifted Earth’s axis of spin by a full 8 cm.

    Like nearly all of the most powerful earthquakes, this was a megathrust earthquake. These happen at subduction zones, places where one tectonic plate is forced under another. If the plates suddenly slip – wallop, you get a massive earthquake. The 2010 Chile quake was a magnitude 8.8: strong enough to shift buildings off their foundations.

    We understand subduction zones poorly, which is why geophysicist Professor Anne Socquet, based at Grenoble Alps University [Université Grenoble Alpes] (FR), had planned to visit Chile. She wanted to install seismic monitoring instruments to collect data. By coincidence, she arrived just a week after the quake. ‘It was terrifying,’ she said. ‘The apartment we had rented had fissures in the walls that you could put your fist inside.’

    Most people who study megathrust quakes focus on the foreshocks that immediately precede the main quake, Prof. Socquet says. But an unusual feature of megathrust quakes is that they are often followed by a series of other very powerful megathrust quakes several years later and with epicentres hundreds of kilometres away. The 2010 Chile quake, for instance, was followed by other events in 2014, 2015 and 2016 centred on areas up and down the Chile coast. Prof. Socquet wanted to look at these sequences of megathrust earthquakes and investigate the potential links between those great quakes. This requires a careful examination of seismological and geodetic data at a greater scale than has previously been done.


    We know that megathrust quakes are the result of the subduction of one tectonic plate below another. But beyond that, we have very little understanding of the dynamics of the subduction and how it might trigger an instability that leads to another megathrust event a few years later. There is some evidence that it could be to do with the release and migration of fluids at great depth. Prof. Socquet’s DEEP-trigger project is about filling that gap. ‘This is kind of virgin territory in terms of observations,’ she said.

    The first step of the six-month-old project was supposed to be adding to the network of about 250 GPS instruments that she has contributed to in Chile since 2007 and building a new instrument network in Peru. Currently unable to travel to South America due to the Covid-19 pandemic, she’s been working with local contacts to begin the installation. She’s also working on computational tools to begin analysing legacy data from the region.

    ‘The critical thing will be to have systematic observations of the link between the slow slip and the seismic fractures at large time and space scales. This will be a very big input to science.’

    At the University of Pavia in Italy, mineralogist Professor Matteo Alvaro is also interested in megaquakes – albeit much, much older ones.

    It turns out that we can get a unique window on subduction zones as they were millions of years ago. There are certain places, few and far between, where rocks that have been through subduction zones are forced up to the surface. By analysing these rocks we can deduce the depths and pressures at which the subduction happened and build up a picture of how subduction works – and maybe how megathrust earthquakes are triggered.

    Prof. Alvaro has just demonstrated the first successful application of a combination of x-ray crystallography and a technique called Raman spectroscopy with a sample of a rock from a location known as the Mir pipe in Siberia. Image credit – Vladimir, licensed under CC BY 3.0.


    It usually works like this. Geologists find a rock made of a mineral with what’s called an inclusion crystal inside it. This inclusion was trapped inside the mineral as two subducting plates squeezed each other at great depth, perhaps 100 km or more below the surface. It will have a particular crystal structure – a specific, repeating spatial arrangement of atoms – which depends on the pressure it experienced as it formed. The crystal can reveal the pressure the inclusion was exposed to, and hence the depth at which it formed.

    The trouble is, this is an oversimplification. It only holds if the inclusion is cube-shaped – and it almost never is. ‘This whole idea of pressure equals depth – we all know this might be incorrect,’ says Prof. Alvaro. ‘The natural question is, okay, but by how much are we wrong?’ That’s what he decided to find out in his project TRUE DEPTHS.

    The plan was simple in principle. Prof. Alvaro wanted to measure the strain experienced by the crystal while still trapped inside the mineral. If he could understand the tiny displacement of the atoms from their usual positions in a typical, unpressurised crystal structure, that would provide a better measure of the stress applied by the surrounding rock as the crystal was formed and so a more accurate measure of the depth at which it was formed. To study the atomic structure, he uses a combination of x-ray crystallography and a technique called Raman spectroscopy.

    Prof. Alvaro has just demonstrated the first successful application of his techniques. He looked at a sample of a rock from a location known as the Mir pipe in Siberia. This is a shaft of molten kimberlite rock that rose very fast from huge depths. (We get most of our diamonds from kimberlite pipes like this, and indeed, Mir has been mined extensively.) Prof. Alvaro looked at garnet rocks, brought up in this way, with tiny quartz inclusions inside. ‘The kimberlite is the elevator that brings it to the surface,’ he said.


    By measuring the strain on the inclusions, he could confirm that they formed at a pressure of 1.5 gigapascals (about 15,000 times that found at Earth’s surface) and a temperature of 850°C. This isn’t entirely surprising, but it is the first proof that Prof. Alvaro’s technique really works. He is now looking to make more measurements and build a library of examples.
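The pressure-to-depth conversion the article questions is easy to illustrate. Here is a minimal sketch of the naive lithostatic model (pressure = rock density × gravity × depth); the rock density is an illustrative assumption, not a figure from the study:

```python
# Naive lithostatic estimate: the depth at which overburden pressure reaches P.
# This is exactly the oversimplification Prof. Alvaro's work refines, since it
# ignores the inclusion's shape and the non-uniform stress it records.
RHO = 3300.0  # kg/m^3, assumed average density of the overlying rock
G = 9.81      # m/s^2, gravitational acceleration

def depth_from_pressure(p_pa, rho=RHO, g=G):
    """Return the depth in km where lithostatic pressure equals p_pa: h = P / (rho * g)."""
    return p_pa / (rho * g) / 1000.0

print(round(depth_from_pressure(1.5e9), 1))  # ~46.3 km for the 1.5 GPa sample
```

Under these assumed values, 1.5 GPa maps to roughly 46 km, which shows how sensitive a naive pressure-equals-depth reading is to the model's assumptions.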

    He also wonders, more speculatively, if it’s possible that the formation and deformation of the inclusions might act as the very first trigger of megathrust earthquakes. The idea would be that these tiny changes set off cracks in larger rocks that eventually lead a fault to slip out of place. Prof. Alvaro is planning to explore this further.

    ‘No one knows what the initial trigger is, the thing that triggers the first slip,’ said Prof. Alvaro. ‘We started thinking – and maybe it’s a completely crazy idea – that maybe it’s these inclusions. A cluster of them, maybe subject to an instantaneous phase change and so a change in volume. Maybe that could be the very first trigger.’


    Earthquake Alert



    The Earthquake Network project is a research project that aims to develop and maintain a crowdsourced, smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect earthquake waves using the on-board accelerometers. When an earthquake is detected, a warning is issued to alert the population not yet reached by the damaging waves of the earthquake.

    The project started on January 1, 2013 with the release of the Android application of the same name, Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network


    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide better understanding of earthquakes, give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).
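The article does not spell out QCN's exact trigger logic, but the standard way to flag "strong new motions" against background noise is a short-term/long-term average (STA/LTA) energy ratio. A sketch of that idea follows; the window lengths, threshold and synthetic signal are arbitrary illustrative choices, not QCN parameters:

```python
import numpy as np

def sta_lta_trigger(accel, sta_len=50, lta_len=500, threshold=5.0):
    """Flag samples where the short-term average (STA) of signal energy
    exceeds the trailing long-term average (LTA) by `threshold`: a classic
    way to separate sudden strong shaking from slowly varying noise."""
    energy = np.asarray(accel, dtype=float) ** 2
    csum = np.concatenate([[0.0], np.cumsum(energy)])
    triggered = np.zeros(len(energy), dtype=bool)
    for i in range(sta_len + lta_len, len(energy)):
        sta = (csum[i] - csum[i - sta_len]) / sta_len                      # recent energy
        lta = (csum[i - sta_len] - csum[i - sta_len - lta_len]) / lta_len  # background level
        if lta > 0 and sta / lta > threshold:
            triggered[i] = True
    return triggered

# Quiet background noise followed by a burst of strong shaking:
rng = np.random.default_rng(0)
signal = np.concatenate([0.01 * rng.standard_normal(2000),
                         1.00 * rng.standard_normal(200)])
print(sta_lta_trigger(signal).any())  # True: the burst trips the trigger
```

As the text describes, a real deployment would then aggregate such per-sensor triggers server-side across many hosts to separate earthquakes from cultural noise like slamming doors.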

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC, more properly the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. QCN links existing networked laptops and desktops in the hope of forming the world’s largest strong-motion seismic network.

    Below, the QCN Quake-Catcher Network map.

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.
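    The 30-year probabilities above can be turned into rough annual rates under a simple Poisson assumption; the underlying USGS forecasts use far more detailed models, so this is only a back-of-the-envelope reading.

```python
import math

# If P is the probability of at least one qualifying earthquake in T years,
# a Poisson-process reading gives an implied annual rate of -ln(1 - P) / T,
# and 1/rate is the mean recurrence interval.

def annual_rate(prob, years):
    return -math.log(1.0 - prob) / years

rate_ca = annual_rate(0.997, 30)   # California, M >= 6.7
rate_csz = annual_rate(0.10, 30)   # Cascadia subduction zone, M 8-9

print(f"California: one M>=6.7 event every ~{1/rate_ca:.1f} years on average")
print(f"Cascadia:   one M8-9 event every ~{1/rate_csz:.0f} years on average")
```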

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.
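    The "few seconds to a few tens of seconds" figure follows from simple wave-arrival arithmetic: the alert can go out once the P wave has been detected and processed, while strong shaking arrives with the slower S wave. The wave speeds and processing delay below are typical assumed values, not ShakeAlert system parameters.

```python
# Back-of-the-envelope earthquake early warning time versus distance.
VP = 6.0   # P-wave speed, km/s (assumed typical crustal value)
VS = 3.5   # S-wave speed, km/s (assumed typical crustal value)

def warning_time(distance_km, processing_s=5.0):
    """Seconds between alert issuance and S-wave arrival (negative = too late)."""
    p_arrival = distance_km / VP
    s_arrival = distance_km / VS
    return s_arrival - (p_arrival + processing_s)

for d in (20, 50, 100, 200):
    print(f"{d:4d} km from epicenter: {warning_time(d):6.1f} s of warning")
```

    Note the "blind zone" near the epicenter: close in, the S wave arrives before processing finishes, which is why sites tens of kilometres away benefit most.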

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January 2012 and in the Pacific Northwest since February 2015.

    In February of 2016 the USGS, along with its partners, rolled-out the next-generation ShakeAlert early warning test system in California joined by Oregon and Washington in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers, and allows for automatic fail-over if connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.


    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan



    About Early Warning Labs, LLC

    Early Warning Labs, LLC (EWL) is an Earthquake Early Warning technology developer and integrator located in Santa Monica, CA. EWL is partnered with industry leading GIS provider ESRI, Inc. and is collaborating with the US Government and university partners.

    EWL is investing millions of dollars over the next 36 months to complete the final integration and delivery of Earthquake Early Warning to individual consumers, government entities, and commercial users.

    EWL’s mission is to improve, expand, and lower the costs of the existing earthquake early warning systems.

    EWL is developing a robust cloud server environment to handle low-cost mass distribution of these warnings. In addition, Early Warning Labs is researching and developing automated response standards and systems that allow public and private users to take pre-defined automated actions to protect lives and assets.

    EWL has an existing beta R&D test system installed at one of the largest studios in Southern California. The goal of this system is to stress test EWL’s hardware, software, and alert signals while improving latency and reliability.

    Earthquake Early Warning Introduction

    The United States Geological Survey (USGS), in collaboration with state agencies, university partners, and private industry, is developing an earthquake early warning system (EEW) for the West Coast of the United States called ShakeAlert. The USGS Earthquake Hazards Program aims to mitigate earthquake losses in the United States. Citizens, first responders, and engineers rely on the USGS for accurate and timely information about where earthquakes occur, the ground shaking intensity in different locations, and the likelihood of future significant ground shaking.

    The ShakeAlert Earthquake Early Warning System recently entered its first phase of operations. The USGS working in partnership with the California Governor’s Office of Emergency Services (Cal OES) is now allowing for the testing of public alerting via apps, Wireless Emergency Alerts, and by other means throughout California.

    ShakeAlert partners in Oregon and Washington are working with the USGS to test public alerting in those states sometime in 2020.

    ShakeAlert has demonstrated the feasibility of earthquake early warning, from event detection to producing USGS issued ShakeAlerts ® and will continue to undergo testing and will improve over time. In particular, robust and reliable alert delivery pathways for automated actions are currently being developed and implemented by private industry partners for use in California, Oregon, and Washington.

    Earthquake Early Warning Background

    The objective of an earthquake early warning system is to rapidly detect the initiation of an earthquake, estimate the level of ground shaking intensity to be expected, and issue a warning before significant ground shaking starts. A network of seismic sensors detects the first energy to radiate from an earthquake, the P-wave energy, and the location and the magnitude of the earthquake is rapidly determined. Then, the anticipated ground shaking across the region to be affected is estimated. The system can provide warning before the S-wave arrives, which brings the strong shaking that usually causes most of the damage. Warnings will be distributed to local and state public emergency response officials, critical infrastructure, private businesses, and the public. EEW systems have been successfully implemented in Japan, Taiwan, Mexico, and other nations with varying degrees of sophistication and coverage.

    Earthquake early warning can provide enough time to:

    Instruct students and employees to take a protective action such as Drop, Cover, and Hold On
    Initiate mass notification procedures
    Open fire-house doors and notify local first responders
    Slow and stop trains and taxiing planes
    Install measures to prevent/limit additional cars from going on bridges, entering tunnels, and being on freeway overpasses before the shaking starts
    Move people away from dangerous machines or chemicals in work environments
    Shut down gas lines, water treatment plants, or nuclear reactors
    Automatically shut down and isolate industrial systems

    However, earthquake warning notifications must be transmitted without requiring human review, and response actions must be automated, because total warning times are short and depend on distance from the epicenter and on varying soil densities.


    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 11:17 am on April 22, 2021 Permalink | Reply
    Tags: "Paris to Berlin in an hour by train? Here’s how it could happen", Horizon - The EU Research and Innovation Magazine, , Maglev   

    From Horizon-The EU Research and Innovation Magazine : “Paris to Berlin in an hour by train? Here’s how it could happen” 


    From Horizon-The EU Research and Innovation Magazine

    22 April 2021
    Tom Cassauwers

    The hyperloop is ready for a breakthrough, and Zeleros is one of the concepts in the running. The Spanish start-up has created a unique technology thanks to its approach to higher-pressure tubes. Artist’s impression – Zeleros hyperloop.

    The hyperloop is what you get when you take a magnetic levitation train and put it into an airless tube. The lack of resistance allows the train, in theory, to achieve unseen speeds, a concept that is edging closer and closer to reality – and could provide a greener alternative to short-haul air travel.

    In November 2020, two people shot through an airless tube at 160 km/h in the desert outside Las Vegas. This wasn’t a ride invented by a casino or theme park; it was the first crewed ride of a hyperloop, by the company Virgin Hyperloop. The ride lasted only 15 seconds, and the speed was a far cry from the 1200 km/h they promise to one day reach, but it represented a step forward.

    The hyperloop might be the future of transportation for medium-length journeys. It could out-compete high-speed rail, and at the same time operate at speeds comparable to aviation, but at a fraction of its environmental and energy costs. It’s a concept which start-ups and researchers have eagerly adopted, including several teams across Europe.

    Open design

    The idea originated with the US entrepreneur Elon Musk, associated with companies like SpaceX and Tesla. After he mentioned it several times in public, a team of SpaceX and Tesla engineers released an open concept in 2013. This initial idea then spawned a range of companies and even student teams, trying to design their own versions. Among them were several students in the Spanish city of Valencia.

    ‘We started in 2015 after Elon Musk’s announcement, when we were still students’, said Juan Vicén Balaguer, co-founder and chief marketing officer of the hyperloop start-up Zeleros, which today employs more than 50 people and raised around €10 million in funding. ‘We’ve been working on this technology for five years, and it can be a real alternative mode of transportation.’

    Yet the idea behind the hyperloop is older than Elon Musk, and it’s similar to an earlier idea called a vactrain or vacuum tube train. A comparable concept was already proposed by 19th century author Michel Verne, son of Jules, and has since then been periodically brought up by science-fiction writers and technologists. Now, however, the hyperloop seems to be getting ready for a breakthrough, and Zeleros is one of the concepts in the running.

    ‘You need to remove the air from the front of the vehicle. If not, the craft would stop. Which is why we use a compressor system at the front of the vehicle’, explained Juan Vicén Balaguer, co-founder and chief marketing officer of Zeleros. Artist’s impression – Zeleros hyperloop.

    Higher-pressure tube

    What makes their technology unique is their approach to the tube. ‘Each company uses a different level of pressure,’ said Vicén. ‘Some are going for space pressure levels. Which means that the atmosphere in the tube is similar to space. It contains almost zero air.’

    This state would allow for very fast speeds, since the train would face almost no friction. Yet it comes with a range of practical issues. It’s very difficult and expensive to achieve and maintain this level of pressure for long stretches of tube. Safety would also be an issue: if something happens to the hull of the train, passengers would be exposed to dangerous vacuum conditions.

    That’s why Zeleros is aiming for higher-pressure tubes. ‘It would be similar to the pressure seen in aviation,’ said Vicén. The pressure in the tubes proposed by Zeleros would be around 100 millibars. This, in turn, allows them to copy safety systems from aircraft, such as the oxygen masks that drop from overhead compartments. This design choice also makes their tubes cheaper to build, thereby reducing infrastructure costs. Yet it also means their trains face more air friction as they glide through the tube, which they have to compensate for in other ways.

    ‘You need to remove the air from the front of the vehicle,’ said Vicén. ‘If not, the craft would stop. Which is why we use a compressor system at the front of the vehicle. If there was zero pressure, we wouldn’t need this. But it’s a balance between economics and efficiency.’

    At the front of the train is a compressor, which looks like the front of an airliner engine and which sucks in air and lets it out at the rear, providing propulsion for the craft. A so-called linear motor is also located at key parts of the track, like the start, to give the train its initial propulsion. From there it self-propels along the track, with magnets at the top of the vehicle attracting it to the top of the tube and making it levitate. This proposed craft would carry between 50 and 200 passengers, and would reach up to 1000 km/h. By comparison, the cruising speed of a short-haul passenger aircraft is about 800 km/h.
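    The trade-off between tube pressure and friction can be made concrete with the standard drag equation, since air density (at fixed temperature) scales linearly with pressure. The drag coefficient and frontal area below are illustrative guesses, not Zeleros design figures.

```python
# Aerodynamic drag F = 0.5 * rho * v^2 * Cd * A, with density scaled by
# tube pressure relative to sea level. All vehicle numbers are assumptions.

RHO_SEA_LEVEL = 1.225  # kg/m^3 at ~1013 mbar

def drag_force(speed_ms, pressure_mbar, cd=0.25, area_m2=10.0):
    rho = RHO_SEA_LEVEL * pressure_mbar / 1013.0
    return 0.5 * rho * speed_ms**2 * cd * area_m2

v = 1000 / 3.6  # 1000 km/h in m/s
open_air = drag_force(v, 1013.0)
tube = drag_force(v, 100.0)
print(f"drag at 1 atm:    {open_air/1000:.1f} kN")
print(f"drag at 100 mbar: {tube/1000:.1f} kN ({tube/open_air:.1%} of open air)")
```

    At 100 millibars the drag is about a tenth of the open-air value, which is why some residual air can be tolerated but still has to be actively moved out of the vehicle's path.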

    Outcompete air

    But why do we need this in the first place? Shouldn’t we just invest more in our regular, high-speed trains? It’s more complicated than that, says Professor María Luisa Martínez Muneta from the Technical University of Madrid [Universidad Politécnica de Madrid] (ES), where she coordinates the HYPERNEX research project. HYPERNEX connects hyperloop start-ups, like Zeleros, with universities, railway companies and regulators, in order to accelerate the technology’s development in Europe.

    ‘Hyperloops face today’s greatest transportation demands: reduction of travel time and of environmental impact,’ said Prof. Martínez Muneta.

    Because of its limited speed – generally around 300-350 km/h – high-speed rail quickly becomes a bad choice for longer range travel if you want to get somewhere in a hurry. This gap is filled by short and medium-distance air travel, but aircraft emit a high volume of emissions compared to trains and are not always convenient, as airports can be located away from city centres.

    A hyperloop could solve the problem. ‘This mode of transport is focused on covering routes between 400 and 1500 kilometres,’ said Prof. Martínez Muneta. In this way a hyperloop would replace most shorter aeroplane travel, with much less of an environmental impact. ‘The hyperloop produces zero direct emissions as it is 100% electrical, while achieving higher speeds and therefore shorter travel times,’ she said.
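    A rough comparison of in-vehicle travel times on the 400-1500 km routes mentioned above can be sketched with the cruise speeds quoted in this article (round figures; real door-to-door times also include acceleration, boarding, and access to stations or airports).

```python
# Cruise speeds in km/h, taken as round figures from the article.
MODES = {"high-speed rail": 300, "aircraft": 800, "hyperloop": 1000}

def hours(distance_km, speed_kmh):
    return distance_km / speed_kmh

for dist_km in (400, 900, 1500):
    summary = ", ".join(f"{m}: {hours(dist_km, v):.1f} h" for m, v in MODES.items())
    print(f"{dist_km} km -> {summary}")
```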

    With a speed of 1000 km/h, the hyperloop could be a greener and faster alternative to air travel. Image credit – Horizon.

    Labs and regulation

    Bringing this vision into reality will likely take a decade. Vicén from Zeleros predicts that the first commercial passenger routes will come online around 2030, with hyperloops focused on cargo arriving a few years earlier, around 2025-2027.

    One key issue in this timeframe is regulation. ‘The European Union is the first region that has a committee that promotes regulation and standardisation of hyperloops,’ said Vicén, referring to the 2020 founding of a joint technical committee on hyperloops by the European Committee for Standardization and the European Committee for Electrotechnical Standardization.

    According to Zeleros, this is an important step if hyperloops want to become commercially viable. These craft would operate at hitherto unseen speeds, with new safety characteristics like airless tubes. This would in turn require new regulations and standardisations, for example on what to do if the capsule depressurised.

    The pressure in the tubes proposed by Zeleros would be around 100 millibars, allowing safety systems to be copied from aircraft, such as the oxygen masks that drop from overhead compartments. Artist’s impression – Zeleros hyperloop.

    The technology also remains somewhat untested, although real-world experiments are happening more often. Vicén mentions how they have already tested their technology in computer simulations, where they can model things like aerodynamic conditions and electromagnetic dynamics. They also use so-called physical demonstrators or prototypes that test in laboratory conditions how magnetism is affected by high speeds, for example.

    Nevertheless, they are aching to move from the lab to the field. Right now, they are planning to build a 3-km test track at a still-to-be-determined location in Spain, where by 2023 they hope to demonstrate their technology, and they are working with the Port of Valencia to study the use of hyperloops in transporting freight.

    Hyperloops might still be a few years out, but we’ll likely see more of them in the future.

    See the full article here .


  • richardmitnick 12:06 pm on April 19, 2021 Permalink | Reply
    Tags: "How scientists are ‘looking’ inside asteroids", Horizon - The EU Research and Innovation Magazine   

    From Horizon-The EU Research and Innovation Magazine : “How scientists are ‘looking’ inside asteroids” 


    From Horizon The EU Research and Innovation Magazine

    19 April 2021
    Tereza Pultarova

    The shape of asteroids such as 243 Ida can reveal information about what they’re made of, which can, in turn, tell us more about the formation of the solar system. Image credit – National Aeronautics and Space Administration(US)/NASA-JPL/Caltech(US)/USGS – U.S. Geological Survey.

    Asteroids can pose a threat to life on Earth but are also a valuable source of resources to make fuel or water to aid deep space exploration. Devoid of geological and atmospheric processes, these space rocks provide a window onto the evolution of the solar system. But to really understand their secrets, scientists must know what’s inside them.

    Only four spacecraft have ever landed on an asteroid – most recently in October 2020 – but none has peered inside one. Yet understanding the internal structures of these cosmic rocks is crucial for answering key questions about, for example, the origins of our own planet.

    ‘Asteroids are the only objects in our solar system that are more or less unchanged since the very beginning of the solar system’s formation,’ said Dr Fabio Ferrari, who studies asteroid dynamics at the University of Bern [Universität Bern] (CH). ‘If we know what’s inside asteroids, we can understand a lot about how planets formed, how everything that we have in our solar system has formed and might evolve in the future.’

    There are also more practical reasons for knowing what’s inside an asteroid, such as mining for materials to facilitate human exploration of other celestial bodies, or defending against an Earth-bound rock.

    NASA’s upcoming Double Asteroid Redirection Test (DART) mission, expected to launch later this year, will crash into the 160m-diameter asteroid moon Dimorphos in 2022, with the aim of changing its orbit.

    The experiment will demonstrate for the first time whether humans can deflect a potentially dangerous asteroid.
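    The physics of such a kinetic deflection can be sketched with momentum conservation: the impactor's momentum, amplified by a factor beta accounting for crater ejecta, is shared with the whole asteroid. The spacecraft and asteroid figures below are order-of-magnitude assumptions for illustration, not official DART mission parameters.

```python
# Velocity change imparted by a kinetic impactor: dv = beta * m * v / M,
# where beta >= 1 captures the extra momentum of ejecta thrown off the crater.

def impact_delta_v(m_impactor_kg, v_impact_ms, m_target_kg, beta=1.0):
    return beta * m_impactor_kg * v_impact_ms / m_target_kg

dv = impact_delta_v(m_impactor_kg=550.0,   # DART-class spacecraft (assumed)
                    v_impact_ms=6100.0,    # ~6.1 km/s closing speed (assumed)
                    m_target_kg=5.0e9)     # small asteroid moon (assumed)
print(f"delta-v ~ {dv*1000:.2f} mm/s")
```

    A change of well under a millimetre per second sounds tiny, but applied years before a predicted Earth encounter it can shift an asteroid's arrival by more than an Earth radius.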

    But scientists have only rough ideas about how Dimorphos will respond to the impact as they know very little about both this asteroid moon, and its parent asteroid, Didymos.

    To better address such questions, scientists are investigating how to remotely tell what’s inside an asteroid and discern its type.


    There are many types of asteroids. Some are solid blocks of rock, rugged and sturdy, others are conglomerates of pebbles, boulders and sand, products of many orbital collisions, held together only by the power of gravity. There are also rare metallic asteroids, heavy and dense.

    ‘To deflect the denser monolithic asteroids, you would need a bigger spacecraft, you would need to travel faster,’ said Dr Hannah Susorney, a research fellow in planetary science at the University of Bristol (UK). ‘The asteroids that are just bags of material – we call them rubble piles – can, on the other hand, blow apart into thousands of pieces. Those pieces could by themselves become dangerous.’

    Dr Susorney is exploring what surface features of an asteroid can reveal about the structure of its interior as part of a project called EROS.

    This information could be useful for future space mining companies who would want to know as much as possible about a promising asteroid before investing in a costly prospecting mission, as well as for knowing more about potential threats.

    ‘There are thousands of near-Earth asteroids, those whose trajectories could one day intersect with that of the Earth,’ she said. ‘We have only visited a handful of them. We know close to nothing about the vast majority.’

    During the fourth ever landing on an asteroid, Bennu was mapped thanks to a mosaic of images collected by NASA’s OSIRIS-REx spacecraft. Peering inside an asteroid is the next crucial step. Image credit – NASA Goddard Space Flight Center(US)/University of Arizona (US).


    Dr Susorney is trying to create detailed topography models of two of the most well-studied asteroids – Itokawa (the target of the 2005 Japanese Hayabusa 1 mission) and Eros (mapped in detail by the NEAR Shoemaker space probe in the late 1990s).

    Itokawa. Credit: Japan Aerospace Exploration Agency (JAXA) [国立研究開発法人宇宙航空研究開発機構] (JP).

    Hayabusa. Credit: Japan Aerospace Exploration Agency (JAXA) [国立研究開発法人宇宙航空研究開発機構] (JP).

    ‘The surface topography can actually tell us a lot,’ Dr Susorney said. ‘If you have a rubble pile asteroid, such as Itokawa, which is essentially just a bag of fluff, you cannot expect very steep slopes there. Sand cannot be held up into an infinite slope unless it’s supported. A solid cliff can. The rocky monolithic asteroids, such as Eros, do tend to have much more pronounced topographical features, much deeper and steeper craters.’

    Susorney wants to take the high-resolution models derived from spacecraft data and find parameters in them that could then be used in the much lower resolution asteroid shape models created from ground-based radar observations.

    ‘The difference in the resolution is quite substantial,’ she admits. ‘Tens to hundreds of metres in the high-res spacecraft models and kilometres from ground-based radar measurements. But we have found that, for example, the slope distribution gives us a hint: how much of the asteroid is flat and how much is steep?’
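    The slope-distribution idea can be sketched as a toy classifier: compute what fraction of sampled surface slopes exceed a typical angle of repose for loose material (about 35 degrees, an assumption here), since a rubble pile cannot sustain steeper slopes. The slope samples, cutoff, and classification rule below are all hypothetical.

```python
# Toy surface-slope classifier; slopes are in degrees, sampled from a
# hypothetical shape model.

def steep_fraction(slopes_deg, cutoff_deg=35.0):
    steep = sum(1 for s in slopes_deg if s > cutoff_deg)
    return steep / len(slopes_deg)

def classify(slopes_deg, threshold=0.05):
    """Rubble piles can't sustain slopes beyond the angle of repose."""
    return "monolithic?" if steep_fraction(slopes_deg) > threshold else "rubble pile?"

# Hypothetical slope samples (degrees) from two shape models:
eros_like = [5, 12, 30, 48, 55, 20, 41, 8, 60, 15]
itokawa_like = [3, 8, 14, 20, 11, 6, 17, 9, 25, 12]
print(classify(eros_like))     # steep slopes present -> likely coherent rock
print(classify(itokawa_like))  # gentle slopes -> consistent with rubble pile
```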

    Coloured topographical maps from Dr Susorney show Eros (left), a rocky monolithic asteroid, as having steeper craters than Itokawa (right), a rubble pile asteroid. Image credit – Hannah Susorney.

    Dr Ferrari is working with the team preparing the DART mission. As part of a project called GRAINS, he developed a tool that enables modelling of the interior of Dimorphos, the impact target, as well as other rubble pile asteroids.

    ‘We expect that Dimorphos is a rubble pile because we think that it formed from matter ejected by the main asteroid, Didymos, when it was spinning very fast,’ Dr Ferrari said. ‘This ejected matter then re-accreted and formed the moon. But we have no observations of its interior.’

    An aerospace engineer by education, Dr Ferrari borrowed a solution for the asteroid problem from the engineering world, from a discipline called granular dynamics.

    ‘On Earth, this technique can be used to study problems such as sand piling or various industrial processes involving small particles,’ Dr Ferrari said. ‘It’s a numerical tool that allows us to model the interaction between the different particles (components) – in our case, the various boulders and pebbles inside the asteroid.’

    Rubble pile

    The researchers are modelling various shapes and sizes, various compositions of the boulders and pebbles, the gravitational interactions and the friction between them. They can run thousands of such simulations and then compare them with surface data about known asteroids to understand rubble pile asteroids’ behaviour and make-up.
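    A minimal version of such a granular-dynamics (discrete-element) simulation can be sketched with a spring-dashpot contact law between two grains. Real rubble-pile codes like the one described add mutual gravity, friction, and non-spherical shapes; every parameter here is illustrative.

```python
# Two spherical grains on a line, interacting only when they overlap:
# a spring pushes them apart and a dashpot dissipates energy.

def simulate(x1, x2, v1, v2, r=0.5, m=1.0, k=1e4, c=5.0, dt=1e-4, steps=20000):
    for _ in range(steps):
        overlap = 2 * r - (x2 - x1)          # positive when grains touch
        f = 0.0
        if overlap > 0:
            f = k * overlap - c * (v2 - v1)  # spring repels, dashpot damps
        a1, a2 = -f / m, f / m               # equal and opposite (Newton's 3rd law)
        v1 += a1 * dt; v2 += a2 * dt         # semi-implicit Euler step
        x1 += v1 * dt; x2 += v2 * dt
    return x1, x2, v1, v2

# Two grains approaching head-on: they collide, dissipate energy, separate.
x1, x2, v1, v2 = simulate(x1=0.0, x2=2.0, v1=1.0, v2=-1.0)
print(f"final velocities: {v1:.3f}, {v2:.3f}")
```

    Scaled up to thousands of angular grains with self-gravity, this is the kind of numerical experiment that lets researchers test which internal arrangements reproduce an asteroid's observed shape.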

    ‘We can look at the external shape, study various features on the surface, and compare that with our simulations,’ Dr Ferrari said. ‘For example, some asteroids have a prominent equatorial bulge,’ he says, referring to the thickening around the equator that can appear as a result of the asteroid spinning.

    In the simulations, the bulge might appear more prominent for some internal structures than others.

    For the first time, Dr Ferrari added, the tool can work with non-spherical elements, which considerably improves accuracy.

    ‘Spheres behave very differently from angular objects,’ he said.

    The model suggests that in the case of Dimorphos, the DART impact will create a crater and throw up a lot of material from the asteroid’s surface. But there are still many questions, particularly the size of the crater, according to Dr Ferrari.

    ‘The crater might be as small as ten metres but also as wide as a hundred metres, taking up half the size of the asteroid. We don’t really know,’ said Dr Ferrari. ‘Rubble piles are tricky. Because they are so loose, they might as well just absorb the impact.’

    No matter what happens on Dimorphos, the experiment will provide a treasure trove of data for refining future simulations and models. We can see whether the asteroid behaves as we expected and learn how to make more accurate predictions for future missions on which lives on Earth may very well depend.

    The solar system’s asteroid belt contains C-type asteroids, which likely consist of clay and silicate rocks, M-type, which are composed mainly of metallic iron, and S-type, which are formed of silicate materials and nickel-iron. Image credit – Horizon.

    See the full article here .


  • richardmitnick 9:32 am on January 22, 2021 Permalink | Reply
    Tags: "Unravelling the when where and how of volcanic eruptions", A project called FEVER, , , , Horizon - The EU Research and Innovation Magazine, It’s still difficult to predict when and how these eruptions will happen or how they’ll unfold., Knowing where a volcano will erupt from is one thing but knowing when it will do so is a different matter., The VOLCAPSE project, There are about 1500 potentially active volcanoes worldwide and about 50 eruptions occur each year.,   

    From Horizon-The EU Research and Innovation Magazine: “Unravelling the when, where and how of volcanic eruptions” 


    From Horizon-The EU Research and Innovation Magazine

    20 January 2021
    Sandrine Ceurstemont

    When Villarrica erupted in 2015 the volcano spewed ash and lava 1,000m into the air. Credit: Warehouse of Images / shutterstock.

    There are about 1,500 potentially active volcanoes worldwide, and about 50 eruptions occur each year. But it’s still difficult to predict when and how these eruptions will happen or how they’ll unfold. Now, new insights into the physical processes inside volcanoes are giving scientists a better understanding of their behaviour, which could help protect the 1 billion people who live close to volcanoes.

    Dome-building volcanoes, which are frequently active, are among the most dangerous types of volcanoes since they are known for their explosive activity. This type of volcano often erupts by first quietly producing a dome-shaped extrusion of thick lava at its summit which is too viscous to flow. When it eventually becomes destabilised, it breaks off and produces fast-moving currents of hot gas, solidified lava pieces and volcanic ash, called pyroclastic clouds, that flow down the sides of the volcano at the speed of a fast train.

    ‘The hazards associated with them can be very spontaneous and hard to predict,’ said Professor Thomas Walter, a professor of volcanology and geohazards at the University of Potsdam in Germany. ‘That’s why it’s so important to understand this phenomenon of lava domes.’

    Little is known about the behaviour of lava domes, partly because there isn’t much data available. Prof. Walter and his colleagues want to better understand how they form, whether they can vary significantly in shape and what their internal structure is like. Over the last five years, through a project called VOLCAPSE, they have been monitoring lava domes with innovative techniques, combining high-resolution radar data captured by satellites with close-up views from cameras set up near volcanoes.

    ‘Pixel by pixel, we could determine how the shape, morphology and structure of these lava domes changed,’ said Prof. Walter. ‘We compared (the webcam images) to satellite radar observations.’

    The VOLCAPSE project monitors a few dome-building volcanoes around the world using various techniques to better understand this explosive type of volcano. Credit:Thomas Walter/VOLCAPSE.
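    The pixel-by-pixel comparison Prof. Walter describes can be illustrated with a toy change-detection routine on two co-registered grayscale frames. The arrays, brightness threshold, and assumption of perfect alignment are all simplifications of this sketch; a real pipeline would first align and calibrate the images.

```python
# Fraction of pixels whose brightness changed by more than a threshold
# between two equally sized grayscale frames (lists of rows of 0-255 values).

def changed_fraction(before, after, threshold=30):
    assert len(before) == len(after)
    total = changed = 0
    for row_b, row_a in zip(before, after):
        for b, a in zip(row_b, row_a):
            total += 1
            if abs(a - b) > threshold:
                changed += 1
    return changed / total

frame1 = [[100, 100, 100], [100, 100, 100], [100, 100, 100]]
frame2 = [[100, 100, 100], [100, 180, 175], [100, 170, 100]]  # bright new lava
print(f"{changed_fraction(frame1, frame2):.0%} of pixels changed")
```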


    The project focussed on a few dome-building volcanoes such as Colima in Mexico, Mount Merapi in Indonesia, Bezymianny in Russia, and Mount Lascar and Lastarria in Chile. It partly involved visiting them and installing instruments such as time-lapse cameras powered by solar panels that could be controlled remotely. If a lava dome started to form, for example, the team could tweak the settings so that it captured higher resolution images more often.

    Due to high altitudes and harsh weather conditions, setting up the cameras was more challenging than expected. ‘It was a steep learning curve, but also trial and error, because nobody could tell us what to expect at these volcanoes since it had never been done before,’ said Prof. Walter.

    During their visits, the team also used drones. These would fly over a lava dome and capture high resolution images from different perspectives, which could be used to create detailed 3D models. Temperature and gas sensors on the drones provided additional information.

    Prof. Walter and his colleagues used the data to create computer simulations of, for example, how the growth of lava domes changes from eruption to eruption. They found that new lava domes don’t always form in the same location: a lava dome may form at the summit of a volcano during one eruption, while the next time it builds up on one of the flanks. This puzzled the team, because magma reaches the surface through a conduit inside the volcano, so a shifting dome would mean the conduit changes its orientation between one eruption and the next. ‘That was very surprising for us,’ said Prof. Walter.

    Stress field

    They were able to explain how this happens by examining the distribution of internal forces – or stress field – in a volcano. When magma is expelled during an eruption, it changes how the forces are distributed inside and causes a reorientation of the conduit.

    The team also found that there was a systematic pattern to how the stress field changed, meaning that by studying the positions of existing lava domes they could estimate where domes had formed in the past and where they would appear in the future. This could help determine which areas near a volcano are likely to be most affected by eruptions yet to come.

    ‘This is a very cool result for predictive research if you want to understand where the lava dome is going to extrude (or collapse) from in the future,’ he said.

    Fumaroles are a telltale sign of an active volcano, releasing volcanic gases into the atmosphere. Credit: Thomas Walter/VOLCAPSE.

    Knowing where a volcano will erupt from is one thing, but knowing when it will do so is a different matter, and the physical factors that govern this are also not well understood. Although there is a relationship between how often eruptions occur and their size, with big eruptions occurring very rarely compared to smaller ones, a lack of reliable data makes it hard to examine the processes that control eruption frequency and magnitude.

    ‘When you go back in the geological record, (the traces of) many eruptions disappear because of erosion,’ said Professor Luca Caricchi, a professor of petrology and volcanology at the University of Geneva in Switzerland.

    Furthermore, it’s not possible to observe these processes directly, since they occur deep beneath a volcano, at depths of 5 to 60 kilometres. Measuring the chemistry and textures of magma expelled during an eruption can provide some clues about the internal processes that led to the event. And magma chambers can sometimes be investigated when tectonic processes expose them at the Earth’s surface. Extracting information about specific time periods is still difficult, though, since the ‘picture’ you get is like a movie with all the frames collapsed into a single shot. ‘It’s complicated to retrieve the evolution in time – what really happened during the movie,’ said Prof. Caricchi.

    Prof. Caricchi and his colleagues are using a novel approach to forecast the recurrence rate of eruptions. Previous predictions were typically based on statistical analyses of the geological records of a volcano. But through a project called FEVER the team is aiming to combine this method with physical modelling of the processes responsible for the frequency and size of eruptions. A similar approach has been used to estimate when earthquakes and floods will occur again.

    Using physical models should be especially useful for making predictions at volcanoes where little data is available. ‘To extrapolate our findings from a place where we know a lot, like in Japan, you need a physical model that tells you why the frequency-magnitude relationship changes,’ said Prof. Caricchi.

    To create their model, the team incorporated variables that affect pressure in the magma reservoir, such as the rate at which magma accumulates at depth below the volcano. The viscosity of the crust under the volcano and the size of the magma reservoir, for example, also play a role. They performed over a million simulations covering the combinations of values that can occur. The frequency-magnitude relationship obtained from the model was similar to the one estimated from volcanic records, so the team believes it captures the fundamental processes involved.
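    The parameter-sweep idea – running simulations over many combinations of magma supply, crust properties and reservoir size, then reading off a frequency-magnitude relationship – can be sketched as follows. Every relation and number in this toy model is invented for illustration; it is not FEVER’s physical model:

```python
import math
import random

random.seed(0)

def toy_eruption_magnitude(supply_rate, crust_viscosity, reservoir_size):
    """Toy proxy for eruption size: a stiffer crust lets more magma
    accumulate before failure (every relation here is illustrative)."""
    accumulated_magma = supply_rate * crust_viscosity * reservoir_size
    return math.log10(accumulated_magma)

# Sweep many random parameter combinations, as a small-scale stand-in
# for the project's million-run ensemble.
magnitudes = []
for _ in range(100_000):
    supply = random.lognormvariate(0, 1)     # magma supply rate (arbitrary units)
    viscosity = random.lognormvariate(0, 1)  # effective crust viscosity
    reservoir = random.lognormvariate(0, 1)  # magma reservoir size
    magnitudes.append(toy_eruption_magnitude(supply, viscosity, reservoir))

# The resulting frequency-magnitude relation: big events are rare.
big = sum(m > 2 for m in magnitudes)
small = sum(m > 0 for m in magnitudes)
print(f"eruptions above magnitude 2: {big}, above magnitude 0: {small}")
```

Even this crude ensemble reproduces the qualitative pattern described in the article: events above magnitude 2 turn out to be far rarer than those above magnitude 0.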

    ‘It’s sort of a fight between the amount of magma and the properties of the crust,’ said Prof. Caricchi. ‘They are the two big players that fight each other to finally lead to this relationship.’

    Models that can better predict future eruptions could protect the lives of the 1 billion people who live close to volcanoes. Credit: Thomas Walter/VOLCAPSE.

    Tectonic plates

    However, the team also found that the relationship between the size and frequency of eruptions changes across volcanoes in different regions. Prof. Caricchi thinks this is due to differences in the geometry of tectonic plates in each area.

    The tectonic plates of the world, mapped in 1996. Credit: USGS.

    “We can see that the rate at which a plate subducts below another, and also the angle of subduction, seem to play an important role in defining the frequency and magnitude of a resulting eruption,” he said. The team is now starting to incorporate this new information into their model.

    Being able to predict the frequency and magnitude of future eruptions using a model could help better assess hazards. In Japan, for example, one of the countries with the most active volcanoes, knowing the probability of future eruptions of various sizes is important when deciding where to build infrastructure such as nuclear power plants.

    It’s also invaluable in densely populated areas, such as Mexico City, which is surrounded by active volcanoes, including Nevado de Toluca. Prof. Caricchi and his colleagues studied this volcano, which hasn’t erupted for about 3,000 years. They found that once magmatic activity restarts, it would take about 10 years before a large eruption could potentially occur. This knowledge could spare Mexico City an unnecessary evacuation as soon as initial signs of activity are spotted.

    ‘Once the activity restarts, you know you have ten years to follow the evolution of the situation,’ said Prof. Caricchi. ‘(People) will now know a little bit more about what to expect.’

    The research in this article was funded by the EU’s European Research Council.

    See the full article here .

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 12:16 pm on December 3, 2020 Permalink | Reply
    Tags: "Opening the ‘black box’ of artificial intelligence", , Horizon - The EU Research and Innovation Magazine   

    From Horizon The EU Research and Innovation Magazine: “Opening the ‘black box’ of artificial intelligence” 


    From Horizon The EU Research and Innovation Magazine

    01 December 2020
    Tom Cassauwers

    When decisions are made by artificial intelligence, it can be difficult for the end user to understand the reasoning behind them. Credit: phylevn/Flickr, licenced under CC BY 2.0.

    Artificial intelligence is growing ever more powerful and entering people’s daily lives, yet often we don’t know what goes on inside these systems. Their non-transparency could fuel practical problems, or even racism, which is why researchers increasingly want to open this ‘black box’ and make AI explainable.

    In February of 2013, Eric Loomis was driving around in the small town of La Crosse in Wisconsin, US, when he was stopped by the police. The car he was driving turned out to have been involved in a shooting, and he was arrested. Eventually a court sentenced him to six years in prison.

    This might have been an uneventful case, had it not been for a piece of technology that had aided the judge in making the decision. They used COMPAS, an algorithm that determines the risk of a defendant becoming a recidivist. The court inputs a range of data, like the defendant’s demographic information, into the system, which yields a score of how likely they are to again commit a crime.

    How the algorithm predicts this, however, remains non-transparent. The system, in other words, is a black box – a practice that Loomis challenged in a 2017 complaint to the US Supreme Court. He claimed COMPAS used gender and racial data to make its decisions, and ranked African Americans as higher recidivism risks. The court eventually rejected his case, arguing that the sentence would have been the same even without the algorithm. Yet there have also been a number of revelations suggesting that COMPAS does not accurately predict recidivism.


    While algorithmic sentencing systems are already in use in the US, their adoption in Europe has generally been limited. A Dutch AI system that judged private cases, such as late payments to companies, was shut down in 2018 after critical media coverage, for example. Yet AI has entered other fields across Europe. It is being rolled out to help European doctors diagnose Covid-19. And start-ups like the British M:QUBE, which uses AI to analyse mortgage applications, are popping up fast.

    These systems run historical data through an algorithm, which then comes up with a prediction or course of action. Yet often we don’t know how such a system reaches its conclusion. It might work correctly, or it might have a technical error inside of it. It might even reproduce some form of bias, like racism, without the designers even realising it.

    This is why researchers want to open this black box, and make AI systems transparent, or ‘explainable’, a movement that is now picking up steam. The EU White Paper on Artificial Intelligence released earlier this year called for explainable AI, major companies like Google and IBM are funding research into it and GDPR even includes a right to explainability for consumers.

    ‘We are now able to produce AI models that are very efficient in making decisions,’ said Fosca Giannotti, senior researcher at the Information Science and Technology Institute of the National Research Council in Pisa, Italy. ‘But often these models are impossible for the end-user to understand, which is why explainable AI is becoming so popular.’


    Giannotti leads a research project on explainable AI, called XAI, which wants to make AI systems reveal their internal logic. The project works on automated decision support systems like technology that helps a doctor make a diagnosis or algorithms that recommend to banks whether or not to give someone a loan. They hope to develop the technical methods or even new algorithms that can help make AI explainable.

    ‘Humans still make the final decisions in these systems,’ said Giannotti. ‘But every human who uses these systems should have a clear understanding of the logic behind the suggestion.’

    Today, hospitals and doctors increasingly experiment with AI systems to support their decisions, but they are often unaware of how a decision was made. AI in this case analyses large amounts of medical data and yields the likelihood that a patient has a certain disease.

    For example, a system might be trained on large amounts of photos of human skin, which in some cases represent symptoms of skin cancer. Based on that data, it predicts whether someone is likely to have skin cancer from new pictures of a skin anomaly. These systems are not general practice yet, but hospitals are increasingly testing them, and integrating them in their daily work.

    These systems often use a popular AI method called deep learning, which makes large numbers of small sub-decisions. These are grouped into a network with layers that can range from a few dozen up to hundreds deep, making it particularly hard to see why the system suggested someone has skin cancer, for example, or to identify faulty reasoning.
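    A minimal sketch of what layers of small sub-decisions mean in practice, with made-up weights and inputs (real diagnostic networks are vastly larger and are trained rather than randomised):

```python
import math
import random

random.seed(0)

def layer(inputs, weights):
    """One layer of small 'sub-decisions': each row of weights produces
    a weighted sum of the inputs, squashed to a value between 0 and 1."""
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

# A toy 3-layer network: each layer's outputs feed the next layer.
# Stacking dozens to hundreds of such layers is what makes the final
# score so hard to trace back to the original inputs.
x = [0.2, 0.9, 0.4]  # made-up input features
for _ in range(3):
    weights = [[random.uniform(-1, 1) for _ in x] for _ in range(3)]
    x = layer(x, weights)

print(f"final score: {x[0]:.2f}")  # a 0-to-1 likelihood-style output
```

Each intermediate value is meaningful only in combination with thousands of others, which is why even the network’s designer can struggle to explain a single prediction.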

    “Sometimes even the computer scientist who designed the network cannot really understand the logic,” said Giannotti.

    Natural language

    For Senén Barro, professor of computer science and artificial intelligence at the University of Santiago de Compostela in Spain, AI should not only be able to justify its decisions but do so using human language.

    “Explainable AI should be able to communicate the outcome naturally to humans, but also the reasoning process that justifies the result,” said Prof. Barro.

    He is scientific coordinator of a project called NL4XAI which is training researchers on how to make AI systems explainable, by exploring different sub-areas such as specific techniques to accomplish explainability.

    He says that the end result could look similar to a chatbot. “Natural language technology can build conversational agents that convey these interactive explanations to humans,” he said.

    Another method of giving explanations is for the system to provide a counterfactual. ‘It might mean that the system gives an example of what someone would need to change to alter the solution,’ said Giannotti. In the case of a loan-judging algorithm, a counterfactual might show someone whose loan was denied the nearest case in which it would be approved. It might say that their salary is too low, but that if they earned €1,000 more per year, they would be eligible.
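    The salary example can be sketched as a simple counterfactual search. The scoring rule, threshold and step size below are hypothetical stand-ins, not any bank’s actual model:

```python
def loan_approved(salary_eur, debt_eur):
    """Hypothetical stand-in for a bank's opaque scoring model."""
    return salary_eur - 0.5 * debt_eur >= 30_000

def salary_counterfactual(salary_eur, debt_eur, step=1_000):
    """Smallest salary raise (in `step` increments) that flips a
    rejection into an approval -- the 'nearest approved case'."""
    raise_needed = 0
    while not loan_approved(salary_eur + raise_needed, debt_eur):
        raise_needed += step
    return raise_needed

# An applicant rejected at a salary of 28,000 with 2,000 of debt:
print(salary_counterfactual(28_000, 2_000))  # -> 3000
```

The output is exactly the kind of actionable explanation Giannotti describes: not how the model works internally, but what would have to change for the decision to change.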

    White box

    Giannotti says there are two main approaches to explainability. One is to start from black box algorithms, which are not capable of explaining their results themselves, and find ways to uncover their inner logic. Researchers can attach another algorithm to the black box system – an ‘explanator’ – which asks it a range of questions and compares the answers with the inputs it supplied. From this process the explanator can reconstruct how the black box system works.
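    A toy version of this probe-and-reconstruct idea: an opaque scoring function is queried on synthetic inputs, and a one-rule surrogate is fitted to mimic its answers. Everything here – the black-box rule, the features, the surrogate family – is invented for illustration:

```python
import random

random.seed(1)

def black_box(income, age):
    """Opaque model we pretend we cannot inspect (hypothetical)."""
    return 1 if 0.8 * income + 0.2 * age > 50 else 0

# 1. Probe the black box with many synthetic inputs.
probes = [(random.uniform(0, 100), random.uniform(18, 90)) for _ in range(5_000)]
answers = [black_box(inc, age) for inc, age in probes]

# 2. Fit a transparent surrogate: the single income threshold that best
#    reproduces the black box's answers (a one-rule 'explanator').
best_threshold, best_accuracy = None, 0.0
for t in range(0, 101):
    acc = sum((inc > t) == bool(y)
              for (inc, age), y in zip(probes, answers)) / len(probes)
    if acc > best_accuracy:
        best_threshold, best_accuracy = t, acc

print(f"surrogate rule: approve if income > {best_threshold} "
      f"(matches the black box on {best_accuracy:.0%} of probes)")
```

The surrogate is not the black box, only a readable approximation of it, which is exactly the trade-off the explanator approach accepts.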

    ‘But another way is just to throw away the black box, and use white box algorithms,’ said Giannotti. These are machine learning systems that are explainable by design, yet are often less powerful than their black box counterparts.

    ‘We cannot yet say which approach is better,’ cautioned Giannotti. ‘The choice depends on the data we are working on.’ When analysing very large amounts of data, like a database filled with high-resolution images, a black box system is often needed because it is more powerful. But for lighter tasks, a white box algorithm might work better.

    Finding the right approach to achieving explainability is still a big problem, though. Researchers need technical measures of whether an explanation actually describes a black-box system well. ‘The biggest challenge is defining new evaluation protocols to validate the goodness and effectiveness of the generated explanation,’ said Prof. Barro of NL4XAI.

    On top of that, the exact definition of explainability is somewhat unclear, and depends on the situation in which it is applied. An AI researcher who writes an algorithm will need a different kind of explanation compared to a doctor who uses a system to make medical diagnoses.

    “Human evaluation (of the system’s output) is inherently subjective since it depends on the background of the person who interacts with the intelligent machine,” said Dr Jose María Alonso, deputy coordinator of NL4XAI and also a researcher at the University of Santiago de Compostela.

    Yet the drive for explainable AI is moving along step by step, and it promises to improve cooperation between humans and machines. ‘Humans won’t be replaced by AI,’ said Giannotti. ‘They will be amplified by computers. But explanation is an important precondition for this cooperation.’


  • richardmitnick 11:59 am on December 3, 2020 Permalink | Reply
    Tags: "Q&A- It’s time to rethink the Milky Way", , , , , Horizon - The EU Research and Innovation Magazine   

    From Horizon The EU Research and Innovation Magazine: “Q&A- It’s time to rethink the Milky Way” 


    From Horizon The EU Research and Innovation Magazine

    03 December 2020
    Kelly Oakes

    How stars and planets form, and how special is the Milky Way, are among the as-yet unanswered questions about our galaxy. Credit: Oliver Griebl/Wikimedia, licenced under CC BY 3.0.

    Milky Way. Credit: NASA/JPL-Caltech/ESO/R. Hurt. The bar is visible in this image.

    The Milky Way might be right on our cosmic doorstep, but a group of astronomers suspect that the way we currently study it is stunting our understanding. Professor Ralf Klessen at Heidelberg University in Germany [Ruprecht-Karls-Universität Heidelberg (DE)] is one of four researchers who have recently begun a six-year project, ECOGAL, to try something new: imagine our home galaxy as one huge galactic ecosystem.

    Prof. Klessen believes that using this lens could answer fundamental questions about how stars and planets form, and how they shape the Milky Way’s future.

    What do you mean by studying the galaxy as an ecosystem?

    Think of it like trying to understand the climate on Earth. To do that you need to understand cloud physics, solar irradiation, the interaction of oceans with the atmosphere, the impact of humans, the carbon cycle, and so on. In order to really understand how all this works (together) you need to come with lots of complementary expertise, and I think the same applies to understanding our Milky Way and how it evolves.

    You’ve described looking at the Milky Way like an ecosystem as a ‘paradigm shift’ for astronomy. What’s wrong with how we study our home galaxy now?

    If you look at an image of the Milky Way, you see these dark patches that block the light in the background: that is a collection of clouds made of gas and dust. These things are large and can gravitationally contract to become more compact and eventually form individual stars, or star clusters.

    Star formation theories so far mostly look at these clouds in isolation. But we know that only a small fraction of the mass of each cloud is converted into stars (and this happens) on a relatively short timescale. That means the environmental conditions they experience do matter.

    So there is a link between what happens in the galaxy on large scales, determining where you can actually see and find these clouds, and then – within the clouds – where you find actual stars, and then – within these star-forming regions – where planetary systems form. There are many complex and intricate feedback loops involved.

    But typically these scales are considered disconnected. This approach really hits limits.

    What are the major outstanding scientific questions about the Milky Way?

    We still do not really fully understand how stars form – in particular, massive stars – and how this connects to their environment. We also have competing theories of how planets form, and how protoplanetary disks – the sites out of which they form – evolve.

    There’s also an interesting question of, ‘Is the Milky Way special or not?’ If you compare the Milky Way with cosmological simulations, there seem to be indications that our history was particularly boring: there was no catastrophic merging event with some neighbouring galaxy, we have apparently fewer satellite systems than you would expect, and so on.

    Could the idea that the Milky Way is especially boring have some bearing on fact that at least one habitable planet – i.e. Earth – has been able to form within it?

    It’s difficult to make that connection. The stability of planetary systems plays out on scales so much smaller than the Milky Way as a whole. The solar system has a diameter of a few light hours, but light takes 100,000 years to travel from one end of the Milky Way to the other.
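    The scale gap Prof. Klessen points to can be checked with quick arithmetic, taking 10 light hours as an order-of-magnitude figure for the solar system’s diameter (his "a few light hours"):

```python
HOURS_PER_YEAR = 24 * 365.25

# Rough scales from the interview: the solar system spans a few light
# hours, while light takes ~100,000 years to cross the Milky Way.
solar_system_light_hours = 10  # order-of-magnitude assumption
milky_way_light_hours = 100_000 * HOURS_PER_YEAR

ratio = milky_way_light_hours / solar_system_light_hours
print(f"the Milky Way is roughly {ratio:.0e} times wider than the solar system")
```

The ratio comes out near a hundred million, which is why connecting galaxy-scale conditions to planetary-system stability only works statistically.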

    Yet in interacting galaxies, you tend to form stars in a more clustered environment, and you tend to form them in a more violent fashion. Of course, if you form more stars in a more compact configuration, you have more interactions that can disrupt the protoplanetary disk, which will probably prevent the long-term stability of a planetary system. So, in a dense star cluster we would not expect a solar system like ours to exist.

    So, one can make some relations, but only in a statistical sense.

    How do you hope ECOGAL will change our understanding of the Milky Way?

    We want to build a census of star and planet formation in the Milky Way and how the different environmental conditions influence the process. For example, in the galactic centre, the density is much higher, the radiation field is much more intense, it is much more turbulent than in the solar neighbourhood. If you go further out in the galactic disk, it becomes much more boring – densities are lower, timescales are longer, not that much happens. All these peculiarities influence the properties of stellar birth.

    Another aspect is to really understand these feedback loops: how do radiation and winds from stars influence their environment? How do the highly energetic supernova explosions that mark the death of very massive stars contribute to the gas dynamics on large galactic scales? Or on small scales, how do the properties of the host star determine the birth of planetary systems? The question is, in a sense, how does the galactic habitat influence planet formation? Where can you keep planetary systems stable over long-enough timescales for life to form?

    Finally, we want to build a comprehensive model of how (the Milky Way) evolves and use this as a role model for galaxies in the more distant universe.

    What difference will this more holistic view make to the way you conduct research?

    We will really look at the connection of these scales and these feedback loops and try to understand them, both observationally with real data, and from a theoretical, computational point of view.

    This is very much a team effort. My group works on theoretical models of the Milky Way as a whole and we zoom into individual star-forming regions. Patrick Henebelle (at CEA Paris-Saclay, France) takes individual clouds and zooms in further into individual star formation sites (to study) protoplanetary accretion disks. Each of these scales is then intimately coupled to observation. Sergio Molinari at INAF Rome (Italy) looks at the large-scale distribution of clouds in the Milky Way and young star-forming regions. Leonardo Testi at the European Southern Observatory (ESO) in Garching (Germany) looks at the distribution of protostellar and protoplanetary disks, making the connection observationally between planet formation, star formation and star cluster formation.

    ESA’s Gaia mission is set to build a 3D map of the Milky Way by analysing one billion stars, and is releasing new data on 3 December. How will Gaia advance our knowledge? Will you use the data in ECOGAL?

    ESA/Gaia satellite.

    It sounds a bit strange, but getting good distances is extremely difficult in astronomy. For the interstellar medium – the gas between stars – this is so much more difficult (than for stars). We can use the information from Gaia and combine it with distance estimates from modelling of the Milky Way. In that sense Gaia is essential to help build a three-dimensional picture of the gas distribution in the Milky Way.

    But because it is an optical instrument, Gaia cannot really see very deeply into the enshrouded regions where young clusters (of stars) form. For that reason, infrared or submillimetre observations, like those by the Atacama Large Millimetre/Submillimetre Array (ALMA) in Chile operated by ESO, and other radio telescopes, are our bread and butter.

    What is the most important thing someone should know about the Milky Way?

    Star formation is not something that happened a long time ago and (then stopped) – the universe is full of dynamics. Stars and new planetary systems are born all the time around us, in our vicinity, in the Milky Way… that is not something that has been all done and decided an eternity ago.

    This dynamic picture of the universe is something that I find extremely fascinating, and I think it is a picture that maybe not so many people are aware of. They think of the heavens as this eternal thing that does not change. That is absolutely not the case.


  • richardmitnick 10:15 am on March 20, 2020 Permalink | Reply
    Tags: "The prostheses that could alleviate amputees’ phantom limb pain", , Horizon - The EU Research and Innovation Magazine,   

    From Horizon The EU Research and Innovation Magazine: “The prostheses that could alleviate amputees’ phantom limb pain” 


    From Horizon The EU Research and Innovation Magazine

    19 March 2020
    Ian Le Guillou

    When it comes to improving amputees’ quality of life, one of the biggest challenges is making prostheses that feel like a natural part of the user. Image credit – Johns Hopkins University Applied Physics Laboratory/Wikimedia Commons

    New prosthetic technologies that stimulate the nerves could pave the way for prostheses that feel like a natural part of the body and reduce the phantom limb pain commonly endured by amputees.

    Silvestro Micera, a professor of translational neuroengineering at Ecole Polytechnique Fédérale de Lausanne in Switzerland, has spent the past 20 years figuring out how to make better prostheses for people with amputated limbs. He became interested in prostheses as a teenager.

    ‘I loved comics and science fiction movies – things like Doctor Octopus from Spiderman,’ he said. ‘In the beginning, it was the scientific interest of a teenager, but then it became an idea of helping people to get back what they’ve lost.’

    A project that he led, called NEBIAS, developed a robotic hand that provides sensory feedback to the user. The technology behind it was ground-breaking – an implant positioned under the skin that connects to the person’s nerves. It transmits information from sensors in the hand by stimulating the nerves with electrical signals. This allows, for instance, a person to tell if an object they are holding is soft or hard.
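    One simple way such sensor-to-nerve encoding could work is to map sensor readings onto stimulation pulse rates. The function below is purely illustrative – the sensor range, rates and linear mapping are assumptions, not the NEBIAS encoding scheme:

```python
def pressure_to_pulse_rate(pressure_kpa, max_rate_hz=100):
    """Hypothetical encoding: map a fingertip pressure reading to a
    nerve-stimulation pulse rate (illustrative numbers throughout)."""
    pressure_kpa = max(0.0, min(pressure_kpa, 50.0))  # clamp to sensor range
    return max_rate_hz * pressure_kpa / 50.0

# A soft object produces low pressure and a hard object high pressure,
# so the wearer could tell them apart by the evoked sensation.
print(pressure_to_pulse_rate(5.0))   # soft grip -> 10.0 Hz
print(pressure_to_pulse_rate(45.0))  # firm grip -> 90.0 Hz
```

The essential idea – translating a continuous sensor signal into a graded pattern of electrical stimulation – is what lets the user distinguish soft from hard objects.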

    Through a follow-up project called SensAgain, Prof. Micera worked to further develop the technology and take it to market.

    ‘In five to ten years from now, the technology from NEBIAS is going to be provided to patients around the world,’ he said.

    To get the technology into practice as quickly as possible, he switched focus to developing prosthetic legs rather than hands, as leg amputations affect more people. Last year, he published a paper [Science Translational Medicine] showing how sensory feedback in prosthetic legs can help people.

    The users were able to walk better and move around in a less tiring and more confident way, said Prof. Micera. ‘If you’re walking better and faster, then you’re also able to reduce other effects, like back pain and cardiovascular disease,’ he said.


    To scale up this work, he set up a company called SensArs, which is funded through a project called GOSAFE, with the aim of gathering more evidence of the impact the prosthesis makes on people’s lives.

    ‘We showed that longer-term, like six months or more, the technology works,’ he said. ‘The challenge is to go from a few patients for a few months to many patients for many years.’

    With only a handful of people testing the prosthesis, it is difficult to draw broad conclusions, but the early signs have been encouraging.

    ‘Very quickly, almost immediately, the subjects learn how to use this kind of prosthesis. They are able to exploit it in a very effective way. They get a very good embodiment, where they feel like the prosthesis is part of the body,’ Prof. Micera said.

    Making a prosthesis that feels like part of a person is one of the biggest challenges in improving quality of life for amputees. Aside from the need for sensory feedback, it is very complex to replicate the muscles that control a limb so that a prosthesis feels natural. The hand alone, for instance, is controlled by more than 30 individual muscles.

    Professor Giovanni Di Pino, from the Campus Bio-Medico University of Rome in Italy, is a neurophysiologist who worked with Prof. Micera previously. He is applying his expertise to study neural interfaces, the devices that connect directly to a person’s nerves.

    ‘I want to try to understand how to develop a hand prosthesis that feels like part of the body,’ said Prof. Di Pino.

    Some commercial prosthetic hands are controlled through electrodes placed on the skin. These can detect the stimulation of specific muscles, which the person uses to manipulate the prosthesis. However, Prof. Di Pino says that many upper limb amputees are unhappy with their prosthetic hand, as it does not feel like a part of their body.

    Prof. Di Pino is running a project called RESHAPE, which is testing new ways to connect a controllable hand to the body. The ultimate goal is for the amputee to feel complete, and this comes back to how the body is represented in the mind.

    ‘The image of the body is like a map in our brain,’ said Prof. Di Pino. ‘We are going to describe the representation of the hand.’

    Brain scans

    Prof. Di Pino is using brain scans to understand how the neural connections change when the person is trying to move their missing hand.

    These connections and representations could be significant in helping to reduce the impact of phantom limb pain. This is a phenomenon where amputees feel pain – sometimes very intense – that appears to come from their missing limb.

    ‘Phantom limb pain is extremely common in upper limb amputees,’ Prof. Di Pino said. ‘The subject is in pain because he cannot feel the hand, but he or she can feel the cortex that used to feel the hand.’

    The connections in the brain that used to control the hand and sense pain are still there, but now they are not receiving the same sensory feedback. In one of the subjects of his study, Prof. Di Pino found that using interface implants helped reduce phantom limb pain.

    ‘In the beginning, he had a lot of pain, and after three months of tests we (got rid of) 70% of his pain,’ he said.

    The advances made through these research projects will still take many years before they can help improve prostheses. However, the insights and technology developed could have a much broader impact.

    ‘Many years ago, someone asked me “why do you want to work on a niche like prostheses?”,’ Prof. Micera said. ‘Now I think I can show why, because if the technology works for sensory feedback in the nervous system, then you can use it for many things: blind people, paralysed people, diabetic patients and many others.’

    His team is already developing their technology to restore sight in blind people by stimulating the optic nerve. ‘We are also planning to apply it to restoring motor function in tetraplegic patients with spinal cord injury,’ he said.

    ‘What we did a few years ago for prostheses, I hope to do for patients with optic nerve implants a few years from now.’


  • richardmitnick 1:00 pm on March 13, 2020 Permalink | Reply
    Tags: By 2030 a fifth of the fuel that motorists put into the petrol tanks of their cars could be alcohol in the form of ethanol, Horizon - The EU Research and Innovation Magazine

    From Horizon: “Why raising the alcohol content of Europe’s fuels could reduce carbon emissions” 


    From Horizon The EU Research and Innovation Magazine

    09 March 2020
    Richard Gray

    E20 fuel would double the amount of ethanol in petrol and could reduce the EU’s emissions from gasoline by 8.2%. Image credit – Piqsels, licenced under CC0.

    By 2030, a fifth of the fuel that motorists put into the petrol tanks of their cars could be alcohol, according to research concluding that new petrol and ethanol blends can reduce carbon emissions from Europe’s transport sector with little additional cost to consumers.

    Labels that carry a single letter followed by a number are found on petrol pumps across Europe. Many motorists probably don’t notice these codes, or aren’t aware that when they use a pump which has one, they’re putting alcohol into their cars.

    The alcohol, in the form of ethanol derived from plants, is part of efforts to make the fuels we put in our vehicles more environmentally friendly. Most petrol now sold at pumps in Europe is a blend of 5% bioethanol and 95% gasoline, denoted by an E5 label, while some countries have moved to a new generation of fuel that contains up to 10% bioethanol, known as E10.

    And as the world looks to reduce its impact on climate change by cutting emissions from fossil fuels, motorists in the European Union could soon be putting even more alcohol into their tanks.


    The European Committee for Standardization (CEN) commissioned research looking at the costs and benefits of introducing a fuel containing 20% bioethanol, or E20. The results from the project, which concluded towards the end of 2019, will help them develop new quality and specification standards that will be required before it can be sold.

    ‘The conclusion we have reached is that all the vehicles coming onto the market and those since 2011 should be able to handle fuels with up to 20% ethanol,’ said Ortwin Costenoble, a senior standardisation consultant at the Royal Netherlands Standardization Institute (NEN), which led the project. ‘We were working on the basis that in 2030, countries would adopt E20 as the main source of fuel.’

    Under the EU’s renewable energy directive, 10% of the fuel used in transport will need to come from renewable sources such as biofuel by the end of 2020. The 2018 revision of this directive set a target of 14% renewable energy being used in all transport by 2030.

    At present, the majority of EU member states use E5 petrol in their vehicles. Some countries, however, have started moving to E10. In January, Denmark, Hungary, Lithuania and Slovakia became the latest countries to introduce E10 to their forecourts, bringing the total number of EU member states to sell the fuel at the majority of retail stations to 13.


    While bioethanol still produces carbon dioxide when it burns, because it is made from plants rather than fossil fuels that take millions of years to form, it is considered to be a renewable fuel. It is also considered to be greener, partly because as the plants grow, they absorb carbon dioxide from the air and store it before it is converted into fuel and burned. This means they are not releasing additional carbon into the atmosphere as happens when fossil fuels are burned.

    A litre of pure ethanol also produces about two thirds of the carbon emissions compared to a litre of ordinary petrol. But ethanol contains less energy per litre than petrol, so a non-optimised car will need more alcohol to travel the same distance as it would with fossil fuel. This eats away at the carbon emission savings that are possible from using ethanol. And to produce the alcohol in the first place also requires energy, probably using fossil fuels, which can further reduce carbon savings.
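The trade-off described above can be sketched with some back-of-envelope arithmetic. The figures below are round approximations, not the CEN study's inputs, and the model makes two simplifying assumptions: fuel consumption scales linearly with the blend's energy content (real engines do better, which is why the study found only a 4% consumption increase for E20), and ethanol's tailpipe CO2 is counted as biogenic, i.e. zero net fossil carbon, ignoring production emissions.

```python
# Illustrative ethanol/petrol blend arithmetic under simple assumptions.
# Heating values and emission factors are round approximations.

PETROL_MJ_PER_L = 34.2   # approx. energy content of petrol (MJ per litre)
ETHANOL_MJ_PER_L = 21.1  # approx. energy content of ethanol (MJ per litre)
PETROL_CO2_PER_L = 2.31  # approx. kg fossil CO2 per litre of petrol

def fossil_co2_per_petrol_litre_equivalent(ethanol_fraction):
    """Litres of blend, and fossil CO2 (kg), needed to match the
    energy of one litre of pure petrol."""
    e = ethanol_fraction
    # Energy content of the blend (MJ per litre), assumed linear in the mix.
    energy = e * ETHANOL_MJ_PER_L + (1 - e) * PETROL_MJ_PER_L
    # Consumption penalty: more litres of blend for the same energy.
    litres_needed = PETROL_MJ_PER_L / energy
    # Only the petrol share counts as fossil CO2 under biogenic accounting.
    fossil_co2_per_litre = (1 - e) * PETROL_CO2_PER_L
    return litres_needed, litres_needed * fossil_co2_per_litre

for label, frac in [("E5", 0.05), ("E10", 0.10), ("E20", 0.20)]:
    litres, co2 = fossil_co2_per_petrol_litre_equivalent(frac)
    print(f"{label}: {litres:.3f} L used, {co2:.2f} kg fossil CO2")
```

Even with this naive model, the direction is clear: consumption rises with the ethanol share, but the fossil CO2 emitted per unit of useful energy still falls.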

    But the CEN study found that while fuel consumption would go up if countries switched to E20 fuel, due to the increased amount of ethanol, overall carbon dioxide emissions would go down by 10% compared with all cars using E10.

    ‘If you use a normal octane fuel blend with 20% bioethanol, the fuel consumption increases only by 4%,’ said Costenoble. But with more ethanol you may allow the octane component of gasoline to rise, and vehicles running on fuels with a higher-octane rating tend to be more efficient.

    The researchers estimated that if all 28 EU countries (the UK was still part of the EU at the time of the study) adopted E20, it could reduce greenhouse gas emissions by the equivalent of 25.4 Mt (megatonnes) of carbon dioxide – about 8.2% of the current emissions from gasoline in the EU.
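As a quick sanity check on the figures quoted above: if a 25.4 Mt saving corresponds to 8.2% of the EU's gasoline emissions, the implied total at the time of the study was roughly 310 Mt of CO2 per year.

```python
# Back-of-envelope check: total = saving / share.
saving_mt = 25.4   # estimated CO2 saving from EU-wide E20 adoption (Mt)
share = 0.082      # that saving as a fraction of EU gasoline emissions

total_mt = saving_mt / share
print(f"Implied EU gasoline CO2 emissions: {total_mt:.0f} Mt")
```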

    They estimated that further savings could be made if the fuel’s petrol component had a higher octane rating of 102 – most fuel on sale today has an octane rating of 95.


    And there are concerns about how sustainable large-scale bioethanol production can be. Most bioethanol sold in the EU is produced by fermenting sugars contained in primary crops like maize, wheat, and sugar beet. This can take up land and resources that could otherwise be used to grow food.

    Efforts, however, are underway to produce a second generation of biofuels that could overcome this problem.

    ‘Some of our members are starting to use agricultural wastes and residues left behind from food crops,’ said Victor Bernabeu, senior technical and regulatory affairs manager at the European Renewable Ethanol Association, also known as ePURE. But justifying investment into such technologies has been difficult because there have been regular changes to the renewable energy policy framework, says Bernabeu.

    Existing policies are one of the roadblocks standing in the way of E20 fuel coming onto the market in the EU. The fuel quality directive, for example, currently allows only 10% of a fuel to be replaced with ethanol, a measure dating from a time when the impact of higher alcohol levels on vehicle emissions was unknown.

    ‘It seems like a logical step to introduce E20 and everyone we spoke to seems to want it, but at the moment it is an illegal fuel,’ said Costenoble. A change in the regulations will be needed before it can be introduced, but he hopes manufacturers and standardisation writers will begin preparing for E20 before that happens.

    Public acceptance

    Another hurdle will be public acceptance. As most vehicles currently on the road are able to run on E10 and can move to E20 with some calibration or inexpensive upgrades costing a few hundred euros, there is unlikely to be much public opposition, according to Bernabeu.

    But if the cost of fuel itself increases because it contains higher levels of ethanol, it is likely to be welcomed far less. The work by Costenoble and his colleagues, however, found that E20 could be produced with current refinery infrastructure, which would need minimal adjustments.

    Making the fuel supply logistics chain compatible with E20 would cost less than one cent per litre, says Costenoble.

    But the cost of fuel to consumers depends mainly on the varying market prices of oil and ethanol, combined with the tax applied by different countries. Currently ethanol costs slightly more than gasoline, but many countries in Europe do not levy tax on the ethanol in fuel. This could help to offset any additional cost to consumers, says Costenoble.

    Bernabeu believes that the reduced environmental impact of shifting to E10 and then E20 fuels could also make them more acceptable to motorists.

    ‘Lots of people are probably not aware they are consuming ethanol in their cars at the moment already,’ said Bernabeu. He points to countries where E10 has been introduced, such as Belgium and France, where he says there have been major public information campaigns. ‘(E10) has been pitched as a greenhouse gas reduction measure, so it has been widely accepted.’

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition
