Tagged: Horizon – The EU Research and Innovation Magazine

  • richardmitnick 11:56 am on June 14, 2019
    Tags: "Quantum – a double-edged sword for cryptography", Horizon - The EU Research and Innovation Magazine

    From Horizon The EU Research and Innovation Magazine: “Quantum – a double-edged sword for cryptography” 



    11 June 2019
    Jon Cartwright

    Cryptography that would be impossible for a regular computer to crack could take a quantum computer just seconds. Image credit – Pixabay/ joffi, licensed under pixabay license

    Quantum computers pose a big threat to the security of modern communications, deciphering cryptographic codes that would take regular computers forever to crack. But drawing on the properties of quantum behaviour could also provide a route to truly secure cryptography.

    Defence, finance, social networking – communications everywhere rely on cryptographic security. Cryptography involves jumbling up messages according to a code, or key, that has too many combinations for even very powerful computers to try out.

    But quantum computers have an advantage. Unlike regular computers, which process information in ‘bits’ of definite ones and zeros, quantum computers process information in ‘qubits’, the states of which remain uncertain until the final calculation.

    The result is that a quantum computer can effectively try out many different keys in parallel. Cryptography that would be impenetrable to regular computers could take a quantum computer mere seconds to crack.

    Practical quantum computers that can be used to break encryption are expected to be years, if not decades, away. But that should offer little reassurance: even if a hacker cannot decipher confidential information now, they could save it and simply wait until a quantum computer is available.

    ‘The problem already exists,’ said Professor Valerio Pruneri of the Institute of Photonic Sciences in Barcelona, Spain, and the coordinator of a quantum security project called CiViQ. ‘A hacker can take what is stored now, and break its key at a later date.’

    The answer, says Prof. Pruneri, is another quantum technology. Known as quantum key distribution (QKD), it is a set of rules for encrypting information – known as a cryptography protocol – that is almost impossible to crack, even by quantum computers.


    QKD involves two parties sharing a random quantum key, according to which some separate information is encoded. Because in quantum theory it is impossible to observe something without corrupting it, the two parties will know whether someone else has eavesdropped on the key – and therefore whether it is safe, or not, to share their coded information.
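    The eavesdropping check at the heart of QKD can be illustrated with a toy simulation of the BB84 protocol, one of the best-known QKD schemes (CiViQ's approach differs in detail, so treat this as a sketch of the general idea, not the project's method; all names and numbers are illustrative):

    ```python
    import random

    def bb84(n_bits, eavesdrop=False, seed=0):
        """Toy BB84 run: returns the error rate Alice and Bob observe in
        the sifted key. A nonzero rate reveals an eavesdropper."""
        rng = random.Random(seed)
        alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
        alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
        bob_bases   = [rng.randint(0, 1) for _ in range(n_bits)]

        errors = matches = 0
        for bit, basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
            photon_bit, photon_basis = bit, basis
            if eavesdrop:
                eve_basis = rng.randint(0, 1)
                if eve_basis != photon_basis:
                    # Measuring in the wrong basis disturbs the state:
                    # Eve's result (and the photon she re-sends) is random.
                    photon_bit = rng.randint(0, 1)
                photon_basis = eve_basis
            # Bob's result is random when his basis differs from the photon's.
            b_bit = photon_bit if b_basis == photon_basis else rng.randint(0, 1)
            if b_basis == basis:          # sifting: keep matching-basis rounds
                matches += 1
                errors += (b_bit != bit)
        return errors / matches

    print(bb84(4000))                  # 0.0: channel untouched
    print(bb84(4000, eavesdrop=True))  # ~0.25: eavesdropping is visible
    ```

    Because an interceptor cannot know which basis each photon was prepared in, her measurements corrupt roughly a quarter of the sifted bits, which Alice and Bob detect by comparing a sample of their key.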

    Until now, QKD has usually involved specialist technology, such as single-photon detectors and emitters, which are difficult for people outside labs to implement. In the CiViQ project, however, Prof. Pruneri and his team are developing a variant of QKD that works with conventional telecommunications technology.

    They have already created prototypes, and performed some field demonstrations. Now, the researchers are working with industry telecoms clients including Telefónica in Spain, Orange in France and Deutsche Telekom in Germany to create systems that work to their respective requirements, with the hope that the first systems could be online within three years.

    Prof. Pruneri’s hope is to create highly secure communication systems spanning up to 100 km, suitable for governmental, finance, medical and other high-risk sectors within cities. The technology could even be used by everyday consumers, although Prof. Pruneri says that QKD currently reaches shorter distances and lower speeds than regular communication.


    Like normal cryptography, QKD needs random keys – strings of numbers – to be generated in the first place. The more random these keys are, the greater the security of the system, as there is less chance of the keys being guessed. But the problem is that the numbers generated with traditional methods often aren’t totally random.

    Here, quantum mechanics can again come to the rescue. The behaviour of atoms, photons and electrons is believed to be truly random and this can be used as a way of generating numbers that cannot be predicted.

    Professor Hugo Zbinden of the University of Geneva in Switzerland said: “Quantum random-number generators profit from the intrinsic randomness of quantum physics, whereas classical true random number generators are based on chaotic systems, which are deterministic and, in theory, to some extent predictable.”

    Quantum random-number generators already exist, but to make them more widely applicable, Prof. Zbinden and his colleagues working on a project called QRANGE are improving their speed and reliability, as well as reducing their cost. Currently, they are trying to develop prototypes with a ‘high technology readiness level’ – in other words, prototypes that demonstrate that the technology is ripe for use in the real world.
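    Whatever the physical source, raw hardware bits usually pass through classical post-processing before use. A classic example is the von Neumann extractor, which removes bias from independent raw bits; this is an illustrative sketch of that general technique, not a description of the QRANGE devices:

    ```python
    import random

    def von_neumann_extract(raw_bits):
        """Turn biased-but-independent raw bits into unbiased output bits.

        Pairs of raw bits are compared: (0,1) -> 0, (1,0) -> 1, and equal
        pairs are discarded. If each raw bit is 1 with the same (unknown)
        probability p, both kept outcomes occur with probability p*(1-p),
        so the output is unbiased."""
        out = []
        for a, b in zip(raw_bits[::2], raw_bits[1::2]):
            if a != b:
                out.append(a)
        return out

    # Simulate a biased hardware source that emits ones 70% of the time.
    rng = random.Random(1)
    raw = [1 if rng.random() < 0.7 else 0 for _ in range(100_000)]
    clean = von_neumann_extract(raw)
    print(sum(raw) / len(raw))      # ~0.70: biased source
    print(sum(clean) / len(clean))  # ~0.50: unbiased after extraction
    ```

    The price of the cleanup is throughput: equal pairs are thrown away, which is one reason speed matters for practical generators.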

    The work is an important step in ensuring that, while being a threat to the security of our current communications, quantum approaches also provide a path to more secure systems.

    ‘Quantum computers threaten classical cryptography,’ said Prof. Zbinden. ‘Quantum cryptography can be a solution, (but) it needs high-quality random numbers.’

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 11:38 am on June 7, 2019
    Tags: EU’s €1 billion 10-year Quantum Flagship initiative to kickstart a competitive European industry in quantum technologies., Europe's plans are quickly eclipsing both the U.S. and China., Horizon - The EU Research and Innovation Magazine

    From Horizon The EU Research and Innovation Magazine: “Quantum computers will soon outperform classical machines” 



    04 June 2019
    Joanna Roberts

    As a quantum computer can be in many states at the same time, it enables the calculation of many possibilities at once, says Dr Thomas Monz. Image credit – Flickr/IBM Research, licensed under CC BY-ND 2.0.

    European scientists have spent 100 years developing the field of quantum mechanics – a branch of physics dealing with the atomic and subatomic scale – and we need to reap the profits now that quantum computers and other technologies are becoming a reality, according to Dr Thomas Monz from the University of Innsbruck, Austria.

    He is leading a project to develop a fully scalable quantum computer. The project is part of the EU’s €1 billion, 10-year Quantum Flagship initiative to kickstart a competitive European industry in quantum technologies.

    What is a quantum computer and how does it differ from classical computers?

    ‘The big difference compared to a classical computer is that a quantum computer is following a different rule set. It’s not using zeros and ones like classical computers are – bits and bytes – but it is actually able to work with something called qubits.

    ‘Qubits are quantum bits, and have the special property that at the same time they can be zero and one. The classical computer can only be – like a light switch – either on or off, and the quantum bits can be on and off at the same time.’
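    Dr Monz’s light-switch analogy can be sketched with a two-amplitude state vector. This is a minimal illustration of superposition and interference, not how a real quantum processor is programmed:

    ```python
    import math

    # A qubit state is a pair of amplitudes (a, b) for |0> and |1>,
    # with |a|^2 + |b|^2 = 1. Measurement yields 0 with probability |a|^2.
    def hadamard(state):
        """Apply the Hadamard gate, which creates and undoes superposition."""
        a, b = state
        s = 1 / math.sqrt(2)
        return (s * (a + b), s * (a - b))

    zero = (1.0, 0.0)          # definite 0, like a classical bit
    plus = hadamard(zero)      # equal superposition of 0 and 1
    p0, p1 = abs(plus[0])**2, abs(plus[1])**2
    print(round(p0, 3), round(p1, 3))   # 0.5 0.5: 'on and off at the same time'

    # Applying H again restores the definite state: interference at work.
    back = hadamard(plus)
    print(round(abs(back[0])**2, 3))    # 1.0
    ```

    Classically simulating n qubits needs 2**n amplitudes, which is exactly why the hardware can explore exponentially many possibilities that a classical machine must enumerate one by one.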

    What’s the effect of that?

    ‘This superposition essentially allows it to do things that a classical computer can’t do. Because it’s in many states at the same time, in simplified terms, it allows you to probe many possibilities at the same time. (For example), if you are working in finance and you want to say which portfolio has the largest profit, you need to take many, many different cases into account and then find the best one. And this is something that a quantum computer, because it essentially allows you to calculate many things at the same time, is notably more suitable for.

    ‘(Another) prominent example is energy material design. Think about the power line you get at home. You have friction – ohmic resistance – in the cable. That’s why an electric motor or your hairdryer gets warm. Quite a bit of power is lost from the power plant before it gets to your house. Can we come up with a new material, which doesn’t have ohmic resistance, so we don’t have (energy) losses in the cable? The inherent properties of how friction in materials works, that’s partially governed by quantum mechanics. And a quantum computer (finds it) easy to follow the rules of quantum.

    ‘It allows you to do material design and check what are good candidates for materials that wouldn’t have, say, ohmic resistance, and suddenly we save a couple of percent on global energy loss from the power plants to the consumer.’
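    The portfolio example above can be made concrete on the classical side: a brute-force search must score every combination of assets, so its cost doubles with each asset added. The numbers below are hypothetical, chosen only to show the shape of the problem:

    ```python
    from itertools import product

    # Classical brute force over asset-inclusion choices: 2**n portfolios.
    returns = [0.04, -0.01, 0.07, 0.02, 0.05]   # hypothetical asset returns
    risks   = [0.02,  0.01, 0.08, 0.01, 0.03]   # hypothetical risk penalties

    def score(pick):
        """Net value of a portfolio: total return minus total risk penalty."""
        return (sum(w * r for w, r in zip(pick, returns))
                - sum(w * q for w, q in zip(pick, risks)))

    best = max(product([0, 1], repeat=len(returns)), key=score)
    print(best)  # (1, 0, 0, 1, 1): keep only assets whose return beats risk
    ```

    Five assets mean 32 candidates; fifty assets mean over 10**15, which is where the quantum ability to probe many possibilities at once becomes attractive.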

    Where are we now in the development of quantum computers?

    ‘There are currently several proof of concept implementations. (For example) companies are working in the finance area on portfolio optimisation. Certain companies are working towards chemistry – one prominent example is how to generate fertiliser.

    ‘(But) I think the key question is, regardless of what quantum computer we actually talk about, give me one case where it will outperform the best classical computer worldwide.

    ‘And the timescale on that, I would guess it’s in the order of another year.’

    One of the big challenges is going to be writing algorithms to program quantum computers. Where are we with that?

    ‘There are a couple of (algorithms) already, but obviously you want to have more.

    ‘Everything started with Shor’s algorithm (which can efficiently find the prime factors of the large numbers on which today’s encryption systems are based). This was an algorithm that could convince (government) agencies to look into quantum computing, because it can break some of the most prominent encryption methods that we currently use. That was the starting point about two decades ago.

    ‘In the meantime, people have been working on (algorithms for) optimisation calculations. If you want to optimise a (financial) portfolio or make sure no-one gets stuck in a traffic jam – mathematically they are all very similar.’
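    For context, Shor’s algorithm reduces factoring to finding the ‘order’ of a number modulo N; only that order-finding step needs a quantum computer. The sketch below shows the classical reduction with a brute-force order search, workable only for tiny numbers like 15:

    ```python
    from math import gcd

    def order(a, n):
        """Smallest r > 0 with a**r % n == 1. Brute force here; the quantum
        speed-up in Shor's algorithm replaces exactly this step."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_classical(n, a):
        """Classical half of Shor's reduction: turn the order of a into
        non-trivial factors of n, when the choice of a is lucky."""
        r = order(a, n)
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
            half = pow(a, r // 2)
            return gcd(half - 1, n), gcd(half + 1, n)
        return None  # unlucky a: pick another and retry

    print(shor_classical(15, 7))   # (3, 5)
    ```

    Classically, finding the order takes time that grows exponentially with the size of N; the quantum subroutine does it in polynomial time, which is what breaks RSA-style encryption.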

    Will our use of quantum computers depend on the algorithms that we’re able to develop?

    ‘Sure. Think about your smartphone. Your smartphone is a computer and depending on which app you load, it can be something where you send out a message or you hear some music. A quantum computer is also fully programmable. The more algorithms we have, the more apps we can build with those algorithms, and then you want to have your quantum app store.’

    Is it fair to say that we won’t have quantum computers at home in the future because they’re something very specialised?

    ‘Partially. Say there is a quantum computer available right now, would you buy one? If you say, “I mainly write emails, watch videos and store my pictures,” you wouldn’t need a quantum computer – not for the moment.

    ‘(But) think about your classic computer. For graphics, it has a graphics card. It’s likely there will be a quantum co-processor in the long-run. It will be an add-on to your classical computer to give you some additional capabilities for special computing (or something such as secure communication).

    ‘(Or) it could be that there are quantum computers available and you can have access to them via a simple cloud interface.’

    You run a project called AQTION, which is trying to build a quantum computer. Can you tell us a bit about it?

    ‘IBM, Intel and Google are building on solid-state semiconductors (used to make computer chips) whereas we are using single atoms. In solid-state systems, you start with a lump of material and try to control it so it becomes quantum. Our approach is the opposite. We start with an atom, which is already quantum, and look at how to control it. We already have the quantum properties, so we only – only in quotation marks – have to focus on the classical (engineering) part of it.

    ‘Another aspect is that most of these (solid-state) computers are built in a lab environment. (Our quantum computer) is meant to operate in an office environment. If the air conditioning breaks, it still ought to work. If you want to ship it to a partner, you disassemble it, put the boxes into wooden crates, you ship that, you assemble it, and it ought to work. Rather than this once-in-a-lifetime prototype that only exists in one lab.’

    So the question is still open about the best way to build a quantum computer?

    ‘Yes. I would argue there are probably five or six approaches. The two most promising for the moment are trapped ions – that is what we pursue – and superconducting systems.’

    We always hear about the private companies that are driving forward quantum computing. What effect will the EU’s Quantum Flagship have?

    ‘Intel, IBM, Google all pursue a technology (to build quantum computers) that’s close to their hearts. If you have a hammer, everything looks like a nail; if you are in the semiconductor industry you want to address every new potential application with semiconductor technology. That’s what they do.

    ‘I think what the funding from the EU allows us to do is to compete with these entities because there are not that many companies in Europe (pursuing quantum technologies). The second part is, maybe you want to have a different tool using a new technology (not just semiconductors). So if one technology might be a dead end in the long run, we have in Europe a plan B and a plan C because we don’t put everything into one bucket.

    ‘The Flagship is looking not only at computing, but also clocks, sensors, communications. It has a long-term vision of quantum computing as well.’

    Where does Europe stand in the global race on quantum technologies?

    ‘Think about Schrödinger, Einstein – all of quantum physics was developed 100 years ago in Europe. This is something that we excel at. We have invested in fundamental research for essentially the last 100 years, and now that there is the chance of turning research into technology and applications, we shouldn’t miss that train. We have a very good chance of making that work.’

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 11:05 am on May 4, 2019
    Tags: "Sponges and corals: Seafloor assessments to help protect against climate change", Horizon - The EU Research and Innovation Magazine, Sponge grounds have an effect on ocean health, The role of vast sea sponge grounds

    From Horizon The EU Research and Innovation Magazine: “Sponges and corals: Seafloor assessments to help protect against climate change” 



    29 April 2019
    Sandrine Ceurstemont

    The glass sponge Vazella pourtalesi, found on the Scotian Shelf, is one of about 8,500 sponge species known to exist. Image credit – Fisheries and Oceans Canada

    Little is known about deep ocean environments. But scientists focussing on the depths of the North Atlantic are now learning more about their ecosystems – including the role of vast sea sponge grounds – and how to safeguard them against the effects of climate change and industry.

    Deep-sea sponges – aquatic invertebrates that spend their lives attached to the seabed and are found in almost all areas of the deep ocean – have been particularly neglected when it comes to research and conservation. But they are an important component of their ecosystems.

    ‘Given their huge filtering capacity and their pronounced role in pumping and cleaning the ocean, sponge grounds have an effect on ocean health,’ said Professor Hans Tore Rapp from the University of Bergen in Norway.

    But studying sponges is not easy. Found at depths of up to 4,000 metres, sponges are hard to access, and most cannot handle exposure to air, which makes it difficult to conduct lab experiments.

    Telling species apart is tricky too because many have limited distinguishing features. ‘Nowadays a combination of morphological information and DNA has made things a bit easier but it is still a challenging and very time-consuming task,’ said Prof. Rapp.

    Prof. Rapp and his colleagues are identifying different species for a wide-ranging project called SponGES. The scientists are investigating sponges’ ecological functions, how these animals can be used in biotechnology, and the resilience of their ecosystems.

    ‘We will be using modelling tools to look into the future, to see how these sponge grounds will be impacted by climate change or any kind of stressors,’ said Prof. Rapp.

    Scientists want to understand how fragile cold-water coral ecosystems are being affected by sectors such as deep-sea mining. Image credit – © Changing Oceans Expedition 2012 (cruise JC073)

    Sponge genomes

    So far, the scientists have discovered more than 30 new species of sponges and produced the largest sponge genomic data sets ever, which should reveal how different species and populations are related. They also performed experiments in the lab to investigate their ecosystem functions, such as how they absorb and turn carbon and inorganic nutrients like nitrogen and phosphorus into nourishment for the rest of the habitat.

    Now they are conducting experiments on the seafloor. ‘(We are) looking at sponges in pristine areas then comparing how they function in areas that are more impacted, whether it’s from oil and gas or mining,’ said Prof. Rapp.

    The project is also taking a novel approach to drug discovery. The chemicals that sponges use to defend themselves could potentially be used to treat cancer and infectious diseases.

    Sponges are typically ground up and tested to identify compounds that could be used to develop drugs. The project, however, is trying to zero in on the genes involved in making these compounds so that it can sustainably produce them in the lab.

    ‘We’ve already identified some of the gene sequences that are related to the production of anti-cancer compounds,’ said Dr Shirley Pomponi from Florida Atlantic University in the US and Wageningen University in the Netherlands, who is leading the biotechnology arm of the project.

    Dr Pomponi and her project colleagues are also one step closer to creating bone implants that make use of sponge architecture. Sponges produce microscopic skeletal elements, or spicules, made of biosilica that are the building blocks of their structures. Biosilica has been found to induce bone-forming cells to produce more bone. The scientists therefore hope to make implant scaffolds with bone-forming cells.

    They achieved a breakthrough by creating a cell line in the lab from deep-sea sponge cells, which Dr Pomponi claims is the first time this has been done for any marine invertebrate.

    Dr Pomponi says the cell lines are exciting as they will enable the scientists to study how sponges produce their skeletons as well as their defensive chemicals. The team is focussing on how to produce biosilica and these chemicals in tissue culture, she says.


    Results from the project are already being recognised by policymakers too. Sponge grounds have now been included in the Norwegian Red List for endangered habitats, for example.

    ‘We are now also contributing to getting sponge grounds into the management plan for the Nordic Seas,’ said Prof. Rapp.

    In addition to sponges, other elements of deep North Atlantic ocean ecosystems need to be better understood. To tackle this, a project called ATLAS is undertaking the biggest assessment of the area to date.

    The deep Atlantic is home to a number of vulnerable ecosystems, says Professor Murray Roberts from the University of Edinburgh in the UK, the project coordinator.

    ‘We need to understand the corals, the sponges, the clams, we need to understand the seamounts,’ he said.

    ‘And critically we need to understand how industry active in these areas already, and proposing to increase its operations, could impact these systems.’

    The project is monitoring the deep ocean using climate-monitoring instruments, along with new equipment such as sensor arrays that measure carbon dioxide and acidity, to provide regular readings for the first time. The data will be made publicly available.

    The new information will help to better understand the physics of the ocean such as circulation patterns, for example, so that changes can be predicted.

    The project has published 49 scientific papers, revealing, for example, how corals on the seafloor are nourished in an environment where there is little food available [Scientific Reports].

    Simulations showed that water currents interact with coral mounds, which can grow hundreds of metres tall, drawing organic matter down to them from the surface.

    ‘It’s an amazing example of ecosystem engineering on a scale we’ve never really seen before,’ said Prof. Roberts. The scientists will follow up by taking measurements in the field to see if they agree with their model.


    Another aspect of the project involves bringing together different sectors that use the ocean, such as fisheries and oil and gas companies, to plan out marine space in a more sustainable way. ‘It’s like town planning in a sense for the oceans,’ said Prof. Roberts.

    The team’s goal is to make sure that ocean activities are sustainable and that ecosystems are preserved.

    They have been working with multinational oil and gas companies, for example, to assess the areas in which they operate, where there are vulnerable ecosystems such as sponge grounds and coral reefs. The impact of climate change also needs to be addressed.

    ‘With warming of the Atlantic Ocean and gradual acidification, areas that have been protected are going to end up as unsuitable for the very things that they’ve been closed to protect,’ said Prof. Roberts.

    Based on scientific findings from the project, the team plans to come up with management strategies for sectors such as deep-sea mining and renewable energy where growth is forecast. The team also developed new models showing the distribution of deep Atlantic species which will provide a good starting point.

    ‘We have a much better understanding of how likely it is that vulnerable species occur in areas that industries are looking to exploit,’ said Prof. Roberts. ‘We’re (now) taking that into industry and policy.’

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 1:25 pm on March 29, 2019
    Tags: "Fleets of autonomous satellites to coordinate tasks among themselves", For the cost of a multi-tonne satellite it is possible to use groups of small satellites – even hundreds of them – to set up a sensor network., Horizon - The EU Research and Innovation Magazine, MiRAGE software, NetSat project Würzburg University and Centre for Telematics, Researchers are using automation and artificial intelligence to make smaller autonomous satellites smarter and more effective.

    From Horizon The EU Research and Innovation Magazine: “Fleets of autonomous satellites to coordinate tasks among themselves” 



    27 March 2019
    Rex Merrifield

    Smaller, autonomous satellites could help analyse the internal structure of clouds to give a more detailed view of Earth’s changing climate. Image credit – Earth Science and Remote Sensing Unit, NASA Johnson Space Center


    Space missions have long benefited from some autonomous operations being carried out aboard spacecraft, but with a sharp increase expected in the number of satellites being launched in the next few years, researchers are using automation and artificial intelligence to make them smarter and more effective.

    Technology firms and researchers see scope for giving satellites more onboard control, to circumvent difficulties in communicating with Earth and reduce the need for continuous hands-on supervision and intervention from afar. That will reduce operating costs and potentially allow them to do more sophisticated tasks independently of their Earth-bound supervisors.

    Smaller, autonomous spacecraft could close the gaps in coverage between much larger, more expensive telecommunications satellites, or be used in formations to monitor space weather or observe Earth from different perspectives simultaneously – such as three-dimensional real-time analysis of clouds or monitoring volcanic plumes.

    In doing so, they would be able to correct and maintain their trajectory, avoid collisions and supervise their on-board systems all on their own – all at a substantially lower operating cost.

    Professor Klaus Schilling, chair of robotics and telematics at Würzburg University in Germany, has been working on the technology for groups of small, autonomous satellites to fly in formation, communicating directly with each other to organise and coordinate tasks. Success would mark a world first.


    For the cost of a multi-tonne satellite, he sees the possibility to use groups of small satellites – even hundreds of them – to set up a sensor network. The fleet would need more advanced coordination and control, but would be able to provide better temporal and spatial resolution than one giant craft.

    While miniaturisation can present difficulties for satellites, such as susceptibility to noise in electronic circuits, sophisticated software can detect and correct these problems and cooperation between small spacecraft can also enhance their capabilities, Prof. Schilling says.

    “This is even the case with a single satellite, but it becomes critical at the multi-satellite level, in the context of the formation,” said Prof. Schilling, who also heads the German research firm the Centre for Telematics.

    His NetSat project aims to launch four small satellites at the end of this year, to orbit the Earth and test formations with varying degrees of autonomy, with light-touch supervision from ground control.

    The satellites will be around 3 kilograms each – a mere fraction of the size of the biggest satellites – and will be placed in a low Earth orbit, about 600 kilometres above the surface.

    To date, Prof. Schilling and his team have used satellites already in orbit to develop and demonstrate systems for communication, positioning and orientation, and they are currently testing an electrical propulsion system for NetSat.

    The technology also incorporates two decades of learning from research into controlling formations of mobile robots, extending into three dimensions the swarm-like behaviour used to coordinate terrestrial rovers.

    Klaus Schilling with the first German pico-satellite (a satellite with a mass of 1 kg), designed and realised by his team in 2005. Image credit – University Würzburg


    The NetSat spacecraft will be able to coordinate with each other over distances from about 100 kilometres down to 10 metres, as well as change their formation depending on the tasks they need to perform.

    ‘For us it will be like having a laboratory in space, where we can do a lot of operation tests, a lot of control tests and a lot of sensor tests, which will help us for future missions,’ Prof. Schilling said.

    NetSat works by distributing computing power between satellites in a formation, but another approach under exploration is to use artificial intelligence (AI) to increase satellite autonomy.

    AI can make a satellite aware of its surroundings and decide autonomously when and how to carry out operational tasks, such as gathering images, analysing and processing them, and then selecting only the essential data for downloading to the Earth station.

    The aim could be to identify specific targets that can be monitored or tracked, perhaps a building or a ship or a vehicle on the surface of the Earth, or filtering out clouds to improve image quality.
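    This kind of on-board selection, deciding which frames are worth the downlink budget, might look like the greatly simplified sketch below. The frame records and the `cloud_fraction` field are invented for illustration and are not part of the MiRAGE software; in practice the score would come from an on-board classifier:

    ```python
    def select_for_downlink(frames, cloud_threshold=0.4):
        """Keep only frames clear enough to be worth transmitting.
        'cloud_fraction' stands in for an on-board AI model's output."""
        return [f for f in frames if f["cloud_fraction"] < cloud_threshold]

    # Hypothetical captures scored by an imagined cloud classifier.
    frames = [
        {"id": 1, "cloud_fraction": 0.10},
        {"id": 2, "cloud_fraction": 0.85},   # mostly cloud: not worth sending
        {"id": 3, "cloud_fraction": 0.30},
    ]
    print([f["id"] for f in select_for_downlink(frames)])  # [1, 3]
    ```

    Filtering on board matters because downlink bandwidth, not storage or compute, is often the scarcest resource on a small satellite.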

    Such a satellite could also recognise new events that need monitoring or anomalies demanding action, says Dr Lorenzo Feruglio, founder and chief executive of Italian space-technology start-up AIKO, based in Turin.

    ‘In a sense you need to detect the conditions and what is happening and then you react to those conditions autonomously, using AI rather than traditional algorithms,’ Dr Feruglio said.

    He leads a project called MiRAGE, which is using AI tools such as deep learning to automate satellite operations.

    Lower cost

    Such smart, AI-based on-board systems ensure the spacecraft can complete its tasks without the delays involved in awaiting new instructions or decisions from ground control, which can then focus on critical issues rather than routine tasks – with sharply reduced staffing levels and at much lower cost.

    The MiRAGE software, some of which has its roots in the functionality demanded by drones or autonomous cars, will be launched as an on-board experiment on a small satellite in the last quarter of this year, with a view to being rolled out on larger spacecraft in future. One of the aims is to demonstrate the adaptability of AI to different tasks and mission objectives – including the possibility of deep space exploration.

    ‘In general, AI and deep learning are proving their worth in many different industries and the benefits (for space missions) are far from being fully explored yet,’ Dr Feruglio added.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 10:05 am on February 22, 2019
    Tags: "Fuel buildup", Erratic firestorms known as pyroCbs, Horizon - The EU Research and Innovation Magazine   

    From Horizon The EU Research and Innovation Magazine: “‘It eats everything’ – the new breed of wildfire that’s impossible to predict” 



    21 February 2019
    Annette Ekin

    The 2017 Chilean wildfires, along with those in Portugal, were confirmation that the new type of fire was here to stay. Image credit – Pablo Trincado, licensed under CC BY 2.0

    We’re fighting a different kind of wildfire whose behaviour experts are struggling to predict.

    Climate change and negligent forest management are causing higher-intensity, faster-moving fires that can generate enough energy to evolve into erratic firestorms, known as pyroCbs, in the face of which first responders can do little.

    “Traditionally we could predict the fire behaviour and the direction of the fire but under those conditions and those moments it’s not possible,” said Marc Castellnou, president of the Spanish independent wildfire prevention group Pau Costa Foundation.

    As a wildland fire analyst with the Catalan fire services, Castellnou reconstructs wildfires using simulations, satellite, on-the-ground and other data.

    This wildfire shows a different behaviour than those of the past, he says. ‘It eats everything.’

    While these fires are rare, when one strikes it can generate 100,000 kilowatts of energy per metre. In firefighting terms, this is 10 times what a firefighter can handle, but even at 4,000 kilowatts, firefighters cannot go near the flames and require aerial support. “The old way of fighting fires by sending firefighters – that’s gone,” Castellnou said.

    New normal

    There have been signs of trouble since the 1990s, according to Castellnou.

    “This change has been cooking for a long time, but the first time we realised something wrong was happening was in the years 2009 and 2012,” he said, referring to the Black Saturday bushfires in the Australian state of Victoria that killed 173 people, and wildfires in Spain, Portugal, Chile and California, US. Many in the fire community initially thought these were just abnormal events, he says.

    But then wildfires in Chile and Portugal in 2017 indicated that those weren’t simply extreme years. “That was the new normal arriving. 2018 has confirmed that,” he said, referring to the deadly wildfires in Greece and in California.

On October 15, 2017, Castellnou was in central Portugal to conduct analysis and then support the local services as the wildfires became firestorms.

    “What I saw was the pace of the fires … You think: ‘Well that cannot be real.’ When you go there (and see the damage) you understand that that is the reality,” he said.

Castellnou, who spoke about the future of fighting wildfires at the EU’s security research event in December 2018, first joined the Catalan fire and rescue services as a seasonal firefighter when he was a teenager. In the past, he says, a fire that destroyed 25,000 hectares a day was considered extreme. According to his figures, the October fires in Portugal consumed 220,000 hectares of forest, an area 22 times the size of Lisbon, and killed more than 40 people. Castellnou says that at their peak, the wildfires burned at a rate of 10,000 hectares per hour over seven hours.

    “This is something that blew my mind and I cannot use technology to simulate that because models can’t predict it,” he said. The challenge is now predicting how they will behave, he says. “We’re still not there. We’re struggling.”


Wildfire experts say that climate change, causing a long-term rise in temperature and less rainfall, is creating unprecedented flammable conditions that are making forests burn with more intensity. Wildfires now occur in the wintertime and affect regions at latitudes beyond the traditionally fire-prone countries of Spain, Greece, Italy, Portugal and France. Castellnou says that wildfires are expected to affect highly populated areas like central Europe.

    “Last summer, it was the first time in history we were having wildfires in (nearly) every single country in Europe,” he said.

    “It’s not that climate change will create these new scenarios. No, no. The new scenario is already here, and it has come a lot faster than expected.”

    According to experts, urbanisation and poor forest management for reducing fuel – the grasses and shrubs that fires feed on – are also to blame.

    David Caballero, who also spoke at the security research event, assesses the wildfire risks in populated areas, focusing on the wildland-urban interface, where infrastructure and urban development intermingle with forests and other wildlands. He is contributing to a project called Clarity that is working to join up different IT systems to protect cities and infrastructures from the effects of climate change.

    He says we’re seeing more fast-growing, high-energy fires affecting populated areas.

    “We have to be prepared. Whenever we have forest in Europe, we eventually will have forest fires,” he said.

He travelled to the seaside village of Mati, Greece, in the immediate aftermath of Europe’s deadliest wildfires last year, which killed 99 people in the region of Attica. Speaking to firefighters and survivors, he learnt that many people did not expect the fires to cross the highway that runs parallel to the coast. In the past the fires had halted at this point, but this time they leapt across, burning through Mati.

    “There was an enormous amount of fuel due to the lack of management for 40 years,” he said. The fires tore through the village and reached the coast in just 20 minutes.

    Caballero says that all along the Mediterranean coast, unregulated construction with little regard for safety and evacuation routes and lax vegetation management mean that more places are at risk. He says local and regional authorities can no longer afford to be negligent. ‘We are living surrounded by fuel,’ he said.

    Culture of risk

    Pau Costa Foundation, established to speed up the sharing of information and know-how between fire services and society, works on a number of prevention campaigns. For a project called Heimdall, set up to contribute to an EU-wide information system about fires and other emergencies, the foundation is ensuring that the general public has a voice in shaping it.

    One of the foundation’s aims is to change the social perception of wildfires. A tendency to fight every fire, small or large, has let landscapes thrive artificially, Castellnou says. “Not all fire is bad,” he said. By clearing old trees, fires can make way for the growth of new forests that are adapted to climate change.

    Smaller fires, through activities such as prescribed burning, also have a role to play in creating scars in the land which break up a bigger fire’s path. “A mosaic of landscape of different ages and low-intensity fires is the best protection against the big fires,” he said.

    Oriol Vilalta, director of the foundation and a volunteer firefighter, says with wildfires killing more people in Europe, causing more than 200 deaths in the past three years, it’s time we learnt how to coexist with them.

    “We need to create a culture of risk. The Japanese know very well what to do in case of an earthquake, but we don’t know what to do in Europe with fires,” Vilalta said.

    In the past, the tendency was to evacuate people, but the general public must become part of the solution through self-protection, he says. “(That’s) what to do and what not to do, where to stay and where not to stay in case of a fire.”

    The research projects in this article are funded by the EU. If you liked this article, please consider sharing it on social media.

    What can be done? The expert view

    Create an EU-wide programme to teach people how to live with wildfires and manage landscapes, similar to FireSmart in Canada or Firewise in the US [Don’t look to the USA for expertise, we do not have it. One of the biggest problems we have is the lack of removal of “fuel buildup”, the removal of dead trees. I am a hiker and I see this in every venue I have for hiking.].

    Establish long-term or permanent research structures to understand our future with fire.

    Help municipalities, firefighters and communities work together to raise awareness and knowledge about wildfire risks and how fires behave.

    Reduce the vegetation that makes forests flammable [especially dead “fuel buildup”], so fire and rescue services have the capacity to fight fires.

    Create an incentive to clear vegetation, such as constructing buildings from wood or using biomass to heat public buildings and hospitals.

    In the case of a fire, provide clear information for residents about when to evacuate and when to stay in their homes.

See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 10:16 am on September 14, 2018 Permalink | Reply
    Tags: , , , , , , Dark matter clusters could reveal nature of dark energy, Horizon - The EU Research and Innovation Magazine   

    From Horizon The EU Research and Innovation Magazine: “Dark matter clusters could reveal nature of dark energy” 


    From Horizon The EU Research and Innovation Magazine

    10 September 2018
    Jon Cartwright

    Gravitational lensing in galaxy clusters such as Abell 370 are helping scientists to measure the dark matter distribution. Image credit – NASA, ESA, the Hubble SM4 ERO Team and ST-ECF

    Scientists are hoping to understand one of the most enduring mysteries in cosmology by simulating its effect on the clustering of galaxies.

    That mystery is dark energy – the phenomenon that scientists hypothesise is causing the universe to expand at an ever-faster rate. No-one knows anything about dark energy, except that it could be, somehow, blowing pretty much everything apart.

    Dark Energy Survey

    Dark Energy Camera [DECam], built at FNAL

NOAO/CTIO Victor M. Blanco 4m Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

    Meanwhile, dark energy has an equally shady cousin – dark matter.

    Dark Matter Research

Universe map: Sloan Digital Sky Survey (SDSS) and 2dF Galaxy Redshift Survey

    Scientists studying the cosmic microwave background hope to learn about more than just how the universe grew—it could also offer insight into dark matter, dark energy and the mass of the neutrino.

Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al.

    Dark Matter Particle Explorer China

    DEAP Dark Matter detector, The DEAP-3600, suspended in the SNOLAB deep in Sudbury’s Creighton Mine

    LUX Dark matter Experiment at SURF, Lead, SD, USA

ADMX Axion Dark Matter Experiment, University of Washington

This invisible substance appears to have been clustering around galaxies, lending them an extra gravitational pull and preventing them from spinning themselves apart.

    Such a clustering effect is in competition with dark energy’s accelerating expansion. Yet studying the precise nature of this competition might shed some light on dark energy.

    ‘Many dark energy models are already ruled out with current data,’ said Dr Alexander Mead, a cosmologist at the University of British Columbia in Vancouver, Canada, who is working on a project called Halo modelling. ‘Hopefully in future we can rule more out.’

    Gravitational lensing

    Currently, the only way dark matter can be observed is by looking for the effects of its gravitational pull on other matter and light. The intense gravitational field it produces can cause light to distort and bend over large distances – an effect known as gravitational lensing.
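As a sense of scale for this bending (not a figure from the article), Einstein’s deflection angle for a light ray grazing a point mass is α = 4GM/(c²b), where b is the closest-approach distance. For a ray grazing the Sun’s limb, standard constants give the classic result of about 1.75 arcseconds:

```python
import math

G = 6.674_30e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.997_924_58e8      # speed of light, m/s
M_sun = 1.988_92e30     # solar mass, kg
R_sun = 6.957e8         # solar radius, m (closest approach b)

# Einstein's deflection angle for a ray grazing a point mass:
alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
alpha_arcsec = math.degrees(alpha_rad) * 3600
print(round(alpha_arcsec, 2))  # about 1.75 arcseconds
```

Cluster-scale lenses such as Abell 370 are vastly more massive, which is why their distortions are visible as arcs rather than needing an eclipse expedition to measure.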

By mapping the dark matter in distant parts of the cosmos, scientists can work out how much dark matter clustering there is – and in principle how that clustering is being affected by dark energy.

    The link between gravitational lensing and dark matter clustering is not straightforward, however. To interpret the data from telescopes, scientists must refer to detailed cosmological models – mathematical representations of complex systems.

    Dr Mead is developing a clustering model that he hopes will have enough accuracy to distinguish between different dark-energy hypotheses.

    ‘An analogy I like a lot is with turbulence. In turbulent fluid flow you can talk about currents and eddies, which are nice words, but the reality of how fluid in a pipe goes from flowing calmly to flowing in a turbulent fashion is extremely complicated.’


    ‘If dark energy turns out to be a dynamical phenomenon this will have a profound implication not only on cosmology, but on our understanding of fundamental physics.’

    Dr Pier Stefano Corasaniti, Paris Observatory, France

    Fifth force

    One of the more exotic theories is that dark energy is the result of a hitherto undetected fifth force, in addition to nature’s four known forces – gravity, electromagnetism, and the strong and weak nuclear forces inside atoms.

A more common hypothesis for dark energy, however, is the cosmological constant, which was put forward by Albert Einstein as part of his general theory of relativity. It is often interpreted as describing an all-pervading sea of virtual particles that are continually popping into and out of existence throughout the universe.

    One way to rule out the cosmological constant hypothesis, of course, is to prove that dark energy is not constant at all. This is the goal of Dr Pier Stefano Corasaniti of the Paris Observatory in France, who – in a project called EDECS – is approaching dark-matter clustering from a different direction.

    Instead of attempting to model clustering from gravitational lensing data, he is beginning specifically with a dynamical – that is, not constant – hypothesis of dark energy, and trying to predict how dark matter would cluster if this was the case.

    Pushing the limits

    There are, in principle, infinite ways dark energy can vary in space and time, although many theories have already been ruled out by existing observations. Dr Corasaniti is focussing his simulations on types of dynamical dark energy that push at the edges of these observational limits, paving the way for tests with future experiments.

The simulations, which trace the evolution of enormous numbers of dark matter particles (hence ‘N-body’), require supercomputers running for long periods of time and process several petabytes (one thousand million million bytes) of data.

    ‘We have run among the largest cosmological N-body simulations ever realised,’ Dr Corasaniti said.
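The article doesn’t show the simulation code itself; at its core, an N-body code repeatedly evaluates the gravitational pull of every particle on every other and steps positions forward in time. A toy direct-summation sketch in arbitrary simulation units (all values illustrative; production codes use tree or mesh methods to avoid the O(N²) cost of this naive approach):

```python
import numpy as np

G = 1.0  # gravitational constant in simulation units

def accelerations(pos, mass, soft=1e-2):
    """Direct-summation gravitational accelerations, O(N^2),
    with Plummer softening to avoid singular close encounters."""
    diff = pos[None, :, :] - pos[:, None, :]              # (N, N, 3) separations
    dist3 = (np.sum(diff**2, axis=-1) + soft**2) ** 1.5   # softened |r|^3
    np.fill_diagonal(dist3, np.inf)                       # no self-force
    return G * np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)

rng = np.random.default_rng(0)
pos = rng.normal(size=(8, 3))     # 8 particles with random positions
vel = np.zeros((8, 3))
mass = np.ones(8)
dt = 0.01

# One leapfrog (kick-drift-kick) time step:
vel += 0.5 * dt * accelerations(pos, mass)
pos += dt * vel
vel += 0.5 * dt * accelerations(pos, mass)
```

Because the pairwise forces are exactly antisymmetric, total momentum is conserved to floating-point precision, a standard sanity check for such integrators.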

    Dr Corasaniti’s simulations predict that the way dark energy evolves over time ought to affect dark matter clustering. This, in turn, alters the efficiency with which galaxies form in ways that would not be the case with constant dark energy.

The predictions his models are making could be tested with the help of forthcoming telescopes such as the Large Synoptic Survey Telescope in Chile and the Square Kilometre Array in Australia and South Africa, as well as by satellite missions such as Euclid and WFIRST (the Wide Field Infrared Survey Telescope).


    LSST Camera, built at SLAC

LSST telescope, currently under construction on the El Peñón peak of Cerro Pachón, a 2,682-metre-high mountain in Chile’s Coquimbo Region, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    SKA Square Kilometer Array

    SKA South Africa

    ESA/Euclid spacecraft


    ‘If dark energy turns out to be a dynamical phenomenon this will have a profound implication not only on cosmology, but on our understanding of fundamental physics,’ said Dr Corasaniti.

See the full article here.


  • richardmitnick 9:54 am on September 14, 2018 Permalink | Reply
    Tags: , , , Earthquake research, EPOS-European Plate Observing System, Horizon - The EU Research and Innovation Magazine, ,   

    From Horizon The EU Research and Innovation Magazine : “Plate tectonics observatory to create seismic shift in earthquake research” 


    From Horizon The EU Research and Innovation Magazine

    13 September 2018
    Gareth Willmer

    A 6.2-magnitude earthquake in Amatrice, Italy, in August 2016 killed nearly 300 people. Image credit – Amatrice Corso by Mario1952 is licensed under Creative Commons CC-BY-SA-2.5 and 2016 Amatrice earthquake by Leggi il Firenzepost is licensed under CC BY 3.0

    We may never be able to entirely predict earthquakes such as those that hit central Italy in 2016, but we could better assess how they’re going to play out by joining up data from different scientific fields in a new Europe-wide observatory, say scientists.

    In 2016 and early 2017, a series of major earthquakes rocked central Italy. In the hill town of Amatrice, one magnitude-6.2 earthquake devastated the town and claimed the lives of nearly 300 people, with hundreds more injured.

    Richard Walters, an assistant professor in the Department of Earth Sciences at Durham University, UK, has been studying a variety of datasets to understand how these quakes played out.


He and his colleagues found that a network of underground faults meant there was a series of seismic events rather than one major earthquake – a finding that could help scientists predict how future seismic events unfold.

    ‘We were only able to achieve this by analysing a huge variety of datasets,’ said Dr Walters. These included catalogues of thousands of tiny aftershocks, maps of earthquake ruptures measured by geologists clambering over Italian hillslopes, GPS-based ground-motion measurements, data collected by a satellite hundreds of kilometres up, and seismological data from a global network of instruments.

    ‘Many of these datasets or processed products were generously shared by other scientists for free, and were fundamental to our results,’ he said. ‘This is how we make big advances.’

At the moment, this type of research can depend on having a strong network of contacts, which disadvantages those without them. That’s where a new initiative called the European Plate Observing System (EPOS), set to launch in 2020, comes in.

    The aim is to create an online tool that brings together data products and knowledge into a central hub across the solid Earth science disciplines.

    ‘The idea is that a scientist can go to the EPOS portal, where they can find a repository with all the earthquake rupture models, historical earthquake data and strain maps, and use this data to make an interpretative model,’ said Professor Massimo Cocco, the project’s coordinator.

    ‘A scientist studying an earthquake, a volcano, a tsunami, and so on, needs to be able to access very different data generated by different communities.’


    ‘While in Europe’s current climate politicians may be putting up borders, scientists in those same countries are trying even harder to break down national barriers.’

    Dr Richard Walters, Durham University, UK


    At the moment, findings on solid Earth science at a European scale are scattered among a mosaic of hundreds of research organisations. The challenge is to incorporate a variety of accessible information from many different scientific fields, using a combination of real-time, historical and interpretative data.

    EPOS will integrate data from 10 areas of Earth science, including seismology, geodesy, geological data, volcano observations, satellite data products and anthropogenic – or human-influenced – hazards.

    It will help build on the type of data integration that happened after the Amatrice quake, in which the lead organisation behind EPOS – Italy’s National Institute of Geophysics and Volcanology (INGV) – was involved in coordinating and fostering data sharing.

    This included real-time data from temporary sensor deployments, as well as seismic hazard maps, satellite data products and geophysical data – leading to a first model of the quake’s causative source within 48 hours to aid emergency planning.

    So far, a prototype of the portal has been developed and it will now be tested by users over the coming year to make sure it meets needs.

    Dr Walters said that EPOS is right on time. ‘Projects like EPOS are especially timely and valuable right now, as many of the subdisciplines that make up solid Earth geoscience are entering the era of big data,’ he said.


    The eruption of Icelandic volcano Eyjafjallajökull in 2010 highlights another issue that EPOS is hoping to improve – the challenge of coordination across borders. Though this event did not cost human lives, it had a much wider impact in Europe, leading to flights being grounded throughout the region and costing airlines an estimated €1.3 billion.

    In such cases, said Prof. Cocco, it helps to know factors such as the ash’s composition, something that affects how a plume travels but is not necessarily included in the models of meteorologists. That knowledge could be gained through access to volcanology data, and also used by aviation authorities and airlines, potentially to design systems to protect engines.

    Prof. Cocco said the idea is that EPOS could also be used by people outside the research community to ‘increase the resilience of society to geohazards’. An engineer or organisation could use data on ground shaking or earthquake occurrence to aid safe exploitation of resources or evaluate risks in building a nuclear power plant, for example.

    In addition, the aim is to make it easier for students or young scientists to interpret data through tools, software, tutorials and discovery services, rather than having access to just raw data. ‘Otherwise, you are providing only usability to skilled scientists,’ said Prof. Cocco. ‘This, to me, is the only way to achieve open science.’

    At present, the EPOS community comprises about 50 partners across 25 European countries, with hundreds of research infrastructures, institutes and organisations providing data. The organisation has, meanwhile, submitted a final application to become a legal entity known as a European Research Infrastructure Consortium (ERIC), with a decision establishing the ERIC expected within the next two months. This official status will aid integration with other national and European organisations, and have benefits in the allocation of funding, said Prof. Cocco.

    Professor Giulio Di Toro, a structural geologist at the University of Padova in Italy, said it is great to have this type of hub to bring information together and improve access, but also important to ensure that it doesn’t lead to an increase in bureaucracy. If institutions come up against funding issues, it could also pose a challenge to their ability to share data, he added: ‘If for some years you don’t get grants, you will not produce data to share.’

    Meanwhile, Dr Walters sees a positive spirit reflected in these types of initiative. ‘While in Europe’s current climate politicians may be putting up borders,’ he said, ‘scientists in those same countries are trying even harder to break down national barriers, and working together to build something better for everyone.’

    The implementation phase of EPOS is being part-funded by the EU. If you liked this article, please consider sharing it on social media.

Earthquake Alert

Earthquake Network is a research project which aims at developing and maintaining a crowdsourced smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect the earthquake waves using the on-board accelerometers. When an earthquake is detected, an earthquake warning is issued in order to alert the population not yet reached by the damaging waves of the earthquake.

The project started on January 1, 2013 with the release of the Android application of the same name, Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network


    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide better understanding of earthquakes, give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).
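QCN’s actual trigger algorithm isn’t given here; a classic way to flag “strong new motions” against background noise is the short-term-average/long-term-average (STA/LTA) ratio test used widely in seismology. A minimal sketch, with window lengths and threshold chosen purely for illustration:

```python
def sta_lta_trigger(samples, sta_n=5, lta_n=50, threshold=4.0):
    """Return sample indices where the short-term average amplitude
    exceeds `threshold` times the long-term average (STA/LTA test)."""
    triggers = []
    for i in range(lta_n, len(samples)):
        sta = sum(abs(x) for x in samples[i - sta_n:i]) / sta_n
        lta = sum(abs(x) for x in samples[i - lta_n:i]) / lta_n
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

quiet = [0.01] * 100   # background noise
shake = [1.0] * 10     # sudden strong motion
print(sta_lta_trigger(quiet + shake))
```

The short window reacts quickly to a new arrival while the long window tracks ambient noise, so a door slam produces a brief spike that servers can discard, whereas a sustained earthquake signal keeps the ratio elevated.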

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors: 1) mounted to the floor, they measure shaking more reliably than mobile devices; 2) they typically have lower noise and better resolution of 3D motion; 3) desktops are often left on and do not move; 4) the USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensor’s performance; 5) USB sensors can be aligned to north, so we know which directions the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

BOINC, more properly the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes, linking existing networked laptops and desktops in the hope of forming the world’s largest strong-motion seismic network.

Below, the QCN Quake-Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, the system first estimates the location and magnitude of the earthquake, then estimates the anticipated ground shaking across the affected region and provides a warning to local populations. The warning can arrive before the S-wave, which brings the strong shaking that usually causes most of the damage.

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.
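The warning window comes from the speed gap between P- and S-waves. As a rough illustration with typical crustal velocities (about 6.1 km/s for P and 3.5 km/s for S; these values are assumptions, not figures from the article), the head start grows with distance from the epicentre:

```python
def warning_time_s(epicentral_km, vp_km_s=6.1, vs_km_s=3.5):
    """Seconds between P-wave arrival and S-wave arrival at a site,
    ignoring processing and alert-delivery latency."""
    return epicentral_km / vs_km_s - epicentral_km / vp_km_s

# Head start at a few example distances from the epicentre:
for d_km in (20, 50, 100):
    print(d_km, "km:", round(warning_time_s(d_km), 1), "s")
```

This matches the article’s "few seconds to a few tens of seconds": sites very close to the epicentre get little or no warning, while sites tens of kilometres away gain enough time for the automated actions described above.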

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California, joined by Oregon and Washington in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers and allows for automatic fail-over if a connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.


    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 10:59 am on July 20, 2018 Permalink | Reply
    Tags: Antimatter plasma reveals secrets of deep space signals, Computer model called OSIRIS, Horizon - The EU Research and Innovation Magazine, The exact conditions necessary to produce a plasma containing positrons remain unclear   

    From Horizon The EU Research and Innovation Magazine: “Antimatter plasma reveals secrets of deep space signals” 


    From Horizon The EU Research and Innovation Magazine

    16 July 2018
    Jude Gonzalez

    Mysterious radiation emitted from pulsars – like this one, shown leaving a long tail of debris as it races through the Milky Way – has puzzled astronomers for decades. Image credit – NASA

    Mysterious radiation emitted from distant corners of the galaxy could finally be explained with efforts to recreate a unique state of matter that blinked into existence in the first moments after the Big Bang.

    For 50 years, astronomers have puzzled over strange radio waves and gamma rays thrown out from the spinning remnants of dead stars called pulsars.

    Researchers believe that these enigmatic, highly-energetic pulses of radiation are produced by bursts of electrons and their antimatter twins, positrons. The universe was briefly filled with these superheated, electrically charged particles in the seconds that followed the Big Bang before all antimatter vanished, taking the positrons with it. But astrophysicists think the conditions needed to forge positrons may still exist in the powerful electric and magnetic fields generated around pulsars.

    ‘These fields are so strong, and they twist and reconnect so violently, that they essentially apply Einstein’s equation of E = mc^2 and create matter and antimatter out of energy,’ said Professor Luis Silva at the Instituto Superior Técnico in Lisbon, Portugal. Together, the electrons and positrons are thought to form a super-heated form of matter known as a plasma around a pulsar.
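    The energy scale behind that quote is easy to check. A minimal sketch (illustrative only, not part of the OSIRIS model): the field energy converted into a particle pair must supply at least the rest energy of both the electron and the positron, 2mc², which works out to about 1.022 MeV.

    ```python
    # Minimum energy to create one electron-positron pair from E = mc^2.
    # Constants are standard CODATA-style values, rounded for illustration.
    M_E = 9.109e-31     # electron (and positron) rest mass, kg
    C = 2.998e8         # speed of light, m/s
    J_PER_EV = 1.602e-19  # joules per electronvolt

    # A pair needs the rest energy of BOTH particles: 2 * m * c^2.
    pair_energy_j = 2 * M_E * C**2
    pair_energy_mev = pair_energy_j / J_PER_EV / 1e6

    print(f"Pair-creation threshold: {pair_energy_mev:.3f} MeV")  # ~1.022 MeV
    ```

    Any photon or reconnecting-field event carrying less energy than this threshold simply cannot materialise a pair, which is why only the most violent environments, such as pulsar magnetospheres, manage it.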

    But the exact conditions necessary to produce a plasma containing positrons remain unclear. Scientists also still do not understand why the radio waves emitted by the plasma around pulsars have properties similar to light in a laser beam – a wave structure known as coherence.

    To find out, researchers are now turning to powerful computer simulations to model what might be going on. In the past, such simulations have struggled to mimic the staggering number of particles generated around pulsars. But Prof. Silva and his team, together with researchers at the University of California, Los Angeles in the United States, have adapted a computer model called OSIRIS so that it can run on supercomputers, allowing it to follow billions of particles simultaneously.

    The updated model, which forms part of the InPairs project, has identified the astrophysical conditions necessary for pulsars to generate electrons and positrons when magnetic fields are torn apart and reattached to their neighbours in a process known as magnetic reconnection.

    OSIRIS also predicted that the gamma rays released by electrons and positrons as they race across a magnetic field will shine in discontinuous spurts rather than smooth beams.

    The findings have added weight to theories that the enigmatic signals coming from pulsars are produced by the destruction of electrons as they recombine with positrons in the magnetic fields around these dead stars.

    Prof. Silva is now using the data from these simulations to search for similar burst signatures in past astronomical observations. The tell-tale patterns would reveal details on how magnetic fields evolve around pulsars, offering fresh clues about what is going on inside them. It will also help confirm the validity of the OSIRIS model for researchers trying to create antimatter in the laboratory.

    The OSIRIS computer model predicts how powerful magnetic fields around pulsars evolve, helping scientists understand where matter and antimatter can be created out of the vacuum of space. Image credit – Fabio Cruz

    Blasting lasers

    Insights gained from the simulations are already being used to help design experiments that will use high-powered lasers to mimic the huge amounts of energy released by pulsars. The Extreme Light Infrastructure will blast targets no wider than a human hair with petawatts of laser power. Lasers for the project are under construction at three facilities around Europe – in Măgurele in Romania, Szeged in Hungary, and Prague in the Czech Republic. If successful, the experiments could create billions of electron-positron pairs.

    ‘OSIRIS is helping researchers optimise laser properties to create matter and antimatter like pulsars do,’ said Prof. Silva. ‘The model offers a road map for future experiments.’

    But there are some who are attempting to wield matter-antimatter plasmas in even more controlled ways so they can study them.

    Professor Thomas Sunn Pedersen, an applied physicist at the Max Planck Institute for Plasma Physics in Garching, Germany, is using charged metal plates to confine positrons alongside electrons as a first step towards creating a matter-antimatter plasma on a table top.

    Although Prof. Sunn Pedersen works with the most intense beam of low-energy positrons in the world, concentrating enough particles to ignite a matter-antimatter plasma remains challenging. Researchers use electro-magnetic ‘cages’ generated under vacuum to confine antimatter, but these require openings for the particles to be injected inside. These same openings allow particles to leak back out, however, making it difficult to build up enough particles for a plasma to form.

    Prof. Sunn Pedersen has invented an electro-magnetic field with a ‘trap door’ that can let positrons in before closing behind them. Last year, the new design was able to boost the time the antimatter particles remained confined in the field by a factor of 20, holding them in place for over a second.

    ‘No one has ever achieved that in a fully magnetic trap,’ said Prof. Sunn Pedersen. ‘We have proven that the idea works.’

    But holding these elusive antimatter particles in place is only one milestone towards creating a matter-antimatter plasma in the laboratory. As part of the PAIRPLASMA project, Prof. Sunn Pedersen is now increasing the quality of the vacuum and generating the field with a levitating ring to confine positrons for over a minute. Studying the properties of plasmas ignited under these conditions will offer valuable insights to neighbouring fields.

    In June, for example, Prof. Sunn Pedersen used a variation of this magnetic trap to set a new world record in nuclear fusion reactions ignited in conventional-matter plasmas.

    ‘Collective phenomena like turbulence currently complicate control over big fusion plasmas,’ said Prof. Sunn Pedersen. ‘A lot of that is driven by the fact that the ions are much heavier than the electrons in them.’

    He hopes that by producing electron-positron plasmas like those created by the Big Bang, it may be possible to sidestep this complication because electrons and positrons have the exact same mass. If they can be controlled, such plasmas could help to validate complex models and recreate the conditions around pulsars so they can be studied up close in the laboratory for the first time.

    If successful, it may finally give astronomers the answers they have puzzled over for so long.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

  • richardmitnick 10:16 am on June 1, 2018 Permalink | Reply
    Tags: Horizon - The EU Research and Innovation Magazine, ICEYE, ICEYE X1 – PSLV C40, One of ICEYE’s key initial focuses has been ice surveillance for companies involved in Arctic operations   

    From Horizon: “Microsatellite swarms could paint clearer picture of our planet” 


    From Horizon The EU Research and Innovation Magazine

    28 May 2018
    Gareth Willmer

    Microsatellites such as those developed by ICEYE not only reduce the size of the satellite but also cut costs significantly. Image credit – ICEYE

    Space is not just a hostile place for life, but also for business. Building and launching a traditional bus-sized satellite tens of thousands of kilometres above Earth can cost hundreds of millions of euros, but thanks to miniature satellites, the economics are changing.

    Among the start-ups seeking new ways to tap into space’s potential is microsatellite manufacturer ICEYE.


    ICEYE X1 – PSLV C40

    It aims to cut satellite costs to less than one-hundredth of those of traditional satellites, using a series of microsatellites partly built with off-the-shelf mobile electronics.

    In January, the company sent what it described as the world’s first microsatellite based on synthetic-aperture radar – technology that allows satellites to see through clouds and into the dark – into a low-Earth orbit of about 500 kilometres.

    Suitcase-sized and weighing just 70 kilograms, ICEYE-X1 is the first of three satellites that the company plans to launch this year, with a goal of having 18 in the sky by the end of 2020.

    ICEYE says that gloomy conditions can make imagery from optical systems unavailable up to 75% of the time, a problem their radar-based technology avoids.

    ‘That means you can image in any place in the world at any time,’ said Pekka Laurila, CFO and co-founder of ICEYE.

    At present, requests from companies for data can take satellite providers days to process, and are often updated only once every 12 hours. ICEYE believes it can get this down to two hours once it gets six microsatellites into the sky, and even further with more launches.
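    Those revisit figures follow from a simple scaling argument. As a back-of-the-envelope sketch (my simplification, not ICEYE's actual orbit-scheduling model): if a single satellite can revisit a target roughly every 12 hours, then evenly phasing N satellites across complementary orbits divides that interval by N.

    ```python
    # Toy revisit-time model: evenly phased satellites split one
    # satellite's revisit interval equally among themselves.
    def revisit_hours(single_sat_hours: float, n_satellites: int) -> float:
        """Approximate revisit interval for an evenly phased constellation."""
        return single_sat_hours / n_satellites

    print(revisit_hours(12, 6))   # 2.0 hours, matching the six-satellite goal
    print(revisit_hours(12, 18))  # ~0.67 hours with the planned 18 satellites
    ```

    Real constellations do not divide coverage quite this cleanly, since orbital geometry and target latitude matter, but the sketch shows why each added satellite shrinks the monitoring gap.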

    ‘If you’re able to do monitoring on a scale of a few hours, you are actually catching a set of completely new phenomena that has never been monitored from space before,’ said Laurila. ‘It gives you access to understanding these phenomena on a human timescale.’

    Ice surveillance

    There are all sorts of areas in which this could be applied, from agricultural production to tracking climate change, but one of ICEYE’s key initial focuses has been ice surveillance for companies involved in Arctic operations – where vessels moving at several knots need rapid updates on ice-field movements.

    ‘That’s an area where continuous coverage is extremely important,’ said Laurila.

    This revolutionary approach has arrived at a time when unprecedented amounts of data are being generated by satellites.

    The surge in data is driven by a range of factors, including more detailed Earth observation services. One way to process this increasing flow of information is to find better ways of getting satellite data back down to ground.

    At the moment, a lot of satellite data gets lost in transit to and from Earth, or ‘thrown overboard’, according to John Mackey, CEO of mBryonics, a technology development company based in Galway, Ireland.

    He coordinates a project called RAVEN, which is working to improve signal transmission. To do so, mBryonics is harnessing a technology called adaptive optics, which is used in telescopes to give astronomers clear images of stars by reducing the twinkle when viewing them through the distortion of Earth’s atmosphere.

    Adapting this technique to beam data up and down from satellites helps create a much stronger signal and a higher data rate by lessening such atmospheric interference, said Mackey.

    Moving this data faster could also help with a challenge facing future low-orbit satellites – seeing less of the Earth than those satellites higher up. Low-orbit satellites have a more limited line of sight to ground stations and therefore a smaller window to beam data down when they pass by – maybe just 10 to 15 minutes, said Mackey. Speeding up the data rate means they can transfer more during this period.
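    The trade-off Mackey describes is just link rate multiplied by visibility time. A rough illustration (assumed numbers, not mBryonics figures): the data a low-orbit satellite can downlink in one ground-station pass scales linearly with both the window length and the link rate, which is why faster optical links matter so much for short passes.

    ```python
    # How much data fits through one ground-station pass:
    # volume (GB) = rate (Gbit/s) * window (s) / 8 bits-per-byte.
    def pass_volume_gb(rate_gbps: float, window_minutes: float) -> float:
        """Data downlinked in one pass, in gigabytes."""
        return rate_gbps * window_minutes * 60 / 8

    # A 10-minute pass at an assumed 1 Gbit/s link:
    print(pass_volume_gb(1.0, 10))  # 75.0 GB
    # Doubling the link rate doubles the haul for the same short window:
    print(pass_volume_gb(2.0, 10))  # 150.0 GB
    ```

    The same arithmetic explains the appeal of inter-satellite links: data relayed across a constellation can reach a ground station that is currently in view, rather than waiting for the originating satellite's next 10-to-15-minute window.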

    Additionally, mBryonics is seeking to use its technology to create links between satellites, which could help create constellations to intelligently route data in the most efficient way possible. ‘Then, if I send my data up to the satellite, it can fire it across the satellite constellation and get me to my destination much faster,’ said Mackey.

    And not only can that cut the number of ground stations needed, but it could also help move the data faster and thus avoid big delays in providing costly satellite-related services. mBryonics is aiming to demo a full commercial system of its satellite technology within the next two years.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition
