Recent Updates

  • richardmitnick 4:43 pm on May 20, 2019 Permalink | Reply
    Tags: "Searching for Electroweak SUSY: not because it is easy but because it is hard", , , , , ,   

    From CERN ATLAS: “Searching for Electroweak SUSY: not because it is easy, but because it is hard” 

CERN/ATLAS detector

CERN ATLAS Higgs event

CERN ATLAS, another view. Image: Claudia Marcelloni/ATLAS/CERN

CERN ATLAS (new view II). Credit: CERN/Science Photo Library


    From CERN ATLAS

    20th May 2019
    ATLAS Collaboration

    The Standard Model is a remarkably successful but incomplete theory.

    Standard Model of Particle Physics

    Supersymmetry (SUSY) offers an elegant solution to the Standard Model’s limitations, extending it to give each particle a heavy “superpartner” with different “spin” properties (an important “quantum number”, distinguishing matter particles from force particles and the Higgs boson). For example, “sleptons” are the spin 0 superpartners of spin 1/2 electrons, muons and tau leptons, while “charginos” and “neutralinos” are the spin 1/2 counterparts of the spin 0 Higgs bosons (SUSY postulates a total of five Higgs bosons) and spin 1 gauge bosons.

    If these superpartners exist and are not too massive, they will be produced at CERN’s Large Hadron Collider (LHC) and could be hiding in data collected by the ATLAS detector. However, unlike most processes at the LHC, which are governed by strong force interactions, these superpartners would be created through the much weaker electroweak interaction, thus lowering their production rates. Further, most of these new SUSY particles are expected to be unstable. Physicists can only search for them by tracing their decay products – typically into a known Standard Model particle and a “lightest supersymmetric particle” (LSP), which could be stable and non-interacting, thus forming a natural dark matter candidate.

    ______________________________________________
    If sleptons, charginos and neutralinos exist, they will be produced at the LHC and could be hiding in Run 2 data. New searches from the ATLAS Collaboration look for these particles around unexplored corners.
    ______________________________________________

    Today, at the Large Hadron Collider Physics (LHCP) conference in Puebla, Mexico, and at the SUSY2019 conference in Corpus Christi, USA, the ATLAS Collaboration presented numerous new searches for SUSY based on the full Run-2 dataset (taken between 2015 and 2018), including two particularly challenging searches for electroweak SUSY. Both searches target particles that are produced at extremely low rates at the LHC, and decay into Standard Model particles that are themselves difficult to reconstruct. The large amount of data successfully collected by ATLAS in Run 2 provides a unique opportunity to explore these scenarios with new analysis techniques.

    Search for the “stau”

    Collider and astroparticle physics experiments have set limits on the mass of various SUSY particles. However, one important superpartner – the tau slepton, known as the “stau” – has yet to be searched for beyond the exclusion limit of around 90 GeV found at the LHC’s predecessor at CERN, the Large Electron-Positron collider (LEP). A light stau, if it exists, could play a role in neutralino co-annihilation, moderating the amount of dark matter in the visible universe, which otherwise would be too abundant to explain astrophysical measurements.

    The search for a light stau is experimentally challenging due to its extremely low production rate in LHC proton-proton collisions, requiring advanced techniques to reconstruct the Standard Model tau leptons it can decay into. In fact, during Run 1, only a narrow parameter region around a stau mass of 109 GeV and a massless lightest neutralino could be excluded by LHC experiments.

Figure 1: Observed (expected) limits on the combined left and right stau pair production are shown by the red line (black dashed line). The stau mass is shown on the x-axis, while the mass of the LSP is shown on the y-axis. (Image: ATLAS Collaboration/CERN)

Figure 2: Observed (expected) limits on the stau-left pair production are shown by the red line (black dashed line). The stau mass is shown on the x-axis, while the mass of the LSP is shown on the y-axis. (Image: ATLAS Collaboration/CERN)

    This first ATLAS Run 2 stau search targets the direct production of a pair of staus, each decaying into one tau lepton and one invisible LSP. Each tau lepton further decays into hadrons and an invisible neutrino. Signal events would thus be characterised by the presence of two sets of close-by hadrons and large missing transverse energy (ETmiss) originating from the invisible LSP and neutrinos. Events are further categorized into regions with medium and high ETmiss, to examine different stau mass scenarios.
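To make the selection concrete, here is a minimal Python sketch of the kind of event categorisation described above. The event format, the thresholds and the region boundaries are invented for illustration; they are not the actual ATLAS analysis cuts.

```python
# Hypothetical event selection: require two hadronically decaying tau candidates
# plus missing transverse energy (ETmiss), then split events into "medium" and
# "high" ETmiss regions. All numbers are placeholders, not ATLAS cuts.
from dataclasses import dataclass

@dataclass
class Event:
    n_hadronic_taus: int   # reconstructed hadronically decaying tau candidates
    etmiss_gev: float      # missing transverse energy in GeV

def categorise(event: Event) -> str:
    """Assign an event to an (illustrative) analysis region."""
    if event.n_hadronic_taus < 2:
        return "rejected"
    if event.etmiss_gev > 150.0:   # assumed "high ETmiss" boundary
        return "high-ETmiss region"
    if event.etmiss_gev > 75.0:    # assumed "medium ETmiss" boundary
        return "medium-ETmiss region"
    return "rejected"

events = [Event(2, 180.0), Event(2, 90.0), Event(1, 200.0)]
print([categorise(e) for e in events])
# ['high-ETmiss region', 'medium-ETmiss region', 'rejected']
```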

The ATLAS data did not reveal hints of stau pair production, so new exclusion limits were set on the mass of staus. These limits are shown in Figures 1 and 2 using different assumptions on the presence of both possible stau types (left and right, referring to the two different spin states of the tau partner lepton). They are the strongest limits obtained so far in these scenarios.

    Compressed search

    One of the reasons physicists have yet to see charginos and neutralinos may be because their masses are “compressed”. In other words, they are very close to the mass of the LSP. This is expected in scenarios where these particles are “higgsinos”, the superpartners of the Higgs bosons.

    Compressed higgsinos decay to pairs of electrons or muons with very low momenta. It is challenging to identify and reconstruct these particles in an environment with more than a billion high-energy collisions every second and a detector designed to measure high-energy particles – like trying to locate a whispering person in a very crowded and noisy room.

Figure 3: The distribution of the electron/muon and track pair mass, where the signal events tend to cluster at low mass values. The solid histogram indicates the Standard Model background process, the points with error bars indicate the data, and the dashed lines indicate hypothetical higgsino events. The bottom plot shows the ratio of the data to the total Standard Model background. (Image: ATLAS Collaboration/CERN)

Figure 4: Observed (expected) limits on higgsino production are shown by the red line (blue dashed line). The mass of the produced higgsino is shown on the x-axis, while the mass difference to the LSP is shown on the y-axis. The grey region represents the models excluded by the LEP experiments. The blue region represents the constraint from the previous ATLAS search for higgsinos. (Image: ATLAS Collaboration/CERN)

A new search for higgsinos utilizes muons measured at the lowest momenta ATLAS has reached so far. It also benefits from new and unique analysis techniques that allow physicists to look for higgsinos in areas that were previously inaccessible. For example, the search uses charged-particle tracks, which can be reconstructed at very low momentum, as a proxy for one of the electrons or muons in the decay pair. Because of the small mass difference between the higgsinos, the mass of the electron/muon and track pair is also expected to be small, as shown in Figure 3.
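As a rough illustration of the quantity plotted in Figure 3, the sketch below computes the invariant mass of an electron/muon paired with a soft track from their four-momenta. The four-vectors are invented for illustration and are not ATLAS data.

```python
import math

def invariant_mass(p1, p2):
    """m^2 = (E1+E2)^2 - |p1+p2|^2 for four-vectors given as (E, px, py, pz) in GeV."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in range(1, 4))
    m2 = e * e - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))

# Hypothetical low-momentum muon and soft track (values are illustrative only).
muon = (5.21, 1.0, 2.0, 4.7)
track = (2.05, 0.4, 0.9, 1.8)
print(f"electron/muon + track pair mass: {invariant_mass(muon, track):.2f} GeV")
```

In a compressed scenario such pairs cluster at low mass, which is exactly the feature the analysis exploits.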

    Once again, no signs of higgsinos were found in this search. As shown in Figure 4, the results were used to extend constraints on higgsino masses set by ATLAS in 2017 and by the LEP experiments in 2004.

    Overall, both sets of results place strong constraints on important supersymmetric scenarios, which will guide future ATLAS searches. Further, they provide examples of how advanced reconstruction techniques can help improve the sensitivity of new physics searches.

See the full article for further research materials.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

CERN Courier

Quantum Diaries

CERN map

CERN LHC Grand Tunnel

CERN LHC particles

     
  • richardmitnick 3:16 pm on May 20, 2019 Permalink | Reply
    Tags: "HPE to Acquire Cray for $1.3 Billion", , HPE,   

    From insideHPC: “HPE to Acquire Cray for $1.3 Billion” 

    From insideHPC


    Today HPE announced that the company has entered into a definitive agreement to acquire Cray for approximately $1.3 billion.

    “This pending deal will bring together HPE, the global HPC market leader, and Cray, whose Shasta architecture is under contract to power America’s two fastest supercomputers in 2021,” said Steve Conway from Hyperion Research. “The Cray addition will boost HPE’s ability to pursue high-end procurements and will speed the combined company’s development of next-generation technologies that will benefit HPC and AI-machine learning customers at all price points. Cray will benefit from joining a company with greater resources and much more experience in commercial markets that are increasingly adopting HPC technology to accelerate R&D and business operations.”

    The Explosion of Data is Driving Strong HPC Growth

The explosion of data from artificial intelligence, machine learning, and big data analytics, together with evolving customer needs for data-intensive workloads, is driving a significant expansion in HPC.

    Over the next three years the HPC segment of the market and associated storage and services is expected to grow from approximately $28 billion in 2018 to approximately $35 billion in 2021, a compound annual growth rate of approximately 9 percent. Exascale is a growing segment of overall HPC opportunities and more than $4 billion of Exascale opportunities are expected to be awarded over the next five years.

    Addressing complex challenges and advancing critical academic research, including predicting future weather patterns, delivering breakthrough medical discoveries, and preventing cyber-attacks, requires significant computational capabilities, up to and through Exascale level architecture. Exascale capable systems enable solutions to these problems with much greater precision and insight.

    “This is an amazing opportunity to bring together Cray’s leading-edge technology and HPE’s wide reach and deep product portfolio, providing customers of all sizes with integrated solutions and unique supercomputing technology to address the full spectrum of their data-intensive needs,” said Peter Ungaro, President and CEO of Cray. “HPE and Cray share a commitment to customer-centric innovation and a vision to create the global leader for the future of high performance computing and AI. On behalf of the Cray Board of Directors, we are pleased to have reached an agreement that we believe maximizes value and are excited for the opportunities that this unique combination will create for both our employees and our customers.”

    Cray is a Leading Innovator in Supercomputer Solutions

Cray is the premier provider of high-end supercomputing solutions that address customers’ most challenging, data-intensive workloads for making critical decisions. Cray has a leadership position in the top 100 supercomputer installations around the globe. With a history tracing back to Cray Research, which was founded in 1972, Cray is headquartered in Seattle, Washington, with US-based manufacturing, and approximately 1,300 employees worldwide. The company delivered revenue of $456 million in its most recent fiscal year, up 16 percent year over year.

    Cray’s supercomputing systems, delivered through their current generation XC and CS platforms, and next-generation Shasta series platform, have the ability to handle massive data sets, converged modeling, simulation, AI, and analytics workloads. In addition to supercomputers, they offer high-performance storage, low-latency high performance HPC interconnects, a full HPC system software stack and programming environment, data analytics, and AI solutions – all currently delivered through integrated systems.

    Cray recently announced an Exascale supercomputer contract for over $600 million for the U.S. Department of Energy’s Oak Ridge National Laboratory. The system, which is targeted to be the world’s fastest system, will enable ground-breaking research and AI at unprecedented scale, using Cray’s new Shasta system architecture and Slingshot interconnect. The company was also part of an award with Intel for the first U.S. Exascale contract from the U.S. Department of Energy’s Argonne National Laboratory, with Cray’s portion of the contract valued at over $100 million.

Depiction of the ANL ALCF Cray-Intel SC18 Shasta Aurora exascale supercomputer

    Cray Strengthens and Expands HPE’s High Performance Computing Portfolio

    High performance computing is a key component of HPE’s vision and growth strategy and the company currently offers world-class HPC solutions, including HPE Apollo and SGI, to customers worldwide. This portfolio will be further strengthened by leveraging Cray’s foundational technologies and adding complementary solutions. The combined company will also reach a broader set of end markets, offering enterprise, academic and government customers a broad range of solutions and deep expertise to solve their most complex problems. Together, HPE and Cray will have enhanced opportunities for growth and the integrated platform, scale and resources to lead the Exascale era of high performance computing.

    The combination of HPE and Cray is expected to deliver significant customer benefits including:

    Future HPC-as-a-Service and AI / ML analytics through HPE GreenLake
    A comprehensive end-to-end portfolio of HPC infrastructure – compute, high-performance storage, system interconnects, software and services supplementing existing HPE capabilities to address the full spectrum of customers’ data-intensive needs
    Differentiated next-generation technology addressing data intensive workloads
    Increased innovation and technological leadership from leveraging greater scale, combined talent and expanded technology capabilities
    Enhanced supply chain capabilities leveraging US-based manufacturing

    Significant Economic Upside Expected to be Realized from the Combination

    Bringing together HPE and Cray enables an enhanced financial profile for the combined company that includes several revenue growth opportunities and cost synergies.

    The companies expect the combination to drive significant revenue growth opportunities by:

    Capitalizing on the growing HPC segment of the market and Exascale opportunities
    Enhancing HPE’s customer base with a complementary footprint in federal business and academia and the company’s ability to accelerate commercial supercomputing adoption
    Introducing new offerings in AI / ML and HPC-as-a-service with HPE GreenLake

    We also expect to deliver significant cost synergies through efficiencies and by leveraging proprietary Cray technology, like the Slingshot interconnect, to lower costs and improve product performance.

    The transaction is expected to close by the first quarter of HPE’s fiscal year 2020, subject to regulatory approvals and other customary closing conditions.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 2:19 pm on May 20, 2019 Permalink | Reply
    Tags: "The Protein Folding Problem", , ,   

    From Medium: “The Protein Folding Problem” 

    From Medium

    Apr 23, 2019
    Aryan Misra


    Recent advancements on the ultimate problem in biology.

The key to finding cures for diseases such as Alzheimer’s and Parkinson’s lies in a fundamental biomolecule: the protein. Biologists and physicists alike have been trying to solve the protein folding problem for over half a century, with no real progress until recently. This article will give insight into how proteins fold, why it’s so difficult to predict how they fold, and solutions that can be designed around an accurate protein folding algorithm, as well as various other topics that may be of interest.

    What are proteins?

Proteins are complex macromolecules made of strings of hundreds or thousands of amino acids. They perform every biological function in your body and are absolutely key in every organism. From fighting diseases to providing structure for cells, proteins play every role in your body and in every other living organism. Our DNA contains all the information for creating these proteins, in the form of the nucleotides A, C, G, and T. DNA is transcribed to mRNA, an intermediary molecule in the process of protein creation; during transcription, the T’s are replaced with U’s. Finally, mRNA is translated into chains built from the 20 different amino acids that make up proteins.
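To make the transcription-and-translation flow concrete, here is a minimal Python sketch. It is a simplification (it works directly on the coding strand and includes only a handful of the 64 codons), so treat the codon table and the example sequence as illustrative.

```python
# Simplified sketch of DNA -> mRNA -> protein. The codon table is deliberately
# truncated to a few of the 64 codons; a full implementation would list them all.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "GCU": "Ala", "UGG": "Trp", "UAA": "STOP",
}

def transcribe(dna: str) -> str:
    """Transcription (simplified): the coding strand's T's become U's in the mRNA."""
    return dna.upper().replace("T", "U")

def translate(mrna: str):
    """Translation: read the mRNA three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

dna = "ATGTTTGGCGCTTGGTAA"          # hypothetical coding-strand snippet
print(translate(transcribe(dna)))   # ['Met', 'Phe', 'Gly', 'Ala', 'Trp']
```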


    Protein Structure

Proteins start off as a long sequence of amino acids. In this state, the protein is unstable because it is not at its lowest energy state. To reach that state, the protein folds into a complex 3D shape determined by the sequence of amino acids it started off as, as well as the environment it’s in.

    This structure is really important because the way it’s folded completely defines how the protein functions in your body. For example, if the protein holds a T shaped structure, it is likely an antibody that binds to viruses and bacteria to help protect the body. If the structure of the protein is globular, it is likely used for transporting other small molecules throughout your body.

By understanding this folding code, we could essentially eradicate neurological diseases such as Alzheimer’s and Parkinson’s, diseases known to be caused by proteins misfolding in the brain, creating clumps of protein that disrupt brain activity. The protein folding problem can essentially be broken into three parts, outlined well by the following quote.

    ______________________________________
    The protein folding problem is the most important unsolved problem in structural biochemistry. The problem consists of three related puzzles: i) what is the physical folding code? ii) what is the folding mechanism? and iii) can we predict the 3D structure from the amino acid sequences of proteins?

    (Jankovic and Polovic 2017)
    ______________________________________

Knowing this code could completely change the way we deal with treatments for many different diseases, and could potentially lead to completely artificially made materials and prosthetics. Knowing how proteins fold also opens up new potential within targeted drug discovery, and would let us create biomaterials for incredibly accurate prosthetics or anything else that requires compatibility with a living organism. Advances in biodegradable enzymes could help us reduce the effect of pollutants such as plastic or oil by helping break them down efficiently.

However, predicting the 3D structure of a protein is nigh on impossible because of the sheer number of conformations a protein can adopt. According to the well-known Levinthal’s paradox, it would take longer than the age of the universe to iterate through every possible conformation of a typical protein.

The way a protein folds is dependent on the amino acids it’s made of and the environment that it’s in. This folding is a result of amino acids coming together through attraction across disparate lengths of the protein, and is driven by a process called energy minimization.

    Energy Minimization

    Picture going on a hiking expedition across fields of rolling hills. The tops of these hills represent protein fold combinations that are really unlikely to occur, and valleys represent states that proteins are drawn towards. For a protein of x amino acids, there are 3^x states it can be in, and when the number of amino acids ranges from hundreds to thousands, this soon becomes way too many combinations for modern computers to even attempt to solve. This is why we can’t just try every combination and see which has the least energy.
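A back-of-the-envelope calculation makes the point. The sketch below counts the 3^x conformations mentioned above and estimates how long a brute-force search would take; the three-states-per-residue assumption and the (very generous) sampling rate are illustrative, not measured values.

```python
AGE_OF_UNIVERSE_SECONDS = 4.35e17   # roughly 13.8 billion years

def brute_force_time(num_residues, states_per_residue=3, samples_per_second=1e15):
    """Seconds needed to enumerate every conformation of a chain, one by one."""
    conformations = states_per_residue ** num_residues
    return conformations / samples_per_second

for n in (50, 100, 150):
    t = brute_force_time(n)
    print(f"{n:>3} residues: {t:.2e} s "
          f"(~{t / AGE_OF_UNIVERSE_SECONDS:.1e} times the age of the universe)")
```

Even for a modest 100-residue chain the enumeration time dwarfs the age of the universe, which is exactly why exhaustive search is off the table.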

Protein’s structure changing at different energy points.

    With recent surges in the amount of genetic data we have access to as a result of cheaper genome sequencing, many biologists are turning to a data-driven solution to the protein folding problem. The cost of sequencing a genome has dropped from billions of dollars to a price that is far more practical.

Graph illustrating the decreasing cost of genome sequencing.

    The Data-Driven Approach

Previous methods for determining protein structures include techniques with long names such as X-ray crystallography, cryo-electron microscopy, and nuclear magnetic resonance. These approaches are expensive and take months, so more and more researchers are moving towards an approach that uses machine learning to predict protein structures. CASP (Critical Assessment of protein Structure Prediction) is a biennial experiment in which research groups test their prediction methodology in competition with dozens of the world’s leading teams.

CASP saw a long stagnation in the quality of protein structure predictions in the early 2000s; this has changed recently, however, with each edition bringing significant improvement in the quality of the results.

In the most recent CASP, Google’s DeepMind took everyone by surprise by winning first place by a huge margin. Their algorithm, called AlphaFold, was designed to do de novo prediction, which is modelling target proteins from scratch, a daunting task.

AlphaFold

AlphaFold’s algorithm consists of a dual residual network architecture in which one network predicts the distances between pairs of amino acids, and the other predicts the angles at the chemical bonds connecting the amino acids. On top of this, they used a Generative Adversarial Network (GAN) to create new protein fragments, which were then fed into the network to continuously improve it. Their code and research paper have yet to be released; however, there have been some attempts to replicate the concept (MiniFold).
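Since AlphaFold’s own code is unreleased, the following PyTorch sketch only illustrates the general idea of a 2D residual network that maps pairwise sequence features to a distribution over residue-residue distance bins. The layer sizes, feature counts and block count are made up, and the angle-prediction and fragment-generation parts are omitted entirely.

```python
import torch
import torch.nn as nn

class ResidualBlock2D(nn.Module):
    """One dilated residual block acting on an L x L pairwise feature map."""
    def __init__(self, channels, dilation=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels),
            nn.ELU(),
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
            nn.ELU(),
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
        )

    def forward(self, x):
        return x + self.body(x)   # residual (skip) connection

class DistancePredictor(nn.Module):
    """Stack of residual blocks ending in per-pair logits over distance bins."""
    def __init__(self, in_channels=41, channels=64, blocks=8, bins=64):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, channels, kernel_size=1)
        self.blocks = nn.Sequential(
            *[ResidualBlock2D(channels, dilation=2 ** (i % 4)) for i in range(blocks)]
        )
        self.head = nn.Conv2d(channels, bins, kernel_size=1)

    def forward(self, pair_features):        # (batch, in_channels, L, L)
        x = self.proj(pair_features)
        x = self.blocks(x)
        return self.head(x)                  # (batch, bins, L, L) distance-bin logits

model = DistancePredictor()
dummy = torch.randn(1, 41, 64, 64)           # one fictitious 64-residue protein
print(model(dummy).shape)                    # torch.Size([1, 64, 64, 64])
```

The dilated convolutions are a common way to let each residue pair see distant parts of the map without enormous kernels; treating the output as a distribution over distance bins, rather than a single number, is what lets a downstream optimiser score candidate structures.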

Optimization of protein structure.

Using this, they were able to create predictions for many different proteins and establish a state-of-the-art algorithm in the field. The current limitation holding AlphaFold and other machine learning algorithms back is the lack of available data: there are currently about 145,000 different proteins catalogued in the Protein Data Bank, while the total number of different proteins found in nature is thought to be around 10¹². The total number of possible proteins is immense; a chain of 200 amino acids gives nearly 20²⁰⁰ possible sequences, leaving lots of room for scientific discovery.

It is very likely that the integration of technologies and algorithms from deep learning, quantum computing and bioinformatics will lead to an algorithm that can successfully solve this problem. By applying techniques such as deep Q-learning, quantum computing, and capsule networks, we can hope for a future in which solving the protein folding problem positively changes the lives of billions of people.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About Medium

Medium is an online publishing platform developed by Evan Williams and launched in August 2012. It is owned by A Medium Corporation. The platform is an example of social journalism, hosting a hybrid collection of amateur and professional people and publications, as well as exclusive blogs and publishers, and is regularly regarded as a blog host.

    Williams developed Medium as a way to publish writings and documents longer than Twitter’s 140-character (now 280-character) maximum.

     
  • richardmitnick 1:47 pm on May 20, 2019 Permalink | Reply
Tags: Large Earthquake in Papua New Guinea re-ruptures major fault in just 19 years: More to follow?

    From temblor: “Large Earthquake in Papua New Guinea re-ruptures major fault in just 19 years: More to follow?” 


    From temblor

    May 19, 2019
    Tiegan Hobbs, Ph.D., Postdoctoral Seismic Risk Scientist at Natural Resources Canada (@THobbsGeo)

    A magnitude-7.5 quake broke the same fault that produced a magnitude-8.0 quake in 2000, an extraordinarily short recurrence time that also broke all our rules.

    A major earthquake struck eastern Papua New Guinea (PNG) on Tuesday, May 14th at 22:58 local time. No injuries have been reported, although shaking from this Mw 7.5 earthquake was felt up to 250 km (150 mi) away from the epicenter. The maximum shaking intensity (the so-called ‘Modified Mercalli level VII’) would have been sufficient to cause considerable damage in poorly built houses which are common in the region.

Map showing the location of the 14 May 2019 Mw 7.5 Papua New Guinea Earthquake, as well as the M=7.1 quake on the other side of the country, which struck just a week beforehand.

    According to Dr. Baptiste Gombert, postdoctoral researcher at Oxford University, the event “occurred on the left-lateral Weitin fault [WF in the map below], a major structure of the New Ireland”. ‘Left-lateral’ means that whatever side you are on, the other side moved to the left. This fault marks the boundary between the North and South Bismarck microplates.

    Beyond the Weitin Fault, this region has “every type of plate boundary” according to Dr. Jason Patton from the California Geological Survey and Adjunct Professor at Humboldt State University. For example, compression and shear between the Pacific and Australian Plates results in subduction along the New Britain Trench, rifting in the Woodlark Basin in addition to the observed strike-slip activity in the area of Tuesday’s quake.

Modified from Holm et al. [2019], this map shows the regional tectonics. Looking like broken shards of glass, there is a complex interaction of possibly inactive subduction from the north and south, along with rifting, subduction, thrusting, and strike-slip faults in between. The USGS moment tensor (beachball) from Tuesday’s Mw 7.5 event (blue star) suggests left-lateral motion on the Weitin Fault between the North and South Bismarck Plates. The event rattled residents of New Ireland (NI), the elongate island through which the Weitin Fault runs.

    First Ever Measurement of Onshore Repeated Rupture

    What makes this event so exciting, though, is that it’s not the first major earthquake in this location. A Mw 8.0 event in the year 2000 resulted in up to 11 m of slip along a 275-km-long (165 mi) fault, with 20 aftershocks with magnitude greater than 5 [Tregoning et al., 2001]. The proximity of this week’s hypocenter to the larger quake 19 years ago had Dr. Sotiris Valkaniotis, geological consultant, wondering if they ruptured the same portion of the fault. With some quick work processing satellite imagery, Dr. Valkaniotis produced what is believed to be the first recording of repeated on-land rupture of a fault.

    ________________________________________

    And we have slip! Co-seismic displacement on Weitin Fault, New Ireland, #PNG after the strong M7.5 May 15 2019 #earthquake. Displacement analysis from optical image correlation using #Sentinel2 images from @CopernicusEU and #MicMac. Repeat rupture on the same fault as 2000! pic.twitter.com/5PFZdfOdPj

    — Sotiris Valkaniotis (@SotisValkan) May 16, 2019
    ________________________________________

    The figure in the above tweet, reproduced below, shows several meters of offset across the fault for both earthquakes. It’s preliminary, but it suggests that this fault is extremely active. For reference, Dr. Gombert describes the Weitin Fault as having a strain rate that is approximately 4 times that of the San Andreas in California. That’s important, because it presents a rare opportunity to study an entire seismic cycle from one large earthquake to the next in under 20 years—which appears to be unprecedented. These observations could help answer important questions about whether earthquakes repeatedly rupture the same patch, and what tends to initiate these events. In many places, such as the Cascadia Subduction Zone with its roughly 500-year recurrence period, this is simply not possible.

Surface displacements in the north-south direction for the most recent Mw 7.5 event and the 2000 Mw 8.0 event on the Weitin Fault. Measurements made using optical correlation of Sentinel-2 and Landsat-7 satellite data.

    2000 Mw 8.0 Event Triggered Large Nearby Earthquakes

    Within 40 hours of the 16 November 2000 earthquake on the Weitin Fault, which was itself preceded by a 29 October 2000 Mw 6.8 foreshock, two events of magnitude 7.4 and 7.5 were recorded nearby [Park & Mori, 2007]. The events were found to be consistent with static stress triggering from the mainshock, and with a previous observation of Lay and Kanamori [1980] that earthquakes in this part of the world tend to occur in doublets: two large mainshocks that are close in space and time rather than the typical mainshock-aftershock sequence. It begs the question “will there be more?”

    Triggering of Aftershocks From This Sequence?

    Three strong aftershocks have so far struck near the mainshock: two Mw 5.0 events on Tuesday May 14th and Thursday May 16th, and a Mw 6.0 on Friday May 17th. Although we don’t yet know the type of faulting that occurred in these events, we can evaluate how the Mw 7.5 mainshock may have promoted them. A Coulomb Stress calculation shows that the epicentral locations of these events experienced stress loading of 112, 4, and 2 bars, respectively, assuming a similar fault geometry. This is well in excess of a 1 bar triggering threshold, suggesting that all three of these fault locations were brought closer to failure by the mainshock. In the map below, regions of red shading indicate areas prone to aftershocks – extending along an over 100 km swath of New Ireland. Given that the previous event in 2000 was able to trigger relatively large earthquakes on the Weitin [Geist and Parsons, 2005], the coming days and weeks could bring more large events to the region.
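For readers curious about the arithmetic behind such a statement, the sketch below shows the bookkeeping of a Coulomb stress calculation: given shear and normal stress changes already resolved onto a receiver fault (produced by an elastic dislocation model, which is not reproduced here), it forms ΔCFS = Δτ + μ′Δσn and compares it with the 1-bar triggering threshold. The friction coefficient and the stress values are placeholders, not the 112, 4 and 2 bar figures quoted above.

```python
# Hedged sketch: the elastic modelling that produces the stress changes is done
# by specialised codes (e.g. dislocation modelling tools); here we only show the
# final Coulomb bookkeeping. All numbers are placeholders.

EFFECTIVE_FRICTION = 0.4        # commonly assumed effective friction mu'
TRIGGER_THRESHOLD_BARS = 1.0    # threshold used in the article

def coulomb_stress_change(delta_shear, delta_normal, friction=EFFECTIVE_FRICTION):
    """dCFS = d(tau) + mu' * d(sigma_n), with unclamping (tension) positive, in bars."""
    return delta_shear + friction * delta_normal

# Placeholder aftershock sites: (name, shear change, normal change) in bars.
sites = [("aftershock A", 0.9, 0.5), ("aftershock B", 1.5, -0.2)]
for name, d_tau, d_sigma in sites:
    dcfs = coulomb_stress_change(d_tau, d_sigma)
    verdict = "promoted" if dcfs >= TRIGGER_THRESHOLD_BARS else "inhibited/neutral"
    print(f"{name}: dCFS = {dcfs:+.2f} bar -> {verdict}")
```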

    Without doubt, the data from this earthquake sequence will illuminate the stress evolution of this rapidly straining strike-slip fault and serve as a helpful natural laboratory for understanding similar strike-slip systems which are slower to reveal their mysteries.

Stress change caused by the 14 May 2019 mainshock (green star), for faults with similar orientation. Red indicates areas of positive Coulomb stress change (up to 5 bars), and cyan shows regions with negative stress change (to -5 bars). The two Mw 5.0 and one Mw 6.0 aftershocks (white diamonds) experienced Coulomb stress loading upwards of the triggering threshold.

    Tsunami Warnings for Papua New Guinea and the Solomon Islands

    Strike-slip faults, like the Weitin and the San Andreas in California, generate dominantly horizontal motions, and so are fortunately unlikely to launch large tsunami unless they trigger undersea landslides. Some 9 minutes after the earthquake started, the Pacific Tsunami Warning Center assessed a tsunami threat for regions within 1000 km of the quake: mainly Papua New Guinea and the Solomon Islands. The threat was called off within about an hour and a half, with wave heights reaching less than 0.3 m (about a foot).

    It is important to remember in the coming days and weeks, however, that aftershocks are also capable of producing dangerous tsunami. Following the Mw 8.0 New Ireland earthquake on the same fault in 2000, runups from the mainshock and triggered aftershocks were greater than 3 meters (9 feet) in some locations [Geist and Parsons, 2005]. This was partly due to the thrust mechanism of the aftershocks, which causes greater vertical displacement and therefore larger potential for tsunami. Because many populations in this region live close to the coast, the safest strategy is self-evacuation. This means that if you feel shaking that is strong or long, head to high ground without waiting to be told.

    Read More:

    USGS reports

    https://earthquake.usgs.gov/earthquakes/eventpage/us70003kyy/executive

    https://earthquake.usgs.gov/earthquakes/eventpage/us70003l05/executive

    https://earthquake.usgs.gov/earthquakes/eventpage/usd000a1im/executive

    https://earthquake.usgs.gov/earthquakes/eventpage/us70003mus/executive

    Tsunami warnings

    https://www.tsunami.gov/events/PHEB/2019/05/14/19134000/1/WEPA40/WEPA40.txt

    https://www.tsunami.gov/events/PHEB/2019/05/14/19134000/3/WEPA40/WEPA40.txt

    Social Media:

    https://twitter.com/SotisValkan/status/1129069849131401216 (imagery based surface displacement measurement comparison)

    Geist, E. L., & Parsons, T. (2005). Triggering of tsunamigenic aftershocks from large strike‐slip earthquakes: Analysis of the November 2000 New Ireland earthquake sequence. Geochemistry, Geophysics, Geosystems, 6(10).

    Holm, R. J., Tapster, S., Jelsma, H. A., Rosenbaum, G., & Mark, D. F. (2019). Tectonic evolution and copper-gold metallogenesis of the Papua New Guinea and Solomon Islands region. Ore Geology Reviews, 104, 208-226.

    Lay, T., & Kanamori, H. (1980). Earthquake doublets in the Solomon Islands. Physics of the Earth and Planetary Interiors, 21(4), 283-304.

    Park, S. C., & Mori, J. (2007). Triggering of earthquakes during the 2000 Papua New Guinea earthquake sequence. Journal of Geophysical Research: Solid Earth, 112(B3).

    Tregoning, P., McQueen, H., Lambeck, K., Stanaway, R., Saunders, S., Itikarai, I., Nohou, J., Curley, B., Suat, J. (2001). Progress Report on Geodetic Monitoring of the November 16, 2000 – New Ireland Earthquake. Australian National University, Research School of Earth Sciences, Special Report 2001/3. http://rses.anu.edu.au/geodynamics/tregoning/RSES_SR_2001-3.pdf

See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

Earthquake Alert

Earthquake Network project

    Earthquake Network is a research project which aims at developing and maintaining a crowdsourced smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect the earthquake waves using the on-board accelerometers. When an earthquake is detected, an earthquake warning is issued in order to alert the population not yet reached by the damaging waves of the earthquake.

    The project started on January 1, 2013 with the release of the homonymous Android application Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

Smartphone network spatial distribution (green and red dots) on December 4, 2015.

Meet The Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide better understanding of earthquakes, give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at CalTech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that runs on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).
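The article does not spell out QCN’s trigger algorithm, so the sketch below uses a classic short-term-average/long-term-average (STA/LTA) detector as a stand-in for how a volunteer machine might flag "strong new motions" in an accelerometer stream. The window lengths, threshold and synthetic data are all illustrative.

```python
import numpy as np

def sta_lta_trigger(acc, fs, sta_win=0.5, lta_win=10.0, threshold=3.0):
    """Return sample indices where the STA/LTA ratio first crosses the threshold."""
    x = np.abs(acc - np.mean(acc))                       # demeaned amplitude
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(x, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(x, np.ones(lta_n) / lta_n, mode="same") + 1e-12
    above = (sta / lta) > threshold
    return np.flatnonzero(above & ~np.roll(above, 1))    # rising edges only

# Synthetic example: quiet noise with a burst of shaking two-thirds of the way in.
fs = 50.0
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.01, 3000)
signal[2000:2100] += rng.normal(0, 0.3, 100)
print("trigger sample indices:", sta_lta_trigger(signal, fs))
```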

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

BOINC, more properly the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. The Quake-Catcher Network links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

QCN Quake-Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the affected region is estimated and a warning is provided to local populations. The method can provide warning before the arrival of the S-wave, which brings the strong shaking that usually causes most of the damage.
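The reason any warning time exists at all is the speed gap between the P and S waves. The toy calculation below estimates the warning available at a given epicentral distance, assuming typical crustal wave speeds and a fixed detection-and-alerting delay; none of these numbers are ShakeAlert parameters.

```python
VP_KM_S = 6.0              # approximate crustal P-wave speed (assumed)
VS_KM_S = 3.5              # approximate crustal S-wave speed (assumed)
PROCESSING_DELAY_S = 5.0   # assumed time to detect, locate and issue the alert

def warning_time(epicentral_distance_km):
    """Seconds between the alert and the S-wave arrival at a given distance."""
    p_arrival = epicentral_distance_km / VP_KM_S
    s_arrival = epicentral_distance_km / VS_KM_S
    return s_arrival - (p_arrival + PROCESSING_DELAY_S)

for d in (20, 50, 100, 200):
    print(f"{d:>3} km: ~{warning_time(d):5.1f} s of warning")
```

Close to the epicentre the estimate goes negative, corresponding to the "blind zone" where no warning is possible; at larger distances it grows to tens of seconds.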

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which comprises the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for the earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to "beta" users in California since January of 2012 and in the Pacific Northwest since February of 2015.

In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California; Oregon and Washington joined in April 2017. This West Coast-wide "production prototype" has been designed for redundant, reliable operations. The system includes geographically distributed servers, and allows for automatic fail-over if connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan

     
  • richardmitnick 7:26 am on May 20, 2019 Permalink | Reply
    Tags: “CoastSnap is a network of simple camera mounts at beaches that invite the public to take a photo and upload it to social media using a specific hashtag” says Dr Mitchell Harley., Citizen science project led by UNSW engineers, CoastSnap,   

From University of New South Wales: “Revolutionising coastal monitoring, one social media photo at a time”


    From University of New South Wales

    20 May 2019
    Cecilia Duong

CoastSnap is a network of simple camera mounts at beaches that invite the public to take a photo and upload it to social media.

    A citizen science project led by UNSW engineers is leveraging thousands of crowd-sourced photos from social media, helping create new insights into how beaches respond to changing weather and wave conditions, and extreme storms – and now a new study has shown the program to be nearly as accurate and effective as professional shoreline monitoring equipment.

    The study – recently published in the journal Coastal Engineering – is a collaboration between engineers from the UNSW Water Research Laboratory and the NSW Office of Environment and Heritage.

    “To collect data about shoreline change over time, we previously had to rely on expensive monitoring equipment or exhaustive fieldwork to gather data by hand,” explains Dr Mitchell Harley from UNSW’s School of Civil and Environmental Engineering, who has led the study.

    “In our new study, we present an innovative ‘citizen science’ approach to collecting shoreline data, by tapping into the incredible amount of social media images taken at the coast every single day.

    “We rigorously tested this technique at two beaches in Sydney over a 7-month period – Manly and North Narrabeen – and found that the shoreline data obtained from this community-based technology was comparable in accuracy to that collected by professional shoreline monitoring equipment.”

    CoastSnap – a community program founded in 2017 – turns the average community member into a coastal scientist, using only their smartphone to take pictures of the coastline.

    “CoastSnap is a network of simple camera mounts at beaches that invite the public to take a photo and upload it to social media, using a specific hashtag,” says Dr Mitchell Harley.

The collected images are then analysed with algorithms that track the shoreline position, helping researchers and the community understand why some beaches are more resilient to change than others. The imagery can also be used to inform coastal management and planning decisions. Despite the technical challenges presented by this method of data collection, which include the low resolution of social media images and the involvement of non-professionals in gathering the data, the technique has proven that the research does not need expensive equipment to collect useful data.
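The article doesn’t detail the shoreline-tracking algorithm, so the sketch below shows one common coastal-imaging trick as a stand-in: water pixels tend to be bluer than dry sand, so a simple red-minus-blue test separates the two, and the first "water" pixel in each image column gives a crude shoreline trace. The threshold and the synthetic image are purely illustrative, and a real system would also map pixel coordinates to real-world positions.

```python
import numpy as np

def water_mask(rgb, threshold=0.0):
    """Boolean mask (True = water) from an (H, W, 3) float RGB image in [0, 1]."""
    red, blue = rgb[..., 0], rgb[..., 2]
    return (red - blue) < threshold

def shoreline_rows(mask):
    """For each image column, the row index of the first pixel classified as water."""
    has_water = mask.any(axis=0)
    first_water = mask.argmax(axis=0)            # index of first True per column
    return np.where(has_water, first_water, -1)  # -1 where no water is found

# Tiny synthetic example: top half "sand" (reddish), bottom half "water" (bluish).
img = np.zeros((4, 3, 3))
img[:2] = [0.8, 0.7, 0.5]
img[2:] = [0.2, 0.4, 0.7]
print(shoreline_rows(water_mask(img)))   # -> [2 2 2]
```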

    Dr Harley says the collection of photos at all the different stations will be the global eyes observing likely changes to the coastline in the coming years.

    “The data we have collected so far has revealed some very interesting patterns that waves and tides have caused. Some sites have seen the coastline fluctuate by up to 50m back and forth, whereas at some nearby sites, the same coastline has remained stagnant.

    “This type of information is critical to be able to help predict how the coastline changes in response to changing waves and storms,” says Dr Harley.

    “Ultimately we would like to use this information to assist coastal managers in reducing the risk of coastal erosion – and to identify coastal erosion hotspots that need particular attention.”

Dr Mitchell Harley (far right) at the installation of a CoastSnap station in Fiji. Photo credit: Navneet Lal

    Two years and thousands of images later

The idea for CoastSnap stems from the Water Research Laboratory’s work in coastal imaging technology, which has been in development for over a decade. That technology made use of high-tech video cameras installed on top of beachfront buildings, but in contrast to CoastSnap, the equipment was quite expensive. Now celebrating its two-year anniversary this month, CoastSnap had humble beginnings, with the first two snap stations installed at Manly and North Narrabeen in May 2017. Since then, over 2,500 images have been submitted from 4 NSW CoastSnap stations by almost 1,000 individual community participants.

    The team says the technique has the potential to revolutionise the way coastlines are monitored worldwide, by expanding to coastlines where there previously was little data coverage, particularly in countries with limited resources.

    “That’s why we’ve already rapidly expanded internationally, with CoastSnap stations located in 9 different countries – Brazil, England, Fiji, France, The Netherlands, Portugal, Spain, USA and Australia,” says Dr Harley.

    The team have had positive community feedback and participation so far.

    “What we often find in coastal engineering and management is that the solution to coastal erosion issues is relatively simple, but that the solution is held back by a range of societal roadblocks,” Dr Harley says.

    “Engaging the community in the data collection process really helps to break down barriers between coastal managers, government and the people that enjoy the coast on a daily basis.

    “This leads to a greater democratisation of decisions being made at the coast, so that better decisions are made for everyone to benefit.”

    How to get CoastSnapping:

    Visit a CoastSnap photo point at North Narrabeen Beach, Manly Beach, Cape Byron or Blacksmiths Beach with your mobile device and follow these simple steps:

    Place your mobile device in the CoastSnap cradle, with the camera facing through the gap in the cradle and the screen facing you. This is important: if you don’t place your phone in the cradle we can’t use your snap.
    Push your mobile device up against the left side of the phone cradle.
    Take a standard photo with your mobile device camera, without using zoom or filters.
    Carefully remove your mobile device from the phone cradle.
    Share or submit your CoastSnap photo so that we can measure the beach:
    share on Facebook, Instagram or Twitter using the hashtag shown on the sign
    submit your photo via email to Coast.Snap@environment.nsw.gov.au

See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

UNSW Campus

    Welcome to UNSW Australia (The University of New South Wales), one of Australia’s leading research and teaching universities. At UNSW, we take pride in the broad range and high quality of our teaching programs. Our teaching gains strength and currency from our research activities, strong industry links and our international nature; UNSW has a strong regional and global engagement.

    In developing new ideas and promoting lasting knowledge we are creating an academic environment where outstanding students and scholars from around the world can be inspired to excel in their programs of study and research. Partnerships with both local and global communities allow UNSW to share knowledge, debate and research outcomes. UNSW’s public events include concert performances, open days and public forums on issues such as the environment, healthcare and global politics. We encourage you to explore the UNSW website so you can find out more about what we do.

     
  • richardmitnick 3:45 pm on May 19, 2019 Permalink | Reply
Tags: Another current focus area is lasers which can be used to sharpen images by creating artificial stars in a technique called adaptive optics., Detectors are the key element of any astronomical instrument — they are like the eyes that see the light from the Universe., ESO’s Technology Development Programme, Lasers in Adaptive Optics, To avoid wasted efforts we tend to focus our work into themes for example detectors or adaptive optics.

    From ESOblog: “Shaping the future” 


    From ESOblog


    17 May 2019

Pushing the limits of our knowledge about the Universe requires the constant development of new trailblazing technologies. We speak to Mark Casali, former Head of ESO’s Technology Development Programme, to find out more about how ESO keeps itself at the cutting edge of scientific research.

Mark Casali

    Q. Firstly, could you tell us about ESO’s Technology Development Programme?

    A. We are a team of about a dozen people developing new technology that enables ESO to reach its ambitious scientific goals. This means we work on concepts with a fairly long timeframe — looking into completely innovative techniques and developing technology for astronomical instruments that will exist in the future, rather than those that exist today.

    Technology takes a long time to emerge, so we can’t wait to develop it until we want to produce an instrument that does something specific. Rather we need to already have the technology in place before the instrument is developed, in order to speed up production.

    One question people often ask is how we decide what new technology to explore. Of course, there are always more brilliant ideas than funds available, but our research is science-driven, meaning that when developing new technology we always focus on things that will eventually enable us to do the most exciting science.

The main mirror of the Extremely Large Telescope is visible at the centre of this image. It consists of 798 segments that together form a mirror 39 metres wide.
Credit: ESO/L. Calçada/ACe Consortium

    Q. What kind of new technology are you developing at the moment?

    A. To avoid wasted efforts, we tend to focus our work into themes, for example detectors or adaptive optics. This means that we solve many problems in a single area so that area can move forward. This is more efficient than investing in several different areas and making only a small amount of progress in each.

    Detectors are the key element of any astronomical instrument — they are like the eyes that see the light from the Universe. Many very good detectors exist commercially, but often it’s useful to have detectors with really specific properties. So we put a lot of effort into developing new detectors.

    Another current focus area is lasers, which can be used to sharpen images by creating artificial stars, in a technique called adaptive optics. We also develop new mirror technology, for example mirrors that can change shape to create sharper images.

    Glistening against the awesome backdrop of the night sky above ESO’s Paranal Observatory, four laser beams project out into the darkness from Unit Telescope 4 (UT4) of the VLT, a major asset of the Adaptive Optics system.

    3
    The Paranal engineer Gerhard Hüdepohl checks the huge camera attached at the Cassegrain focus of VISTA, directly below the cell of the main mirror. VISTA’s powerful eye is a 3-tonne camera containing 16 special detectors sensitive to infrared light, with a combined total of 67 megapixels. It will have the widest coverage of any astronomical near-infrared camera. VISTA is the largest telescope in the world dedicated to surveying the sky at near-infrared wavelengths. Credit: ESO

    Q. Do you develop all of this new technology here at ESO?

    A. We actually only do about half of the technology development in house, and the other half is contracted out to external industry, universities or institutes around Europe. The decision to work on projects internally or externally depends on factors like whether we have the relevant expertise within our small team.

    Working with external partners is really a collaboration from which both parties benefit. We gain expertise, as well as a good quality product, because industry usually provides well-engineered products that are reliable and don’t break down easily. Also working with industrial partners gives us access to expensive machinery that we wouldn’t be able to afford ourselves for a single project. On the other hand, we are really looking to develop cutting-edge technology, so when we fund a company to develop something new, they really get ahead of the pack and could be the only company to be producing a certain product commercially. It’s a win-win for all involved!

    Working with industrial, academic and institutional partners also allows us to support our Member States, providing them with a return on the money they invest in ESO. Our collaborations with external partners occur through calls to tender and typically last about two years.

    Q. And what about the other way around? Does technology developed within ESO ever go on to have commercial success?

    A. Occasionally, yes! Although we don’t have a specific department for technology transfer, it does happen organically every now and then.

    For example, a couple of years ago we created a special laser called a Raman fibre laser that is used to create an artificial laser guide star by exciting sodium atoms high up in Earth’s atmosphere. We then signed a license agreement with two commercial partners for them to use this novel laser technology.

    3
    ESO’s Raman fibre laser feeding the frequency-doubling Second Harmonic Generation unit, which produces a 20 W laser at 589 nm. The light is used to create an artificial laser guide star by exciting sodium atoms in the Earth’s atmosphere at 90 km altitude. ESO has signed an agreement to license its cutting-edge laser technology to two commercial partners, Toptica Photonics and MPB Communications. This marks the first time that ESO has transferred patented technology and know-how to the private sector, offering significant opportunities both for business and for ESO. Credit: ESO

    Q. The Technology Development Programme sounds like an interesting place to work. What experience do you have that made you a good fit to oversee such a department?

    A. After a PhD in astronomy and then a stint as a researcher, I got involved in telescope construction projects. I believe that this combination of scientific and technical understanding was really helpful in my role.

    I do agree that this is a great team to work in! Not only is it really interesting to bring ideas for revolutionary technology through to actual deliverables, but I also feel that it’s a very important part of ESO’s work. Without new and improved technology, we wouldn’t be able to continue to make new discoveries and remain at the forefront of astronomical research.

    Q. Isn’t ESO involved in the European Union’s ATTRACT initiative to develop new technology?

    A. ESO is indeed one of the partners in the ATTRACT consortium, which consists mostly of large European infrastructures like CERN and EMBL, as well as universities and some industrial partners. The consortium recently received funding from the European Commission to run a competition for ideas in detector and imaging technology. This could be applied to many different fields, for example detecting light for astronomical or medical applications, or imaging particles for particle physics applications.

    We received over 1200 technology development ideas, of which 170 will be awarded 100 000 euros each. After a year and a half of work on their projects, a few of these will receive funding for a scale-up project. Hopefully these will include some astronomy-related projects!

    Through ATTRACT, ESO is able to work with other big organisations to develop the European economy, and I believe that the initiative is improving lives by creating products, services, jobs and even new companies.

    Q. Is there anything else you’d like to mention?

    A. Two things. The first being that the future of technology development at ESO is very bright; it’s likely that our programme will be supported and continue to grow long into the future. The second is that anybody can look at the list of ESO-developed technologies online, which we keep updated so that the public can see where their money goes.

    See the full article here .



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Visit ESO in Social Media-

    Facebook

    Twitter

    YouTube

    ESO Bloc Icon

    ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 16 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom, along with the host state of Chile. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory, and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope, and the VLT Survey Telescope is the largest telescope designed exclusively to survey the skies in visible light. ESO is a major partner in ALMA, the largest astronomical project in existence. And on Cerro Armazones, close to Paranal, ESO is building the 39-metre European Extremely Large Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.

    ESO VLT at Cerro Paranal in the Atacama Desert, elevation 2,635 m (8,645 ft), seen from above. The four Unit Telescopes are:
    •ANTU (UT1; The Sun),
    •KUEYEN (UT2; The Moon),
    •MELIPAL (UT3; The Southern Cross), and
    •YEPUN (UT4; Venus, as evening star).
    Credit: J.L. Dauvergne & G. Hüdepohl, atacama photo


    ESO LaSilla
    ESO/Cerro LaSilla 600 km north of Santiago de Chile at an altitude of 2400 metres.

    ESO VLT 4 lasers on Yepun


    ESO Vista Telescope
    ESO/Vista Telescope at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level.

    ESO NTT
    ESO/NTT at Cerro LaSilla 600 km north of Santiago de Chile at an altitude of 2400 metres.

    ESO VLT Survey telescope
    VLT Survey Telescope at Cerro Paranal with an elevation of 2,635 metres (8,645 ft) above sea level.

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    ESO/E-ELT, to be built on top of Cerro Armazones in the Atacama Desert of northern Chile, at the summit of the mountain at an altitude of 3,060 metres (10,040 ft).


    ESO APEX
    APEX Atacama Pathfinder 5,100 meters above sea level, at the Llano de Chajnantor Observatory in the Atacama desert.

    Leiden MASCARA instrument, La Silla, located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft)

    Leiden MASCARA cabinet at ESO Cerro la Silla located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft)

    ESO Next Generation Transit Survey at Cerro Paranal, 2,635 metres (8,645 ft) above sea level

    ESO Speculoos telescopes: four 1-metre-diameter robotic telescopes at ESO’s Paranal Observatory, 2,635 metres (8,645 ft) above sea level


    ESO TAROT telescope at Paranal, 2,635 metres (8,645 ft) above sea level

    ESO ExTrA telescopes at Cerro LaSilla at an altitude of 2400 metres

     
  • richardmitnick 3:05 pm on May 19, 2019 Permalink | Reply
    Tags: ,   

    From CERN ATLAS: “Exploring the scientific potential of the ATLAS experiment at the High-Luminosity LHC” 

    CERN/ATLAS detector

    CERN ATLAS Higgs Event

    CERN ATLAS another view Image Claudia Marcelloni ATLAS CERN


    CERN ATLAS New II Credit CERN SCIENCE PHOTO LIBRARY


    From CERN ATLAS

    17th May 2019

    1
    Display of a simulated HL-LHC collision event in an upgraded ATLAS detector. The event has an average of 200 collisions per particle bunch crossing. (Image: ATLAS Collaboration/CERN)

    The High-Luminosity upgrade of the Large Hadron Collider (HL-LHC) is scheduled to begin colliding protons in 2026. This major improvement to CERN’s flagship accelerator will increase the total number of collisions in the ATLAS experiment by a factor of 10. To cope with this increase, ATLAS is preparing a complex series of upgrades including the installation of new detectors using state-of-the-art technology, the replacement of ageing electronics, and the upgrade of its trigger and data acquisition system.

    What discovery opportunities will be in reach for ATLAS with the HL-LHC upgrade? How precisely will physicists be able to measure properties of the Higgs boson? How deeply will they be able to probe Standard Model processes for signs of new physics? The ATLAS Collaboration has carried out and released dozens of studies to answer these questions – the results of which have been valuable input to discussions held this week at the Symposium on the European Strategy for Particle Physics, in Granada, Spain.

    “Studying the discovery potential of the HL-LHC was a fascinating task associated with the ATLAS upgrades,” says Simone Pagan Griso, ATLAS Upgrade Physics Group co-convener. “The results are informative not only to the ATLAS Collaboration but to the entire global particle-physics community, as they reappraise the opportunities and challenges that lie ahead of us.” Indeed, these studies set important benchmarks for forthcoming generations of particle physics experiments.

    Pagan Griso worked with Leandro Nisati, the ATLAS representative on the HL-LHC Physics Potential ‘Yellow Report’ steering committee, and fellow ATLAS Upgrade Physics Group co-convener, Sarah Demers, to coordinate these studies for the collaboration. “A CERN Yellow Report, with publication in its final form forthcoming, will combine ATLAS’ results with those from other LHC experiments, as well as input from theoretical physicists,” says Nisati.

    Estimating the performance of a machine that has not yet been built, which will operate under circumstances that have never been confronted, was a complex task for the ATLAS team. “We took two parallel approaches,” explains Demers. “For one set of analysis projections, we began with simulations of the challenging HL-LHC experimental conditions. These simulated physics events were then passed through custom software to show us how the particles would interact with an upgraded ATLAS detector. We then developed new algorithms to try to pick the physics signals from the challenging amount of background events.” Dealing with abundant background will be a common complication for HL-LHC operation.
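
    As a rough illustration of why more data helps (a back-of-envelope sketch, not an ATLAS projection; the yields below are invented), one can scale an assumed signal and background count by the factor-of-ten increase in collisions and compare the naive s/sqrt(s+b) statistical significance:

        import math

        # Hypothetical event yields for some search with the current Run-2 dataset.
        # These numbers are illustrative only and are not taken from any ATLAS result.
        signal_run2 = 30.0
        background_run2 = 900.0
        luminosity_scale = 10.0   # HL-LHC delivers roughly ten times more collisions

        def significance(s, b):
            """Naive statistics-only estimate of significance, s / sqrt(s + b)."""
            return s / math.sqrt(s + b)

        z_run2 = significance(signal_run2, background_run2)
        z_hllhc = significance(signal_run2 * luminosity_scale,
                               background_run2 * luminosity_scale)

        print(f"Run 2 dataset:  {z_run2:.2f} sigma")
        print(f"HL-LHC dataset: {z_hllhc:.2f} sigma (grows like sqrt(10) when systematics are ignored)")

    Real projections are far more involved, precisely because systematic uncertainties and the harsher pile-up conditions do not simply scale away with luminosity.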

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    CERN Courier

    Quantum Diaries
    QuantumDiaries

    CERN map


    CERN LHC Grand Tunnel
    CERN LHC particles


     
  • richardmitnick 2:20 pm on May 19, 2019 Permalink | Reply
    Tags: "Manipulating atoms one at a time with an electron beam", , Developing a method that can reposition atoms with a highly focused electron beam and control their exact location and bonding orientation., , Scanning transmission electron microscopes, Ultimately the goal is to move multiple atoms in complex ways.   

    From MIT News: “Manipulating atoms one at a time with an electron beam” 

    MIT News

    From MIT News

    May 17, 2019
    David L. Chandler

    1
    This diagram illustrates the controlled switching of positions of a phosphorus atom within a layer of graphite by using an electron beam, as was demonstrated by the research team. Courtesy of the researchers.

    2
    Microscope images are paired with diagrams illustrating the controlled movement of atoms within a graphite lattice, using an electron beam to manipulate the positions of atoms one at a time. Courtesy of the researchers.

    New method could be useful for building quantum sensors and computers.

    The ultimate degree of control for engineering would be the ability to create and manipulate materials at the most basic level, fabricating devices atom by atom with precise control.

    Now, scientists at MIT, the University of Vienna, and several other institutions have taken a step in that direction, developing a method that can reposition atoms with a highly focused electron beam and control their exact location and bonding orientation. The finding could ultimately lead to new ways of making quantum computing devices or sensors, and usher in a new age of “atomic engineering,” they say.

    The advance is described today in the journal Science Advances, in a paper by MIT professor of nuclear science and engineering Ju Li, graduate student Cong Su, Professor Toma Susi of the University of Vienna, and 13 others at MIT, the University of Vienna, Oak Ridge National Laboratory, and in China, Ecuador, and Denmark.

    “We’re using a lot of the tools of nanotechnology,” explains Li, who holds a joint appointment in materials science and engineering. But in the new research, those tools are being used to control processes that are yet an order of magnitude smaller. “The goal is to control one to a few hundred atoms, to control their positions, control their charge state, and control their electronic and nuclear spin states,” he says.

    While others have previously manipulated the positions of individual atoms, even creating a neat circle of atoms on a surface, that process involved picking up individual atoms on the needle-like tip of a scanning tunneling microscope and then dropping them in position, a relatively slow mechanical process. The new process manipulates atoms using a relativistic electron beam in a scanning transmission electron microscope (STEM), so it can be fully electronically controlled by magnetic lenses and requires no mechanical moving parts. That makes the process potentially much faster, and thus could lead to practical applications.

    Custom-designed scanning transmission electron microscope at Cornell University by David Muller/Cornell University

    3
    MIT scanning transmission electron microscope

    Using electronic controls and artificial intelligence, “we think we can eventually manipulate atoms at microsecond timescales,” Li says. “That’s many orders of magnitude faster than we can manipulate them now with mechanical probes. Also, it should be possible to have many electron beams working simultaneously on the same piece of material.”

    “This is an exciting new paradigm for atom manipulation,” Susi says.

    Computer chips are typically made by “doping” a silicon crystal with other atoms needed to confer specific electrical properties, thus creating “defects” in the material — regions that do not preserve the perfectly orderly crystalline structure of the silicon. But that process is scattershot, Li explains, so there’s no way of controlling with atomic precision where those dopant atoms go. The new system allows for exact positioning, he says.

    The same electron beam can be used for knocking an atom both out of one position and into another, and then “reading” the new position to verify that the atom ended up where it was meant to, Li says. While the positioning is essentially determined by probabilities and is not 100 percent accurate, the ability to determine the actual position makes it possible to select out only those that ended up in the right configuration.
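
    That post-selection step can be pictured with a toy Monte Carlo, a sketch under invented probabilities rather than the researchers’ actual model: each beam “kick” moves the atom to the intended lattice site only with some probability, the new position is then “read” from the image, and only the trials that end in the target configuration are kept.

        import random

        random.seed(1)

        P_INTENDED = 0.6   # hypothetical chance that one kick lands the atom where intended
        N_TRIALS = 10_000

        def kick_and_read(target_site):
            """One electron-beam kick followed by imaging the atom's new position."""
            if random.random() < P_INTENDED:
                return target_site  # the atom ended up where we wanted it
            # Otherwise it hopped to some other site (or left the lattice entirely).
            return random.choice(["neighbour_A", "neighbour_B", "ejected"])

        kept = sum(kick_and_read("target") == "target" for _ in range(N_TRIALS))
        print(f"fraction of trials kept after reading the position: {kept / N_TRIALS:.2f}")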

    Atomic soccer

    The power of the very narrowly focused electron beam, about as wide as an atom, knocks an atom out of its position, and by selecting the exact angle of the beam, the researchers can determine where it is most likely to end up. “We want to use the beam to knock out atoms and essentially to play atomic soccer,” dribbling the atoms across the graphene field to their intended “goal” position, he says.

    “Like soccer, it’s not deterministic, but you can control the probabilities,” he says. “Like soccer, you’re always trying to move toward the goal.”

    In the team’s experiments, they primarily used phosphorus atoms, a commonly used dopant, in a sheet of graphene, a two-dimensional sheet of carbon atoms arranged in a honeycomb pattern. The phosphorus atoms end up substituting for carbon atoms in parts of that pattern, thus altering the material’s electronic, optical, and other properties in ways that can be predicted if the positions of those atoms are known.

    Ultimately, the goal is to move multiple atoms in complex ways. “We hope to use the electron beam to basically move these dopants, so we could make a pyramid, or some defect complex, where we can state precisely where each atom sits,” Li says.

    This is the first time electronically distinct dopant atoms have been manipulated in graphene. “Although we’ve worked with silicon impurities before, phosphorus is both potentially more interesting for its electrical and magnetic properties, but as we’ve now discovered, also behaves in surprisingly different ways. Each element may hold new surprises and possibilities,” Susi adds.

    The system requires precise control of the beam angle and energy. “Sometimes we have unwanted outcomes if we’re not careful,” he says. For example, sometimes a carbon atom that was intended to stay in position “just leaves,” and sometimes the phosphorus atom gets locked into position in the lattice, and “then no matter how we change the beam angle, we cannot affect its position. We have to find another ball.”

    Theoretical framework

    In addition to detailed experimental testing and observation of the effects of different angles and positions of the beams and graphene, the team also devised a theoretical basis to predict the effects, called primary knock-on space formalism, that tracks the momentum of the “soccer ball.” “We did these experiments and also gave a theoretical framework on how to control this process,” Li says.

    The cascade of effects that results from the initial beam takes place over multiple time scales, Li says, which made the observations and analysis tricky to carry out. The actual initial collision of the relativistic electron (moving at about 45 percent of the speed of light) with an atom takes place on a scale of zeptoseconds — trillionths of a billionth of a second — but the resulting movement and collisions of atoms in the lattice unfold over time scales of picoseconds or longer — billions of times longer.
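
    As a back-of-envelope check of that quoted speed (our own arithmetic, not a number from the paper), an electron moving at 45 percent of the speed of light carries a kinetic energy of roughly 60 keV, which is in the range typical of scanning transmission electron microscopes:

        import math

        M_E_C2_KEV = 511.0   # electron rest energy in keV
        beta = 0.45          # speed quoted in the article, as a fraction of c

        gamma = 1.0 / math.sqrt(1.0 - beta**2)
        kinetic_energy_kev = (gamma - 1.0) * M_E_C2_KEV

        print(f"gamma = {gamma:.4f}")
        print(f"kinetic energy = {kinetic_energy_kev:.0f} keV")  # about 61 keV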

    Dopant atoms such as phosphorus have a nonzero nuclear spin, which is a key property needed for quantum-based devices because that spin state is easily affected by elements of its environment such as magnetic fields. So the ability to place these atoms precisely, in terms of both position and bonding, could be a key step toward developing quantum information processing or sensing devices, Li says.

    “This is an important advance in the field,” says Alex Zettl, a professor of physics at the University of California at Berkeley, who was not involved in this research. “Impurity atoms and defects in a crystal lattice are at the heart of the electronics industry. As solid-state devices get smaller, down to the nanometer size scale, it becomes increasingly important to know precisely where a single impurity atom or defect is located, and what are its atomic surroundings. An extremely challenging goal is having a scalable method to controllably manipulate or place individual atoms in desired locations, as well as predicting accurately what effect that placement will have on device performance.”

    Zettl says that these researchers “have made a significant advance toward this goal. They use a moderate energy focused electron beam to coax a desirable rearrangement of atoms, and observe in real-time, at the atomic scale, what they are doing. An elegant theoretical treatise, with impressive predictive power, complements the experiments.”

    Besides the leading MIT team, the international collaboration included researchers from the University of Vienna, the University of Chinese Academy of Sciences, Aarhus University in Denmark, National Polytechnical School in Ecuador, Oak Ridge National Laboratory, and Sichuan University in China. The work was supported by the National Science Foundation, the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies, the Austrian Science Fund, the European Research Council, the Danish Council for Independent Research, the Chinese Academy of Sciences, and the U.S. Department of Energy.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 1:29 pm on May 19, 2019 Permalink | Reply
    Tags: , , , RNA messages in the cell drive function, , Today there is no medical treatment for autism.   

    From The Conversation: “New autism research on single neurons suggests signaling problems in brain circuits” 

    Conversation
    From The Conversation

    1
    Artist impression of neurons communicating in the brain. whitehoune/Shutterstock.com

    May 17, 2019
    Dmitry Velmeshev

    Autism affects at least 2% of children in the United States – an estimated 1 in 59. This is challenging for both the patients and their parents or caregivers. What’s worse is that today there is no medical treatment for autism. That is in large part because we still don’t fully understand how autism develops and alters normal brain function.

    One of the main reasons it is hard to decipher the processes that cause the disease is that it is highly variable. So how do we understand how autism changes the brain?

    Using a new technology called single-nucleus RNA sequencing, we analyzed the chemistry inside specific brain cells from both healthy people and those with autism and identified dramatic differences that may cause this disease. These autism-specific differences could provide valuable new targets for drug development.

    I am a neuroscientist in the lab of Arnold Kriegstein, a researcher who studies human brain development at the University of California, San Francisco. Since I was a teenager, I have been fascinated by the human brain and computers and the similarities between the two. The computer works by directing a flow of information through interconnected electronic elements called transistors. Wiring together many of these small elements creates a complex machine capable of functions from processing a credit card payment to autopiloting a rocket ship. Though it is an oversimplification, the human brain is, in many respects, like a computer. It has connected cells called neurons that process and direct information flow – a process called synaptic transmission in which one neuron sends a signal to another.

    When I started doing science professionally, I realized that many diseases of the human brain are due to specific types of neurons malfunctioning, just like a transistor on a circuit board can malfunction either because it was not manufactured properly or due to wear and tear.

    RNA messages in the cell drive function

    Every cell in any living organism is made of the same types of biological molecules. Molecules called proteins create cellular structures, catalyze chemical reactions and perform other functions within the cell.

    Two related types of molecules – DNA and RNA – are made of sequences of just four basic elements and used by the cell to store information. DNA is used for hereditary long-term information storage; RNA is a short-lived message that signals how active a gene is and how much of a particular protein the cell needs to make. By counting the number of RNA molecules carrying the same message, researchers can get insights into the processes happening inside the cell.

    When it comes to the brain, scientists can measure RNA inside individual cells, identify the type of brain cell and analyze the processes taking place inside it – for instance, synaptic transmission. By comparing RNA analyses of brain cells from healthy people not diagnosed with any brain disease with those done in patients with autism, researchers like myself can figure out which processes are different and in which cells.
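
    Once per-cell RNA counts are available (which, as described below, required new technology), that comparison can be boiled down to a simple statistical test per gene and per cell type. The snippet below is a minimal illustration with made-up counts and a generic rank test, not the authors’ actual analysis pipeline:

        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(42)

        # Hypothetical per-cell counts of one RNA message in one cell type
        # (say, upper-layer cortical neurons); the numbers are invented.
        control_counts = rng.poisson(lam=8.0, size=200)  # cells from people without autism
        autism_counts = rng.poisson(lam=5.0, size=200)   # cells from people with autism

        stat, p_value = mannwhitneyu(control_counts, autism_counts, alternative="two-sided")

        print(f"median control: {np.median(control_counts):.0f} counts per cell")
        print(f"median autism:  {np.median(autism_counts):.0f} counts per cell")
        print(f"Mann-Whitney p-value: {p_value:.2e}")

    Repeating such a comparison across thousands of genes and every cell type, with corrections for multiple testing, is what lets researchers say which cell types are most altered.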

    Until recently, however, simultaneously measuring all RNA molecules in a single cell was not possible. Researchers could perform these analyses only from a piece of brain tissue containing millions of different cells. This was complicated further because it was possible to collect these tissue samples only from patients who have already died.

    New tech pinpoints neurons affected in autism

    However, recent advances in technology allowed our team to measure RNA that is contained within the nucleus of a single brain cell. The nucleus of a cell contains the genome, as well as newly synthesized RNA molecules. This structure remains intact even after the death of a cell and thus can be isolated from dead (also called postmortem) brain tissue.

    3
    Neurons in the upper (left) and deep layers of the human developing cortex. Chen & Kriegstein, 2015 Science/American Association for the Advancement of Science, CC BY-SA

    By analyzing single cellular nuclei from postmortem brain tissue of people with and without autism, we profiled the RNA within 100,000 single brain cells from many such individuals.

    Comparing RNA in specific types of brain cells between the individuals with and without autism, we found that some specific cell types are more altered than others in the disease.

    In particular, we found [Science] that certain neurons, called upper-layer cortical neurons, which exchange information between different regions of the cerebral cortex, contain abnormal amounts of RNA encoding proteins located at the synapse – the points of contact between neurons where signals are transmitted from one nerve cell to another. These changes were detected in regions of the cortex vital for higher-order cognitive functions, such as social interactions.

    This suggests that synapses in these upper-layer neurons are malfunctioning, leading to changes in brain functions. In our study, we showed that upper-layer neurons had very different quantities of certain RNA compared to the same cells in healthy people. That was especially true in autism patients who suffered from the most severe symptoms, like not being able to speak.

    4
    New results suggest that the synapses formed by neurons in the upper layers of the cerebral cortex are not functioning correctly. CI Photos/Shutterstock.com

    Glial cells are also affected in autism

    In addition to neurons, which are directly responsible for synaptic communication, we also saw changes in the RNA of non-neuronal cells called glia. Glia play important roles in regulating the behavior of neurons, including how they send and receive messages via the synapse. These cells may also play an important role in causing autism.

    So what do these findings mean for future medical treatment of autism?

    From these results, my colleagues and I understand that the same parts of the synaptic machinery that are critical for sending signals and transmitting information in upper-layer neurons might be broken in many autism patients, leading to abnormal brain function.

    If we can repair these parts, or fine-tune neuronal function to a near-normal state, it might offer dramatic relief of symptoms for the patients. Studies are underway to deliver drugs and gene therapy to specific cell types in the brain, and many scientists including myself believe such approaches will be indispensable for future treatments of autism.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Conversation launched as a pilot project in October 2014. It is an independent source of news and views from the academic and research community, delivered direct to the public.
    Our team of professional editors works with university and research institute experts to unlock their knowledge for use by the wider public.
    Access to independent, high-quality, authenticated, explanatory journalism underpins a functioning democracy. Our aim is to promote better understanding of current affairs and complex issues, and hopefully to allow for a better quality of public discourse and conversation.

     
  • richardmitnick 1:03 pm on May 19, 2019 Permalink | Reply
    Tags: , , , , Reversing traditional plasma shaping provides greater stability for fusion reactions.   

    From MIT News: “Steering fusion’s ‘D-turn'” 

    MIT News

    From MIT News

    May 17, 2019
    Paul Rivenberg | Plasma Science and Fusion Center

    1
    Cross sections of pressure profiles are shown in two different tokamak plasma configurations (the center of the tokamak doughnut is to the left of these). The discharges have high pressure in the core (yellow) that decreases to low pressure (blue) at the edge. Researchers achieved substantial high-pressure operation of reverse-D plasmas at the DIII-D National Fusion Facility.

    Image: Alessandro Marinoni/MIT PSFC

    Research scientist Alessandro Marinoni shows that reversing traditional plasma shaping provides greater stability for fusion reactions.

    Trying to duplicate the power of the sun for energy production on Earth has challenged fusion researchers for decades. One path to endless carbon-free energy has focused on heating and confining plasma fuel in tokamaks, which use magnetic fields to keep the turbulent plasma circulating within a doughnut-shaped vacuum chamber and away from the walls. Fusion researchers have favored contouring these tokamak plasmas into a triangular or D shape, with the curvature of the D stretching away from the center of the doughnut, which allows the plasma to withstand the intense pressures inside the device better than a circular shape would.

    Led by research scientists Alessandro Marinoni of MIT’s Plasma Science and Fusion Center (PSFC) and Max Austin, of the University of Texas at Austin, researchers at the DIII-D National Fusion Facility have discovered promising evidence that reversing the conventional shape of the plasma in the tokamak chamber can create a more stable environment for fusion to occur, even under high pressure. The results were recently published in Physical Review Letters and Physics of Plasmas.

    3
    DIII-D National Fusion Facility. General Atomics

    Marinoni first experimented with the “reverse-D” shape, also known as “negative triangularity,” while pursuing his PhD on the TCV tokamak at Ecole Polytechnique Fédérale de Lausanne, Switzerland.

    4
    The Tokamak à configuration variable (TCV, literally “variable configuration tokamak”) is a Swiss research fusion reactor of the École polytechnique fédérale de Lausanne. Its distinguishing feature over other tokamaks is that its torus section is three times higher than it is wide. This allows the study of several plasma shapes, which is particularly relevant since the shape of the plasma is linked to the performance of the reactor. The TCV was set up in November 1992.

    The TCV team was able to show that negative triangularity helps to reduce plasma turbulence, thus increasing confinement, a key to sustaining fusion reactions.
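
    The term “triangularity” can be made concrete with a standard analytic boundary shape, a Miller-type parameterization offered here as a generic sketch rather than the exact shapes run on TCV or DIII-D: flipping the sign of the triangularity parameter delta turns a conventional D into a reverse-D.

        import numpy as np

        def boundary(theta, R0=1.7, a=0.6, kappa=1.8, delta=0.4):
            """Miller-type plasma boundary: (R, Z) at poloidal angle theta.
            R0 = major radius, a = minor radius, kappa = elongation, delta = triangularity."""
            R = R0 + a * np.cos(theta + np.arcsin(delta) * np.sin(theta))
            Z = kappa * a * np.sin(theta)
            return R, Z

        top = np.pi / 2  # poloidal angle at the top of the plasma
        for delta, label in ((+0.4, "conventional D"), (-0.4, "reverse-D")):
            R_top, _ = boundary(top, delta=delta)
            print(f"{label} (delta = {delta:+.1f}): top of the plasma sits at R = {R_top:.2f} m")
            # Positive delta pulls the tip inward toward the doughnut's hole;
            # negative delta pushes it outward, producing the backwards-facing D.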

    “Unfortunately, at that time, TCV was not equipped to operate at high plasma pressures with the ion temperature being close to that of electrons,” notes Marinoni, “so we couldn’t investigate regimes that are directly relevant to fusion plasma conditions.”

    Growing up outside Milan, Marinoni developed an interest in fusion through an early passion for astrophysical phenomena, hooked in preschool by the compelling mysteries of black holes.

    “It was fascinating because black holes can trap light. At that time I was just a little kid. As such, I couldn’t figure out why the light could be trapped by the gravitational force exerted by black holes, given that on Earth nothing like that ever happens.”

    As he matured he joined a local amateur astronomy club, but eventually decided black holes would be a hobby, not his vocation.

    “My job would be to try producing energy through nuclear fission or fusion; that’s the reason why I enrolled in the nuclear engineering program in the Polytechnic University of Milan.”

    After studies in Italy and Switzerland, Marinoni seized the opportunity to join the PSFC’s collaboration with the DIII-D tokamak in San Diego, under the direction of MIT professor of physics Miklos Porkolab. As a postdoc, he used MIT’s phase contrast imaging diagnostic to measure plasma density fluctuations in DIII-D, later continuing work there as a PSFC research scientist.

    Max Austin, after reading the negative triangularity results from TCV, decided to explore the possibility of running similar experiments on the DIII-D tokamak to confirm the stabilizing effect of negative triangularity. For the experimental proposal, Austin teamed up with Marinoni and together they designed and carried out the experiments.

    “The DIII-D research team was working against decades-old assumptions,” says Marinoni. “It was generally believed that plasmas at negative triangularity could not hold high enough plasma pressures to be relevant for energy production, because of macroscopic scale Magneto-Hydro-Dynamics (MHD) instabilities that would arise and destroy the plasma. MHD is a theory that governs the macro-stability of electrically conducting fluids such as plasmas. We wanted to show that under the right conditions the reverse-D shape could sustain MHD stable plasmas at high enough pressures to be suitable for a fusion power plant, in some respects even better than a D-shape.”

    While D-shaped plasmas are the standard configuration, they have their own challenges. They are affected by high levels of turbulence, which hinders them from achieving the high pressure levels necessary for economic fusion. Researchers have solved this problem by creating a narrow layer near the plasma boundary where turbulence is suppressed by large flow shear, thus allowing inner regions to attain higher pressure. In the process, however, a steep pressure gradient develops in the outer plasma layers, making the plasma susceptible to instabilities called edge localized modes that, if sufficiently powerful, would expel a substantial fraction of the built-up plasma energy, thus damaging the tokamak chamber walls.

    DIII-D was designed for the challenges of creating D-shaped plasmas. Marinoni praises the DIII-D control group for “working hard to figure out a way to run this unusual reverse-D shape plasma.”

    The effort paid off. DIII-D researchers were able to show that even at higher pressures, the reverse-D shape is as effective at reducing turbulence in the plasma core as it was in the low-pressure TCV environment. Despite previous assumptions, DIII-D demonstrated that plasmas at reversed triangularity can sustain pressure levels suitable for a tokamak-based fusion power plant; additionally, they can do so without the need to create a steep pressure gradient near the edge that would lead to machine-damaging edge localized modes.

    Marinoni and colleagues are planning future experiments to further demonstrate the potential of this approach in an even more fusion-power relevant magnetic topology, based on a “diverted” tokamak concept. He has tried to interest other international tokamaks in experimenting with the reverse configuration.

    “Because of hardware issues, only a few tokamaks can create negative triangularity plasmas; tokamaks like DIII-D, that are not designed to produce plasmas at negative triangularity, need a significant effort to produce this plasma shape. Nonetheless, it is important to engage the fusion community worldwide to more fully establish the data base on the benefits of this shape.”

    Marinoni looks forward to where the research will take the DIII-D team. He also looks back to his introduction to the tokamak, which has become the focus of his research.

    “When I first learned about tokamaks I thought, ‘Oh, cool! It’s important to develop a new source of energy that is carbon free!’ That is how I ended up in fusion.”

    This research is sponsored by the U.S. Department of Energy Office of Science’s Fusion Energy Sciences, using their DIII-D National Fusion Facility.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     