Tagged: ANL-ALCF

• richardmitnick 4:06 pm on September 26, 2018
Tags: ANL-ALCF

    From ASCR Discovery: “Superstars’ secrets” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    September 2018

    Superstars’ secrets

    Supercomputing power and algorithms are helping astrophysicists untangle giant stars’ brightness, temperature and chemical variations.

    A frame from an animated global radiation hydrodynamic simulation of an 80-solar-mass star envelope, performed on the Mira supercomputer at the Argonne Leadership Computing Facility (ALCF).

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Seen here: turbulent structures resulting from convection around the iron opacity peak region. Density is highest near the star’s core (yellow). The other colors represent low-density winds launched near the surface. Simulation led by University of California at Santa Barbara. Visualization courtesy of Joseph A. Insley/ALCF.

    Since the Big Bang nearly 14 billion years ago, the universe has evolved and expanded, punctuated by supernova explosions and influenced by the massive stars that spawn them. These stars, many times the size and brightness of the sun, have relatively short lives and turbulent deaths that produce gamma ray bursts, neutron stars, black holes and nebulae, the colorful chemical incubators for new stars.

    Although massive stars are important to understanding astrophysics, the largest ones – at least 20 times the sun’s mass – are rare and highly variable. Their brightness changes by as much as 30 percent, notes Lars Bildsten of the Kavli Institute for Theoretical Physics (KITP) at University of California, Santa Barbara (UCSB). “It rattles around on a timescale of days to months, sometimes years.” Because of the complicated interactions between the escaping light and the gas within the star, scientists couldn’t explain or predict this stellar behavior.

    But with efficient algorithms and the power of the Mira IBM Blue Gene/Q supercomputer at the Argonne Leadership Computing Facility, a Department of Energy (DOE) Office of Science user facility, Bildsten and his colleagues have begun to model the variability in three dimensions across an entire massive star. With an allocation of 60 million processor hours from DOE’s INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program, the team aims to make predictions about these stars that observers can test. They’ve published the initial results from these large-scale simulations – linking brightness changes in massive stars with temperature fluctuations on their surfaces – in the Sept. 27 issue of the journal Nature.

    Yan-Fei Jiang, a KITP postdoctoral scholar, leads these large-scale stellar simulations. They’re so demanding that astrophysicists often must limit the models – either by focusing on part of a star or by using simplifications and approximations that allow them to get a broad yet general picture of a whole star.

    The team started with one-dimensional computational models of massive stars using the open-source code MESA (Modules for Experiments in Stellar Astrophysics). Astrophysicists have used such methods to examine normal convection in stars for decades. But with massive stars, the team hit limits. The bodies are so bright and emit so much radiation that the 1-D models couldn’t capture the violent instability in some regions of the star, Bildsten says.

    Matching 1-D models to observations required researchers to hand-tune various features, Jiang says. “They had no predictive power for these massive stars. And that’s exactly what good theory should do: explain existing data and predict new observations.”

    To calculate the extreme turbulence in these stars, Jiang’s team needed a more complex three-dimensional model and high-performance computers. As a Princeton University Ph.D. student, Jiang had worked with James Stone on a program that could handle these turbulent systems. Stone’s group had developed the Athena++ code to study the dynamics of magnetized plasma, a charged, flowing soup that occurs in stars and many other astronomical objects. While at Princeton, Jiang had added radiation transport algorithms.

    That allowed the team to study accretion disks – accumulated dust and other matter – around the edges of black holes, a project that received a 2016 INCITE allocation of 47 million processor hours. Athena++ has been used for hundreds of other projects, Stone says.

    Stone is part of the current INCITE team, which also includes UCSB’s Omer Blaes, Matteo Cantiello of the Flatiron Institute in New York and Eliot Quataert, University of California, Berkeley.

    In their Nature paper, the group has linked variations in a massive star’s brightness with changes in its surface temperature. Hotter blue stars show smaller fluctuations, Bildsten says. “As a star becomes redder (and cooler), it becomes more variable. That’s a pretty firm prediction from what we’ve found, and that’s going to be what’s exciting to test in detail.”

    Another factor in teasing out massive stars’ behaviors could be the quantity of heavy elements in their atmospheres. Fusion of the lightest hydrogen and helium atoms in massive stars produces heavier atoms, including carbon, oxygen, silicon and iron. When supernovae explode, these bulkier chemical elements are incorporated into new stars. The new elements are more opaque than hydrogen and helium, so they capture and scatter radiation rather than letting photons pass through. For its code to model massive stars, the team needed to add opacity data for these other elements. “The more opaque it is, the more violent these instabilities are likely to be,” Bildsten says. The team is just starting to explore how this chemistry influences the stars’ behavior.
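A back-of-the-envelope way to see why opacity matters is the Eddington luminosity, the brightness at which the push of radiation on stellar gas balances gravity. The standard formula below is added here for illustration (it is textbook physics, not an equation taken from the team's paper):

```latex
% Eddington luminosity for a star of mass M and opacity \kappa:
% the luminosity at which radiative force on the gas balances gravity
L_{\mathrm{Edd}} = \frac{4\pi G M c}{\kappa}
```

Because L_Edd falls as the opacity κ rises, layers where heavy elements drive up the opacity — such as the iron opacity peak highlighted in the simulation image above — can locally exceed their Eddington limit even when the star as a whole does not, which is roughly why higher opacity tends to make the instabilities more violent.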

The scientists also are examining how the brightness variations connect to mass loss. Wolf-Rayet stars are an extreme example of this process: they have shed their hydrogen-rich outer envelopes, leaving only helium and heavier elements. These short-lived objects burn for a mere 5 million years, compared with 10 billion years for the sun. Over that time, they shed mass and material before collapsing into a neutron star or a black hole. Jiang and his group are working with UC Berkeley postdoctoral scholar Stephen Ro to diagnose that mass-loss mechanism.

    These 3-D simulations are just the beginning. The group’s current model doesn’t include rotation or magnetic fields, Jiang notes, factors that can be important for studying properties of massive stars such as gamma ray burst-related jets, the brightest explosions in the universe.

    The team also hopes to use its 3-D modeling lessons to improve the faster, cheaper 1-D algorithms – codes Bildsten says helped the team choose which systems to model in 3-D and could point to systems for future investigations.

    Three-dimensional models, Bildsten notes, “are precious simulations, so you want to know that you’re doing the one you want.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

ASCR Discovery is a publication of the U.S. Department of Energy.

• richardmitnick 3:39 pm on September 25, 2018
Tags: ANL-ALCF, Argonne's Theta supercomputer, Aurora exascale supercomputer

    From Argonne National Laboratory ALCF: “Argonne team brings leadership computing to CERN’s Large Hadron Collider” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne National Laboratory ALCF

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    September 25, 2018
    Madeleine O’Keefe

    CERN’s Large Hadron Collider (LHC), the world’s largest particle accelerator, expects to produce around 50 petabytes of data this year. This is equivalent to nearly 15 million high-definition movies—an amount so enormous that analyzing it all poses a serious challenge to researchers.
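As a quick back-of-the-envelope check of that comparison (the per-movie size below is an assumption for illustration, not a figure from the article), 50 petabytes spread over roughly 15 million files works out to a few gigabytes per movie:

```python
# Rough arithmetic behind the "15 million HD movies" comparison.
# Assumption: an HD movie is on the order of 3-4 GB; not a number from the article.
PETABYTE = 10**15  # bytes (decimal convention)
GIGABYTE = 10**9

lhc_data_2018 = 50 * PETABYTE          # ~50 PB expected this year
assumed_movie_size = 3.5 * GIGABYTE    # hypothetical average HD movie

equivalent_movies = lhc_data_2018 / assumed_movie_size
print(f"{equivalent_movies / 1e6:.1f} million movies")  # ~14.3 million
```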

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    A team of collaborators from the U.S. Department of Energy’s (DOE) Argonne National Laboratory is working to address this issue with computing resources at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. Since 2015, this team has worked with the ALCF on multiple projects to explore ways supercomputers can help meet the growing needs of the LHC’s ATLAS experiment.

    The efforts are especially important given what is coming up for the accelerator. In 2026, the LHC will undergo an ambitious upgrade to become the High-Luminosity LHC (HL-LHC). The aim of this upgrade is to increase the LHC’s luminosity—the number of events detected per second—by a factor of 10. “This means that the HL-LHC will be producing about 20 times more data per year than what ATLAS will have on disk at the end of 2018,” says Taylor Childers, a member of the ATLAS collaboration and computer scientist at the ALCF who is leading the effort at the facility. “CERN’s computing resources are not going to grow by that factor.”

    Luckily for CERN, the ALCF already operates some of the world’s most powerful supercomputers for science, and the facility is in the midst of planning for an upgrade of its own. In 2021, Aurora—the ALCF’s next-generation system, and the first exascale machine in the country—is scheduled to come online.

    It will provide the ATLAS experiment with an unprecedented resource for analyzing the data coming out of the LHC—and soon, the HL-LHC.

    CERN/ATLAS detector

    Why ALCF?

    CERN may be best known for smashing particles, which physicists do to study the fundamental laws of nature and gather clues about how the particles interact. This involves a lot of computationally intense calculations that benefit from the use of the DOE’s powerful computing systems.

    The ATLAS detector is an 82-foot-tall, 144-foot-long cylinder with magnets, detectors, and other instruments layered around the central beampipe like an enormous 7,000-ton Swiss roll. When protons collide in the detector, they send a spray of subatomic particles flying in all directions, and this particle debris generates signals in the detector’s instruments. Scientists can use these signals to discover important information about the collision and the particles that caused it in a computational process called reconstruction. Childers compares this process to arriving at the scene of a car crash that has nearly completely obliterated the vehicles and trying to figure out the makes and models of the cars and how fast they were going. Reconstruction is also performed on simulated data in the ATLAS analysis framework, called Athena.

    An ATLAS physics analysis consists of three steps. First, in event generation, researchers use the physics that they know to model the kinds of particle collisions that take place in the LHC. In the next step, simulation, they generate the subsequent measurements the ATLAS detector would make. Finally, reconstruction algorithms are run on both simulated and real data, the output of which can be compared to see differences between theoretical prediction and measurement.
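The sketch below restates that three-step pipeline as toy Python. The functions and the random stand-ins are placeholders invented for illustration; they are not the Athena framework's real interfaces:

```python
import random

# Toy end-to-end sketch of the three analysis steps described above.
# These functions are placeholders, not the Athena framework's actual API.

def generate_events(n_events):
    """Step 1 (event generation): model collisions from known physics."""
    return [{"true_energy": random.gauss(100.0, 15.0)} for _ in range(n_events)]

def simulate_detector(events):
    """Step 2 (simulation): predict what the ATLAS detector would measure."""
    return [{"signal": e["true_energy"] + random.gauss(0.0, 5.0)} for e in events]

def reconstruct(raw):
    """Step 3 (reconstruction): recover physics quantities from detector signals.
    The same reconstruction runs on simulated and on real data."""
    return [r["signal"] for r in raw]

simulated = reconstruct(simulate_detector(generate_events(1000)))
# real = reconstruct(read_real_data())   # compared against `simulated` in an analysis
print(sum(simulated) / len(simulated))
```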

    “If we understand what’s going on, we should be able to simulate events that look very much like the real ones,” says Tom LeCompte, a physicist in Argonne’s High Energy Physics division and former physics coordinator for ATLAS.

    “And if we see the data deviate from what we know, then we know we’re either wrong, we have a bug, or we’ve found new physics,” says Childers.

    Some of these simulations, however, are too complicated for the Worldwide LHC Computing Grid, which LHC scientists have used to handle data processing and analysis since 2002.

MonALISA LHC Computing Grid map: http://monalisa.caltech.edu/ml/_client.beta

    The Grid is an international distributed computing infrastructure that links 170 computing centers across 42 countries, allowing data to be accessed and analyzed in near real-time by an international community of more than 10,000 physicists working on various LHC experiments.

    The Grid has served the LHC well so far, but as demand for new science increases, so does the required computing power.

    That’s where the ALCF comes in.

    In 2011, when LeCompte returned to Argonne after serving as ATLAS physics coordinator, he started looking for the next big problem he could help solve. “Our computing needs were growing faster than it looked like we would be able to fulfill them, and we were beginning to notice that there were problems we were trying to solve with existing computing that just weren’t able to be solved,” he says. “It wasn’t just an issue of having enough computing; it was an issue of having enough computing in the same place. And that’s where the ALCF really shines.”

    LeCompte worked with Childers and ALCF computer scientist Tom Uram to use Mira, the ALCF’s 10-petaflops IBM Blue Gene/Q supercomputer, to carry out calculations to improve the performance of the ATLAS software.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Together they scaled Alpgen, a Monte Carlo-based event generator, to run efficiently on Mira, enabling the generation of millions of particle collision events in parallel. “From start to finish, we ended up processing events more than 20 times as fast, and used all of Mira’s 49,152 processors to run the largest-ever event generation job,” reports Uram.
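The parallelization pattern behind that scaling is, at heart, simple: split a large batch of statistically independent events across many MPI ranks. A minimal mpi4py illustration of the idea is shown below; it is a sketch of the pattern, not the team's actual Alpgen driver:

```python
# Minimal MPI sketch: divide independent event generation across ranks.
# Illustrative only; the ALCF team's Alpgen integration is not shown here.
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

TOTAL_EVENTS = 10_000_000
my_share = TOTAL_EVENTS // size + (1 if rank < TOTAL_EVENTS % size else 0)

random.seed(rank)  # independent random stream per rank
local_events = [random.expovariate(1.0) for _ in range(my_share)]  # stand-in "events"

# Each rank works independently; only a small summary is combined at the end.
total = comm.reduce(len(local_events), op=MPI.SUM, root=0)
if rank == 0:
    print(f"Generated {total} events across {size} ranks")
```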

    But they weren’t going to stop there. Simulation, which takes up around five times more Grid computing than event generation, was the next challenge to tackle.
    Moving forward with Theta

In 2017, Childers and his colleagues were awarded a two-year allocation from the ALCF Data Science Program (ADSP), a pioneering initiative designed to explore and improve computational and data science methods that will help researchers gain insights into very large datasets produced by experimental, simulation, or observational methods. The goal is to deploy Athena on Theta, the ALCF’s 11.69-petaflops Intel-Cray supercomputer, and develop an end-to-end workflow that couples all the steps together, improving upon the current execution model for ATLAS jobs, which involves a many-step workflow executed on the Grid.

    ANL ALCF Theta Cray XC40 supercomputer

    “Each of those steps—event generation, simulation, and reconstruction—has input data and output data, so if you do them in three different locations on the Grid, you have to move the data with it,” explains Childers. “Ideally, you do all three steps back-to-back on the same machine, which reduces the amount of time you have to spend moving data around.”

    Enabling portions of this workload on Theta promises to expedite the production of simulation results, discovery, and publications, as well as increase the collaboration’s data analysis reach, thus moving scientists closer to new particle physics.

    One challenge the group has encountered so far is that, unlike other computers on the Grid, Theta cannot reach out to the job server at CERN to receive computing tasks. To solve this, the ATLAS software team developed Harvester, a Python edge service that can retrieve jobs from the server and submit them to Theta. In addition, Childers developed Yoda, an MPI-enabled wrapper that launches these jobs on each compute node.
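The edge-service pattern Harvester implements — pull work from a central job server, then hand it to the local machine's scheduler — can be sketched in a few lines. Everything below (the URL, the payload fields, the submission command) is hypothetical and only illustrates the shape of such a service, not Harvester's real interface:

```python
# Schematic of an edge service: poll a central job server and submit work locally.
# Endpoint, payload fields, and commands are placeholders, not Harvester's real API.
import subprocess
import time

import requests

JOB_SERVER = "https://example.cern.ch/jobs"   # placeholder URL

def fetch_job():
    resp = requests.get(f"{JOB_SERVER}/next", timeout=30)
    return resp.json() if resp.status_code == 200 else None

def submit_locally(job):
    # A real service would go through the site's batch scheduler; 'qsub' here
    # is purely illustrative of handing the work to the local system.
    subprocess.run(["qsub", job["script_path"]], check=True)

while True:
    job = fetch_job()
    if job:
        submit_locally(job)
    time.sleep(60)  # poll the server periodically
```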

    Harvester and Yoda are now being integrated into the ATLAS production system. The team has just started testing this new workflow on Theta, where it has already simulated over 12 million collision events. Simulation is the only step that is “production-ready,” meaning it can accept jobs from the CERN job server.

    The team also has a running end-to-end workflow—which includes event generation and reconstruction—for ALCF resources. For now, the local ATLAS group is using it to run simulations investigating if machine learning techniques can be used to improve the way they identify particles in the detector. If it works, machine learning could provide a more efficient, less resource-intensive method for handling this vital part of the LHC scientific process.

    “Our traditional methods have taken years to develop and have been highly optimized for ATLAS, so it will be hard to compete with them,” says Childers. “But as new tools and technologies continue to emerge, it’s important that we explore novel approaches to see if they can help us advance science.”
    Upgrade computing, upgrade science

    As CERN’s quest for new science gets more and more intense, as it will with the HL-LHC upgrade in 2026, the computational requirements to handle the influx of data become more and more demanding.

    “With the scientific questions that we have right now, you need that much more data,” says LeCompte. “Take the Higgs boson, for example. To really understand its properties and whether it’s the only one of its kind out there takes not just a little bit more data but takes a lot more data.”

    This makes the ALCF’s resources—especially its next-generation exascale system, Aurora—more important than ever for advancing science.

    Depiction of ANL ALCF Cray Shasta Aurora exascale supercomputer

Aurora, scheduled to come online in 2021, will be capable of one billion billion calculations per second—that’s 100 times more computing power than Mira. It is just starting to be integrated into the ATLAS efforts through a new project, selected for the Aurora Early Science Program (ESP) and led by Jimmy Proudfoot, an Argonne Distinguished Fellow in the High Energy Physics division. Proudfoot says that effective utilization of Aurora will be key to ensuring that ATLAS continues delivering discoveries on a reasonable timescale. And because more computing capacity expands the range of analyses that can be done, systems like Aurora may even enable analyses not yet envisioned.

    The ESP project, which builds on the progress made by Childers and his team, has three components that will help prepare Aurora for effective use in the search for new physics: enable ATLAS workflows for efficient end-to-end production on Aurora, optimize ATLAS software for parallel environments, and update algorithms for exascale machines.

    “The algorithms apply complex statistical techniques which are increasingly CPU-intensive and which become more tractable—and perhaps only possible—with the computing resources provided by exascale machines,” explains Proudfoot.

    In the years leading up to Aurora’s run, Proudfoot and his team, which includes collaborators from the ALCF and Lawrence Berkeley National Laboratory, aim to develop the workflow to run event generation, simulation, and reconstruction. Once Aurora becomes available in 2021, the group will bring their end-to-end workflow online.

    The stated goals of the ATLAS experiment—from searching for new particles to studying the Higgs boson—only scratch the surface of what this collaboration can do. Along the way to groundbreaking science advancements, the collaboration has developed technology for use in fields beyond particle physics, like medical imaging and clinical anesthesia.

    These contributions and the LHC’s quickly growing needs reinforce the importance of the work that LeCompte, Childers, Proudfoot, and their colleagues are doing with ALCF computing resources.

    “I believe DOE’s leadership computing facilities are going to play a major role in the processing and simulation of the future rounds of data that will come from the ATLAS experiment,” says LeCompte.

    This research is supported by the DOE Office of Science. ALCF computing time and resources were allocated through the ASCR Leadership Computing Challenge, the ALCF Data Science Program, and the Early Science Program for Aurora.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
• richardmitnick 5:57 pm on September 5, 2018
Tags: ANL-ALCF

    From PPPL and ALCF: “Artificial intelligence project to help bring the power of the sun to Earth is picked for first U.S. exascale system” 


    From PPPL

    and

    Argonne Lab

    Argonne National Laboratory ALCF

    August 27, 2018
    John Greenwald

    Deep Learning Leader William Tang. (Photo by Elle Starkman/Office of Communications.)

    To capture and control the process of fusion that powers the sun and stars in facilities on Earth called tokamaks, scientists must confront disruptions that can halt the reactions and damage the doughnut-shaped devices.

    PPPL NSTX-U

    Now an artificial intelligence system under development at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University to predict and tame such disruptions has been selected as an Aurora Early Science project by the Argonne Leadership Computing Facility, a DOE Office of Science User Facility.

    Depiction of ANL ALCF Cray Shasta Aurora supercomputer

The project, titled “Accelerated Deep Learning Discovery in Fusion Energy Science,” is one of 10 Early Science Projects on data science and machine learning for the Aurora supercomputer, which is set to become the first U.S. exascale system upon its expected arrival at Argonne in 2021. The system will be capable of performing a quintillion (10¹⁸) calculations per second — 50-to-100 times faster than the most powerful supercomputers today.

    Fusion combines light elements

    Fusion combines light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei — in reactions that generate massive amounts of energy. Scientists aim to replicate the process for a virtually inexhaustible supply of power to generate electricity.

    The goal of the PPPL/Princeton University project is to develop a method that can be experimentally validated for predicting and controlling disruptions in burning plasma fusion systems such as ITER — the international tokamak under construction in France to demonstrate the practicality of fusion energy. “Burning plasma” refers to self-sustaining fusion reactions that will be essential for producing continuous fusion energy.

    Heading the project will be William Tang, a principal research physicist at PPPL and a lecturer with the rank and title of professor in the Department of Astrophysical Sciences at Princeton University. “Our research will utilize capabilities to accelerate progress that can only come from the deep learning form of artificial intelligence,” Tang said.

Networks analogous to a brain

    Deep learning, unlike other types of computational approaches, can be trained to solve with accuracy and speed highly complex problems that require realistic image resolution. Associated software consists of multiple layers of interconnected neural networks that are analogous to simple neurons in a brain. Each node in a network identifies a basic aspect of data that is fed into the system and passes the results along to other nodes that identify increasingly complex aspects of the data. The process continues until the desired output is achieved in a timely way.

    The PPPL/Princeton deep-learning software is called the “Fusion Recurrent Neural Network (FRNN),” composed of convolutional and recurrent neural nets that allow a user to train a computer to detect items or events of interest. The software seeks to speedily predict when disruptions will break out in large-scale tokamak plasmas, and to do so in time for effective control methods to be deployed.
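To make the "convolutional plus recurrent" idea concrete, here is a toy PyTorch sketch of such an architecture applied to multichannel time-series data from a tokamak shot. The layer sizes, signal counts, and shapes are assumptions chosen for illustration; this is not the FRNN code or its actual configuration:

```python
# Toy convolutional + recurrent disruption predictor, for illustration only.
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    def __init__(self, n_signals=10, hidden=64):
        super().__init__()
        # Convolution extracts local features from each window of diagnostic signals.
        self.conv = nn.Conv1d(in_channels=n_signals, out_channels=32,
                              kernel_size=5, padding=2)
        # Recurrent layer tracks how those features evolve through the shot.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # disruption risk score per time step

    def forward(self, x):                  # x: (batch, time, n_signals)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(z)
        return torch.sigmoid(self.head(out)).squeeze(-1)

model = DisruptionPredictor()
scores = model(torch.randn(4, 200, 10))   # 4 shots, 200 time steps, 10 signals
print(scores.shape)                       # torch.Size([4, 200])
```

The output is a running risk score, so an alarm can be raised as soon as the score crosses a threshold, ideally well before the disruption itself.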

The project has greatly benefited from access to the huge disruption-relevant database of the Joint European Torus (JET) in the United Kingdom, the largest and most powerful tokamak in the world today.

    Joint European Torus, at the Culham Centre for Fusion Energy in the United Kingdom

    The FRNN software has advanced from smaller computer clusters to supercomputing systems that can deal with such vast amounts of complex disruption-relevant data. Running the data aims to identify key pre-disruption conditions, guided by insights from first principles-based theoretical simulations, to enable the “supervised machine learning” capability of deep learning to produce accurate predictions with sufficient warning time.

    Access to Tiger computer cluster

    The project has gained from access to Tiger, a high-performance Princeton University cluster equipped with advanced image-resolution GPUs that have enabled the deep learning software to advance to the Titan supercomputer at Oak Ridge National Laboratory and to powerful international systems such as the Tsubame 3.0 supercomputer in Tokyo, Japan.

    Tiger supercomputer at Princeton University

    ORNL Cray XK7 Titan Supercomputer

    Tsubame 3.0 supercomputer in Tokyo, Japan

    The overall goal is to achieve the challenging requirements for ITER, which will need predictions to be 95 percent accurate with less than 5 percent false alarms at least 30 milliseconds or longer before disruptions occur.
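To make those targets concrete, the small, self-contained sketch below scores a hypothetical predictor against accuracy, false-alarm, and warning-time requirements of the kind quoted above. The shot records are made up for demonstration:

```python
# Illustrative scoring against ITER-style targets: catch >= 95% of disruptions,
# false-alarm on < 5% of quiet shots, and warn at least 30 ms ahead.
# The example shots below are fabricated for demonstration.

def evaluate(shots, min_warning_s=0.030):
    caught = false_alarms = disruptive = quiet = 0
    for shot in shots:   # each shot: {"disruption_time": float|None, "alarm_time": float|None}
        if shot["disruption_time"] is None:
            quiet += 1
            if shot["alarm_time"] is not None:
                false_alarms += 1
        else:
            disruptive += 1
            alarm = shot["alarm_time"]
            if alarm is not None and shot["disruption_time"] - alarm >= min_warning_s:
                caught += 1
    return caught / max(disruptive, 1), false_alarms / max(quiet, 1)

shots = [
    {"disruption_time": 1.200, "alarm_time": 1.150},  # caught, 50 ms of warning
    {"disruption_time": 0.800, "alarm_time": 0.790},  # missed: only 10 ms of warning
    {"disruption_time": None, "alarm_time": None},    # quiet shot, no alarm
]
detection_rate, false_alarm_rate = evaluate(shots)
print(f"detection rate {detection_rate:.0%}, false-alarm rate {false_alarm_rate:.0%}")
```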


    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    The team will continue to build on advances that are currently supported by the DOE while preparing the FRNN software for Aurora exascale computing. The researchers will also move forward with related developments on the SUMMIT supercomputer at Oak Ridge.

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

Members of the team include Julian Kates-Harbeck, a graduate student at Harvard University and a DOE Office of Science Computational Science Graduate Fellow (CSGF) who is the chief architect of the FRNN. Researchers include Alexey Svyatkovskiy, a big-data and machine-learning expert who will continue to collaborate after moving from Princeton University to Microsoft; Eliot Feibush, a big-data analyst and computational scientist at PPPL and Princeton; and Kyle Felker, a CSGF member who will soon graduate from Princeton University and rejoin the FRNN team as a postdoctoral research fellow at Argonne National Laboratory.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    PPPL campus

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University. PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

     
• richardmitnick 11:57 am on August 22, 2018
Tags: ANL-ALCF, Fine-tuning physics

    From ASCR Discovery and Argonne National Lab: “Fine-tuning physics” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    August 2018

    Argonne applies supercomputing heft to boost precision in particle predictions.

A depiction of a scattering event at the Large Hadron Collider. Image courtesy of Argonne National Laboratory.

    Advancing science at the smallest scales calls for vast data from the world’s most powerful particle accelerator, leavened with the precise theoretical predictions made possible through many hours of supercomputer processing.

    The combination has worked before, when scientists from the Department of Energy’s Argonne National Laboratory provided timely predictions about the Higgs particle at the Large Hadron Collider in Switzerland. Their predictions contributed to the 2012 discovery of the Higgs, the subatomic particle that gives mass to all elementary particles.


    CERN CMS Higgs Event


    CERN ATLAS Higgs Event

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    “That we are able to predict so precisely what happens around us in nature is a remarkable achievement,” Argonne physicist Radja Boughezal says. “To put all these pieces together to get a number that agrees with the measurement that was made with something so complicated as the LHC is always exciting.”

    Earlier this year, she was allocated more than 98 million processor hours on the Mira and Theta supercomputers at the Argonne Leadership Computing Facility, a DOE Office of Science user facility, through DOE’s INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ANL ALCF Theta Cray XC40 supercomputer

    Her previous INCITE allocation helped solve problems that scientists saw as insurmountable just two or three years ago.

    These problems stem from the increasingly intricate and precise measurements and theoretical calculations associated with scrutinizing the Higgs boson and from searches for subtle deviations from the standard model that underpins the behavior of matter and energy.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.


    Standard Model of Particle Physics from Symmetry Magazine

The approach she and her associates developed led to early, high-precision LHC predictions that describe so-called strong-force interactions between quarks and gluons, the constituents of subatomic particles such as protons and neutrons.

The theory governing strong-force interactions is called QCD, for quantum chromodynamics. In QCD, the strength of the strong force is quantified by a number called the strong coupling constant.

“At high energies, when collisions happen, quarks and gluons are very close to each other, so the strong force is very weak. It’s almost turned off,” Boughezal explains. Because the coupling is small at these energies, physicists can expand their calculations in powers of it – a technique called perturbative expansion – which gives them a yardstick for their predictions. Perturbative expansion is “a method we have used over and over to get these predictions, and it has provided powerful tests of QCD to date.”
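Schematically, in standard QCD notation (added here for illustration, not drawn from Boughezal's papers), an observable such as a cross section is written as a series in the strong coupling, and the coupling itself shrinks logarithmically as the energy scale grows:

```latex
% Perturbative expansion of a cross section in the strong coupling \alpha_s
\sigma = \sigma_0 \left( 1 + c_1\,\alpha_s + c_2\,\alpha_s^2 + c_3\,\alpha_s^3 + \cdots \right)

% One-loop running of the coupling: it weakens as the energy scale Q grows
\alpha_s(Q^2) = \frac{12\pi}{(33 - 2 n_f)\,\ln\!\left(Q^2/\Lambda^2\right)}
```

Each additional power of α_s kept in the series is another order of precision, and computing those higher-order terms is what makes the calculations demanding enough to need machines like Mira and Theta.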

    Crucial to these tests is the N-jettiness framework Boughezal and her Argonne and Northwestern University collaborators devised to obtain high-precision predictions for particle scattering processes. Specially adapted for high-performance computing systems, the framework’s novelty stems from its incorporation of existing low-precision numerical codes to achieve part of the desired result. The scientists fill in algorithmic gaps with simple analytic calculations.

    The LHC data lined up completely with predictions the team had obtained from running the N-jettiness code on the Mira supercomputer at Argonne. The agreement carries important implications for the precision goals physicists are setting for future accelerators such as the proposed Electron-Ion Collider (EIC).

    “One of the things that has puzzled us for 30 years is the spin of the proton,” Boughezal says. Planners hope the EIC reveals how the spin of the proton, matter’s basic building block, emerges from its elementary constituents, quarks and gluons.

Boughezal also is working with LHC scientists in the search for dark matter which, together with dark energy, accounts for about 95 percent of the contents of the universe. The remainder is ordinary matter, the atoms and molecules that form stars, planets and people.

    “Scientists believe that the mysterious dark matter in the universe could leave a missing energy footprint at the LHC,” she says. Such a footprint would reveal the existence of a new particle that’s currently missing from the standard model. Dark matter particles interact weakly with the LHC’s detectors. “We cannot see them directly.”

    They could, however, be produced with a jet – a spray of standard-model particles made from LHC proton collisions. “We can measure that jet. We can see it. We can tag it.” And by using simple laws of physics such as the conservation of momentum, even if the particles are invisible, scientists would be able to detect them by measuring the jet’s energy.
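In symbols (standard collider notation, added here for illustration), momentum conservation in the plane transverse to the beams means any imbalance among the visible particles points to something the detector did not see:

```latex
% Missing transverse momentum: the negative vector sum of all visible transverse momenta
\vec{p}_T^{\,\mathrm{miss}} = -\sum_{i \in \mathrm{visible}} \vec{p}_{T,i},
\qquad
E_T^{\mathrm{miss}} = \left| \vec{p}_T^{\,\mathrm{miss}} \right|
```

A well-measured jet recoiling against nothing visible is exactly the "missing energy footprint" described above.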

    For example, when subatomic particles called Z bosons are produced with particle jets, the bosons can decay into neutrinos, ghostly specks that rarely interact with ordinary matter. The neutrinos appear as missing energy in the LHC’s detectors, just as a dark matter particle would.

In July 2017, Boughezal and three co-authors published a paper in the Journal of High Energy Physics. It was the first to describe new proton-structure details derived from precision high-energy Z-boson experimental data.

    
“If you want to know whether what you have produced is actually coming from a standard model process or something else that we have not seen before, you need to predict your standard model process very well,” she says. If the theoretical predictions deviate from the experimental data, it suggests new physics at play.


    In fact, Boughezal and her associates have precisely predicted the standard model jet process and it agrees with the data. “So far we haven’t produced dark matter at the LHC.”

    Previously, however, the results were so imprecise – and the margin of uncertainty so high – that physicists couldn’t tell whether they’d produced a standard-model jet or something entirely new.

    What surprises will higher-precision calculations reveal in future LHC experiments?

    “There is still a lot of territory that we can probe and look for something new,” Boughezal says. “The standard model is not a complete theory because there is a lot it doesn’t explain, like dark matter. We know that there has to be something bigger than the standard model.”

    Argonne is managed by UChicago Argonne LLC for the DOE Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

ASCR Discovery is a publication of the U.S. Department of Energy.

     
• richardmitnick 1:17 pm on May 9, 2018
Tags: ANL-ALCF

    From Argonne National Laboratory ALCF: “E3SM provides powerful, new Earth system model for supercomputers” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne National Laboratory ALCF

    May 8, 2018
    Andrea Manning

    Argonne scientists helped create a comprehensive new model that draws on supercomputers to simulate how various aspects of the Earth — its atmosphere, oceans, land, ice — move. This earth simulation project emerged from Argonne and other U.S. DOE national laboratories, including Brookhaven, Lawrence Livermore, Lawrence Berkeley, Los Alamos, Oak Ridge, Pacific Northwest, and Sandia, as well as several universities. Credit: E3SM.org

    The Earth — with its myriad shifting atmospheric, oceanic, land, and ice components — presents an extraordinarily complex system to simulate using computer models.

    But a new Earth modeling system, the Energy Exascale Earth System Model (E3SM), is now able to capture and simulate all these components together. Released on April 23, after four years of development, E3SM features weather-scale resolution — i.e., enough detail to capture fronts, storms, and hurricanes — and uses advanced computers to simulate aspects of the Earth’s variability. The system can help researchers anticipate decadal-scale changes that could influence the U.S. energy sector in years to come.

    The E3SM project is supported by the U.S. Department of Energy’s (DOE) Office of Biological and Environmental Research. “One of E3SM’s purposes is to help ensure that DOE’s climate mission can be met — including on future exascale systems,” said Robert Jacob, a computational climate scientist in the Environmental Science division of DOE’s Argonne National Laboratory and one of 15 project co-leaders.

    To support this mission, the project’s goal is to develop an Earth system model that increases prediction reliability. This objective has historically been limited by constraints in computing technologies and uncertainties in theory and observations. Enhancing prediction reliability requires advances on two frontiers: (1) improved simulation of Earth system processes by developing new models of physical processes, increasing model resolution, and enhancing computational performance; and (2) representing the two-way interactions between human activities and natural processes more realistically, especially where these interactions affect U.S. energy needs.

    “This model adds a much more complete representation between interactions of the energy system and the Earth system,” said David Bader, a computational scientist at Lawrence Livermore National Laboratory and overall E3SM project lead. “With this new system, we’ll be able to more realistically simulate the present, which gives us more confidence to simulate the future.”

    The long view

    Simulating the Earth involves solving approximations of physical, chemical, and biological governing equations on spatial grids at the highest resolutions possible.

In fact, increasing the number of Earth-system days simulated per day of computing time, at varying levels of resolution, is so important that it is a prerequisite for achieving the E3SM project goal. The new release can simulate 10 years of the Earth system per day of computing at low resolution, or one year per day at high resolution (a sample movie is available at the project website). The goal is for E3SM to simulate five years of the Earth system per computing day at its highest possible resolution by 2021.

    This objective underscores the project’s heavy emphasis on both performance and infrastructure — two key areas of strength for Argonne. “Our researchers have been active in ensuring that the model performs well with many threads,” said Jacob, who will lead the infrastructure group in Phase II, which — with E3SM’s initial release — starts on July 1. Singling out the threading expertise of performance engineer Azamat Mametjanov of Argonne’s Mathematics and Computer Science division, Jacob continued: “We’ve been running and testing on Theta, our new 10-petaflops system at the Argonne Leadership Computing Facility, and will conduct some of the high-res simulations on that platform.”

    Researchers using the E3SM can employ variable resolution on all model components (atmosphere, ocean, land, ice), allowing them to focus computing power on fine-scale processes in different regions. The software uses advanced mesh-designs that smoothly taper the grid-scale from the coarser outer region to the more refined region.

    Adapting for exascale

E3SM’s developers — more than 100 scientists and software engineers — have a longer-term aim: to use the exascale machines that the DOE Advanced Scientific Computing Research Office expects to procure over the next five years. Thus, E3SM development is proceeding in tandem with the Exascale Computing Initiative. (Exascale refers to a computing system capable of carrying out a quintillion [10¹⁸] calculations per second — a thousand-fold increase in performance over the most advanced computers from a decade ago.)

Another key focus will be on software engineering, which includes all of the processes for developing the model; designing the tests; and developing the required infrastructure, including input/output libraries and software for coupling the models. E3SM, like other leading climate models such as the Community Earth System Model (CESM), uses Argonne’s Model Coupling Toolkit (MCT) to couple the atmosphere, ocean, and other submodels. (A new version of MCT [2.10] was released along with E3SM.)
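Conceptually, a coupler hands boundary fields back and forth between component models on a shared schedule. The toy Python loop below illustrates only that pattern; MCT itself is a Fortran library with a very different interface, and the field names and time step here are invented for illustration:

```python
# Toy illustration of coupling: exchange boundary fields between an atmosphere
# and an ocean component at each coupling interval. This mimics the pattern,
# not the Model Coupling Toolkit's actual Fortran API.

class ToyAtmosphere:
    def step(self, sea_surface_temp):
        # ... advance atmospheric state using the ocean's surface temperature ...
        return {"surface_wind_stress": 0.1, "net_heat_flux": 50.0}

class ToyOcean:
    def step(self, fluxes):
        # ... advance ocean state using fluxes supplied by the atmosphere ...
        return {"sea_surface_temp": 288.0}

atm, ocn = ToyAtmosphere(), ToyOcean()
ocean_state = {"sea_surface_temp": 288.0}
for hour in range(0, 24, 3):              # couple every 3 simulated hours
    atm_fluxes = atm.step(ocean_state["sea_surface_temp"])
    ocean_state = ocn.step(atm_fluxes)    # a real coupler also regrids/interpolates fields
```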

    Additional Argonne-specific contributions in Phase II will center on:

Crop modeling: Efforts will focus on better emulating crops such as corn, wheat, and soybeans, which will improve simulated influences of crops on carbon, nutrient, energy, and water cycles, as well as capturing the implications of human-Earth system interactions.
    Dust and aerosols: These play a major role in the atmosphere, radiation, and clouds, as well as various chemical cycles.

    Collaboration among – and beyond – national laboratories

    The E3SM project has involved researchers at multiple DOE laboratories including Argonne, Brookhaven, Lawrence Livermore, Lawrence Berkeley, Los Alamos, Oak Ridge, Pacific Northwest, and Sandia national laboratories, as well as several universities.

    The project also benefits from collaboration within DOE, including with the Exascale Computing Project and programs in Scientific Discovery through Advanced Computing, Climate Model Development and Validation, Atmospheric Radiation Measurement, Program for Climate Model Diagnosis and Intercomparison, International Land Model Benchmarking Project, Community Earth System Model, and Next-Generation Ecosystem Experiments for the Arctic and the Tropics.

    The code is available on GitHub, the host for the project’s open-source repository. For additional information, visit the E3SM website: http://e3sm.org.

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

See the full article here.

    Please help promote STEM in your local schools.
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
• richardmitnick 11:21 am on May 2, 2018
Tags: ANL-ALCF

    From Argonne National Laboratory ALCF: “ALCF supercomputers advance earthquake modeling efforts” 

    Argonne Lab
    News from Argonne National Laboratory

    ALCF

    May 1, 2018
    John Spizzirri

    Southern California defines cool. The perfect climes of San Diego, the glitz of Hollywood, the magic of Disneyland. The geology is pretty spectacular, as well.

    “Southern California is a prime natural laboratory to study active earthquake processes,” says Tom Jordan, a professor in the Department of Earth Sciences at the University of Southern California (USC). “The desert allows you to observe the fault system very nicely.”

    The fault system to which he is referring is the San Andreas, among the more famous fault systems in the world. With roots deep in Mexico, it scars California from the Salton Sea in the south to Cape Mendocino in the north, where it then takes a westerly dive into the Pacific.

Situated as it is at the heart of the San Andreas Fault System, Southern California does make an ideal location to study earthquakes. That it is home to nearly 24 million people makes for a more urgent reason to study them.

San Andreas Fault System. Aerial photo of the San Andreas Fault looking northwest onto the Carrizo Plain, with Soda Lake visible at the upper left. Photo: John Wiley (User:Jw4nvc), Santa Barbara, California.

    USGS diagram of San Andreas Fault. http://nationalatlas.gov/articles/geology/features/sanandreas.html

    Jordan and a team from the Southern California Earthquake Center (SCEC) are using the supercomputing resources of the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy Office of Science User Facility, to advance modeling for the study of earthquake risk and how to reduce it.

    Headquartered at USC, the center is one of the largest collaborations in geoscience, engaging over 70 research institutions and 1,000 investigators from around the world.

    The team relies on a century’s worth of data from instrumental records as well as regional and seismic national hazard models to develop new tools for understanding earthquake hazards. Working with the ALCF, they have used this information to improve their earthquake rupture simulator, RSQSim.

    RSQ is a reference to rate- and state-dependent friction in earthquakes — a friction law that can be used to study the nucleation, or initiation, of earthquakes. RSQSim models both nucleation and rupture processes to understand how earthquakes transfer stress to other faults.
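For reference, the widely used Dieterich-Ruina form of rate- and state-dependent friction (standard notation, added here for illustration; the precise formulation inside RSQSim may differ) writes the friction coefficient in terms of the slip velocity V and a state variable θ:

```latex
% Rate- and state-dependent friction ("aging law" form)
\mu = \mu_0 + a \ln\!\left(\frac{V}{V_0}\right) + b \ln\!\left(\frac{V_0\,\theta}{D_c}\right),
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}
```

Here μ₀ is a reference friction at slip rate V₀, a and b are laboratory-derived constants, and D_c is a characteristic slip distance; the balance between a and b determines whether sliding stays stable or can nucleate an earthquake.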

    ALCF staff were instrumental in adapting the code to Mira, the facility’s 10-petaflops supercomputer, to allow for the larger simulations required to model earthquake behaviors in very complex fault systems, like San Andreas, and which led to the team’s biggest discovery.

    Shake, rattle, and code

    The SCEC, in partnership with the U.S. Geological Survey, had already developed the Uniform California Earthquake Rupture Forecast (UCERF), an empirically based model that integrates theory, geologic information, and geodetic data, like GPS displacements, to determine spatial relationships between faults and slippage rates of the tectonic plates that created those faults.

    Though more traditional, the newest version, UCERF3, is considered the best representation of California earthquake ruptures, but the picture it portrays is still not as accurate as researchers would hope.

    “We know a lot about how big earthquakes can be, how frequently they occur, and where they occur, but we cannot predict them precisely in time,” notes Jordan.

    The team turned to Mira to run RSQSim to determine whether they could achieve more accurate results more quickly. A physics-based code, RSQSim produces long-term synthetic earthquake catalogs that comprise dates, times, locations, and magnitudes for predicted events.
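A long synthetic catalog of that kind is typically summarized by how often events of each magnitude occur. The sketch below shows one simple way to do that; the CSV layout and column names are hypothetical, not RSQSim's actual output format:

```python
# Sketch: summarize a synthetic earthquake catalog by magnitude-frequency counts.
# The file layout (CSV with these columns) is assumed for illustration only.
import csv
from collections import Counter

def magnitude_frequency(path, bin_width=0.5):
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):          # expects columns: time_yr, lat, lon, magnitude
            m = float(row["magnitude"])
            counts[round(m / bin_width) * bin_width] += 1
    return dict(sorted(counts.items()))

# The resulting counts per magnitude bin characterize the catalog's recurrence
# behavior, which is what gets compared against an empirical model such as UCERF3.
# print(magnitude_frequency("synthetic_catalog.csv"))
```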

    Using simulation, researchers impose stresses upon some representation of a fault system, which changes the stress throughout much of the system and thus changes the way future earthquakes occur. Trying to model these powerful stress-mediated interactions is particularly difficult with complex systems and faults like San Andreas.

    “We just let the system evolve and create earthquake catalogs for a hundred thousand or a million years. It’s like throwing a grain of sand in a set of cogs to see what happens,” explains Christine Goulet, a team member and executive science director for special projects with SCEC.

    The end result is a more detailed picture of the possible hazard, which forecasts a sequence of earthquakes of various magnitudes expected to occur on the San Andreas Fault over a given time range.

    The group tried to calibrate RSQSim’s numerous parameters to replicate UCERF3, but eventually decided to run the code with its default parameters. While the initial intent was to evaluate the magnitude of differences between the models, they discovered, instead, that both models agreed closely on their forecasts of future seismologic activity.

    “So it was an a-ha moment. Eureka,” recalls Goulet. “The results were a surprise because the group had thought carefully about optimizing the parameters. The decision not to change them from their default values made for very nice results.”

    The researchers noted that the mutual validation of the two approaches could prove extremely productive in further assessing seismic hazard estimates and their uncertainties.

    Information derived from the simulations will help the team compute the strong ground motions generated by faulting that occurs at the surface — the characteristic shaking that is synonymous with earthquakes. To do this, the team couples the earthquake rupture forecasts, UCERF and RSQSim, with different models that represent the way waves propagate through the system. Called ground motion prediction equations, these are standard equations used by engineers to calculate the shaking levels from earthquakes of different sizes and locations.
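Ground motion prediction equations typically take a simple regression form. The generic example below is illustrative only (the terms and coefficients are not from any specific published model): it relates a ground-motion measure Y, such as peak ground acceleration, to earthquake magnitude M and source-to-site distance R:

```latex
% Generic ground-motion prediction equation (illustrative functional form)
\ln Y = c_1 + c_2\,M + c_3 \ln\!\left(R + c_4\right) + c_5\,S + \varepsilon
```

Here S captures local site conditions and ε represents the model's uncertainty; the coefficients are fit to recorded ground motions.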

    One of those models is the dynamic rupture and wave propagation code Waveqlab3D (Finite Difference Quake and Wave Laboratory 3D), which is the focus of the SCEC team’s current ALCF allocation.

    “These experiments show that the physics-based model RSQSim can replicate the seismic hazard estimates derived from the empirical model UCERF3, but with far fewer statistical assumptions,” notes Jordan. “The agreement gives us more confidence that the seismic hazard models for California are consistent with what we know about earthquake physics. We can now begin to use these physics to improve the hazard models.”

    This project was awarded computing time and resources at the ALCF through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. The team’s research is also supported by the National Science Foundation, the U.S. Geological Survey, and the W.M. Keck Foundation.

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

See the full article here.

    Earthquake Alert


Earthquake Network is a research project that aims to develop and maintain a crowdsourced, smartphone-based earthquake warning system at a global level. Smartphones made available by the population are used to detect earthquake waves using the on-board accelerometers. When an earthquake is detected, a warning is issued to alert the population not yet reached by the damaging waves of the earthquake.

    The project started on January 1, 2013 with the release of the homonymous Android application Earthquake Network. The author of the research project and developer of the smartphone application is Francesco Finazzi of the University of Bergamo, Italy.

    Get the app in the Google Play store.

    Smartphone network spatial distribution (green and red dots) on December 4, 2015

    Meet The Quake-Catcher Network


    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide better understanding of earthquakes, give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at CalTech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

The Quake-Catcher Network is a distributed computing network that links volunteer-hosted computers into a real-time motion-sensing network. QCN is one of many scientific computing projects that run on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).

    The volunteer computers monitor vibrational sensors called MEMS accelerometers, and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals, and determine which ones represent earthquakes, and which ones represent cultural noise (like doors slamming, or trucks driving by).
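A classic way to flag "strong new motions" in an accelerometer stream is the short-term-average / long-term-average (STA/LTA) ratio. The sketch below shows that general idea only; it is not QCN's actual trigger algorithm, and the window sizes and threshold are assumptions:

```python
# STA/LTA trigger sketch: flag samples where recent shaking jumps well above
# the background level. Illustrative only; not QCN's real detection code.
def sta_lta_trigger(samples, sta_n=50, lta_n=1000, threshold=4.0):
    """Return indices where the short-term average exceeds the long-term average."""
    triggers = []
    for i in range(lta_n, len(samples)):
        sta = sum(abs(x) for x in samples[i - sta_n:i]) / sta_n
        lta = sum(abs(x) for x in samples[i - lta_n:i]) / lta_n
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# A server can then compare trigger times across many nearby sensors: a slammed
# door shows up on one machine, an earthquake on many at roughly the same time.
```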

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) By mounting them to the floor, they measure more reliable shaking than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC, more properly the Berkeley Open Infrastructure for Network Computing, was developed at UC Berkeley and is a leader in the fields of distributed computing, grid computing and citizen cyberscience.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. The Quake-Catcher Network (QCN) links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    Below, the QCN Quake Catcher Network map
    QCN Quake Catcher Network map

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States

    The U.S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, the system first estimates the location and magnitude of the earthquake. Then the anticipated ground shaking across the affected region is estimated and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.
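    As a rough illustration of why such warnings are possible (the numbers here are representative values I have assumed, not figures from the USGS text), the usable warning time at a site a distance d from the earthquake is approximately the difference between the S-wave and P-wave travel times, minus the time needed to detect the event and issue the alert:

    \[
      t_{\text{warn}} \approx \frac{d}{v_{S}} - \frac{d}{v_{P}} - t_{\text{proc}}
    \]

    With typical crustal wave speeds v_P ≈ 6 km/s and v_S ≈ 3.5 km/s, a site 100 km from the source would get roughly 28.6 s − 16.7 s − 5 s ≈ 7 s of warning for an assumed 5-second processing delay, consistent with the range of warning times described below.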

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), which is made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February 2016, the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California; Oregon and Washington joined in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers, and allows for automatic fail-over if a connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazards Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 1:39 pm on October 31, 2017 Permalink | Reply
    Tags: ANL-ALCF, , , , ,   

    From ALCF: “The inner secrets of planets and stars” 

    Argonne Lab
    News from Argonne National Laboratory

    ALCF

    October 31, 2017
    Jim Collins


    Top image: As part of their research on Jupiter’s dynamo, the team performed planetary atmospheric dynamics simulations of rotating, deep convection in a 3D spherical shell, with shallow stable stratification. This image is a snapshot from a video that shows the evolution of radial vorticity near the outer boundary from a north polar perspective view. Intense anticyclones (blue) drift westward and undergo multiple mergers, while the equatorial jet flows rapidly to the east. Simulation time and radial vorticity are displayed in units of planetary rotation (a radial vorticity of 2 equals the planetary rotation rate). The video, which shows over 5,300 planetary rotations, can be viewed here: https://www.youtube.com/watch?v=OUICRNiFhpU. (Credit: Moritz Heimpel, University of Alberta)

    Middle image: A 3D rendering of simulated solar convection realized at different rotation rates. Regions of upflow and downflow are rendered in red and blue, respectively. As rotational influence increases from left (non-rotating) to right (rapidly-rotating), convective patterns become increasingly more organized and elongated (Featherstone & Hindman, 2016, ApJ Letters, 830 L15). Understanding the Sun’s location along this spectrum represents a major step toward understanding how it sustains a magnetic field. (Credit: Nick Featherstone and Bradley Hindman, University of Colorado Boulder)

    Bottom image: Radial velocity field (red = positive; blue = negative) on the equatorial plane of a numerical simulation of Earth’s core dynamo. These small-scale convective flows generate a strong planetary-scale magnetic field. (Credit: Rakesh Yadav, Harvard University)

    Using Argonne’s Mira supercomputer, researchers are developing advanced models to study magnetic field generation on the Earth, Jupiter, and the Sun at an unprecedented level of detail. A better understanding of this process will provide new insights into the birth and evolution of the solar system.

    After a five-year, 1.74 billion-mile journey, NASA’s Juno spacecraft entered Jupiter’s orbit in July 2016, to begin its mission to collect data on the structure, atmosphere, and magnetic and gravitational fields of the mysterious planet.

    NASA/Juno

    For UCLA geophysicist Jonathan Aurnou, the timing could not have been much better.

    Just as Juno reached its destination, Aurnou and his colleagues from the Computational Infrastructure for Geodynamics (CIG) had begun carrying out massive 3D simulations at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, to model and predict the turbulent interior processes that produce Jupiter’s intense magnetic field.

    While the timing of the two research efforts was coincidental, it presents an opportunity to compare the most detailed Jupiter observations ever captured with the highest-resolution Jupiter simulations ever performed.

    Aurnou, lead of the CIG’s Geodynamo Working Group, hopes that the advanced models they are creating with Mira, the ALCF’s 10-petaflops supercomputer, will complement the NASA probe’s findings to reveal a full understanding of Jupiter’s internal dynamics.

    “Even with Juno, we’re not going to be able to get a great physical sampling of the turbulence occurring in Jupiter’s deep interior,” he said. “Only a supercomputer can help get us under that lid. Mira is allowing us to develop some of the most accurate models of turbulence possible in extremely remote astrophysical settings.”

    But Aurnou and his collaborators are not just looking at Jupiter. Their three-year ALCF project also is using Mira to develop models to study magnetic field generation on the Earth and the Sun at an unprecedented level of detail.

    Dynamic dynamos

    Magnetic fields are generated deep in the cores of planets and stars by a process known as dynamo action. This phenomenon occurs when the rotating, convective motion of electrically conducting fluids (e.g., liquid metal in planets and plasma in stars) converts kinetic energy into magnetic energy. A better understanding of the dynamo process will provide new insights into the birth and evolution of the solar system, and shed light on planetary systems being discovered around other stars.
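    A standard, textbook way to write down this conversion (general MHD background, not an equation quoted from the article) is the magnetic induction equation, in which the conducting fluid’s velocity field u stretches and amplifies the magnetic field B against ohmic diffusion:

    \[
      \frac{\partial \mathbf{B}}{\partial t} = \nabla \times \left( \mathbf{u} \times \mathbf{B} \right) + \eta \, \nabla^{2} \mathbf{B},
    \]

    where \eta = 1/(\mu_{0}\sigma) is the magnetic diffusivity set by the fluid’s electrical conductivity \sigma. A dynamo is sustained when the induction term outpaces diffusion, i.e. when the magnetic Reynolds number R_m = UL/\eta is sufficiently large.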

    Modeling the internal dynamics of Jupiter, the Earth, and the Sun brings unique challenges in each case, but the three vastly different astrophysical bodies share one thing in common: simulating their extremely complex dynamo processes requires a massive amount of computing power.

    To date, dynamo models have been unable to accurately simulate turbulence in fluids similar to those found in planets and stars. Conventional models also are unable to resolve the broad range of spatial scales present in turbulent dynamo action. However, the continued advances in computing hardware and software are now allowing researchers to overcome such limitations.

    With their project at the ALCF, the CIG team set out to develop and demonstrate high-resolution 3D dynamo models at the largest scale possible. Using Rayleigh, an open-source code designed to study magnetohydrodynamic convection in spherical geometries, they have been able to resolve a range of spatial scales previously inaccessible to numerical simulation.

    While the code transitioned to Mira’s massively parallel architecture smoothly, Rayleigh’s developer, Nick Featherstone, worked with ALCF computational scientist Wei Jiang to achieve optimal performance on the system. Their work included redesigning Rayleigh’s initialization phase to make it run up to 10 times faster, and rewriting parts of the code to make use of a hybrid MPI/OpenMP programming model that performs about 20 percent better than the original MPI version.
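    For readers unfamiliar with the hybrid approach, the short C sketch below shows the general MPI/OpenMP pattern in miniature (a generic illustration of the programming model, not code from Rayleigh): MPI ranks split the work across nodes, while OpenMP threads share the loop inside each rank. It would be compiled with something like mpicc -fopenmp hybrid_sum.c.

    /* Minimal hybrid MPI/OpenMP example -- generic illustration, not Rayleigh code. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Request thread support so OpenMP threads can coexist with MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long n = 1000000;   /* arbitrary amount of local work per rank */
        double local_sum = 0.0;

        /* OpenMP threads within this rank share the loop iterations. */
        #pragma omp parallel for reduction(+:local_sum)
        for (long i = 0; i < n; i++)
            local_sum += (double)(rank * n + i);

        /* MPI combines the per-rank results across the whole machine. */
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %.0f (ranks = %d, threads per rank = %d)\n",
                   global_sum, size, omp_get_max_threads());

        MPI_Finalize();
        return 0;
    }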

    “We did the coding and porting, but running it properly on a supercomputer is a whole different thing,” said Featherstone, a researcher from the University of Colorado Boulder. “The ALCF has done a lot of performance analysis for us. They just really made sure we’re running as well as we can run.”

    Stellar research

    When the project began in 2015, the team’s primary focus was the Sun. An understanding of the solar dynamo is key to predicting solar flares, coronal mass ejections, and other drivers of space weather, which can impact the performance and reliability of space-borne and ground-based technological systems, such as satellite-based communications.

    “We’re really trying to get at the linchpin that is stopping progress on understanding how the Sun generates its magnetic field,” said Featherstone, who is leading the project’s solar dynamo research. “And that is determining the typical flow speed of plasmas in the region of convection.”

    The team began by performing 3D stellar convection simulations of a non-rotating star to fine-tune parameters so that their calculations were on a trajectory similar to observations of flow structures on the Sun’s surface. Next, they incorporated rotation into the simulations, which allowed them to begin making meaningful comparisons against observations. This led to a paper in The Astrophysical Journal Letters last year, in which the researchers were able to place upper bounds on the typical flow speed in the solar convection zone.

    The team’s research also shed light on a mysterious observation that has puzzled scientists for decades. The Sun’s visible surface is covered with patches of convective bubbles, known as granules, which cluster into groups that are about 30,000 kilometers across, known as supergranules. Many scientists have theorized that the clustering should exist on even larger scales, but Featherstone’s simulations suggest that rotation may be the reason the clusters are smaller than expected.

    “These patches of convection are the surface signature of dynamics taking place deep in the Sun’s interior,” he said. “With Mira, we’re starting to show that this pattern we see on the surface results naturally from flows that are slower than we expected, and their interaction with rotation.”

    According to Featherstone, these new insights were enabled by their model’s ability to simulate rotation and the Sun’s spherical shape, which were too computationally demanding to incorporate in previous modeling efforts.

    “To study the deep convection zone, you need the sphere,” he said. “And to get it right, it needs to be rotating.”

    Getting to the core of planets

    Magnetic field generation in terrestrial planets like Earth is driven by the physical properties of their liquid metal cores. However, due to limited computing power, previous Earth dynamo models have been forced to simulate fluids with electrical conductivities that far exceed those of actual liquid metals.
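    In the dimensionless terms dynamo modelers commonly use (standard background context, not a figure from the article), this gap is often expressed through the magnetic Prandtl number,

    \[
      P_m = \frac{\nu}{\eta} = \mu_{0}\,\sigma\,\nu, \qquad P_m \sim 10^{-6}\ \text{for liquid iron}, \qquad P_m \sim 0.1\text{--}1\ \text{in typical simulations},
    \]

    meaning the simulated fluid effectively behaves as a far better electrical conductor, relative to its viscosity, than real molten iron.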

    To overcome this issue, the CIG team is building a high-resolution model that is capable of simulating the metallic properties of Earth’s molten iron core. Their ongoing geodynamo simulations are already showing that flows and coupled magnetic structures develop on both small and large scales, revealing new magnetohydrodynamic processes that do not appear in lower resolution computations.

    “If you can’t simulate a realistic metal, you’re going to have trouble simulating turbulence accurately,” Aurnou said. “Nobody could afford to do this computationally, until now. So, a big driver for us is to open the door to the community and provide a concrete example of what is possible with today’s fastest supercomputers.”

    In Jupiter’s case, the team’s ultimate goal is to create a coupled model that accounts for both its dynamo region and its powerful atmospheric winds, known as jets. This involves developing a “deep atmosphere” model in which Jupiter’s jet region extends all the way through the planet and connects to the dynamo region.

    Thus far, the researchers have made significant progress with the atmospheric model, enabling the highest-resolution giant-planet simulations yet achieved. The Jupiter simulations will be used to make detailed predictions of surface vortices, zonal jet flows, and thermal emissions that will be compared to observational data from the Juno mission.

    Ultimately, the team plans to make their results publicly available to the broader research community.

    “You can almost think of our computational efforts like a space mission,” Aurnou said. “Just like the Juno spacecraft, Mira is a unique and special device. When we get datasets from these amazing scientific tools, we want to make them openly available and put them out to the whole community to look at in different ways.”

    This project was awarded computing time and resources at the ALCF through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program supported by DOE’s Office of Science. The development of the Rayleigh code was funded by CIG, which is supported by the National Science Foundation.

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 11:23 am on October 9, 2017 Permalink | Reply
    Tags: ANL-ALCF, , , , , , ,   

    From Science Node: “US Coalesces Plans for First Exascale Supercomputer: Aurora in 2021” 

    Science Node bloc
    Science Node

    September 27, 2017
    Tiffany Trader

    ANL ALCF Cray Aurora supercomputer

    At the Advanced Scientific Computing Advisory Committee (ASCAC) meeting, in Arlington, Va., yesterday (Sept. 26), it was revealed that the “Aurora” supercomputer is on track to be the United States’ first exascale system. Aurora, originally named as the third pillar of the CORAL “pre-exascale” project, will still be built by Intel and Cray for Argonne National Laboratory, but the delivery date has shifted from 2018 to 2021 and target capability has been expanded from 180 petaflops to 1,000 petaflops (1 exaflop).


    The fate of the Argonne Aurora “CORAL” supercomputer has been in limbo since the system failed to make it into the U.S. DOE budget request, while the same budget proposal called for an exascale machine “of novel architecture” to be deployed at Argonne in 2021.

    Until now, the only official word from the U.S. Exascale Computing Project was that Aurora was being “reviewed for changes and would go forward under a different timeline.”

    Officially, the contract has been “extended,” and not cancelled, but the fact remains that the goal of the Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (CORAL) initiative to stand up two distinct pre-exascale architectures was not met.

    According to sources we spoke with, a number of people at the DOE are not pleased with the Intel/Cray (Intel is the prime contractor, Cray is the subcontractor) partnership. It’s understood that the two companies could not deliver the 180-200 petaflops system by next year, as the original contract called for. Now Intel/Cray will push forward with an exascale system some 50x larger than any system they have stood up to date.

    It’s our understanding that the cancellation of Aurora is not a DOE budgetary measure as has been speculated, and that the DOE and Argonne wanted Aurora. Although it was referred to as an “interim,” or “pre-exascale” machine, the scientific and research community was counting on that system, was eager to begin using it, and they regarded it as a valuable system in its own right. The non-delivery is regarded as disruptive to the scientific/research communities.

    Another open question: since Intel/Cray failed to deliver Aurora and have moved on to a larger exascale system contract, why hasn’t the original CORAL contract been cancelled and put out again to bid?

    With increased global competitiveness, it seems that the DOE stakeholders did not want to further delay the non-IBM/Nvidia side of the exascale track. Conceivably, they could have done a rebid for the Aurora system, but that would leave them with an even bigger gap if they had to spin up a new vendor/system supplier to replace Intel and Cray.

    Starting the bidding process over again would delay progress toward exascale – and it might even have been the death knell for exascale by 2021, but Intel and Cray now have a giant performance leap to make and three years to do it. There is an open question on the processor front as the retooled Aurora will not be powered by Phi/Knights Hill as originally proposed.

    These events raise the question of whether the IBM-led effort (IBM/Nvidia/Mellanox) now looks very good by comparison. The other CORAL thrusts — Summit at Oak Ridge and Sierra at Lawrence Livermore — are on track, with Summit several weeks ahead of Sierra, although it is looking like neither will make the cut-off for entry onto the November Top500 list as many had speculated.

    ORNL IBM Summit supercomputer depiction

    LLNL IBM Sierra supercomputer

    We reached out to representatives from Cray, Intel and the Exascale Computing Project (ECP) seeking official comment on the revised Aurora contract. Cray and Intel declined to comment and we did not hear back from ECP by press time. We will update the story as we learn more.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 8:17 am on October 7, 2017 Permalink | Reply
    Tags: ANL-ALCF, , , Leaning into the supercomputing learning curve,   

    From ALCF: “Leaning into the supercomputing learning curve” 

    Argonne Lab
    News from Argonne National Laboratory

    ALCF

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Recently, 70 scientists — graduate students, computational scientists, and postdoctoral and early-career researchers — attended the fifth annual Argonne Training Program on Extreme-Scale Computing (ATPESC) in St. Charles, Illinois. Over two weeks, they learned how to seize opportunities offered by the world’s fastest supercomputers. Credit: Image by Argonne National Laboratory

    October 6, 2017
    Andrea Manning

    What would you do with a supercomputer that is at least 50 times faster than today’s fastest machines? For scientists and engineers, the emerging age of exascale computing opens a universe of possibilities to simulate experiments and analyze reams of data — potentially enabling, for example, models of atomic structures that lead to cures for disease.

    But first, scientists need to learn how to seize this opportunity, which is the mission of the Argonne Training Program on Extreme-Scale Computing (ATPESC). The training is part of the Exascale Computing Project, a collaborative effort of the U.S. Department of Energy’s (DOE) Office of Science and its National Nuclear Security Administration.

    Starting in late July, 70 participants — graduate students, computational scientists, and postdoctoral and early-career researchers — gathered at the Q Center in St. Charles, Illinois, for the program’s fifth annual training session. This two-week course is designed to teach scientists key skills and tools and the most effective ways to use leading-edge supercomputers to further their research aims.

    This year’s ATPESC agenda once again was packed with technical lectures, hands-on exercises and dinner talks.

    “Supercomputers are extremely powerful research tools for a wide range of science domains,” said ATPESC program director Marta García, a computational scientist at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility at the department’s Argonne National Laboratory.

    “But using them efficiently requires a unique skill set. With ATPESC, we aim to touch on all of the key skills and approaches a researcher needs to take advantage of the world’s most powerful computing systems.”

    To address all angles of high-performance computing, the training focuses on programming methodologies that are effective across a variety of supercomputers — and that are expected to apply to exascale systems. Renowned scientists, high-performance computing experts and other leaders in the field served as lecturers and guided the hands-on sessions.

    This year, experts covered:

    Hardware architectures
    Programming models and languages
    Data-intensive computing, input/output (I/O) and machine learning
    Numerical algorithms and software for extreme-scale science
    Performance tools and debuggers
    Software productivity
    Visualization and data analysis

    In addition, attendees tapped hundreds of thousands of cores of computing power on some of today’s most powerful supercomputing resources, including the ALCF’s Mira, Cetus, Vesta, Cooley and Theta systems; the Oak Ridge Leadership Computing Facility’s Titan system; and the National Energy Research Scientific Computing Center’s Cori and Edison systems – all DOE Office of Science User Facilities.

    “I was looking at how best to optimize what I’m currently using on these new architectures and also figure out where things are going,” said Justin Walker, a Ph.D. student in the University of Wisconsin-Madison’s Physics Department. “ATPESC delivers on instructing us on a lot of things.”

    Shikhar Kumar, Ph.D. candidate in nuclear science and engineering at the Massachusetts Institute of Technology, elaborates: “On the issue of I/O, data processing, data visualization and performance tools, there isn’t a single option that is regarded as the ‘industry standard.’ Instead, we learned about many of the alternatives, which encourages learning high-performance computing from the ground up.”

    “You can’t get this material out of a textbook,” said Eric Nielsen, a research scientist at NASA’s Langley Research Center. Added Johann Dahm of IBM Research, “I haven’t had this material presented to me in this sort of way ever.”

    Jonathan Hoy, a Ph.D. student at the University of Southern California, pointed to the larger, “ripple effect” role of this type of gathering: “It is good to have all these people sit down together. In a way, we’re setting standards here.”

    Lisa Goodenough, a postdoctoral researcher in high energy physics at Argonne, said: “The theme has been about barriers coming down.” Goodenough referred to both barriers to entry and training barriers hindering scientists from realizing scientific objectives.

    “The program was of huge benefit for my postdoctoral researcher,” said Roseanna Zia, assistant professor of chemical engineering at Stanford University. “Without the financial assistance, it would have been out of my reach,” she said, highlighting the covered tuition fees, domestic airfare, meals and lodging.

    Now anyone can learn from the program’s broad curriculum online, underscoring the organizers’ efforts to extend its reach beyond the classroom. The slides and videos of the lectures captured at ATPESC 2017, delivered by some of the world’s foremost experts in extreme-scale computing, are available at http://extremecomputingtraining.anl.gov/2017-slides and http://extremecomputingtraining.anl.gov/2017-videos, respectively.

    For more information on ATPESC, including on applying for selection to attend next year’s program, visit http://extremecomputingtraining.anl.gov.

    The Exascale Computing Project is a collaborative effort of two DOE organizations — the Office of Science and the National Nuclear Security Administration. As part of President Obama’s National Strategic Computing Initiative, ECP was established to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development, to meet the scientific and national security mission needs of DOE in the mid-2020s timeframe.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 2:39 pm on July 3, 2017 Permalink | Reply
    Tags: ANL-ALCF, , Argonne's Theta supercomputer goes online, ,   

    From ALCF: “Argonne’s Theta supercomputer goes online” 

    Argonne Lab
    News from Argonne National Laboratory

    ALCF

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    July 3, 2017
    Laura Wolf

    Theta, a new production supercomputer located at the U.S. Department of Energy’s Argonne National Laboratory, is officially open to the research community. The new machine’s massively parallel, many-core architecture continues Argonne’s leadership computing program toward its future Aurora system.

    Theta was built onsite at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility, where it will operate alongside Mira, an IBM Blue Gene/Q supercomputer. Both machines are fully dedicated to supporting a wide range of scientific and engineering research campaigns. Theta, an Intel-Cray system, entered production on July 1.

    The new supercomputer will immediately begin supporting several 2017-2018 DOE Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC) projects. The ALCC is a major allocation program that supports scientists from industry, academia, and national laboratories working on advancements in targeted DOE mission areas. Theta will also support projects from the ALCF Data Science Program, ALCF’s discretionary award program, and, eventually, the DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program—the major means by which the scientific community gains access to the DOE’s fastest supercomputers dedicated to open science.

    Designed in collaboration with Intel and Cray, Theta is a 9.65-petaflops system based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta will enable researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.
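    (A quick unit check, mine rather than the article’s: 9.65 petaflops is 9.65 × 10^15 floating-point operations per second, which rounds to the “nearly 10 quadrillion calculations per second” quoted above, since one quadrillion is 10^15.)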

    “Theta’s unique architectural features represent a new and exciting era in simulation science capabilities,” said ALCF Director of Science Katherine Riley. “These same capabilities will also support data-driven and machine-learning problems, which are increasingly becoming significant drivers of large-scale scientific computing.”

    Now that Theta is available as a production resource, researchers can apply for computing time through the facility’s various allocation programs. Although the INCITE and ALCC calls for proposals recently closed, researchers can apply for Director’s Discretionary awards at any time.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     