Tagged: ASCR Discovery

  • richardmitnick 3:13 pm on November 20, 2019 Permalink | Reply
    Tags: ASCR Discovery

    From ASCR Discovery: “Tracking tungsten” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing


    November 2019

    Supercomputer simulations provide a snapshot of how plasma reacts with – and can damage – components in large fusion reactors.

    A cross-section view of plasma (hotter yellow to cooler blues and purples) as it interacts with the tungsten surface of a tokamak fusion reactor divertor (gray walls in lower half of image), which funnels away gases and impurities. Tungsten atoms can sputter, migrate and redeposit (red squiggles), and smaller ions of helium, deuterium and tritium (red circles) can implant. Some of these interactions are beneficial, but other effects can degrade the tungsten surface and deplete and even quench the fusion reaction over time. Image courtesy of Tim Younkin, University of Tennessee.

    Nuclear fusion offers the tantalizing possibility of clean, sustainable power – if tremendous scientific and engineering challenges are overcome. One key issue: Nuclear engineers must understand how extreme temperatures, particle speeds and magnetic field variations will affect the plasma – the superheated gas where fusion happens – and the reactor materials designed to contain it. Predicting these plasma-material interactions is critical for understanding the function and safety of these machines.

    Brian Wirth of the University of Tennessee and the Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) is working with colleagues on one piece of this complex challenge: simulating tungsten, the metal that armors a key reactor component in ITER, the world’s largest tokamak fusion reactor, based in France.

    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    ITER is expected to begin first plasma experiments in 2025 with the hope of producing 10 times more power than is required to heat it. Wirth’s team is part of DOE’s Scientific Discovery through Advanced Computing (SciDAC) program and has collaborated with Advanced Tokamak Modeling (AToM), another SciDAC project, to develop computer codes that model the full range of plasma physics and material reactions inside a tokamak.

    “There’s no place today in a laboratory that can provide a similar environment to what we’re expecting on ITER,” Wirth says. “SciDAC and the high-performance computing (HPC) environment really give us an opportunity to simulate in advance how we expect the materials to perform, how we expect the plasma to perform, how we expect them to interact and talk to each other.” Modeling these features will help scientists learn about the effects of particular conditions and how long components might last. Such insights could support better design choices for fusion reactors.

    A tokamak’s doughnut-shaped reaction chamber confines rapidly moving, extremely hot, gaseous hydrogen ions – deuterium and tritium – and electrons within a strong magnetic field as a plasma, the fourth state of matter. The ions collide and fuse, spitting out alpha particles (two neutrons and two protons bound together) and neutrons. The particles release their kinetic energy as heat, which can boil water to produce steam that spins electricity-generating turbines. Today’s tokamaks don’t employ temperatures and magnetic fields high enough to produce self-sustaining fusion, but ITER could approach those benchmarks over the next decades, working toward producing 500 MW of fusion power from 50 MW of input heat.
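    That 500 MW-from-50 MW goal corresponds to a fusion gain of Q = 10. A back-of-envelope sketch of the bookkeeping, using textbook deuterium-tritium numbers rather than anything from the article’s simulations:

    ```python
    # Toy fusion-gain arithmetic (textbook D-T values, illustrative only).
    E_FUSION_MEV = 17.6   # energy released per D-T fusion event
    E_ALPHA_MEV = 3.5     # carried by the alpha particle, which heats the plasma
    E_NEUTRON_MEV = 14.1  # carried by the neutron, which escapes to the walls

    p_heat_mw = 50.0      # external heating power
    p_fusion_mw = 500.0   # ITER's target fusion power

    print(f"fusion gain Q = {p_fusion_mw / p_heat_mw:.0f}")
    print(f"alpha (self-heating) share = {E_ALPHA_MEV / E_FUSION_MEV:.0%}")
    print(f"neutron (wall-load) share  = {E_NEUTRON_MEV / E_FUSION_MEV:.0%}")
    ```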

    Fusion plasmas must reach core temperatures up to hundreds of millions of degrees, and tokamak components could routinely experience temperatures approaching a thousand degrees – extreme conditions across a large range. Wirth’s group focuses on a component called the divertor, comprising 54 cassette assemblies that ring the doughnut’s base to funnel away waste gas and impurities. Each assembly includes a tungsten-armored plate supported by stainless steel. The divertor faces intensive plasma interactions: as the deuterium and tritium ions fuse, fast-moving neutrons, alpha particles and debris fall to the bottom of the reaction vessel and strike the divertor surface. Though the divertor is only one part of the larger system, interactions between its metal and the reactive plasma have important implications for sustaining a fusion reaction and for the durability of the divertor materials.

    Until recently, carbon fiber composites protected divertors and other plasma-facing tokamak components, but such surfaces can react with tritium and retain it, a process that limits recycling – the return of tritium to the plasma to continue the fusion reaction. Tungsten, with a melting point of more than 3,400 degrees, is expected to be more resilient. However, as plasma interacts with it, the ions can implant in the metal, forming bubbles or even diffusing hundreds of nanometers below the surface. Wirth and his colleagues are looking at how that process degrades the tungsten and quantifying the extent to which these interactions deplete tritium from the plasma. Both of these issues affect the rate of fusion reactions over time and can even entirely shut down, or quench, the fusion plasma.

    Exploring these questions requires integrating approaches at different time and length scales. The researchers use other SciDAC project codes to model the fundamental characteristics of the background plasma at steady state and how that energetic soup will interact with the divertor surface. Those results feed into hPIC and F-TRIDYN, codes developed by Davide Curreli at the University of Illinois at Urbana-Champaign that describe the angles and energies of ions and alpha particles as they strike the tungsten surface. Building on those results, Wirth’s team can apply its own codes to characterize plasma particles as they interact with the tungsten and affect its surface.

    Developing these codes required combining top-down and bottom-up design approaches. To understand tungsten and its interaction with the helium ions (alpha particles) the fusion reaction produces, Wirth’s team has used molecular dynamics (MD) techniques. The simulations examined 20 million atoms, a relatively modest number compared with the largest calculations, which approach 100 times that size, he notes. But they follow the materials for longer times – approximately 1.5 microseconds, roughly 1,500 times longer than most MD simulations. Those longer spans provide physics benchmarks for the top-down approach the team developed to simulate the interactions of tungsten and plasma particles with cluster dynamics in a code called Xolotl, named after the Aztec god of lightning and death. As part of this work, University of Tennessee graduate student Tim Younkin also has developed GITR (Global Impurity Transport, pronounced “guitar”). “With GITR we simulate all the species that are eroded off the surface, where do they ionize, what are their orbits following the plasma physics and dynamics of the electromagnetism, where do they redeposit,” Wirth says.
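    Rate-theory codes like Xolotl evolve concentrations of defect clusters through coupled ordinary differential equations instead of tracking individual atoms. A minimal sketch of that idea – two helium cluster sizes and invented rate coefficients, where the real code handles many more species plus spatial diffusion:

    ```python
    # Minimal cluster-dynamics sketch: helium monomers (c1) implant into
    # tungsten and pair into dimers (c2). Rates are invented placeholders.
    from scipy.integrate import solve_ivp

    G = 1.0e-3    # He implantation rate (per lattice site per second, assumed)
    K11 = 1.0e-2  # monomer + monomer clustering coefficient (assumed)

    def rates(t, c):
        c1, c2 = c
        return [G - 2.0 * K11 * c1 * c1,  # implantation minus pairing losses
                K11 * c1 * c1]            # dimer formation

    sol = solve_ivp(rates, (0.0, 1.0e4), [0.0, 0.0], rtol=1e-8)
    print(f"monomers: {sol.y[0, -1]:.3e}, dimers: {sol.y[1, -1]:.3e} per site")
    ```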

    The combination of codes has simulated several divertor operational scenarios on ITER, including a 100-second-long discharge of deuterium and tritium plasma designed to generate 100 MW of fusion power, about 20 percent of what researchers ultimately plan to achieve on ITER. Overall the team found that the plasma causes tungsten to erode and redeposit. Helium particles tend to erode tungsten – a potential problem, Wirth says – though sometimes they also seem to block tritium from embedding deep within the tungsten, which could be beneficial overall because it would improve recycling.

    Although these simulations are contributing important insights, they are just the first steps toward understanding realistic conditions within ITER. These initial models simulate plasma with steady heat and ion-particle fluxes, but conditions in an operating tokamak constantly change, Wirth notes, and could affect overall material performance. His group plans to incorporate those changes in future simulations.

    The researchers also want to model beryllium, an element used to armor the main fusion chamber walls. Beryllium will also be eroded, transported and deposited into divertors, possibly altering the tungsten surface’s behavior.

    The researchers must validate all of these results with experiments, some of which must await ITER’s operation. Wirth and his team also collaborate with the smaller WEST tokamak in France on experiments to validate their coupled SciDAC plasma-surface interaction codes.

    Ultimately Wirth hopes these integrated codes will provide HPC tools that can truly predict physical response in these extreme systems. With that validation, he says, “we can think about using them to design better-functioning material components for even more aggressive operating conditions that could enable fusion to put energy on the grid.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCRDiscovery is a publication of The U.S. Department of Energy

  • richardmitnick 6:15 pm on May 22, 2019 Permalink | Reply
    Tags: ASCR Discovery

    From ASCR Discovery: “Lessons machine-learned” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing


    May 2019

    The University of Arizona’s Joshua Levine is using his Department of Energy Early Career Research Program award to combine machine learning and topology data-analysis tools to better understand trends within climate simulations. These map pairs represent data from January 1950 (top) and January 2010. The left panels depict near-surface air temperatures from hot (red) to cool (blue). In the multicolored images, Levine has used topological, or shape-based, data analysis to organize and color-code the temperature data into a tree-like hierarchy. As time passes, the data behavior around the North Pole (right panels) breaks into smaller chunks. These changes highlight the need for machine-learning tools to understand how these structures evolve over time. Images courtesy of Joshua Levine, University of Arizona, with data from CMIP6/ESGF.

    Quantifying the risks that buried nuclear waste poses to soil and water near the Department of Energy’s (DOE’s) Hanford site in Washington state is not easy. Researchers can’t measure the earth’s permeability, a key factor in how far chemicals might travel, and mathematical models of how substances move underground are incomplete, says Paris Perdikaris of the University of Pennsylvania.

    But where traditional experimental and computational tools fall short, artificial intelligence algorithms can help, building their own inferences based on patterns in the data. “We can’t directly measure the quantities we’re interested in,” he says. “But using this underlying mathematical structure, we can construct machine-learning algorithms that can predict what we care about.”

    Perdikaris’ project is one of several sponsored by the DOE Early Career Research Program that apply machine-learning methods. One piece of his challenge is combining disparate data types such as images, simulations and time-resolved sensor information to find patterns. He will also constrain these models using physics and math, so the resulting predictions respect the underlying science and don’t make spurious connections based on data artifacts. “The byproduct of this is that you can significantly reduce the amount of data you need to make robust predictions. So you can save a lot in data efficiency terms.”
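    A minimal sketch of that physics-constrained idea, in the widely used physics-informed neural-network style (a generic pattern, assuming PyTorch; not Perdikaris’ actual project code): the network is penalized wherever its output violates a governing equation, here a toy 1-D diffusion law, so far less labeled data is needed.

    ```python
    # Physics-informed training sketch: penalize the PDE residual
    # u_t - kappa * u_xx at random collocation points (toy example).
    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

    def pde_residual(x, t, kappa=0.1):
        x.requires_grad_(True)
        t.requires_grad_(True)
        u = net(torch.cat([x, t], dim=1))
        u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
        return u_t - kappa * u_xx

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(500):
        x, t = torch.rand(64, 1), torch.rand(64, 1)
        loss = (pde_residual(x, t) ** 2).mean()  # physics term; data term omitted
        opt.zero_grad()
        loss.backward()
        opt.step()
    ```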

    Another key obstacle is quantifying the uncertainty within these calculations. Missing aspects of the physical model or physical data can affect the prediction’s quality. Besides studying subsurface transport, such algorithms could also be useful for designing new materials.

    Machine learning is a branch of artificial intelligence whose algorithms already support our smartphone assistants, manage our home devices and curate our movie and music playlists. Many machine-learning algorithms depend on tools known as neural networks, which mimic the human brain’s ability to filter, classify and draw insights from the patterns within data. Machine-learning methods could help scientists interpret a range of information. In some disciplines, experiments generate more data than researchers can hope to analyze on their own. In others, scientists might be looking for insights about their data and observations.

    But industry’s tools alone won’t solve science’s problems. Today’s machine-learning algorithms, though powerful, make inferences researchers can’t verify against established theory. And such algorithms might flag experimental noise as meaningful. But with algorithms designed to handle science’s tenets, machine learning could boost computational efficiency, allow researchers to compare, integrate and improve physical models, and shift the ways that scientists work.

    Much of industrial artificial intelligence work started with distinguishing, say, cats from Corvettes – analyzing millions of digital images in which data are abundant and have regular, pixelated structures. But with science, researchers don’t have the same luxury. Unlike the ubiquitous digital photos and language snippets that have powered image and voice recognition, scientific data can be expensive to generate, such as in molecular research experiments or large-scale simulations, says Argonne National Laboratory’s Prasanna Balaprakash.

    With his early-career award, he’s designing machine-learning methods that incorporate scientific knowledge. “How do we leverage that? How do we bring in the physics, the domain knowledge, so that an algorithm doesn’t need a lot of data to learn?” He’s also focused on adapting machine-learning algorithms to accept a wider range of data types, including graph-like structures used for encoding molecules or large-scale traffic network scenarios.

    Balaprakash also is exploring ways to automate the development of new machine-learning algorithms on supercomputers – a neural network for designing new neural networks. Writing these algorithms requires a lot of trial-and-error work, and a neural network built with one data type often can’t be used on a new data type.

    Although some fields have data bottlenecks, in other situations scientific instruments generate gobs of data – gigabytes, even petabytes, of results that are beyond human capability to review and analyze. Machine learning could help researchers sift this information and glean important insights. For example, experiments on Sandia National Laboratories’ Z machine, which compresses energy to produce X-rays and to study nuclear fusion, spew out data about material properties under these extreme conditions.

    Sandia Z machine

    When superheated, samples studied in the Z machine mix in a complex process that researchers don’t fully understand yet, says Sandia’s Eric Cyr. He’s exploring data-driven algorithms that can divine an initial model of this mixing, giving theoretical physicists a starting point to work from. In addition, combining machine-learning tools with simulation data could help researchers streamline their use of the Z machine, reducing the number of experiments needed to achieve accurate results and minimizing costs.

    To reach that goal, Cyr focuses on scalable machine-learning algorithms, a technology known as layer-parallel methods. Today’s machine-learning algorithms have expanded from a handful of processing layers to hundreds. As researchers spread these layers over multiple graphics processing units (GPUs), the computational efficiency eventually breaks down. Cyr’s algorithms would split the neural-network layers across processors as the algorithm trains on the problem of interest, he says. “That way if you want to double the number of layers, basically make your neural network twice as deep, you can use twice as many processors and do it in the same amount of time.”
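    Genuine layer-parallel training applies multigrid-style solvers across the depth dimension, which is more than a few lines can show, but the partitioning it rests on is easy to picture: the layers are split into stages, one per processor, so depth can grow with the processor count. A toy illustration (stages shown on one machine; in practice each would occupy its own GPU or MPI rank):

    ```python
    # Partition a deep network into pipeline stages, one stage per worker.
    import torch

    n_workers, layers_per_worker = 4, 8
    stages = [
        torch.nn.Sequential(
            *[torch.nn.Linear(16, 16) for _ in range(layers_per_worker)])
        for _ in range(n_workers)  # each stage would live on its own device
    ]

    x = torch.randn(32, 16)
    for stage in stages:  # activations flow through the stage pipeline
        x = stage(x)
    print(x.shape)  # torch.Size([32, 16])
    ```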

    With problems such as climate and weather modeling, researchers struggle to incorporate the vast range of scales, from globe-circling currents to local eddies. To tackle this problem, Oklahoma State University’s Omer San will apply machine learning to study turbulence in these types of geophysical flows. Researchers must construct a computational grid to run these simulations, but they have to define the scale of the mesh, perhaps 100 kilometers across, to encompass the globe and produce a calculation of manageable size. At that scale, it’s impossible to simulate a range of smaller factors, such as vortices just a few meters wide that can produce important, outsized effects across the whole system because of nonlinear interactions. Machine learning could provide a way to add back in some of these fine details, San says, like software that sharpens a blurry photo.
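    The gap a closure must fill shows up even in one dimension: averaging a field onto a coarse grid keeps the large eddies but leaves behind a subgrid term the simulation can no longer compute directly. An illustrative toy, not San’s actual setup:

    ```python
    # Coarse-graining a 1-D "flow": the cell-averaged field loses the
    # fast "vortices", whose energy a closure model must supply.
    import numpy as np

    x = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
    fine = np.sin(x) + 0.1 * np.sin(64.0 * x)   # large eddy + small vortices
    coarse = fine.reshape(64, 16).mean(axis=1)  # 64-cell coarse grid

    # Subgrid energy per cell: mean of fine^2 minus square of the mean.
    subgrid = (fine ** 2).reshape(64, 16).mean(axis=1) - coarse ** 2
    print(f"mean subgrid energy: {subgrid.mean():.4f}")  # nonzero => missing physics
    ```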

    Machine learning also could help guide researchers as they choose from the available closure models, or ways to model smaller-scale features, as they examine various flow types. It could be a decision-support system, San says, using local data to determine whether Model A or Model B is a better choice. His group also is examining ways to connect existing numerical methods within neural networks, to allow those techniques to partially inform the systems during the learning process, rather than doing blind analysis. San wants “to connect all of these dots: physics, numerics and the learning framework.”

    Machine learning also promises to help researchers extend the use of mathematical strategies that already support data analysis. At the University of Arizona, Joshua Levine is combining machine learning with topological data-analysis tools.

    These strategies capture data’s shape, which can be useful for visualizing and understanding climate patterns, such as surface temperatures over time. Levine wants to extend topology, which helps researchers analyze a single simulation, to multiple climate simulations with different parameters to understand them as a whole.

    As climate scientists use different models, they often struggle to figure out which ones are correct. “More importantly, we don’t always know where they agree and disagree,” Levine says. “It turns out agreement is a little bit more tractable as a problem.” Researchers can do coarse comparisons – calculating the average temperature across the Earth and checking the models to see if those simple numbers agree. But that basic comparison says little about what happened within a simulation.

    Topology can help match those average values with their locations, Levine says. “So it’s not just that it was hotter over the last 50 years, but maybe it was much hotter in Africa over the last 50 years than it was in South America.”
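    A one-dimensional toy conveys the flavor of this shape-based bookkeeping: sweep a temperature threshold and count how many connected warm regions survive – the simplest analog of the region hierarchies in the maps above (illustrative only, not Levine’s analysis code):

    ```python
    # Count connected "warm regions" (superlevel-set components) in 1-D.
    import numpy as np

    def count_components(field, threshold):
        mask = field >= threshold
        # a component starts wherever the mask turns on
        return int(mask[0]) + int(np.sum(mask[1:] & ~mask[:-1]))

    temps = np.array([5.0, 9.0, 7.0, 8.0, 3.0, 6.0, 10.0, 4.0, 8.0, 2.0])
    for thr in (4, 6, 8):
        print(f"threshold {thr}: {count_components(temps, thr)} warm region(s)")
    # Raising the threshold fragments one warm region into several chunks,
    # just as the polar temperature structures fragment over time.
    ```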

    All of these projects involve blending machine learning with other disciplines to capitalize on each area’s relative strengths. Computational physics, for example, is built on well-defined principles and mathematical models. Such models provide a good baseline for study, Penn’s Perdikaris says. “But they’re a little bit sterilized and they don’t directly reflect the complexity of the real world.” By contrast, up to now machine learning has only relied on data and observations, he says, throwing away a scientist’s physical knowledge of the world. “Bridging the two approaches will be key in advancing our understanding and enhancing our ability to analyze and predict complex phenomena in the future.”

    Although Argonne’s Balaprakash notes that machine learning has been oversold in some cases, he also believes it will be a transformative research tool, much like the Hubble telescope was for astronomy. “It’s a really promising research area.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCRDiscovery is a publication of The U.S. Department of Energy

  • richardmitnick 1:38 pm on December 5, 2018 Permalink | Reply
    Tags: ASCR Discovery, Astronomical magnetism, Mira (an IBM Blue Gene/Q supercomputer at the Argonne Leadership Computing Facility, a Department of Energy user facility), NASA’s Pleiades supercomputer, Nick Featherstone (University of Colorado Boulder)

    From ASCR Discovery: “Astronomical magnetism” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    Modeling solar and planetary magnetic fields is a big job that requires a big code.

    Convection models of the sun, with increasing amounts of rotation from left to right. Warm flows (red) rise to the surface while others cool (blue). These simulations are the most comprehensive high-resolution models of solar convection so far. See video here.

    Image courtesy of Nick Featherstone, University of Colorado Boulder.

    It’s easy to take the Earth’s magnetic field for granted. It’s always on the job, shielding our life-giving atmosphere from the corrosive effects of unending solar radiation. Its constant presence also gives animals – and us – clues to find our way around.

    This vital force has protected the planet since long before humans evolved, yet its source – a giant generator of heat-radiating, electricity-conducting liquid iron swirling in the core as the planet rotates – still holds mysteries. Understanding the vast and complex turbulent features of Earth’s dynamo – and that of other planets and celestial bodies – has challenged physicists for decades.

    “You can always do the problem you want to, but just a little bit,” says Nick Featherstone, research associate at the University of Colorado Boulder. Thanks to his efforts, however, researchers now have a computer code that lets them come closer than ever to simulating these features in detail across a whole planet or star. The program, known as Rayleigh, is open-source and available to anyone.

    To demonstrate the power of Rayleigh’s algorithms, a research team has simulated the dynamics of the sun, Jupiter and Earth in unprecedented detail. The project has been supported with a Department of Energy Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program allocation of 260 million processor hours on Mira, an IBM Blue Gene/Q supercomputer at the Argonne Leadership Computing Facility, a Department of Energy user facility.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Earth’s liquid metal core produces a complex combination of outward (red) and inward (blue) flows in this dynamo simulation. Image courtesy of Rakesh Yadav, Harvard University.

    This big code stemmed from Featherstone’s research in solar physics. Previously scientists had used computation to model solar features on as many as a few hundred processor cores simultaneously, or in parallel. But Featherstone wanted to tackle larger problems that were intractable using available technology. “I spent a lot of time actually looking at the parallel algorithms that were used in that code and seeing where I could change things,” he says.

    When University of California, Los Angeles geophysicist Jonathan Aurnou saw Featherstone present his work at a conference in 2012, he was immediately impressed. “Nick has built this huge, huge capability,” says Aurnou, who leads the Geodynamo Working Group in the Computational Infrastructure for Geodynamics (CIG) based at the University of California, Davis. Though stars and planets can behave very differently, the dynamo in these bodies can be modeled with adjustments to the same fundamental algorithms.
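    The shared mathematical core is standard magnetohydrodynamics. In particular, the magnetic induction equation (stated here for context from textbook MHD, not quoted from the article) ties the field’s evolution to the motion of the conducting fluid:

    ```latex
    \frac{\partial \mathbf{B}}{\partial t}
      = \nabla \times \left( \mathbf{u} \times \mathbf{B} \right)
      + \eta \, \nabla^{2} \mathbf{B},
    \qquad
    \nabla \cdot \mathbf{B} = 0
    ```

    Here \(\mathbf{u}\) is the fluid velocity, \(\mathbf{B}\) the magnetic field and \(\eta\) the magnetic diffusivity; switching from a star to a planet mainly changes the geometry, boundary conditions and material parameters fed into the same solver.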

    Aurnou soon recruited Featherstone to develop a community code – one researchers could share and improve – based on his earlier algorithms. The team initially performed simulations on up to 10,000 cores of NASA’s Pleiades supercomputer.

    NASA SGI Intel Advanced Supercomputing Center Pleiades Supercomputer

    But the scientists wanted to go bigger. Previous codes are like claw hammers, but “this code – it’s a 30-pound sledge,” Aurnou says. “That changes what you can swing at.”

    In 2014 Aurnou, Featherstone and their colleagues proposed three big INCITE projects focusing on three bodies in our solar system: the sun, a star; Jupiter, a gas giant planet; and Earth, a rocky planet. Mira’s 786,000 processor cores let the team scale up their calculations by a factor of 100, Featherstone says. Adds Aurnou, “You can think of Mira as a place to let codes run wild, a safari park for big codes.”

    The group focused on one problem each year, starting with Featherstone’s specialty: the sun. In its core, hydrogen atoms fuse to form helium, releasing high-energy photons that bounce around a dense core for thousands of years. They eventually diffuse to an outer convecting layer, where they warm plasma pockets, causing them to rise to the surface. Finally, the energy reaches the surface, the photosphere, where it can escape, reaching Earth as light within minutes. Like planets, the sun rotates, producing chaotic forces and its own magnetic poles that reverse every 11 years. The processes that cause this magnetic reversal remain largely unknown.

    Featherstone broke down this complex mixture of activity into components across the whole star. “What I’ve been able to do with the INCITE program is to start modeling convection in the sun both with and without rotation turned on and at very, very high resolution,” Featherstone says. The researchers plan to incorporate magnetism into the models next.

    The team then moved on to Jupiter, aiming to predict and model the results of NASA’s Juno probe, which orbits that planet. In Jupiter’s core – the innermost 95 percent – hydrogen is compressed so tightly that the electrons pop off. The mass behaves like a metal ball, Aurnou says. Its core also releases heat in an amount equal to what the planet receives from the sun. All that convective turbulence also rotates, creating a potent planetary magnetic field, he says.

    Until recent results from Juno, scientists didn’t know that surface jets on Jupiter extend deep – thousands of kilometers – into the planet. Juno’s images reveal clusters of geometric turbulence – pentagons, octagons and more – grouped around the Jovian poles.

    A model of interacting vortices simulating turbulent jets that resemble those observed on Jupiter. Yellow features are rotating counterclockwise, while blue features rotate clockwise. Image courtesy of Moritz Heimpel, University of Alberta.

    Even before the Juno results were published in March, the CIG team had simulated deep jets and their interactions with Jupiter’s surface and magnetic core. The team is well-poised to help physicists better understand these unusual stormy features, Aurnou adds. “We’re going to be using our big simulations and the analysis that we’re now carrying out to try to understand the Juno data.”

    In its third year the team modeled the behavior of Earth’s magnetic field, a system where they had far more data from observations. Nonetheless, our home still harbors geophysical puzzles. Earth has an outer core of molten iron and a hard rocky crust that contains it. The magnetic poles drift – and can even flip – but the process takes a few hundred thousand years and doesn’t occur on a regular schedule. “Earth’s magnetic field is complex – messy – both in time and space,” Aurnou says. “That mess is where all the fun is.”

    Turbulence is difficult to simulate because it includes the cumulative effects of minuscule changes coupled with processes that are occurring over large parts of a planet.

    “[In our Earth model] we’ve made, in a sense, as messy a dynamo simulation as possible,” Aurnou says. Previous researchers modeling Earth have argued that tweaks to physics were needed to explain features such as the constant magnetic-pole shifts. “We’ve actually found with our Mira runs, that, no, we don’t need any extra ingredients. We just need turbulence.”

    With these results, the team hopes to pare down simulations to incorporate the simplest set of inputs needed to understand our complex terrestrial system.

    The INCITE project results are fueling new research opportunities already. Based on the team’s solar findings, in 2017 Featherstone received a $1 million grant from NASA’s Heliophysics Grand Challenge program, which supports research into solar physics problems that require both theory and observation.

    The project shows how federal funding can dovetail to help important science reach its potential, Aurnou says. CIG originally hired Featherstone using National Science Foundation funds, which led to the INCITE grant, followed by this NASA project, which will model even more of the sun’s fundamental physics. That information could help protect astronauts from solar radiation and shield our electrical grids from damage and outages during periods of high solar activity.

    Eventually the team would like to model the reversal of magnetic poles on Earth, which requires accounting for daily rotation over hundreds of thousands of years. “That’s going to cost us,” Aurnou says. “We need to get a more efficient code for that and faster computers.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCRDiscovery is a publication of The U.S. Department of Energy

  • richardmitnick 5:58 pm on October 17, 2018 Permalink | Reply
    Tags: ASCR Discovery, Quantum predictions

    From ASCR Discovery: “Quantum predictions” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    Mechanical strain, pressure or temperature changes or adding chemical doping agents can prompt an abrupt switch from insulator to conductor in materials such as nickel oxide (pictured here). Nickel ions (blue) and oxygen ions (red) surround a dopant ion of potassium (yellow). Quantum Monte Carlo methods can accurately predict regions where charge density (purple) will accumulate in these materials. Image courtesy of Anouar Benali, Argonne National Laboratory.

    Solving a complex problem quickly requires careful tradeoffs – and simulating the behavior of materials is no exception. To get answers that predict molecular workings feasibly, scientists must swap in mathematical approximations that speed computation at accuracy’s expense.

    But magnetism, electrical conductivity and other properties can be quite delicate, says Paul R.C. Kent of the Department of Energy’s (DOE’s) Oak Ridge National Laboratory. These properties depend on quantum mechanics: the movements and interactions of the myriad electrons and atoms that form materials and determine their behavior. Researchers who study such features must model large groups of atoms and molecules rather than just a few. This problem’s complexity demands boosting computational tools’ efficiency and accuracy.

    That’s where a method called quantum Monte Carlo (QMC) modeling comes in. Many other techniques approximate electrons’ behavior as an overall average, for example, rather than considering them individually. QMC enables accounting for the individual behavior of all of the electrons without major approximations, reducing systematic errors in simulations and producing reliable results, Kent says.
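    The flavor of the method comes through in a textbook toy: variational Monte Carlo for a single hydrogen atom, sampling electron positions from a trial wavefunction and averaging the local energy. This classroom example stands in for, and is vastly simpler than, production QMC codes:

    ```python
    # Textbook variational Monte Carlo for hydrogen (atomic units).
    import numpy as np

    rng = np.random.default_rng(0)

    def local_energy(r, alpha):
        # For psi = exp(-alpha*r): E_L = -alpha^2/2 + (alpha - 1)/r
        return -0.5 * alpha ** 2 + (alpha - 1.0) / r

    def vmc_energy(alpha, n_steps=100_000, step=0.6):
        pos = np.array([1.0, 0.0, 0.0])
        samples = []
        for _ in range(n_steps):
            trial = pos + rng.uniform(-step, step, 3)
            # Metropolis test on |psi|^2 = exp(-2*alpha*r)
            ratio = np.exp(-2.0 * alpha *
                           (np.linalg.norm(trial) - np.linalg.norm(pos)))
            if rng.random() < ratio:
                pos = trial
            samples.append(local_energy(np.linalg.norm(pos), alpha))
        return np.mean(samples[n_steps // 10:])  # discard equilibration

    for alpha in (0.8, 1.0):
        print(f"E(alpha={alpha}) ~ {vmc_energy(alpha):+.3f} hartree")
    # alpha = 1.0 recovers the exact ground-state energy, -0.5 hartree.
    ```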

    Kent’s interest in QMC dates back to his Ph.D. research at Cambridge University in the 1990s. At ORNL, he recently returned to the method because advances in both supercomputer hardware and in algorithms had allowed researchers to improve its accuracy.

    “We can do new materials and a wider fraction of elements across the periodic table,” Kent says. “More importantly, we can start to do some of the materials and properties where the more approximate methods that we use day to day are just unreliable.”

    Even with these advances, simulations of these types of materials – ones that include up to a few hundred atoms and thousands of electrons – require computational heavy lifting. Kent leads a DOE Basic Energy Sciences center, the Center for Predictive Simulations of Functional Materials (CPSFM), which includes researchers from ORNL, Argonne National Laboratory, Sandia National Laboratories, Lawrence Livermore National Laboratory, the University of California, Berkeley and North Carolina State University.

    Their work is supported by a DOE Innovative and Novel Computational Impact on Theory and Experiments (INCITE) allocation of 140 million processor hours, split between Oak Ridge Leadership Computing Facility’s Titan and Argonne Leadership Computing Facility’s Mira supercomputers. Both computing centers are DOE Office of Science user facilities.

    ORNL Cray Titan XK7 Supercomputer

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    To take QMC to the next level, Kent and colleagues start with materials such as vanadium dioxide that display unusual electronic behavior. At cooler temperatures, this material insulates against the flow of electricity. But at just above room temperature, vanadium dioxide abruptly changes its structure and behavior.

    Suddenly this material becomes metallic and conducts electricity efficiently. Scientists still don’t understand exactly how and why this occurs. Factors such as mechanical strain, pressure or doping the materials with other elements also induce this rapid transition from insulator to conductor.

    However, if scientists and engineers could control this behavior, these materials could be used as switches, sensors or, possibly, the basis for new electronic devices. “This big change in conductivity of a material is the type of thing we’d like to be able to predict reliably,” Kent says.

    Laboratory researchers also are studying these insulator-to-conductor transitions in experiments. That validation effort lends confidence to the predictive power of the team’s computational methods in a range of materials. The team has built open-source software, known as QMCPACK, that is now available online and on all of the DOE Office of Science computational facilities.

    Kent and his colleagues hope to build up to high-temperature superconductors and other complex and mysterious materials. Although scientists know these materials’ broad properties, Kent says, “we can’t relate those to the actual structure and the elements in the materials yet. So that’s a really grand challenge for the condensed-matter physics field.”

    The most accurate quantum mechanical modeling methods restrict scientists to examining just a few atoms or molecules. When scientists want to study larger systems, the computation costs rapidly become unwieldy. QMC offers a compromise: a calculation’s size increases cubically relative to the number of electrons, a more manageable challenge. QMC incorporates only a few controlled approximations and can be applied to the numerous atoms and electrons needed. It’s well suited for today’s petascale supercomputers – capable of one quadrillion calculations or more each second – and tomorrow’s exascale supercomputers, which will be at least a thousand times faster. The method maps simulation elements relatively easily onto the compute nodes in these systems.
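    That cubic scaling is what keeps the cost merely heavy rather than hopeless: doubling the electron count multiplies the work by eight instead of blowing up exponentially. A quick illustration (prefactors ignored):

    ```python
    # Relative cost of a method whose work grows as N^3 in electron count N.
    for n in (100, 200, 400, 800):
        print(f"{n:4d} electrons -> {(n / 100) ** 3:5.0f}x the baseline cost")
    ```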

    The CPSFM team continues to optimize QMCPACK for ever-faster supercomputers, including OLCF’s Summit, which will be fully operational in January 2019.

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    The higher memory capacity on that machine’s Nvidia Volta GPUs – 16 gigabytes per graphics processing unit compared with 6 gigabytes on Titan – already boosts computation speed. With the help of OLCF’s Ed D’Azevedo and Andreas Tillack, the researchers have implemented improved algorithms that can double the speed of their larger calculations.

    QMCPACK is part of DOE’s Exascale Computing Project, and the team is already anticipating additional scaling challenges for running QMCPACK on future machines. To perform the desired simulations within roughly 12 hours on an exascale supercomputer, Kent estimates that they’ll need algorithms that are 30 times more scalable than those within the current version.

    Depiction of ANL ALCF Cray Shasta Aurora exascale supercomputer

    Even with improved hardware and algorithms, QMC calculations will always be expensive. So Kent and his team would like to use QMCPACK to understand where cheaper methods go wrong so that they can improve them. Then they can save QMC calculations for the most challenging problems in materials science, Kent says. “Ideally we will learn what’s causing these materials to be very tricky to model and then improve cheaper approaches so that we can do much wider scans of different materials.”

    The combination of improved QMC methods and a suite of computationally cheaper modeling approaches could lead the way to new materials and an understanding of their properties. Designing and testing new compounds in the laboratory is expensive, Kent says. Scientists could save valuable time and resources if they could first predict the behavior of novel materials in a simulation.

    Plus, he notes, reliable computational methods could help scientists understand properties and processes that depend on individual atoms that are extremely difficult to observe using experiments. “That’s a place where there’s a lot of interest in going after the fundamental science, predicting new materials and enabling technological applications.”

    Oak Ridge National Laboratory is supported by the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCRDiscovery is a publication of The U.S. Department of Energy

  • richardmitnick 11:57 am on August 22, 2018 Permalink | Reply
    Tags: ASCR Discovery, Fine-tuning physics

    From ASCR Discovery and Argonne National Lab: “Fine-tuning physics” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    August 2018

    Argonne applies supercomputing heft to boost precision in particle predictions.

    A depiction of a scattering event at the Large Hadron Collider. Image courtesy of Argonne National Laboratory.

    Advancing science at the smallest scales calls for vast data from the world’s most powerful particle accelerator, leavened with the precise theoretical predictions made possible through many hours of supercomputer processing.

    The combination has worked before, when scientists from the Department of Energy’s Argonne National Laboratory provided timely predictions about the Higgs particle at the Large Hadron Collider in Switzerland. Their predictions contributed to the 2012 discovery of the Higgs, the subatomic particle that gives mass to all elementary particles.

    CERN CMS Higgs Event

    CERN ATLAS Higgs Event


    CERN map

    CERN LHC Tunnel

    CERN LHC particles

    “That we are able to predict so precisely what happens around us in nature is a remarkable achievement,” Argonne physicist Radja Boughezal says. “To put all these pieces together to get a number that agrees with the measurement that was made with something so complicated as the LHC is always exciting.”

    Earlier this year, she was allocated more than 98 million processor hours on the Mira and Theta supercomputers at the Argonne Leadership Computing Facility, a DOE Office of Science user facility, through DOE’s INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ANL ALCF Theta Cray XC40 supercomputer

    Her previous INCITE allocation helped solve problems that scientists saw as insurmountable just two or three years ago.

    These problems stem from the increasingly intricate and precise measurements and theoretical calculations associated with scrutinizing the Higgs boson and from searches for subtle deviations from the standard model that underpins the behavior of matter and energy.

    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column and the Higgs boson in the fifth.

    Standard Model of Particle Physics from Symmetry Magazine

    The approach she and her associates developed led to early, high-precision LHC predictions that describe so-called strong-force interactions between quarks and gluons, the constituents of subatomic particles such as protons and neutrons.

    The theory governing strong-force interactions is called QCD, for quantum chromodynamics. In QCD, the strength of the strong force is quantified by a parameter called the strong coupling constant.

    “At high energies, when collisions happen, quarks and gluons are very close to each other, so the strong force is very weak. It’s almost turned off,” Boughezal explains. Because the coupling is small at high energies, physicists can calculate predictions order by order in powers of it – a yardstick known as perturbative expansion. It is “a method we have used over and over to get these predictions, and it has provided powerful tests of QCD to date.”
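    In this scheme an observable such as a scattering cross section is organized as a series in the strong coupling (standard QCD notation, added here for context):

    ```latex
    \sigma \;=\; \sigma_{0} \;+\; \alpha_{s}\,\sigma_{1} \;+\; \alpha_{s}^{2}\,\sigma_{2} \;+\; \cdots
    ```

    Each added order in \(\alpha_{s}\) shrinks the theoretical uncertainty – and sharply raises the computational cost, which is where supercomputer allocations like Boughezal’s come in.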

    Crucial to these tests is the N-jettiness framework Boughezal and her Argonne and Northwestern University collaborators devised to obtain high-precision predictions for particle scattering processes. Specially adapted for high-performance computing systems, the framework’s novelty stems from its incorporation of existing low-precision numerical codes to achieve part of the desired result. The scientists fill in algorithmic gaps with simple analytic calculations.

    The LHC data lined up completely with predictions the team had obtained from running the N-jettiness code on the Mira supercomputer at Argonne. The agreement carries important implications for the precision goals physicists are setting for future accelerators such as the proposed Electron-Ion Collider (EIC).

    “One of the things that has puzzled us for 30 years is the spin of the proton,” Boughezal says. Planners hope the EIC reveals how the spin of the proton, matter’s basic building block, emerges from its elementary constituents, quarks and gluons.

    Boughezal also is working with LHC scientists in the search for dark matter which, together with dark energy, makes up about 96 percent of the contents of the universe. The remainder is ordinary matter, the atoms and molecules that form stars, planets and people.

    “Scientists believe that the mysterious dark matter in the universe could leave a missing energy footprint at the LHC,” she says. Such a footprint would reveal the existence of a new particle that’s currently missing from the standard model. Dark matter particles interact weakly with the LHC’s detectors. “We cannot see them directly.”

    They could, however, be produced with a jet – a spray of standard-model particles made from LHC proton collisions. “We can measure that jet. We can see it. We can tag it.” And by using simple laws of physics such as the conservation of momentum, even if the particles are invisible, scientists would be able to detect them by measuring the jet’s energy.

    For example, when subatomic particles called Z bosons are produced with particle jets, the bosons can decay into neutrinos, ghostly specks that rarely interact with ordinary matter. The neutrinos appear as missing energy in the LHC’s detectors, just as a dark matter particle would.
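    The momentum bookkeeping behind a “missing energy footprint” is plain vector arithmetic: transverse momenta must balance, so whatever imbalance the detector records is attributed to invisible particles. A toy event with invented numbers:

    ```python
    # Missing transverse momentum (MET) from momentum conservation.
    import numpy as np

    # (px, py) of reconstructed visible objects in one event, in GeV (invented):
    visible = np.array([[120.0, 10.0],   # leading jet
                        [-60.0, -5.0],   # second jet
                        [-20.0, -2.0]])  # softer activity

    met_vec = -visible.sum(axis=0)            # what invisible particles carried
    print(f"MET = {np.hypot(*met_vec):.1f} GeV")  # large MET => invisible recoil
    ```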

    In July 2017, Boughezal and three co-authors published a paper in the Journal of High Energy Physics. It was the first to describe new proton-structure details derived from precision high-energy Z-boson experimental data.

“If you want to know whether what you have produced is actually coming from a standard model process or something else that we have not seen before, you need to predict your standard model process very well,” she says. If the theoretical predictions deviate from the experimental data, it suggests new physics at play.

    In fact, Boughezal and her associates have precisely predicted the standard model jet process and it agrees with the data. “So far we haven’t produced dark matter at the LHC.”

    Previously, however, the results were so imprecise – and the margin of uncertainty so high – that physicists couldn’t tell whether they’d produced a standard-model jet or something entirely new.

    What surprises will higher-precision calculations reveal in future LHC experiments?

    “There is still a lot of territory that we can probe and look for something new,” Boughezal says. “The standard model is not a complete theory because there is a lot it doesn’t explain, like dark matter. We know that there has to be something bigger than the standard model.”

    Argonne is managed by UChicago Argonne LLC for the DOE Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCRDiscovery is a publication of The U.S. Department of Energy

  • richardmitnick 3:07 pm on March 15, 2017 Permalink | Reply
    Tags: ASCR Discovery, Coding a Starkiller

    From OLCF via ASCR and DOE: “Coding a Starkiller” 


    Oak Ridge National Laboratory



    March 2017

    The Titan supercomputer and a tool called Starkiller help Stony Brook University-led team simulate key moments in exploding stars.

    A volume rendering of the density after 0.6 and 0.9 solar mass white dwarfs merge. The image is derived from a calculation performed on the Oak Ridge Leadership Computing Facility’s Titan supercomputer. The model used Castro, an adaptive-mesh astrophysical radiation hydrodynamics simulation code. Image courtesy of Stony Brook University / Max Katz et al.

    The spectacular Supernova 1987A, whose light reached Earth on Feb. 23 of the year it’s named for, captured the public’s fancy. It’s located at the edge of the Milky Way, in a dwarf galaxy called the Large Magellanic Cloud. It had been four centuries since earthlings had witnessed light from a star exploding in our galaxy.


    A supernova’s awesome light show heralds a giant star’s death, and the next supernova’s post-mortem will generate reams of data, compared to the paltry dozen or so neutrinos and X-rays harvested from the 1987 event.

    Astrophysicists Michael Zingale and Bronson Messer aren’t waiting. They’re aggressively anticipating the next supernova by leading teams in high-performance computer simulations of explosive stellar events, including different supernova types and their accompanying X-ray bursts. Zingale, of Stony Brook University, and Messer, of the Department of Energy’s Oak Ridge National Laboratory (ORNL), are in the midst of an award from the DOE Office of Science’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. It provides an allocation of 45 million processor hours of computer time on Titan, a Cray XK7 that’s one of the world’s most powerful supercomputers, at the Oak Ridge Leadership Computing Facility, or OLCF – a DOE Office of Science user facility.

    The simulations run on workhorse codes developed by the INCITE collaborators and at the DOE’s Lawrence Berkeley National Laboratory – codes that “are often modified toward specific problems,” Zingale says. “And the common problem we share with ORNL is that we have to put more and more of our algorithms on the Titan graphics processor units (GPUs),” specialized computer chips that accelerate calculations. While the phenomena they’re modeling “are really far away and on scales that are hard to imagine,” the codes have other applications closer to home: “terrestrial phenomena, like terrestrial combustion.” The team’s codes – Maestro, Castro, Chimera and FLASH – are available to other modelers free through the online code repository GitHub.

    With a previous INCITE award, the researchers realized the possibility of attacking the GPU problem together. They envisioned codes composed of multiphysics modules that compute common pieces of most kinds of explosive activities, Messer says. They dubbed the growing collection of GPU-enabled modules Starkiller.

    “Starkiller ties this INCITE project together,” he says. “We realized we didn’t want to reinvent the wheel with each new simulation.” For example, a module that tracks nuclear burning helps the researchers create larger networks for nucleosynthesis, a supernova process in which elements form in the turbulent flow on the stellar surface.

    “In the past, we were able to do only a little more than a dozen different elements, and now we’re routinely doing 150,” Messer says. “We can make the GPU run so much faster. That’s part of Titan’s advantage to us.”
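    At heart, a nuclear-burning module like Starkiller’s integrates stiff rate equations for isotope abundances. A deliberately tiny stand-in – one reaction instead of 150 isotopes, with an invented rate constant:

    ```python
    # Toy one-reaction "network": carbon burns to ash at a rate ~ X_C^2,
    # mimicking a binary C + C reaction. Illustrative stand-in only.
    from scipy.integrate import solve_ivp

    def burn(t, y, rate=5.0):
        x_carbon, x_ash = y
        dx = -rate * x_carbon ** 2
        return [dx, -dx]

    sol = solve_ivp(burn, (0.0, 10.0), [1.0, 0.0], rtol=1e-8)
    print(f"final carbon mass fraction: {sol.y[0, -1]:.3f}")
    ```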

    Supernova 1987A, a type II supernova, arose from the gravitational collapse of a stellar core, the consistent fate of massive stars. Type Ia supernovae follow from intense thermonuclear activities that eventually drive the explosion of a white dwarf – a star that has used up all its hydrogen. Zingale’s group is focused on type Ia, Messer’s on type II. A type II leaves a remnant star; a type Ia does not.

    Stars like the sun burn hydrogen into helium and, over enormous stretches of time, burn the helium into carbon. Once our sun starts burning carbon, it will gradually peter out, Messer says, because it’s not massive enough to turn the carbon into something heavier.

    “A star begins life as a big ball of hydrogen, and its whole life is this fight between gravity trying to suck it into the middle and thermonuclear reactions keeping it supported against its own gravity,” he adds. “Once it gets to the point where it’s burning some carbon, the sun will just give up. It will blow a big smoke ring into space and become a planetary nebula, and at the center it will become a white dwarf.”

    Zingale is modeling two distinct thermonuclear modes. One is for a white dwarf in a binary system – two stars orbiting one another – that consumes additional material from its partner. As the white dwarf grows in mass, it gets hotter and denser in the center, creating conditions that drive thermonuclear reactions.

    “This star is made mostly of carbon and oxygen,” Zingale says. “When you get up to a few hundred million K, you have densities of a few billion grams per cubic centimeter. Carbon nuclei get fused and make things like neon and sodium and magnesium, and the star gets energy out in that process. We are modeling the star’s convection, the creation of a rippling burning front that converts the carbon and oxygen into heavier elements such as iron and nickel. This creates such an enormous amount of energy that it overcomes the force of gravity that’s holding the star together, and the whole thing blows apart.”

    The other mode is being modeled with former Stony Brook graduate student and INCITE co-principal investigator Max Katz, who wants to understand whether merging stars can create a burning point that leads to a supernova, as some observations suggest. His simulations feature two white dwarfs so close that they emit gravitational radiation, robbing energy from the system and causing the stars to spiral inward. Eventually, they get so close that the more massive one rips the lesser apart via tidal forces.

    Zingale’s group also continues to model the convective burning on neutron stars, known as X-ray bursts, providing a springboard to more in-depth studies. He says they’re the first to simulate them in three dimensions. That work and additional supernova studies were supported by the DOE Office of Science and performed at OLCF and the National Energy Research Scientific Computing Center, a DOE Office of Science user facility at Lawrence Berkeley National Laboratory.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.

    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid-architecture systems – a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to process an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.
