Tagged: ANL-ALCF

  • richardmitnick 12:01 pm on August 5, 2019
    Tags: "Large cosmological simulation to run on Mira", ANL-ALCF

    From Argonne Leadership Computing Facility: “Large cosmological simulation to run on Mira” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    An extremely large cosmological simulation—among the five most extensive ever conducted—is set to run on Mira this fall and exemplifies the scope of problems addressed on the leadership-class supercomputer at the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Argonne physicist and computational scientist Katrin Heitmann leads the project. Heitmann was among the first to leverage Mira’s capabilities when, in 2013, the IBM Blue Gene/Q system went online at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. Among the largest cosmological simulations ever performed at the time, the Outer Rim Simulation she and her colleagues carried out enabled further scientific research for many years.

    For the new effort, Heitmann has been allocated approximately 800 million core-hours to perform a simulation that reflects cutting-edge observational advances from satellites and telescopes and will form the basis for sky maps used by numerous surveys. Evolving a massive number of particles, the simulation is designed to help resolve mysteries of dark energy and dark matter.

    “By transforming this simulation into a synthetic sky that closely mimics observational data at different wavelengths, this work can enable a large number of science projects throughout the research community,” Heitmann said. “But it presents us with a big challenge.” That is, in order to generate synthetic skies across different wavelengths, the team must extract relevant information and perform analysis either on the fly or after the fact in post-processing. Post-processing requires the storage of massive amounts of data—so much, in fact, that merely reading the data becomes extremely computationally expensive.

    Since Mira was launched, Heitmann and her team have implemented in their Hardware/Hybrid Accelerated Cosmology Code (HACC) more sophisticated analysis tools for on-the-fly processing. “Moreover, compared to the Outer Rim Simulation, we’ve effected three major improvements,” she said. “First, our cosmological model has been updated so that we can run a simulation with the best possible observational inputs. Second, as we’re aiming for a full-machine run, volume will be increased, leading to better statistics. Most importantly, we set up several new analysis routines that will allow us to generate synthetic skies for a wide range of surveys, in turn allowing us to study a wide range of science problems.”

    The team’s simulation will address numerous fundamental questions in cosmology and is essential for refining existing predictive tools and developing new models, with impacts on both ongoing and upcoming cosmological surveys, including the Dark Energy Spectroscopic Instrument (DESI), the Large Synoptic Survey Telescope (LSST), SPHEREx, and the “Stage-4” ground-based cosmic microwave background experiment (CMB-S4).

    LBNL/DESI spectroscopic instrument on the Mayall 4-meter telescope at Kitt Peak National Observatory starting in 2018


    NOAO/Mayall 4 m telescope at Kitt Peak, Arizona, USA, Altitude 2,120 m (6,960 ft)

    LSST

    LSST Camera, built at SLAC



    LSST telescope, currently under construction on the El Peñón peak at Cerro Pachón Chile, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.


    LSST Data Journey, Illustration by Sandbox Studio, Chicago with Ana Kova

    NASA’s SPHEREx Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer depiction


    The value of the simulation derives from its tremendous volume (which is necessary to cover substantial portions of survey areas) and from attaining levels of mass and force resolution sufficient to capture the small structures that host faint galaxies.
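
    To make the interplay between volume and mass resolution concrete, here is a minimal back-of-the-envelope sketch in Python. The box size, particle count, and matter density below are illustrative placeholders rather than the parameters of the simulation described in the article; the only physics used is that each tracer particle carries the mean matter mass of its share of the comoving volume.

```python
# Particle mass for an N-body cosmological simulation: a back-of-the-envelope
# sketch. The box size and particle count below are illustrative values, not
# the parameters of the simulation described in the article.

OMEGA_M = 0.27           # matter density parameter (assumed for illustration)
RHO_CRIT = 2.775e11      # critical density in h^2 M_sun / Mpc^3
BOX_SIZE = 3000.0        # comoving box side length in Mpc/h (hypothetical)
N_PER_SIDE = 10240       # particles per dimension (hypothetical)

def particle_mass(box_size, n_per_side, omega_m=OMEGA_M):
    """Mass of one tracer particle in M_sun/h."""
    cell_volume = (box_size / n_per_side) ** 3     # (Mpc/h)^3 per particle
    return omega_m * RHO_CRIT * cell_volume

if __name__ == "__main__":
    m_p = particle_mass(BOX_SIZE, N_PER_SIDE)
    print(f"particle mass ~ {m_p:.2e} M_sun/h for {N_PER_SIDE**3:.2e} particles")
```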

    The volume and resolution pose steep computational requirements, and because they are not easily met, few large-scale cosmological simulations are carried out. Adding to the difficulty is the fact that supercomputer memory capacity has not grown in proportion to processing speed in the years since Mira’s introduction. This makes Mira, despite its relative age, well suited to such a large-scale campaign when harnessed in full.

    “A calculation of this scale is just a glimpse at what the exascale resources in development now will be capable of in 2021/22,” said Katherine Riley, ALCF Director of Science. “The research community will be taking advantage of this work for a very long time.”

    Funding for the simulation is provided by DOE’s High Energy Physics program. Use of ALCF computing resources is supported by DOE’s Advanced Scientific Computing Research program.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 1:49 pm on July 10, 2019
    Tags: ANL-ALCF, Atomic force microscopy, Computational materials science, Coupled cluster theory, DFT-density functional theory, Kelvin probe force microscopy

    From Argonne Leadership Computing Facility: “Predicting material properties with quantum Monte Carlo” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    July 9, 2019
    Nils Heinonen

    For one of their efforts, the team used diffusion Monte Carlo to compute how doping affects the energetics of nickel oxide. Their simulations revealed the spin density difference between bulks of potassium-doped nickel oxide and pure nickel oxide, showing the effects of substituting a potassium atom (center atom) for a nickel atom on the spin density of the bulk. Credit: Anouar Benali, Olle Heinonen, Joseph A. Insley, and Hyeondeok Shin, Argonne National Laboratory.

    Recent advances in quantum Monte Carlo (QMC) methods have the potential to revolutionize computational materials science, a discipline traditionally driven by density functional theory (DFT). While DFT—an approach that uses quantum-mechanical modeling to examine the electronic structure of complex systems—provides convenience to its practitioners and has unquestionably yielded a great many successes throughout the decades since its formulation, it is not without shortcomings, which have placed a ceiling on the possibilities of materials discovery. QMC is poised to break this ceiling.

    The key challenge is to solve the quantum many-body problem accurately and reliably enough for a given material. QMC solves this problem via stochastic sampling—that is, by using random numbers to sample all possible solutions. The use of stochastic methods allows the full many-body problem to be treated while circumventing large approximations. Compared with traditional methods, QMC offers extraordinary potential accuracy, strong suitability for high-performance computing, and—with few known sources of systematic error—transparency. For example, QMC satisfies a mathematical principle that allows it to set an upper bound on a given system’s ground-state energy (the lowest-energy, most stable state).
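
    As an illustration of the stochastic-sampling idea and the variational bound just described, below is a toy variational Monte Carlo calculation for the one-dimensional harmonic oscillator. It is a minimal sketch, not QMCPACK or the team's production method: random samples are drawn from a trial wavefunction, and the averaged local energy always lands at or above the exact ground-state value of 0.5 (in units where hbar = m = omega = 1).

```python
# Toy variational Monte Carlo (VMC) for the 1D harmonic oscillator (hbar = m = omega = 1).
# A minimal illustration of stochastic sampling and the variational upper bound;
# it is not QMCPACK or the method used by the team described in the article.
import numpy as np

def vmc_energy(alpha, n_samples=200_000, seed=0):
    """Estimate <E> for the trial wavefunction psi(x) = exp(-alpha * x**2).

    |psi|^2 is a Gaussian, so we can sample it directly; the local energy is
    E_L(x) = alpha + x**2 * (0.5 - 2 * alpha**2).
    """
    rng = np.random.default_rng(seed)
    sigma = 1.0 / np.sqrt(4.0 * alpha)           # standard deviation of |psi|^2
    x = rng.normal(0.0, sigma, n_samples)        # stochastic samples
    e_local = alpha + x**2 * (0.5 - 2.0 * alpha**2)
    return e_local.mean(), e_local.std(ddof=1) / np.sqrt(n_samples)

if __name__ == "__main__":
    for alpha in (0.3, 0.5, 0.8):
        energy, err = vmc_energy(alpha)
        print(f"alpha={alpha:.1f}: E = {energy:.4f} +/- {err:.4f} (exact ground state: 0.5)")
```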

    QMC’s accurate treatment of quantum mechanics is very computationally demanding, necessitating the use of leadership-class computational resources and thus limiting its application. Access to the computing systems at the Argonne Leadership Computing Facility (ALCF) and the Oak Ridge Leadership Computing Facility (OLCF)—U.S. Department of Energy (DOE) Office of Science User Facilities—has enabled a team of researchers led by Paul Kent of Oak Ridge National Laboratory (ORNL) to meet the steep demands posed by QMC. Supported by DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, the team’s goal is to simulate promising materials that elude DFT’s investigative and predictive powers.

    To conduct their work, the researchers employ QMCPACK, an open-source QMC code developed by the team. It is written specifically for high-performance computers and runs on all the DOE machines. It has been run at the ALCF since 2011.

    Functional materials

    The team’s efforts are focused on studies of materials combining transition metal elements with oxygen. Many of these transition metal oxides are functional materials that have striking and useful properties. Small perturbations in the make-up or structure of these materials can cause them to switch from metallic to insulating, and greatly change their magnetic properties and ability to host and transport other atoms. Such attributes make the materials useful for technological applications while posing fundamental scientific questions about how these properties arise.

    The computational challenge has been to simulate the materials with sufficient accuracy: the materials’ properties are sensitive to small changes due to complex quantum mechanical interactions, which make them very difficult to model.

    The computational performance and large memory of the ALCF’s Theta system have been particularly helpful to the team. Theta’s storage capacity has enabled studies of material changes caused by small perturbations such as additional elements or vacancies. Over three years the team developed a new technique to more efficiently store the quantum mechanical wavefunctions used by QMC, greatly increasing the range of materials that could be run on Theta.

    ANL ALCF Theta Cray XC40 supercomputer

    Experimental validation

    Kent noted that experimental validation is a key component of the INCITE project. “The team is leveraging facilities located at Argonne and Oak Ridge National Laboratories to grow high-quality thin films of transition-metal oxides,” he said, including vanadium oxide (VO2) and variants of nickel oxide (NiO) that have been modified with other compounds.

    For VO2, the team combined atomic force microscopy, Kelvin probe force microscopy, and time-of-flight secondary ion mass spectroscopy on VO2 grown at ORNL’s Center for Nanophase Materials Science (CNMS) to demonstrate how oxygen vacancies suppress the transition from metallic to insulating VO2. A combination of QMC, dynamical mean field theory, and DFT modeling was deployed to identify the mechanism by which this phenomenon occurs: oxygen vacancies leave positively charged holes that are localized around the vacancy site and end up distorting the structure of certain vanadium orbitals.

    For NiO, the challenge was to understand how a small quantity of dopant atoms, in this case potassium, modifies the structure and optical properties. Molecular beam epitaxy at Argonne’s Materials Science Division was used to create high quality films that were then probed via techniques such as x-ray scattering and x-ray absorption spectroscopy at Argonne’s Advanced Photon Source (APS) [below] for direct comparison with computational results. These experimental results were subsequently compared against computational models employing QMC and DFT. The APS and CNMS are DOE Office of Science User Facilities.

    So far the team has been able to compute, understand, and experimentally validate how the band gap of materials containing a single transition metal element varies with composition. Band gaps determine a material’s usefulness as a semiconductor—a substance that can alternately conduct or cease the flow of electricity (which is important for building electronic sensors or devices). The next steps of the study will be to tackle more complex materials, with additional elements and more subtle magnetic properties. While more challenging, these materials could lead to greater discoveries.

    New chemistry applications

    Many of the features that make QMC attractive for materials also make it attractive for chemistry applications. An outside colleague—quantum chemist Kieron Burke of the University of California, Irvine—provided the impetus for a paper published in Journal of Chemical Theory and Computation. Burke approached the team’s collaborators with a problem he had encountered while trying to formulate a new method for DFT. Moving forward with his attempt required benchmarks against which to test his method’s accuracy. As QMC was the only means by which sufficiently precise benchmarks could be obtained, the team produced a series of calculations for him.

    The reputed gold standard among numerical many-body techniques in quantum chemistry is coupled cluster theory. While it is extremely accurate for many molecules, some are so strongly correlated quantum-mechanically that they can be thought of as existing in a superposition of quantum states; the conventional coupled cluster method cannot handle something so complicated. Co-principal investigator Anouar Benali, a computational scientist at the ALCF and Argonne’s Computational Sciences Division, spent some three years collaborating on efforts to expand QMC’s capability to include low-cost, highly efficient support for these states, which will also be needed for future materials problems. Analyzing the system for which Burke needed benchmarks required this superposition support; Burke verified the results of his newly developed DFT approach against the calculations generated with Benali’s expanded QMC. The two were in close agreement with each other, but not with the results conventional coupled cluster had generated—which, for one particular compound, contained significant errors.

    “This collaboration and its results have therefore identified a potential new area of research for the team and QMC,” Kent said. “That is, tackling challenging quantum chemical problems.”

    The research was supported by DOE’s Office of Science. ALCF and OLCF computing time and resources were allocated through the INCITE program.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus


     
  • richardmitnick 9:45 am on June 2, 2019
    Tags: "Tapping the power of AI and high-performance computing to extend evolution to superconductors", ANL-ALCF

    From Argonne Leadership Computing Facility: “Tapping the power of AI and high-performance computing to extend evolution to superconductors” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    May 29, 2019
    Jared Sagoff

    This image depicts the algorithmic evolution of a defect structure in a superconducting material. Each iteration serves as the basis for a new defect structure. Redder colors indicate a higher current-carrying capacity. Credit: Argonne National Laboratory/Andreas Glatz

    Owners of thoroughbred stallions carefully breed prizewinning horses over generations to eke out fractions of a second in million-dollar races. Materials scientists have taken a page from that playbook, turning to the power of evolution and artificial selection to develop superconductors that can transmit electric current as efficiently as possible.

    Perhaps counterintuitively, most applied superconductors can operate at high magnetic fields because they contain defects. The number, size, shape and position of the defects within a superconductor work together to enhance its current-carrying capacity in the presence of a magnetic field. Too many defects, however, can block the electric current pathway or break down the superconducting material, so scientists need to be selective in how they incorporate defects into a material.

    In a new study from the U.S. Department of Energy’s (DOE) Argonne National Laboratory, researchers used the power of artificial intelligence and high-performance supercomputers to introduce and assess the impact of different configurations of defects on the performance of a superconductor.

    The researchers developed a computer algorithm that treated each defect like a biological gene. Different combinations of defects yielded superconductors able to carry different amounts of current. Once the algorithm identified a particularly advantageous set of defects, it re-initialized with that set of defects as a ​“seed,” from which new combinations of defects would emerge.

    “Each run of the simulation is equivalent to the formation of a new generation of defects that the algorithm seeks to optimize,” said Argonne distinguished fellow and senior materials scientist Wai-Kwong Kwok, an author of the study. ​“Over time, the defect structures become progressively refined, as we intentionally select for defect structures that will allow for materials with the highest critical current.”

    The reason defects form such an essential part of a superconductor lies in their ability to trap and anchor magnetic vortices that form in the presence of a magnetic field. These vortices can move freely within a pure superconducting material when a current is applied. When they do so, they start to generate a resistance, negating the superconducting effect. Keeping vortices pinned, while still allowing current to travel through the material, represents a holy grail for scientists seeking to find ways to transmit electricity without loss in applied superconductors.

    To find the right combination of defects to arrest the motion of the vortices, the researchers initialized their algorithm with defects of random shape and size. While the researchers knew this would be far from the optimal setup, it gave the model a set of neutral initial conditions from which to work. As the researchers ran through successive generations of the model, they saw the initial defects transform into a columnar shape and ultimately a periodic arrangement of planar defects.
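
    The loop below is a bare-bones sketch of that evolutionary scheme: a set of "defect genes" is mutated, each candidate is scored, and the best set seeds the next generation. The fitness function is a made-up stand-in so the example runs on its own; in the actual study, each candidate configuration was scored by simulating the superconductor's critical current on supercomputers.

```python
# Minimal evolutionary loop in the spirit of the study described above.
# The fitness function is a stand-in, not the critical-current simulation
# used by the researchers.
import random

random.seed(1)
N_DEFECTS, GENERATIONS, POPULATION = 8, 30, 20

def random_defect():
    # Each "gene" is one defect with a position and a size (arbitrary units).
    return {"x": random.random(), "size": random.uniform(0.01, 0.2)}

def fitness(defects):
    # Made-up score: reward evenly spaced, moderately sized defects.
    xs = sorted(d["x"] for d in defects)
    spacing_penalty = sum((b - a - 1.0 / len(xs)) ** 2 for a, b in zip(xs, xs[1:]))
    size_penalty = sum((d["size"] - 0.05) ** 2 for d in defects)
    return -(spacing_penalty + size_penalty)

def mutate(defects, rate=0.3):
    # Copy the parent and randomly perturb some of its genes.
    child = [dict(d) for d in defects]
    for d in child:
        if random.random() < rate:
            d["x"] = min(1.0, max(0.0, d["x"] + random.gauss(0, 0.05)))
            d["size"] = min(0.2, max(0.01, d["size"] + random.gauss(0, 0.01)))
    return child

seed_candidate = [random_defect() for _ in range(N_DEFECTS)]
for generation in range(GENERATIONS):
    # The best defect set found so far seeds the next generation.
    population = [seed_candidate] + [mutate(seed_candidate) for _ in range(POPULATION)]
    seed_candidate = max(population, key=fitness)

print("best stand-in fitness:", round(fitness(seed_candidate), 4))
```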

    “When people think of targeted evolution, they might think of people who breed dogs or horses,” said Argonne materials scientist Andreas Glatz, the corresponding author of the study. ​“Ours is an example of materials by design, where the computer learns from prior generations the best possible arrangement of defects.”

    One potential drawback to the process of artificial defect selection lies in the fact that certain defect patterns can become entrenched in the model, leading to a kind of calcification of the genetic data. ​“In a certain sense, you can kind of think of it like inbreeding,” Kwok said. ​“Conserving most information in our defect ​‘gene pool’ between generations has both benefits and limitations as it does not allow for drastic systemwide transformations. However, our digital ​‘evolution’ can be repeated with different initial seeds to avoid these problems.”

    In order to run their model, the researchers required high-performance computing facilities at Argonne and Oak Ridge National Laboratory. The Argonne Leadership Computing Facility and Oak Ridge Leadership Computing Facility are both DOE Office of Science User Facilities.

    An article based on the study, “Targeted evolution of pinning landscapes for large superconducting critical currents,” appeared in the May 21 edition of the Proceedings of the National Academy of Sciences (PNAS). In addition to Kwok and Glatz, Argonne’s Ivan Sadovskyy, Alexei Koshelev and Ulrich Welp collaborated on the work.

    Funding for the research came from the DOE’s Office of Science.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 12:35 pm on February 12, 2019
    Tags: ANL-ALCF, CANDLE (CANcer Distributed Learning Environment) framework

    From insideHPC: “Argonne ALCF Looks to Singularity for HPC Code Portability” 

    From insideHPC

    February 10, 2019

    Over at Argonne, Nils Heinonen writes that researchers are using the open-source Singularity framework as a kind of Rosetta Stone for running supercomputing code almost anywhere.

    Scaling code for massively parallel architectures is a common challenge the scientific community faces. When moving from a system used for development—a personal laptop, for instance, or even a university’s computing cluster—to a large-scale supercomputer like those housed at the Argonne Leadership Computing Facility [see below], researchers traditionally would only migrate the target application: the underlying software stack would be left behind.

    To help alleviate this problem, the ALCF has deployed the service Singularity. Singularity, an open-source framework originally developed by Lawrence Berkeley National Laboratory (LBNL) and now supported by Sylabs Inc., is a tool for creating and running containers (platforms designed to package code and its dependencies so as to facilitate fast and reliable switching between computing environments)—albeit one intended specifically for scientific workflows and high-performance computing resources.

    “There is a definite need for increased reproducibility and flexibility when a user is getting started here, and containers can be tremendously valuable in that regard,” said Katherine Riley, Director of Science at the ALCF. “Supporting emerging technologies like Singularity is part of a broader strategy to provide users with services and tools that help advance science by eliminating barriers to productive use of our supercomputers.”

    This plot shows the number of ATLAS events simulated (solid lines) with and without containerization. Linear scaling is shown (dotted lines) for reference.

    The demand for such services has grown at the ALCF as a direct result of the HPC community’s diversification.

    When the ALCF first opened, it was catering to a smaller user base representative of the handful of domains conventionally associated with scientific computing (high energy physics and astrophysics, for example).

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    HPC is now a principal research tool in new fields such as genomics, which perhaps lack some of the computing culture ingrained in certain older disciplines. Moreover, researchers tackling problems in machine learning, for example, constitute a new community. This creates a strong incentive to make HPC more immediately approachable to users so as to reduce the amount of time spent preparing code and establishing migration protocols, and thus hasten the start of research.

    Singularity, to this end, promotes strong mobility of compute and reproducibility due to the framework’s employment of a distributable image format. This image format incorporates the entire software stack and runtime environment of the application into a single monolithic file. Users thereby gain the ability to define, create, and maintain an application on different hosts and operating environments. Once a containerized workflow is defined, its image can be snapshotted, archived, and preserved for future use. The snapshot itself represents a boon for scientific provenance by detailing the exact conditions under which given data were generated: in theory, by providing the machine, the software stack, and the parameters, one’s work can be completely reproduced. Because reproducibility is so crucial to the scientific process, this capability can be seen as one of the primary assets of container technology.

    ALCF users have already begun to take advantage of the service. Argonne computational scientist Taylor Childers (in collaboration with a team of researchers from Brookhaven National Laboratory, LBNL, and the Large Hadron Collider’s ATLAS experiment) led ASCR Leadership Computing Challenge and ALCF Data Science Program projects to improve the performance of ATLAS software and workflows on DOE supercomputers.

    CERN/ATLAS detector

    Every year ATLAS generates petabytes of raw data, the interpretation of which requires even larger simulated datasets, making recourse to leadership-scale computing resources an attractive option. The ATLAS software itself—a complex collection of algorithms with many different authors—is terabytes in size and features manifold dependencies, making manual installation a cumbersome task.

    The researchers were able to run the ATLAS software on Theta inside a Singularity container via Yoda, an MPI-enabled Python application the team developed to communicate between CERN and ALCF systems and ensure all nodes in the latter are supplied with work throughout execution. The use of Singularity resulted in linear scaling on up to 1024 of Theta’s nodes, with event processing improved by a factor of four.
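
    The snippet below sketches the general pattern described here: an MPI-enabled Python driver that splits event batches across ranks and runs each share inside a Singularity container. It is a simplified illustration, not the actual Yoda code, and the image name (atlas_sim.sif) and the simulate_events payload command are hypothetical placeholders.

```python
# Simplified sketch (not the actual Yoda code) of an MPI driver that assigns
# event batches to ranks and runs each batch inside a Singularity container.
import subprocess
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

TOTAL_EVENTS = 1_000_000
IMAGE = "atlas_sim.sif"                    # hypothetical container image

# Rank 0 splits the workload; every rank receives one batch.
batches = None
if rank == 0:
    per_rank = TOTAL_EVENTS // size
    batches = [(i * per_rank, per_rank) for i in range(size)]
start, n_events = comm.scatter(batches, root=0)

# Each rank runs the containerized payload on its own slice of events.
cmd = ["singularity", "exec", IMAGE,
       "simulate_events",                  # hypothetical payload command
       f"--first={start}", f"--nevents={n_events}"]
result = subprocess.run(cmd, capture_output=True, text=True)

# Gather exit codes so rank 0 can report failures.
codes = comm.gather(result.returncode, root=0)
if rank == 0:
    print(f"{codes.count(0)}/{size} ranks completed successfully")
```

    Launched with something like mpirun -n 128 python driver.py, each rank would process its own slice of events inside the container.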

    “All told, with this setup we were able to deliver to ATLAS 65 million proton collisions simulated on Theta using 50 million core-hours,” said John Taylor Childers from ALCF.

    Containerization also effectively circumvented the software’s relative “unfriendliness” toward distributed shared file systems by accelerating metadata access calls; tests performed without the ATLAS software suggested that containerization could speed up such access calls by a factor of seven.

    While Singularity can present a tradeoff between immediacy and computational performance (because the containerized software stacks, generally speaking, are not written to exploit massively parallel architectures), the data-intensive ATLAS project demonstrates the potential value in such a compromise for some scenarios, given the impracticality of retooling the code at its center.

    Because containers afford users the ability to switch between software versions without risking incompatibility, the service has also been a mechanism to expand research and try out new computing environments. Rick Stevens—Argonne’s Associate Laboratory Director for Computing, Environment, and Life Sciences (CELS)—leads the Aurora Early Science Program project Virtual Drug Response Prediction. The machine learning-centric project, whose workflow is built from the CANDLE (CANcer Distributed Learning Environment) framework, enables billions of virtual drugs to be screened singly and in numerous combinations while predicting their effects on tumor cells. Their distribution made possible by Singularity containerization, CANDLE workflows are shared between a multitude of users whose interests span basic cancer research, deep learning, and exascale computing. Accordingly, different subsets of CANDLE users are concerned with experimental alterations to different components of the software stack.

    “CANDLE users at health institutes, for instance, may have no need for exotic code alterations intended to harness the bleeding-edge capabilities of new systems, instead requiring production-ready workflows primed to address realistic problems,” explained Tom Brettin, Strategic Program Manager for CELS and a co-principal investigator on the project. Meanwhile, through the support of DOE’s Exascale Computing Project, CANDLE is being prepared for exascale deployment.

    Containers are relatively new technology for HPC, and their role may well continue to grow. “I don’t expect this to be a passing fad,” said Riley. “It’s functionality that, within five years, will likely be utilized in ways we can’t even anticipate yet.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 9:25 pm on November 12, 2018
    Tags: ANL-ALCF, By the late 1980s the Argonne Computing Research Facility (ACRF) housed as many as 10 radically different parallel computer designs, Next on the horizon: exascale

    From Argonne National Laboratory ALCF: “Argonne’s pioneering computing program pivots to exascale” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne National Laboratory ALCF

    November 12, 2018

    Laura Wolf
    Gail Pieper

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    When it comes to the breadth and range of the U.S. Department of Energy’s (DOE) Argonne National Laboratory’s contributions to the field of high-performance computing (HPC), few if any other organizations come close. Argonne has been building advanced parallel computing environments and tools since the 1970s. Today, the laboratory serves as both an expertise center and a world-renowned source of cutting-edge computing resources used by researchers to tackle the most pressing challenges in science and engineering.

    Since its digital automatic computer days in the early 1950s, Argonne has been interested in designing and developing algorithms and mathematical software for scientific purposes, such as the Argonne Subroutine Library in the 1960s and the so-called ​“PACKs” – e.g., EISPACK, LINPACK, MINPACK and FUNPACK – as well as Basic Linear Algebra Subprograms (BLAS) in the 1970s. In the 1980s, Argonne established a parallel computing program – nearly a decade before computational science was explicitly recognized as the new paradigm for scientific investigation and the government inaugurated the first major federal program to develop the hardware, software and workforce needed to solve ​“grand challenge” problems.

    A place for experimenting and community building

    By the late 1980s, the Argonne Computing Research Facility (ACRF) housed as many as 10 radically different parallel computer designs – nearly every emerging parallel architecture – on which applied mathematicians and computer scientists could explore algorithm interaction, program portability and parallel programming tools and languages. By 1987, Argonne was hosting a regular series of hands-on training courses on ACRF systems for attendees from universities, industry and research labs.

    In 1992, at DOE’s request, the laboratory acquired an IBM SP – the first scalable, parallel system to offer multiple levels of input/output (I/O) capability essential for increasingly complex scientific applications – and, with that system, embarked on a new focus on experimental production machines. Argonne’s High-Performance Computing Research Center (1992–1997) focused on production-oriented parallel computing for grand challenges in addition to computer science and emphasized collaborative research with computational scientists. By 1997, Argonne’s supercomputing center was recognized by the DOE as one of the nation’s four high-end resource providers.

    Becoming a leadership computing center

    In 2002, Argonne established the Laboratory Computing Resource Center and in 2004 formed the Blue Gene Consortium with IBM and other national laboratories to design, evaluate and develop code for a series of massively parallel computers. The laboratory installed a 5-teraflop IBM Blue Gene/L in 2005, a prototype and proving ground for what in 2006 would become the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. Along with another leadership computing facility at Oak Ridge National Laboratory, the ALCF was chartered to operate some of the fastest supercomputing resources in the world dedicated to scientific discovery.

    In 2007, the ALCF installed a 100-teraflop Blue Gene/P and began to support projects under the Innovative and Novel Computational Impact on Theory and Experiment program. In 2008, ALCF’s 557-teraflop IBM Blue Gene/P, Intrepid, was named the fastest supercomputer in the world for open science (and third fastest machine overall) on the TOP500 list and, in 2009, entered production operation.

    ALCF’s 557-teraflop IBM Blue Gene/P, Intrepid

    Intrepid also topped the first Graph 500 list in 2010 and again in 2011. In 2012, ALCF’s 10-petaflop IBM Blue Gene/Q, Mira [above], ranked third on the June TOP500 list and entered production operation in 2013.

    Next on the horizon: exascale

    Argonne is part of a broader community working to achieve a capable exascale computing ecosystem for scientific discoveries. The benefit of exascale computing – computing capability that can achieve at least a billion billion (10^18) operations per second – lies primarily in the applications it will enable. To take advantage of this immense computing power, Argonne researchers are contributing to the emerging convergence of simulation, big data analytics and machine learning across a wide variety of science and engineering domains and disciplines.

    In 2016, the laboratory launched an initiative to explore new ways to foster data-driven discoveries, with an eye to growing a new community of HPC users. The ALCF Data Science Program, the first of its kind in the leadership computing space, targets users with ​“big data” science problems and provides time on ALCF resources, staff support and training to improve computational methods across all scientific disciplines.

    In 2017, Argonne launched an Intel/Cray machine, Theta [above], doubling the ALCF’s capacity to do impactful science. The facility currently is operating at the frontier of data-centric and high-performance supercomputing.

    Argonne researchers are also getting ready for the ALCF’s future exascale system, Aurora [depicted above], expected in 2021. Using innovative technologies from Intel and Cray, Aurora will provide over 1,000 petaflops for research and development in three areas: simulation-based computational science; data-centric and data-intensive computing; and learning – including machine learning, deep learning, and other artificial intelligence techniques.

    The ALCF has already inaugurated an Early Science Program to prepare key applications and libraries for the innovative architecture. Moreover, ALCF computational scientists and performance engineers are working closely with Argonne’s Mathematics and Computer Science (MCS) division as well as its Computational Science and Data Science and Learning divisions with the aim of advancing the boundaries of HPC technologies ahead of Aurora. (The MCS division is the seedbed for such groundbreaking software as BLAS3, p4, Automatic Differentiation of Fortran Codes (ADIFOR), the PETSc toolkit of parallel computing software, and a version of the Message Passing Interface known as MPICH.)

    The ALCF also continues to add new services, helping researchers near and far to manage workflow execution of large experiments and to co-schedule jobs between ALCF systems, thereby extending Argonne’s reach even further as a premier provider of computing and data analysis resources for the scientific research community.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 4:06 pm on September 26, 2018
    Tags: ANL-ALCF

    From ASCR Discovery: “Superstars’ secrets” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    September 2018

    Superstars’ secrets

    Supercomputing power and algorithms are helping astrophysicists untangle giant stars’ brightness, temperature and chemical variations.

    A frame from an animated global radiation hydrodynamic simulation of an 80-solar-mass star envelope, performed on the Mira supercomputer at the Argonne Leadership Computing Facility (ALCF).

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Seen here: turbulent structures resulting from convection around the iron opacity peak region. Density is highest near the star’s core (yellow). The other colors represent low-density winds launched near the surface. Simulation led by University of California at Santa Barbara. Visualization courtesy of Joseph A. Insley/ALCF.

    Since the Big Bang nearly 14 billion years ago, the universe has evolved and expanded, punctuated by supernova explosions and influenced by the massive stars that spawn them. These stars, many times the size and brightness of the sun, have relatively short lives and turbulent deaths that produce gamma ray bursts, neutron stars, black holes and nebulae, the colorful chemical incubators for new stars.

    Although massive stars are important to understanding astrophysics, the largest ones – at least 20 times the sun’s mass – are rare and highly variable. Their brightness changes by as much as 30 percent, notes Lars Bildsten of the Kavli Institute for Theoretical Physics (KITP) at University of California, Santa Barbara (UCSB). “It rattles around on a timescale of days to months, sometimes years.” Because of the complicated interactions between the escaping light and the gas within the star, scientists couldn’t explain or predict this stellar behavior.

    But with efficient algorithms and the power of the Mira IBM Blue Gene/Q supercomputer at the Argonne Leadership Computing Facility, a Department of Energy (DOE) Office of Science user facility, Bildsten and his colleagues have begun to model the variability in three dimensions across an entire massive star. With an allocation of 60 million processor hours from DOE’s INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program, the team aims to make predictions about these stars that observers can test. They’ve published the initial results from these large-scale simulations – linking brightness changes in massive stars with temperature fluctuations on their surfaces – in the Sept. 27 issue of the journal Nature.

    Yan-Fei Jiang, a KITP postdoctoral scholar, leads these large-scale stellar simulations. They’re so demanding that astrophysicists often must limit the models – either by focusing on part of a star or by using simplifications and approximations that allow them to get a broad yet general picture of a whole star.

    The team started with one-dimensional computational models of massive stars using the open-source code MESA (Modules for Experiments in Stellar Astrophysics). Astrophysicists have used such methods to examine normal convection in stars for decades. But with massive stars, the team hit limits. The bodies are so bright and emit so much radiation that the 1-D models couldn’t capture the violent instability in some regions of the star, Bildsten says.

    Matching 1-D models to observations required researchers to hand-tune various features, Jiang says. “They had no predictive power for these massive stars. And that’s exactly what good theory should do: explain existing data and predict new observations.”

    To calculate the extreme turbulence in these stars, Jiang’s team needed a more complex three-dimensional model and high-performance computers. As a Princeton University Ph.D. student, Jiang had worked with James Stone on a program that could handle these turbulent systems. Stone’s group had developed the Athena++ code to study the dynamics of magnetized plasma, a charged, flowing soup that occurs in stars and many other astronomical objects. While at Princeton, Jiang had added radiation transport algorithms.

    That allowed the team to study accretion disks – accumulated dust and other matter – around the edges of black holes, a project that received a 2016 INCITE allocation of 47 million processor hours. Athena++ has been used for hundreds of other projects, Stone says.

    Stone is part of the current INCITE team, which also includes UCSB’s Omer Blaes, Matteo Cantiello of the Flatiron Institute in New York and Eliot Quataert, University of California, Berkeley.

    In their Nature paper, the group has linked variations in a massive star’s brightness with changes in its surface temperature. Hotter blue stars show smaller fluctuations, Bildsten says. “As a star becomes redder (and cooler), it becomes more variable. That’s a pretty firm prediction from what we’ve found, and that’s going to be what’s exciting to test in detail.”

    Another factor in teasing out massive stars’ behaviors could be the quantity of heavy elements in their atmospheres. Fusion of the lightest hydrogen and helium atoms in massive stars produces heavier atoms, including carbon, oxygen, silicon and iron. When supernovae explode, these bulkier chemical elements are incorporated into new stars. The new elements are more opaque than hydrogen and helium, so they capture and scatter radiation rather than letting photons pass through. For its code to model massive stars, the team needed to add opacity data for these other elements. “The more opaque it is, the more violent these instabilities are likely to be,” Bildsten says. The team is just starting to explore how this chemistry influences the stars’ behavior.

    The scientists also are examining how the brightness variations connect to mass loss. Wolf-Rayet stars are an extreme example of this process: they have shed their hydrogen-rich outer envelopes and contain only helium and heavier elements. These short-lived objects burn for a mere 5 million years, compared with 10 billion years for the sun. Over that time, they shed mass and material before collapsing into a neutron star or a black hole. Jiang and his group are working with UC Berkeley postdoctoral scholar Stephen Ro to diagnose that mass-loss mechanism.

    These 3-D simulations are just the beginning. The group’s current model doesn’t include rotation or magnetic fields, Jiang notes, factors that can be important for studying properties of massive stars such as gamma ray burst-related jets, the brightest explosions in the universe.

    The team also hopes to use its 3-D modeling lessons to improve the faster, cheaper 1-D algorithms – codes Bildsten says helped the team choose which systems to model in 3-D and could point to systems for future investigations.

    Three-dimensional models, Bildsten notes, “are precious simulations, so you want to know that you’re doing the one you want.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCRDiscovery is a publication of The U.S. Department of Energy

     
  • richardmitnick 3:39 pm on September 25, 2018
    Tags: ANL-ALCF, Argonne's Theta supercomputer, Aurora exascale supercomputer

    From Argonne National Laboratory ALCF: “Argonne team brings leadership computing to CERN’s Large Hadron Collider” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne National Laboratory ALCF

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    September 25, 2018
    Madeleine O’Keefe

    CERN’s Large Hadron Collider (LHC), the world’s largest particle accelerator, expects to produce around 50 petabytes of data this year. This is equivalent to nearly 15 million high-definition movies—an amount so enormous that analyzing it all poses a serious challenge to researchers.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    A team of collaborators from the U.S. Department of Energy’s (DOE) Argonne National Laboratory is working to address this issue with computing resources at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. Since 2015, this team has worked with the ALCF on multiple projects to explore ways supercomputers can help meet the growing needs of the LHC’s ATLAS experiment.

    The efforts are especially important given what is coming up for the accelerator. In 2026, the LHC will undergo an ambitious upgrade to become the High-Luminosity LHC (HL-LHC). The aim of this upgrade is to increase the LHC’s luminosity—the number of events detected per second—by a factor of 10. “This means that the HL-LHC will be producing about 20 times more data per year than what ATLAS will have on disk at the end of 2018,” says Taylor Childers, a member of the ATLAS collaboration and computer scientist at the ALCF who is leading the effort at the facility. “CERN’s computing resources are not going to grow by that factor.”

    Luckily for CERN, the ALCF already operates some of the world’s most powerful supercomputers for science, and the facility is in the midst of planning for an upgrade of its own. In 2021, Aurora—the ALCF’s next-generation system, and the first exascale machine in the country—is scheduled to come online.

    It will provide the ATLAS experiment with an unprecedented resource for analyzing the data coming out of the LHC—and soon, the HL-LHC.

    CERN/ATLAS detector

    Why ALCF?

    CERN may be best known for smashing particles, which physicists do to study the fundamental laws of nature and gather clues about how the particles interact. This involves a lot of computationally intense calculations that benefit from the use of the DOE’s powerful computing systems.

    The ATLAS detector is an 82-foot-tall, 144-foot-long cylinder with magnets, detectors, and other instruments layered around the central beampipe like an enormous 7,000-ton Swiss roll. When protons collide in the detector, they send a spray of subatomic particles flying in all directions, and this particle debris generates signals in the detector’s instruments. Scientists can use these signals to discover important information about the collision and the particles that caused it in a computational process called reconstruction. Childers compares this process to arriving at the scene of a car crash that has nearly completely obliterated the vehicles and trying to figure out the makes and models of the cars and how fast they were going. Reconstruction is also performed on simulated data in the ATLAS analysis framework, called Athena.

    An ATLAS physics analysis consists of three steps. First, in event generation, researchers use the physics that they know to model the kinds of particle collisions that take place in the LHC. In the next step, simulation, they generate the subsequent measurements the ATLAS detector would make. Finally, reconstruction algorithms are run on both simulated and real data, the output of which can be compared to see differences between theoretical prediction and measurement.
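
    The sketch below is a schematic version of that three-step chain, with trivial placeholder functions rather than ATLAS or Athena code. The point is simply that event generation feeds simulation, simulation feeds reconstruction, and the reconstructed output of simulated and real data can then be compared.

```python
# Schematic three-step analysis chain: generate -> simulate -> reconstruct.
# The stage functions are toy placeholders, not ATLAS/Athena code.
from dataclasses import dataclass
from typing import List
import random

@dataclass
class Event:
    particles: List[float]        # toy stand-in for generated particle kinematics

def generate_events(n: int) -> List[Event]:
    """Step 1: model the collisions (event generation)."""
    return [Event([random.gauss(0, 1) for _ in range(4)]) for _ in range(n)]

def simulate_detector(events: List[Event]) -> List[Event]:
    """Step 2: add detector response (smearing) to each generated event."""
    return [Event([p + random.gauss(0, 0.1) for p in e.particles]) for e in events]

def reconstruct(events: List[Event]) -> List[float]:
    """Step 3: run reconstruction on the (simulated or real) detector output."""
    return [sum(e.particles) for e in events]

if __name__ == "__main__":
    generated = generate_events(1000)
    simulated = simulate_detector(generated)
    observables = reconstruct(simulated)
    print(f"reconstructed {len(observables)} events; mean observable = "
          f"{sum(observables) / len(observables):+.3f}")
```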

    “If we understand what’s going on, we should be able to simulate events that look very much like the real ones,” says Tom LeCompte, a physicist in Argonne’s High Energy Physics division and former physics coordinator for ATLAS.

    “And if we see the data deviate from what we know, then we know we’re either wrong, we have a bug, or we’ve found new physics,” says Childers.

    Some of these simulations, however, are too complicated for the Worldwide LHC Computing Grid, which LHC scientists have used to handle data processing and analysis since 2002.

    MonALISA LHC Computing GridMap: http://monalisa.caltech.edu/ml/_client.beta

    The Grid is an international distributed computing infrastructure that links 170 computing centers across 42 countries, allowing data to be accessed and analyzed in near real-time by an international community of more than 10,000 physicists working on various LHC experiments.

    The Grid has served the LHC well so far, but as demand for new science increases, so does the required computing power.

    That’s where the ALCF comes in.

    In 2011, when LeCompte returned to Argonne after serving as ATLAS physics coordinator, he started looking for the next big problem he could help solve. “Our computing needs were growing faster than it looked like we would be able to fulfill them, and we were beginning to notice that there were problems we were trying to solve with existing computing that just weren’t able to be solved,” he says. “It wasn’t just an issue of having enough computing; it was an issue of having enough computing in the same place. And that’s where the ALCF really shines.”

    LeCompte worked with Childers and ALCF computer scientist Tom Uram to use Mira, the ALCF’s 10-petaflops IBM Blue Gene/Q supercomputer, to carry out calculations to improve the performance of the ATLAS software.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Together they scaled Alpgen, a Monte Carlo-based event generator, to run efficiently on Mira, enabling the generation of millions of particle collision events in parallel. “From start to finish, we ended up processing events more than 20 times as fast, and used all of Mira’s 49,152 processors to run the largest-ever event generation job,” reports Uram.

    But they weren’t going to stop there. Simulation, which takes up around five times more Grid computing than event generation, was the next challenge to tackle.

    Moving forward with Theta

    In 2017, Childers and his colleagues were awarded a two-year allocation from the ALCF Data Science Program (ADSP), a pioneering initiative designed to explore and improve computational and data science methods that will help researchers gain insights into very large datasets produced by experimental, simulation, or observational methods. The goal is to deploy Athena on Theta, the ALCF’s 11.69-petaflops Intel-Cray supercomputer, and to develop an end-to-end workflow that couples all the steps together, improving upon the current execution model for ATLAS jobs, which involves a many-step workflow executed on the Grid.

    ANL ALCF Theta Cray XC40 supercomputer

    “Each of those steps—event generation, simulation, and reconstruction—has input data and output data, so if you do them in three different locations on the Grid, you have to move the data with it,” explains Childers. “Ideally, you do all three steps back-to-back on the same machine, which reduces the amount of time you have to spend moving data around.”

    Enabling portions of this workload on Theta promises to expedite the production of simulation results, discovery, and publications, as well as increase the collaboration’s data analysis reach, thus moving scientists closer to new particle physics.

    One challenge the group has encountered so far is that, unlike other computers on the Grid, Theta cannot reach out to the job server at CERN to receive computing tasks. To solve this, the ATLAS software team developed Harvester, a Python edge service that can retrieve jobs from the server and submit them to Theta. In addition, Childers developed Yoda, an MPI-enabled wrapper that launches these jobs on each compute node.
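
    The division of labor, then, is a polling service on an edge node that fetches work from the central server, plus an MPI-side wrapper that fans the work out to compute nodes. The loop below is a generic, hypothetical sketch of that pattern; the endpoint, scheduler command, and job fields are invented for illustration and are not the real Harvester, Yoda, or PanDA interfaces.

        # Generic edge-service polling loop (hypothetical; not the Harvester/PanDA API).
        import json
        import subprocess
        import time
        import urllib.request

        JOB_SERVER_URL = "https://example.cern.ch/jobs/next"   # placeholder endpoint

        def fetch_job():
            """Ask the (hypothetical) central job server for the next work unit."""
            with urllib.request.urlopen(JOB_SERVER_URL, timeout=30) as resp:
                return json.loads(resp.read() or b"null")

        def submit_to_compute(job):
            """Hand the job to the local batch system; an MPI wrapper runs it on the nodes."""
            subprocess.run(["qsub", job["script_path"]], check=True)   # scheduler command is illustrative

        while True:
            job = fetch_job()
            if job:                      # skip empty polls
                submit_to_compute(job)
            time.sleep(60)               # poll the server once a minute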

    Harvester and Yoda are now being integrated into the ATLAS production system. The team has just started testing this new workflow on Theta, where it has already simulated over 12 million collision events. Simulation is the only step that is “production-ready,” meaning it can accept jobs from the CERN job server.

    The team also has a running end-to-end workflow—which includes event generation and reconstruction—for ALCF resources. For now, the local ATLAS group is using it to run simulations investigating whether machine learning techniques can be used to improve the way they identify particles in the detector. If the approach works, machine learning could provide a more efficient, less resource-intensive method for handling this vital part of the LHC scientific process.

    “Our traditional methods have taken years to develop and have been highly optimized for ATLAS, so it will be hard to compete with them,” says Childers. “But as new tools and technologies continue to emerge, it’s important that we explore novel approaches to see if they can help us advance science.”
    Upgrade computing, upgrade science

    As CERN’s quest for new science gets more and more intense, as it will with the HL-LHC upgrade in 2026, the computational requirements to handle the influx of data become more and more demanding.

    “With the scientific questions that we have right now, you need that much more data,” says LeCompte. “Take the Higgs boson, for example. To really understand its properties and whether it’s the only one of its kind out there takes not just a little bit more data but takes a lot more data.”

    This makes the ALCF’s resources—especially its next-generation exascale system, Aurora—more important than ever for advancing science.

    Depiction of ANL ALCF Cray Shasta Aurora exascale supercomputer

    Aurora, scheduled to come online in 2021, will be capable of one billion billion calculations per second—that’s 100 times more computing power than Mira. It is just starting to be integrated into the ATLAS efforts through a new project selected for the Aurora Early Science Program (ESP) led by Jimmy Proudfoot, an Argonne Distinguished Fellow in the High Energy Physics division. Proudfoot says that effective use of Aurora will be key to ensuring that ATLAS continues delivering discoveries on a reasonable timescale. Because greater computing capacity expands the range of analyses that can be performed, systems like Aurora may even enable analyses not yet envisioned.

    The ESP project, which builds on the progress made by Childers and his team, has three components that will help prepare Aurora for effective use in the search for new physics: enable ATLAS workflows for efficient end-to-end production on Aurora, optimize ATLAS software for parallel environments, and update algorithms for exascale machines.

    “The algorithms apply complex statistical techniques which are increasingly CPU-intensive and which become more tractable—and perhaps only possible—with the computing resources provided by exascale machines,” explains Proudfoot.

    In the years leading up to Aurora’s run, Proudfoot and his team, which includes collaborators from the ALCF and Lawrence Berkeley National Laboratory, aim to develop the workflow to run event generation, simulation, and reconstruction. Once Aurora becomes available in 2021, the group will bring their end-to-end workflow online.

    The stated goals of the ATLAS experiment—from searching for new particles to studying the Higgs boson—only scratch the surface of what this collaboration can do. Along the way to groundbreaking science advancements, the collaboration has developed technology for use in fields beyond particle physics, like medical imaging and clinical anesthesia.

    These contributions and the LHC’s quickly growing needs reinforce the importance of the work that LeCompte, Childers, Proudfoot, and their colleagues are doing with ALCF computing resources.

    “I believe DOE’s leadership computing facilities are going to play a major role in the processing and simulation of the future rounds of data that will come from the ATLAS experiment,” says LeCompte.

    This research is supported by the DOE Office of Science. ALCF computing time and resources were allocated through the ASCR Leadership Computing Challenge, the ALCF Data Science Program, and the Early Science Program for Aurora.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 5:57 pm on September 5, 2018 Permalink | Reply
    Tags: ANL-ALCF, , , , ,   

    From PPPL and ALCF: “Artificial intelligence project to help bring the power of the sun to Earth is picked for first U.S. exascale system” 


    From PPPL

    and

    Argonne Lab

    Argonne National Laboratory ALCF

    August 27, 2018
    John Greenwald

    Deep Learning Leader William Tang. (Photo by Elle Starkman/Office of Communications.)

    To capture and control the process of fusion that powers the sun and stars in facilities on Earth called tokamaks, scientists must confront disruptions that can halt the reactions and damage the doughnut-shaped devices.

    PPPL NSTX-U

    Now an artificial intelligence system under development at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University to predict and tame such disruptions has been selected as an Aurora Early Science project by the Argonne Leadership Computing Facility, a DOE Office of Science User Facility.

    Depiction of ANL ALCF Cray Shasta Aurora supercomputer

    The project, titled “Accelerated Deep Learning Discovery in Fusion Energy Science,” is one of 10 Early Science Projects on data science and machine learning for the Aurora supercomputer, which is set to become the first U.S. exascale system upon its expected arrival at Argonne in 2021. The system will be capable of performing a quintillion (10^18) calculations per second — 50 to 100 times faster than the most powerful supercomputers today.

    Fusion combines light elements

    Fusion combines light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei — in reactions that generate massive amounts of energy. Scientists aim to replicate the process for a virtually inexhaustible supply of power to generate electricity.

    The goal of the PPPL/Princeton University project is to develop a method that can be experimentally validated for predicting and controlling disruptions in burning plasma fusion systems such as ITER — the international tokamak under construction in France to demonstrate the practicality of fusion energy. “Burning plasma” refers to self-sustaining fusion reactions that will be essential for producing continuous fusion energy.

    Heading the project will be William Tang, a principal research physicist at PPPL and a lecturer with the rank and title of professor in the Department of Astrophysical Sciences at Princeton University. “Our research will utilize capabilities to accelerate progress that can only come from the deep learning form of artificial intelligence,” Tang said.

    Networks analogous to a brain

    Deep learning, unlike other computational approaches, can be trained to solve highly complex problems with accuracy and speed, including those that require realistic image resolution. The associated software consists of multiple layers of interconnected neural networks that are analogous to simple neurons in a brain. Each node in a network identifies a basic aspect of the data fed into the system and passes the results along to other nodes that identify increasingly complex aspects of the data. The process continues until the desired output is achieved in a timely way.

    The PPPL/Princeton deep-learning software is called the “Fusion Recurrent Neural Network (FRNN),” composed of convolutional and recurrent neural nets that allow a user to train a computer to detect items or events of interest. The software seeks to speedily predict when disruptions will break out in large-scale tokamak plasmas, and to do so in time for effective control methods to be deployed.
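
    As a rough picture of that kind of architecture, the PyTorch sketch below stacks a one-dimensional convolutional front end on a recurrent (LSTM) layer to classify a window of multichannel plasma signals as disruptive or not. The number of signals, layer sizes, and window length are illustrative guesses, not FRNN’s actual configuration.

        # Illustrative convolutional + recurrent classifier for multichannel time series
        # (a sketch in the spirit of FRNN; inputs and layer sizes are placeholders).
        import torch
        import torch.nn as nn

        class DisruptionPredictor(nn.Module):
            def __init__(self, n_signals=8, hidden=64):
                super().__init__()
                # Convolution over time extracts local features from each diagnostic channel.
                self.conv = nn.Sequential(
                    nn.Conv1d(n_signals, 32, kernel_size=5, padding=2),
                    nn.ReLU(),
                )
                # The LSTM tracks how those features evolve toward (or away from) a disruption.
                self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)   # one logit: disruptive vs. non-disruptive

            def forward(self, x):                  # x: (batch, n_signals, time_steps)
                feats = self.conv(x)               # (batch, 32, time_steps)
                feats = feats.transpose(1, 2)      # (batch, time_steps, 32) for the LSTM
                out, _ = self.lstm(feats)
                return self.head(out[:, -1, :])    # prediction from the last time step

        model = DisruptionPredictor()
        window = torch.randn(4, 8, 200)            # 4 example shots, 8 signals, 200 time steps
        print(torch.sigmoid(model(window)))        # disruption probabilities per shot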

    The project has greatly benefited from access to the huge disruption-relevant database of the Joint European Torus (JET) in the United Kingdom, the largest and most powerful tokamak in the world today.

    Joint European Torus, at the Culham Centre for Fusion Energy in the United Kingdom

    The FRNN software has advanced from smaller computer clusters to supercomputing systems that can handle such vast amounts of complex disruption-relevant data. Running these data through the software aims to identify key pre-disruption conditions, guided by insights from first-principles theoretical simulations, so that the supervised machine learning at the heart of deep learning can produce accurate predictions with sufficient warning time.

    Access to Tiger computer cluster

    The project has gained from access to Tiger, a high-performance Princeton University cluster equipped with advanced GPUs for image-resolution workloads, which enabled the deep learning software to scale up to the Titan supercomputer at Oak Ridge National Laboratory and to powerful international systems such as the Tsubame 3.0 supercomputer in Tokyo, Japan.

    Tiger supercomputer at Princeton University

    ORNL Cray XK7 Titan Supercomputer

    Tsubame 3.0 supercomputer in Tokyo, Japan

    The overall goal is to achieve the challenging requirements for ITER, which will need predictions that are 95 percent accurate, with fewer than 5 percent false alarms, issued at least 30 milliseconds before disruptions occur.
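
    Meeting those targets is ultimately a counting exercise over test shots: how many disruptions were flagged early enough, and how many quiet shots were flagged at all. The sketch below shows one plausible way to score an alarm scheme against the 30-millisecond requirement, using made-up shot records rather than the project’s actual evaluation code.

        # Sketch of scoring a disruption-alarm scheme against ITER-style requirements.
        # Each shot has an optional alarm time and, for disruptive shots, a disruption time (seconds).

        MIN_WARNING = 0.030   # require alarms at least 30 ms before the disruption

        shots = [   # illustrative made-up results
            {"alarm_time": 1.200, "disruption_time": 1.260},   # caught with 60 ms warning
            {"alarm_time": 1.255, "disruption_time": 1.260},   # alarm too late (5 ms)
            {"alarm_time": None,  "disruption_time": 0.900},   # missed disruption
            {"alarm_time": 0.800, "disruption_time": None},    # false alarm on a quiet shot
            {"alarm_time": None,  "disruption_time": None},    # quiet shot, correctly silent
        ]

        disruptive = [s for s in shots if s["disruption_time"] is not None]
        quiet = [s for s in shots if s["disruption_time"] is None]

        caught = sum(
            1 for s in disruptive
            if s["alarm_time"] is not None
            and s["disruption_time"] - s["alarm_time"] >= MIN_WARNING
        )
        false_alarms = sum(1 for s in quiet if s["alarm_time"] is not None)

        print(f"Detection rate: {caught / len(disruptive):.0%} (target: 95%)")
        print(f"False-alarm rate: {false_alarms / len(quiet):.0%} (target: below 5%)")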


    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    The team will continue to build on advances that are currently supported by the DOE while preparing the FRNN software for Aurora exascale computing. The researchers will also move forward with related developments on the SUMMIT supercomputer at Oak Ridge.

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Members of the team include Julian Kates-Harbeck, a graduate student at Harvard University and a DOE Office of Science Computational Science Graduate Fellow (CSGF) who is the chief architect of the FRNN. Researchers include Alexey Svyatkovskiy, a big-data and machine-learning expert who will continue to collaborate after moving from Princeton University to Microsoft; Eliot Feibush, a big-data analyst and computational scientist at PPPL and Princeton; and Kyle Felker, a CSGF member who will soon graduate from Princeton University and rejoin the FRNN team as a post-doctoral research fellow at Argonne National Laboratory.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    PPPL campus

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University. PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

     
  • richardmitnick 11:57 am on August 22, 2018 Permalink | Reply
    Tags: , ANL-ALCF, , , Fine-tuning physics, ,   

    From ASCR Discovery and Argonne National Lab: “Fine-tuning physics” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    August 2018

    Argonne applies supercomputing heft to boost precision in particle predictions.

    A depiction of a scattering event on the Large Hadron Collider. Image courtesy of Argonne National Laboratory.

    Advancing science at the smallest scales calls for vast data from the world’s most powerful particle accelerator, leavened with the precise theoretical predictions made possible through many hours of supercomputer processing.

    The combination has worked before, when scientists from the Department of Energy’s Argonne National Laboratory provided timely predictions about the Higgs particle at the Large Hadron Collider in Switzerland. Their predictions contributed to the 2012 discovery of the Higgs, the subatomic particle that gives mass to all elementary particles.


    CERN CMS Higgs Event


    CERN ATLAS Higgs Event

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    “That we are able to predict so precisely what happens around us in nature is a remarkable achievement,” Argonne physicist Radja Boughezal says. “To put all these pieces together to get a number that agrees with the measurement that was made with something so complicated as the LHC is always exciting.”

    Earlier this year, she was allocated more than 98 million processor hours on the Mira and Theta supercomputers at the Argonne Leadership Computing Facility, a DOE Office of Science user facility, through DOE’s INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ANL ALCF Theta Cray XC40 supercomputer

    Her previous INCITE allocation helped solve problems that scientists saw as insurmountable just two or three years ago.

    These problems stem from the increasingly intricate and precise measurements and theoretical calculations associated with scrutinizing the Higgs boson and from searches for subtle deviations from the standard model that underpins the behavior of matter and energy.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.


    Standard Model of Particle Physics from Symmetry Magazine

    The approach she and her associates developed led to early, high-precision LHC predictions that describe so-called strong-force interactions between quarks and gluons, the constituents of subatomic particles such as protons and neutrons.

    The theory governing strong-force interactions is called QCD, for quantum chromodynamics. In QCD, the quantity that sets the strength of the strong force is called the strong coupling constant.

    “At high energies, when collisions happen, quarks and gluons are very close to each other, so the strong force is very weak. It’s almost turned off,” Boughezal explains. Because the coupling is so small in that regime, physicists can expand their predictions in powers of it – a technique called perturbative expansion – which gives them a yardstick for the calculations. Perturbative expansion is “a method we have used over and over to get these predictions, and it has provided powerful tests of QCD to date.”
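
    Written schematically, that yardstick is an expansion of each observable in powers of the strong coupling; the LaTeX snippet below shows the generic textbook structure, not a specific result from the team. Each successive term is suppressed by another power of the small coupling, and pushing the series to its second-order (NNLO) term is typically what “high precision” means in such calculations.

        % Generic perturbative expansion of a QCD observable in the strong coupling alpha_s:
        % each term is a smaller correction because alpha_s is small at high energies.
        \sigma \;=\; \sigma^{(0)}
               \;+\; \alpha_s\,\sigma^{(1)}
               \;+\; \alpha_s^{2}\,\sigma^{(2)}
               \;+\; \mathcal{O}\!\left(\alpha_s^{3}\right)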

    Crucial to these tests is the N-jettiness framework Boughezal and her Argonne and Northwestern University collaborators devised to obtain high-precision predictions for particle scattering processes. The framework, specially adapted for high-performance computing systems, is novel in its incorporation of existing low-precision numerical codes to achieve part of the desired result; the scientists fill in the algorithmic gaps with simple analytic calculations.
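
    For reference, the event-shape variable the framework is named for is commonly written in the literature in roughly the form below, where the p_k are the momenta of the final-state particles, the q_i are reference momenta for the two beams and the N jets, and the Q_i are normalization scales. This is the standard schematic definition, not the team’s specific implementation; roughly speaking, small values of tau_N single out the soft and collinear radiation that the analytic piece of the calculation handles.

        % Schematic N-jettiness definition: every final-state particle k is assigned to
        % its closest beam or jet direction i, so tau_N -> 0 for events with exactly N
        % well-separated hard jets plus soft/collinear radiation.
        \tau_N \;=\; \sum_{k} \, \min_{i} \left\{ \frac{2\, q_i \cdot p_k}{Q_i} \right\}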

    The LHC data lined up completely with predictions the team had obtained from running the N-jettiness code on the Mira supercomputer at Argonne. The agreement carries important implications for the precision goals physicists are setting for future accelerators such as the proposed Electron-Ion Collider (EIC).

    “One of the things that has puzzled us for 30 years is the spin of the proton,” Boughezal says. Planners hope the EIC reveals how the spin of the proton, matter’s basic building block, emerges from its elementary constituents, quarks and gluons.

    Boughezal also is working with LHC scientists in the search for dark matter, which – together with dark energy – accounts for roughly 95 percent of the contents of the universe. The remainder is ordinary matter, the atoms and molecules that form stars, planets and people.

    “Scientists believe that the mysterious dark matter in the universe could leave a missing energy footprint at the LHC,” she says. Such a footprint would reveal the existence of a new particle that’s currently missing from the standard model. Dark matter particles interact weakly with the LHC’s detectors. “We cannot see them directly.”

    They could, however, be produced with a jet – a spray of standard-model particles made from LHC proton collisions. “We can measure that jet. We can see it. We can tag it.” And by using simple laws of physics such as the conservation of momentum, even if the particles are invisible, scientists would be able to detect them by measuring the jet’s energy.

    For example, when subatomic particles called Z bosons are produced with particle jets, the bosons can decay into neutrinos, ghostly specks that rarely interact with ordinary matter. The neutrinos appear as missing energy in the LHC’s detectors, just as a dark matter particle would.
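
    The bookkeeping behind that “missing energy” is transverse-momentum balance: the visible transverse momenta in a collision should sum to zero, so whatever is needed to restore the balance is attributed to particles the detector cannot see. The short sketch below illustrates the arithmetic with made-up numbers; it is not an ATLAS algorithm.

        # Missing transverse momentum as the negative vector sum of visible transverse momenta.
        import math

        # Illustrative visible objects from one event: (px, py) in GeV.
        visible = [
            (120.0, 15.0),    # leading jet
            (-30.0, 10.0),    # softer jet
            (-5.0, -8.0),     # additional soft activity
        ]

        px_miss = -sum(px for px, _ in visible)
        py_miss = -sum(py for _, py in visible)
        met = math.hypot(px_miss, py_miss)   # magnitude of the missing transverse momentum

        print(f"Missing transverse momentum: {met:.1f} GeV")
        # A large value with no visible counterpart is the signature neutrinos -- or dark
        # matter candidates -- would leave in the detector.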

    In July 2017, Boughezal and three co-authors published a paper in the Journal of High Energy Physics. It was the first to describe new proton-structure details derived from precision high-energy Z-boson experimental data.

    
    “If you want to know whether what you have produced is actually coming from a standard model process or something else that we have not seen before, you need to predict your standard model process very well,” she says. If the theoretical predictions deviate from the experimental data, it suggests new physics at play.


    In fact, Boughezal and her associates have precisely predicted the standard model jet process and it agrees with the data. “So far we haven’t produced dark matter at the LHC.”

    Previously, however, the results were so imprecise – and the margin of uncertainty so high – that physicists couldn’t tell whether they’d produced a standard-model jet or something entirely new.

    What surprises will higher-precision calculations reveal in future LHC experiments?

    “There is still a lot of territory that we can probe and look for something new,” Boughezal says. “The standard model is not a complete theory because there is a lot it doesn’t explain, like dark matter. We know that there has to be something bigger than the standard model.”

    Argonne is managed by UChicago Argonne LLC for the DOE Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.


    five-ways-keep-your-child-safe-school-shootings

    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of the U.S. Department of Energy

     