Tagged: ORNL OLCF

  • richardmitnick 12:23 pm on December 28, 2018 Permalink | Reply
    Tags: ORNL OLCF, UT Students Get Bite-Sized Bits of Big Data Centers in ORNL-Led Course

    From Oak Ridge Leadership Computing Facility: “UT Students Get Bite-Sized Bits of Big Data Centers in ORNL-Led Course” 


    Oak Ridge National Laboratory

    From Oak Ridge Leadership Computing Facility

    20 Dec, 2018
    Rachel Harken

    Image Credit: Genevieve Martin, ORNL

    This fall, staff at the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) once again contributed to the “Introduction to Data Centers” course at the University of Tennessee, Knoxville (UT).

    Now in its fourth year, the class had the largest and most diverse enrollment yet, with four disciplines represented: computer engineering, computer science, electrical engineering, and industrial engineering. This year’s students toured the data centers at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL, earlier this fall as part of the course.

    The multidisciplinary course, part of UT’s data center technology and management minor, introduces students to the many topics involved in building and commanding a data center. Because running a data center requires knowledge in a multitude of areas, no one discipline typically covers the broad spectrum of topics involved.

    “We bring in a lot of disciplinary experts from ORNL,” said Stephen McNally, operations manager at the OLCF and the course organizer. “We cover the mechanical and electrical components, but we also focus on project management, commissioning, overall requirements-gathering, and networking.” The current curriculum was developed by McNally, UT interim dean of the College of Engineering Mark Dean, UT professor David Icove, and ORNL project specialist Jennifer Goodpasture.

    The students enrolled in the course are provided a request for proposals at the beginning of the year, and they work together throughout the semester to submit a 20- to 30-page proposal to meet the requirements. Because students are often restricted to classes within their majors, the course stresses the interplay between disciplines and showcases areas that might previously have been out of reach.

    “Hiring someone straight out of school to do what a data center person does is really difficult, because you have to understand so much about so many different disciplines,” McNally said. “This is primarily why we have such a low talent pool for data center–related jobs. We built this class to help solve that problem.”

    The course is opening new opportunities for some students. Two of the students in this year’s class received scholarships to Infrastructure Masons (iMasons), an organization that brings digital infrastructure experts together to network, learn, and collaborate. The students’ enrollment in the course through the new minor degree program qualified them to apply.

    Aside from the opportunity to apply for the iMasons scholarship, students also learned from new data center professionals from industry this year. One of the course’s new speakers was Frank Hutchison of SH Data Technologies, who talked about his role in building Tennessee’s first tier 3 data center. Tier 3 data centers are designed for roughly 99.98 percent availability, which works out to less than two hours of downtime in a typical year.
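    For readers who want to sanity-check availability figures like these, the arithmetic is simple: annual downtime is (1 − availability) times the hours in a year. The short sketch below is illustrative only and not tied to any particular facility; the availability values in it are examples.

```cpp
#include <cstdio>

// Convert an availability fraction (e.g. 0.9998) into allowed downtime per year.
// The availability values below are examples, not figures for any specific data center.
int main() {
    const double hours_per_year = 365.25 * 24.0;
    const double availabilities[] = {0.999, 0.9998, 0.99982, 0.9999};
    for (double a : availabilities) {
        double downtime_hours = (1.0 - a) * hours_per_year;
        std::printf("availability %.5f%% -> about %.1f hours (%.0f minutes) of downtime per year\n",
                    a * 100.0, downtime_hours, downtime_hours * 60.0);
    }
    return 0;
}
```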

    “This was the most engaging class we’ve had by far,” McNally said. “These students really got to see how these different disciplines work together to run, build, and operate data centers, and we are excited to continue bringing these folks in and helping to bridge this talent gap in the workforce.”

    The team is excited that this course continues to gain traction with the students at UT and is making plans to accommodate more students next fall. The course is currently under consideration for possible expansion into a professional certification program or a distance learning course.

    In addition to McNally and Goodpasture, the ORNL team contributing to the course includes Jim Serafin, Jim Rogers, Kathlyn Boudwin, Justin Whitt, Darren Norris, David Grant, Rick Griffin, Saeed Ghezawi, Brett Ellis, Bart Hammontree, Scott Milliken, Gary Rogers, and Kris Torgerson.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to process an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    With a peak performance of 200,000 trillion calculations per second (200 petaflops), Summit will be eight times more powerful than ORNL’s previous top-ranked system, Titan. For certain scientific applications, Summit will also be capable of more than three billion billion mixed precision calculations per second, or 3.3 exaops. Summit will provide unprecedented computing power for research in energy, advanced materials and artificial intelligence (AI), among other domains, enabling scientific discoveries that were previously impractical or impossible.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 5:58 pm on October 17, 2018 Permalink | Reply
    Tags: ORNL OLCF, Quantum Monte Carlo (QMC) modeling, Quantum predictions

    From ASCR Discovery: “Quantum predictions” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    Mechanical strain, pressure or temperature changes or adding chemical doping agents can prompt an abrupt switch from insulator to conductor in materials such as nickel oxide (pictured here). Nickel ions (blue) and oxygen ions (red) surround a dopant ion of potassium (yellow). Quantum Monte Carlo methods can accurately predict regions where charge density (purple) will accumulate in these materials. Image courtesy of Anouar Benali, Argonne National Laboratory.

    Solving a complex problem quickly requires careful tradeoffs – and simulating the behavior of materials is no exception. To get answers that predict molecular workings feasibly, scientists must swap in mathematical approximations that speed computation at accuracy’s expense.

    But magnetism, electrical conductivity and other properties can be quite delicate, says Paul R.C. Kent of the Department of Energy’s (DOE’s) Oak Ridge National Laboratory. These properties depend on quantum mechanics, the movements and interactions of myriad electrons and atoms that form materials and determine their properties. Researchers who study such features must model large groups of atoms and molecules rather than just a few. This problem’s complexity demands boosting computational tools’ efficiency and accuracy.

    That’s where a method called quantum Monte Carlo (QMC) modeling comes in. Many other techniques approximate electrons’ behavior as an overall average, for example, rather than considering them individually. QMC enables accounting for the individual behavior of all of the electrons without major approximations, reducing systematic errors in simulations and producing reliable results, Kent says.
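    Production QMC codes sample many-electron configurations against sophisticated trial wavefunctions, but the underlying Monte Carlo idea (propose random moves, accept or reject them, and average an observable over the samples) can be shown with a toy example. The sketch below is a generic one-dimensional Metropolis walk, not the QMCPACK algorithm; the target density and observable are chosen only for illustration.

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Toy Metropolis Monte Carlo: sample x from a probability density proportional
// to exp(-x*x) and estimate the mean of x*x. This only illustrates the
// sampling-and-averaging idea behind Monte Carlo methods; real quantum Monte
// Carlo codes sample many-electron configurations against a trial wavefunction.
int main() {
    std::mt19937 rng(12345);
    std::uniform_real_distribution<double> step(-0.5, 0.5);
    std::uniform_real_distribution<double> unit(0.0, 1.0);

    auto log_density = [](double x) { return -x * x; };

    double x = 0.0, sum_x2 = 0.0;
    const int n_samples = 200000;
    for (int i = 0; i < n_samples; ++i) {
        double trial = x + step(rng);                       // propose a move
        if (std::log(unit(rng)) < log_density(trial) - log_density(x))
            x = trial;                                      // accept, otherwise keep old x
        sum_x2 += x * x;                                    // accumulate the observable
    }
    // Exact answer for this density is 0.5.
    std::printf("estimated <x^2> = %.4f\n", sum_x2 / n_samples);
    return 0;
}
```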

    Kent’s interest in QMC dates back to his Ph.D. research at Cambridge University in the 1990s. At ORNL, he recently returned to the method because advances in both supercomputer hardware and in algorithms had allowed researchers to improve its accuracy.

    “We can do new materials and a wider fraction of elements across the periodic table,” Kent says. “More importantly, we can start to do some of the materials and properties where the more approximate methods that we use day to day are just unreliable.”

    Even with these advances, simulations of these types of materials, ones that include up to a few hundred atoms and thousands of electrons, require computational heavy lifting. Kent leads a DOE Basic Energy Sciences center, the Center for Predictive Simulations of Functional Materials (CPSFM), which includes researchers from ORNL, Argonne National Laboratory, Sandia National Laboratories, Lawrence Livermore National Laboratory, the University of California, Berkeley, and North Carolina State University.

    Their work is supported by a DOE Innovative and Novel Computational Impact on Theory and Experiments (INCITE) allocation of 140 million processor hours, split between Oak Ridge Leadership Computing Facility’s Titan and Argonne Leadership Computing Facility’s Mira supercomputers. Both computing centers are DOE Office of Science user facilities.

    ORNL Cray Titan XK7 Supercomputer

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    To take QMC to the next level, Kent and colleagues start with materials such as vanadium dioxide that display unusual electronic behavior. At cooler temperatures, this material insulates against the flow of electricity. But at just above room temperature, vanadium dioxide abruptly changes its structure and behavior.

    Suddenly this material becomes metallic and conducts electricity efficiently. Scientists still don’t understand exactly how and why this occurs. Factors such as mechanical strain, pressure or doping the materials with other elements also induce this rapid transition from insulator to conductor.

    However, if scientists and engineers could control this behavior, these materials could be used as switches, sensors or, possibly, the basis for new electronic devices. “This big change in conductivity of a material is the type of thing we’d like to be able to predict reliably,” Kent says.

    Laboratory researchers also are studying these insulator-to-conductor transitions with experiments. That validation effort lends confidence to the predictive power of their computational methods in a range of materials. The team has built open-source software, known as QMCPACK, that is now available online and on all of the DOE Office of Science computational facilities.

    Kent and his colleagues hope to build up to high-temperature superconductors and other complex and mysterious materials. Although scientists know these materials’ broad properties, Kent says, “we can’t relate those to the actual structure and the elements in the materials yet. So that’s a really grand challenge for the condensed-matter physics field.”

    The most accurate quantum mechanical modeling methods restrict scientists to examining just a few atoms or molecules. When scientists want to study larger systems, the computation costs rapidly become unwieldy. QMC offers a compromise: a calculation’s size increases cubically relative to the number of electrons, a more manageable challenge. QMC incorporates only a few controlled approximations and can be applied to the numerous atoms and electrons needed. It’s well suited for today’s petascale supercomputers – capable of one quadrillion calculations or more each second – and tomorrow’s exascale supercomputers, which will be at least a thousand times faster. The method maps simulation elements relatively easily onto the compute nodes in these systems.
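    To see what cubic scaling means in practice, doubling the electron count multiplies the cost by roughly eight. The system sizes in the sketch below are hypothetical and chosen only to show the trend.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative only: relative cost of a calculation whose time-to-solution grows
// with the cube of the number of electrons, normalized to a reference system.
// The electron counts below are hypothetical.
int main() {
    const double n_ref = 250.0;                 // reference electron count
    const int sizes[] = {250, 500, 1000, 2000}; // hypothetical system sizes
    for (int n : sizes) {
        double relative_cost = std::pow(n / n_ref, 3.0);
        std::printf("%5d electrons -> ~%.0fx the cost of the %g-electron reference\n",
                    n, relative_cost, n_ref);
    }
    return 0;
}
```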

    The CPSFM team continues to optimize QMCPACK for ever-faster supercomputers, including OLCF’s Summit, which will be fully operational in January 2019.

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    The higher memory capacity on that machine’s Nvidia Volta GPUs – 16 gigabytes per graphics processing unit compared with 6 gigabytes on Titan – already boosts computation speed. With the help of OLCF’s Ed D’Azevedo and Andreas Tillack, the researchers have implemented improved algorithms that can double the speed of their larger calculations.

    QMCPACK is part of DOE’s Exascale Computing Project, and the team is already anticipating additional scaling challenges for running QMCPACK on future machines. To perform the desired simulations within roughly 12 hours on an exascale supercomputer, Kent estimates that they’ll need algorithms that are 30 times more scalable than those within the current version.

    Depiction of ANL ALCF Cray Shasta Aurora exascale supercomputer

    Even with improved hardware and algorithms, QMC calculations will always be expensive. So Kent and his team would like to use QMCPACK to understand where cheaper methods go wrong so that they can improve them. Then they can save QMC calculations for the most challenging problems in materials science, Kent says. “Ideally we will learn what’s causing these materials to be very tricky to model and then improve cheaper approaches so that we can do much wider scans of different materials.”

    The combination of improved QMC methods and a suite of computationally cheaper modeling approaches could lead the way to new materials and an understanding of their properties. Designing and testing new compounds in the laboratory is expensive, Kent says. Scientists could save valuable time and resources if they could first predict the behavior of novel materials in a simulation.

    Plus, he notes, reliable computational methods could help scientists understand properties and processes that depend on individual atoms that are extremely difficult to observe using experiments. “That’s a place where there’s a lot of interest in going after the fundamental science, predicting new materials and enabling technological applications.”

    Oak Ridge National Laboratory is supported by the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of the U.S. Department of Energy.

     
  • richardmitnick 10:16 am on August 16, 2018 Permalink | Reply
    Tags: 3-D simulations of double-detonation Type Ia supernovas reveal dynamic burning, ORNL OLCF, Supernova research, Titan Helps Researchers Explore Explosive Star Scenarios

    From Oak Ridge Leadership Computing Facility: “Titan Helps Researchers Explore Explosive Star Scenarios – 3-D simulations of double-detonation Type Ia supernovas reveal dynamic burning” 


    Oak Ridge National Laboratory

    From Oak Ridge Leadership Computing Facility

    8.16.18
    Jonathan Hines

    Exploding stars may seem like an unlikely yardstick for measuring the vast distances of space, but astronomers have been mapping the universe for decades using these stellar eruptions, called supernovas, with surprising accuracy.

    This is an artist’s impression of the SN 1987A remnant. The image is based on real data and reveals the cold, inner regions of the remnant, in red, where tremendous amounts of dust were detected and imaged by ALMA. This inner region is contrasted with the outer shell, lacy white and blue circles, where the blast wave from the supernova is colliding with the envelope of gas ejected from the star prior to its powerful detonation. Image credit: ALMA / ESO / NAOJ / NRAO / Alexandra Angelich, NRAO / AUI / NSF.

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    NRAO/Karl V Jansky VLA, on the Plains of San Agustin fifty miles west of Socorro, NM, USA, at an elevation of 6970 ft (2124 m)

    Type Ia supernovas—exploding white dwarf stars—are considered the most reliable distance markers for objects beyond our local group of galaxies. Because all Type Ia supernovas give off about the same amount of light, their distance can be inferred by the light intensity observed from Earth.
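    The article leaves the relation implicit; the standard textbook version is the inverse-square law: for a source of known luminosity L, the flux f measured at Earth fixes the distance d.

```latex
f = \frac{L}{4\pi d^{2}}
\quad\Longrightarrow\quad
d = \sqrt{\frac{L}{4\pi f}}
```

    Astronomers usually express the same idea as a distance modulus, m − M = 5 log10(d / 10 pc), where the absolute magnitude M is nearly the same for all Type Ia supernovas after standard corrections.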

    A white dwarf fed by a normal star reaches the critical mass and explodes as a type Ia supernova. Credit: NASA/CXC/M Weiss

    These so-called standard candles are critical to astronomers’ efforts to map the cosmos. It’s been estimated that Type Ia supernovas can be used to calculate distances to within 10 percent accuracy, good enough to help scientists determine that the expansion of the universe is accelerating, a discovery that garnered the Nobel Prize in Physics in 2011.

    “Outflows” (red), regions where plumes of hot gas escape the intense nuclear burning at a star’s surface, form at the onset of convection in the helium shell of some white dwarf stars. This visualization depicts early convection on the surface of white dwarf stars of different masses. (Image credit: Adam Jacobs, Stony Brook University)

    But despite their reputation for uniformity, exploding white dwarfs contain subtle differences that scientists are working to explain using supercomputers.

    A team led by Michael Zingale of Stony Brook University is exploring the physics of Type Ia supernovas using the Titan supercomputer at the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory. Titan is the flagship machine of the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL. The team’s latest research focuses on a specific class of Type Ia supernovas known as double-detonation supernovas, a process by which a single star explodes twice.

    This year, the team completed a three-dimensional (3-D), high-resolution investigation of the thermonuclear burning a double-detonation white dwarf undergoes before explosion. The study expands upon the team’s initial 3-D simulation of this supernova scenario, which was carried out in 2013.

    “In 3-D simulations we can see the region of convective burning drill down deeper and deeper into the star under the right conditions,” said Adam Jacobs, a graduate student on Zingale’s team. “Higher mass and more burning force the convection to be more violent. These results will be useful in future studies that explore the subsequent explosion in three-dimensional detail.”

    By capturing the genesis of a Type Ia supernova, Zingale’s team is laying the foundation for the first physically realistic start-to-finish, double-detonation supernova simulation. Beyond capturing the incredible physics of an exploding star, the creation of a robust end-to-end model would help astronomers understand stellar phenomena observed in our night sky and improve the accuracy of cosmological measurements.

    These advances, in addition to helping us orient ourselves in the universe, could shed light on some of humanity’s biggest questions about how the universe formed, how we came to be, and where we’re going.

    An Explosive Pairing

    All Type Ia supernovas begin with a dying star gravitationally bound to a stellar companion. White dwarfs are the remnants of Sun-like stars that have spent most of their nuclear fuel. Composed mostly of carbon and oxygen, white dwarfs pack a mass comparable to that of the Sun in a star that’s about the size of the Earth.

    Left to its own devices, a lone white dwarf will smolder into darkness. But when a white dwarf is paired with a companion star, a cosmic dance ensues that’s destined for fireworks.

    To become a supernova, a white dwarf must collide with or siphon off the mass of its companion. The nature of the companion—perhaps a Sun-like star, a red giant star, or another white dwarf—and the properties of its orbit play a large role in determining the supernova trigger.

    In the classic setup, known as the single-degenerate scenario, a white dwarf takes on the maximum amount of mass it can handle—about 1.4 times the mass of the Sun, a constraint known as the Chandrasekhar limit. The additional mass increases pressure within the white dwarf’s core, reigniting nuclear fusion. Heat builds up within the star over time until it can no longer escape the star’s surface fast enough. A moving flame front of burning gases emerges, engulfing the star and causing its explosion.

    This model gave scientists a strong explanation for the uniformity of Type Ia supernovas, but further tests and observational data gathered by astronomers suggested there was more to the story.

    “To reach the Chandrasekhar limit, a white dwarf has to gain mass at just the right rate so that it grows without losing mass, for example by triggering an explosion,” Jacobs said. “It’s difficult for the classic model to explain all we know today. The community is more and more of the belief that there are going to be multiple progenitor systems that lead to a Type Ia system.”

    The double-detonation scenario, a current focus of Zingale’s team, is one such alternative. In this model, a white dwarf builds up helium on its surface. The helium can be acquired in multiple ways: stealing hydrogen from a Sun-like companion and burning it into helium, siphoning helium directly from a helium white dwarf, or attracting the helium-rich core remnant of a dying Sun-like star. The buildup of helium on the white dwarf’s surface can cause a detonation before reaching the Chandrasekhar limit. The force of this sub-Chandrasekhar detonation triggers a second detonation in the star’s carbon–oxygen core.

    “If you have a thick helium shell, the explosion doesn’t look like a normal Type Ia supernova,” Jacobs said. “But if the helium shell is very thin, you can get something that does.”

    To test this scenario, Zingale’s team simulated 18 different double-detonation models using the subsonic hydrodynamics code MAESTRO. The simulations were carried out under a 50-million core-hour allocation on Titan, a Cray XK7 with a peak performance of 27 petaflops (or 27 quadrillion calculations per second), awarded through the Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program. DOE’s Office of Nuclear Physics also supported the team’s work.

    By varying the mass of the helium shell and carbon–oxygen core in each model, MAESTRO calculated a range of thermonuclear dynamics that potentially could lead to detonation. Additionally, the team experimented with “hot” and “cold” core temperatures—about 10 million and 1 million degrees Celsius, respectively.

    In three-dimensional detail, the team was able to capture the formation of “hot spots” on the sub-Chandrasekhar star’s surface, regions where the star cannot shed the heat of burning helium fast enough. The simulations indicated that this buildup could lead to a runaway reaction if the conditions are right, Jacobs said.

    “We know that all nuclear explosions depend on a star’s temperature and density. The question is whether the shell dynamics of the double-detonation model can yield the temperature and density needed for an explosion,” Jacobs said. “Our study suggests that it can.”

    Using the OLCF’s analysis cluster Rhea, Zingale’s team was able to visualize this relationship for the first time.

    Bigger and Better

    Before translating its findings to the next step of double detonation, called the ignition-to-detonation phase, Zingale’s team is upgrading MAESTRO to calculate more realistic physics, an outcome that will enhance the fidelity of its simulations. On Titan, this means equipping the CPU-only code to leverage GPUs, which are highly parallel, highly efficient processors that can take on heavy calculation loads.

    Working with the OLCF’s Oscar Hernandez, the team was able to offload one of MAESTRO’s most demanding tasks: tracking nucleosynthesis, the nucleus-merging, energy-releasing process inside stars. For the double-detonation problem, MAESTRO calculates a network of three elements—helium, carbon, and oxygen. By leveraging the GPUs, Zingale’s team could increase that number to around 10. Early efforts using the OpenACC compiler directives supported by the PGI compiler indicated that a speedup of around 400 percent was attainable for this part of the code.
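    The pattern behind that offload is a large, independent loop over grid cells, each carrying a small set of species, annotated with OpenACC directives. The sketch below is a generic illustration of that pattern, not MAESTRO’s actual Fortran routines; the “burning” step and rates are made up.

```cpp
#include <cstdio>
#include <vector>

// Generic sketch of offloading a per-cell reaction-network update with OpenACC
// directives (not MAESTRO's actual code). Each grid cell carries a small set of
// species abundances; the loop over cells is independent and so maps naturally
// onto a GPU. Compiled without OpenACC support, the pragma is ignored and the
// loop runs serially on the CPU.
constexpr int NSPEC = 3;   // e.g. helium, carbon, oxygen

int main() {
    const int ncells = 1 << 20;
    std::vector<double> X(ncells * NSPEC, 1.0 / NSPEC);  // species mass fractions
    const double dt = 1.0e-6;                            // illustrative time step

    double* x = X.data();

    #pragma acc parallel loop copy(x[0:ncells * NSPEC])
    for (int c = 0; c < ncells; ++c) {
        // Toy "burning" step: convert a little helium into carbon and oxygen.
        // A real network integrates stiff ODEs for temperature-dependent rates.
        double burned = 0.1 * dt * x[c * NSPEC + 0];
        x[c * NSPEC + 0] -= burned;
        x[c * NSPEC + 1] += 0.5 * burned;
        x[c * NSPEC + 2] += 0.5 * burned;
    }

    std::printf("cell 0 mass fractions: He=%.6f C=%.6f O=%.6f\n", x[0], x[1], x[2]);
    return 0;
}
```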

    The GPU effort benefits the team’s investigation of not only Type Ia supernovas but also other astrophysical phenomena. As part of its current INCITE proposal, Zingale’s team is exploring Type I x‑ray bursts, a recurring explosive event triggered by the buildup of hydrogen and helium on the surface of a neutron star, the densest and smallest type of star in the universe.

    “Right now our reaction network for x-ray bursts includes 11 nuclei. We want to go up to 40. That requires about a factor of 16 more computational power that only the GPUs can give us,” Zingale said.

    Maximizing the power of current-generation supercomputers will position codes like MAESTRO to better take advantage of the next generation of machines. Summit, the OLCF’s next GPU-equipped leadership system, is expected to deliver at least five times the performance of Titan.

    “Ultimately, we hope to understand how convection behaves in these stellar systems,” Zingale said. “Now we want to do bigger and better, and Titan is what we need to achieve that.”

    Related publications:

    A. M. Jacobs, M. Zingale, A. Nonaka, A. Almgren, and J. Bell, “Low Mach Number Modeling of Convection in Helium Shells on Sub-Chandrasekhar White Dwarfs II: Bulk Properties of Simple Models.” arXiv preprint: http://arxiv.org/abs/1507.06696.

    M. Zingale, C. Malone, A. Nonaka, A. Almgren, and J. Bell, “Comparisons of Two- and Three-Dimensional Convection in Type I X-ray Bursts.” The Astrophysical Journal 807, no. 1 (2015): 60–71, doi:10.1088/0004-637X/807/1/60.

    M. Zingale, A. Nonaka, A. Almgren, J. Bell, C. Malone, and R. Orvedahl, “Low Mach Number Modeling of Convection in Helium Shells on Sub-Chandrasekhar White Dwarfs. I. Methodology.” The Astrophysical Journal 764, no. 1 (2013): 97–110, doi:10.1088/0004-637X/764/1/97.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to process an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 7:46 pm on May 29, 2018 Permalink | Reply
    Tags: Adaptive Input/Output System (ADIOS) and the BigData Express (BDE), ITER Tokamak in Saint-Paul-lès-Durance which is in southern France, ORNL OLCF

    From Fermilab and OLCF: “ADIOS and BigData Express offer new data streaming capabilities” 

    FNAL II photo

    FNAL Art Image
    FNAL Art Image by Angela Gonzales

    From Fermilab

    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.


    Projects large enough to run on high-performance computing (HPC) resources pack data—and a lot of it. Transferring this data between computational and experimental facilities is a challenging but necessary part of projects that rely on experiments to validate computational models.

    Staff at two U.S. Department of Energy (DOE) Office of Science User Facilities — the Oak Ridge Leadership Computing Facility (OLCF) and Fermi National Accelerator Laboratory — facilitated this process by executing the integration of the Adaptive Input/Output System (ADIOS) and the BigData Express (BDE) high-speed data transfer service.

    Now ADIOS and BDE developers are changing the way researchers can transport and analyze data by incorporating a new methodology into the tool that allows for compressing and streaming of data coming out of simulations in real time. The methodology is being tested by OLCF user C. S. Chang, a plasma physics researcher at Princeton Plasma Physics Laboratory (PPPL) who studies the properties of the plasmas that exist in giant fusion devices called tokamaks.

    PPPL NSTX-U at Princeton Plasma Physics Lab, Princeton, NJ, USA

    Chang seeks an understanding of the power needed to run ITER and the heat load to the material wall that will surround its plasma, both of which are key to fusion’s viability.

    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    ITER is an international collaboration working to design, construct, and assemble a burning plasma experiment that can demonstrate the scientific and technological feasibility of fusion power for the commercial power grid. ITER, which counts DOE’s Oak Ridge National Laboratory (ORNL) among its partners, is currently under construction in southern France.

    “If users can separate out the most important pieces of data and move those to another processor that can recognize the intended prioritization and reduce the data, it can provide them with feedback that they may need to stop a simulation if necessary,” said Scott Klasky, leader of the ADIOS framework and group leader for ORNL’s Scientific Data Group.

    Wenji Wu, principal investigator of the BDE project and principal network research investigator of Fermilab’s Core Computing Division, added, “The new approach leverages the software-defining network [SDN] capabilities for resource scheduling and the high-performance data streaming capabilities of BDE.”

    SDN allows users to dynamically control network resources rather than manually requesting connections.

    “This combination enables real-time data streaming with guaranteed quality of service, whether it be over short or long distances,” Wu said. “In addition, this approach yields small memory footprints.”

    Although the project is still in the development phase, preliminary tests allowed Chang and his team to successfully transfer fusion data between the OLCF — located at ORNL — and PPPL.

    “With this new methodology, users can stream data on the network without ever touching the file system and request network resources on the fly,” said ADIOS and BDE researcher Qing Liu, who has a joint appointment with the New Jersey Institute of Technology and ORNL.
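    As a rough illustration of what file-free streaming looks like on the producer side, here is a minimal sketch using the present-day open-source ADIOS2 C++ API and its SST streaming engine. This is an assumption about interface details rather than the exact code used in the Fermilab/ORNL work, and the BigData Express scheduling layer is not shown.

```cpp
#include <adios2.h>
#include <vector>

// Minimal producer-side sketch of streaming simulation output with ADIOS2's SST
// engine instead of writing files; a consumer opens the same stream name in read
// mode and pulls steps as they are published. Illustrative only: the BigData
// Express network scheduling described in the article sits outside this API.
// Lossy compression operators (e.g., ZFP or SZ) can also be attached to
// variables before writing.
int main() {
    adios2::ADIOS adios;                               // serial (non-MPI) ADIOS object
    adios2::IO io = adios.DeclareIO("SimulationOutput");
    io.SetEngine("SST");                               // stream over the network, no files

    const std::size_t n = 1024;
    std::vector<double> density(n, 1.0);
    auto var = io.DefineVariable<double>("density", {n}, {0}, {n});

    adios2::Engine writer = io.Open("density_stream", adios2::Mode::Write);
    for (int step = 0; step < 10; ++step) {
        // ... advance the simulation and update density here ...
        writer.BeginStep();
        writer.Put(var, density.data());               // publish this step to subscribers
        writer.EndStep();
    }
    writer.Close();
    return 0;
}
```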

    Without streaming capabilities, scientists can perform only after-the-fact analyses for many experiments, such as KSTAR, the Korea Superconducting Tokamak Advanced Research device.

    KSTAR Korean Superconducting Tokamak Advanced Research

    But with simulations and experiments increasing in size, near–real-time monitoring and control are becoming necessary. The new ADIOS–BDE integration could also play a major role in large experimental projects, such as the fusion project Chang is leading and the Square Kilometer Array, an effort involving dozens of institutions to build the world’s largest radio telescope.

    SKA Square Kilometer Array

    The new streaming capabilities could more easily enable the capture of short-lived events such as pulsars — neutron stars that emit electromagnetic radiation — that the telescope aims to record.

    “KSTAR wants to transfer their data as the experiment is happening, to process their data during the experiment,” Klasky said. “These additions to ADIOS will enable both sides to quickly perform data analysis and visualization in real time.”

    Seo-Young Noh, director of the Global Science Experimental Data Hub Center at the Korea Institute of Science and Technology Information, leads a group that has contributed significantly to the BDE project.

    “Our work has made cross-Pacific, real-time data streaming possible,” Noh said.

    Klasky, Liu, and their collaborators will give a best paper plenary talk related to these new capabilities, titled “Understanding and Modeling Lossy Compression Schemes on HPC Scientific Data,” at the 32nd IEEE International Parallel and Distributed Processing Symposium. The team noted that the new ADIOS methodology will allow scientists to efficiently select the type of compression that will best fit their scientific and research needs, affording them the ability to analyze their data faster than ever before.

    Liang Zhang, the developer of BDE data streaming capabilities, is working with Liu to enhance and test the tool. They expect the tool’s new capabilities to be fully tested and deployed by late 2019. This work also involves ADIOS researcher Jason Wang and BDE researchers Nageswara Rao, Phil DeMar, Qiming Lu, Sajith Sasidharan, S. A. R. Shah, Jin Kim, and Huizhang Luo.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.




    Stem Education Coalition

    FNAL Icon

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.


    FNAL/MINERvA

    FNAL DAMIC

    FNAL Muon g-2 studio

    FNAL Short-Baseline Near Detector under construction

    FNAL Mu2e solenoid

    Dark Energy Camera [DECam], built at FNAL

    FNAL DUNE Argon tank at SURF

    FNAL/MicrobooNE

    FNAL Don Lincoln

    FNAL/MINOS

    FNAL Cryomodule Testing Facility

    FNAL Minos Far Detector

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA

    FNAL/NOvA experiment map

    FNAL NOvA Near Detector

    FNAL ICARUS

    FNAL Holometer

     
  • richardmitnick 12:45 pm on May 7, 2018 Permalink | Reply
    Tags: Faces of Summit: Preparing to Launch, ORNL OLCF

    From Oak Ridge National Laboratory: “Faces of Summit: Preparing to Launch” 


    From Oak Ridge National Laboratory

    OLCF

    5.1.18
    Katie Elyce Jones

    HPC Support Specialist Chris Fuson in the Summit computing center. No image credit.

    OLCF’s Chris Fuson works with Summit vendors and OLCF team members to ready Summit’s batch scheduler and job launcher.

    The Faces of Summit series shares stories of people working to stand up America’s next top supercomputer for open science, the Oak Ridge Leadership Computing Facility’s Summit. The next-generation machine is scheduled to come online in 2018.

    ORNL IBM Summit Supercomputer

    At the Oak Ridge Leadership Computing Facility (OLCF), supercomputing staff and users are already talking about what kinds of science problems they will be able to solve once they “get on Summit.”

    But before they run their science applications on the 200-petaflop IBM AC922 supercomputer later this year, they will have to go through the system’s batch scheduler and job launcher.

    “The batch scheduler and job launcher control access to the compute resources on the new machine,” said Chris Fuson, OLCF high-performance computing (HPC) support specialist. “As a user, you will need to understand these resources to utilize the system effectively.”

    A staff member in the User Assistance and Outreach (UAO) Group, Fuson has worked on five flagship supercomputers at OLCF—Cheetah, Phoenix, Jaguar, Titan, and now Summit.

    [Cheetah, Phoenix, no images available.]

    ORNL OLCF Jaguar Cray Linux supercomputer

    ORNL Cray XK7 Titan Supercomputer

    With a background in programming and computer science, Fuson said he likes to focus on solving the unexpected issues that come up during installation and testing, such as fixing bugs or adding new features to help users navigate the system.

    Fuson can often be found standing at his desk listening to background music while he sorts through new tasks, user requests, and technical issues related to job scheduling.

    “As the systems change and evolve, the detective work involved in helping users solve problems as they run on a new machine keeps it interesting,” he said.

    Of course, the goal is to make the transition to a new system as smooth as possible for users. While still responding to day-to-day tasks related to the OLCF’s current supercomputer, Titan, Fuson and the UAO group also work with IBM to learn, incorporate, and document the IBM Load Sharing Facility (LSF) batch scheduler and the parallel job launcher jsrun for Summit. LSF allocates Summit resources, and jsrun launches jobs on the compute nodes.

    “The new launcher provides similar functionality to other parallel job launchers, such as aprun and mpirun, but requires users to take a slightly different approach in determining how to request and lay out resources for a job,” Fuson said.

    IBM developed jsrun to meet the unique computing needs of two CORAL partners, the US Department of Energy’s (DOE’s) Oak Ridge and Lawrence Livermore National Laboratories.

    “We relayed our workload and scheduling requirements to IBM,” Fuson said. “For example, as a leadership computing facility, we provide priority for large jobs in the batch queue. We work with LSF developers to incorporate our center’s policy requirements and diverse workload needs into the existing scheduler.”

    OLCF Center for Accelerated Application Readiness team members, who are optimizing application codes for Summit, have tested LSF and jsrun on Summitdev, an early access system with IBM processors one generation away from Summit’s Power9 processors.

    “Early users are already providing feedback,” Fuson said. “There’s a lot of work that goes into getting these pieces polished. At first, it is always a struggle as we work toward production, but things will begin to fall into place.”

    To prepare all facility users for scheduling on Summit, Fuson is also developing user documentation and training. In February, he introduced users to jsrun on the monthly User Conference Call for the OLCF, a DOE Office of Science User Facility at ORNL.

    “Right now, Summit is a big focus,” he said. “We’ve invested time in learning these new tools and testing them in the Summit environment.”

    And what about during his free time when Summit is not the focus? Fuson spends his off-hours scheduling as well. “My hobby is taxiing my kids around town between practices,” he joked.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to process an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 12:11 pm on February 9, 2018 Permalink | Reply
    Tags: GM, ORNL OLCF

    From OLCF: “GM Revs up Diesel Combustion Modeling on Titan Supercomputer” 


    Oak Ridge National Laboratory

    OLCF

    In a model of a 1.6 liter engine cylinder, liquid fuel (shown in red and orange) is converted to fuel vapor under high temperatures during ignition. Image courtesy of Ronald Grover.

    Running more detailed chemistry models on GPUs, researchers improve predictions for nitrogen oxides.

    Most car owners in the United States do not think twice about passing over the diesel pump at the gas station. Instead, diesel fuel mostly powers our shipping trucks, boats, buses, and generators—and that is because diesel engines are about 10 percent more fuel-efficient than gasoline engines, saving companies money when transporting large deliveries.

    The downside to diesel engines is that they produce more emissions, like soot and nitrogen oxides, than gasoline engines because of how they combust fuel and air. A gasoline engine uses a spark plug to ignite a fuel-air mixture. A diesel engine compresses air until it is hot enough to ignite diesel fuel sprayed into the cylinder, using more air than necessary to burn all the fuel in a process called lean mixing-controlled combustion.

    “We can generally clean up emissions for a gasoline engine with a three-way catalyst,” said Ronald Grover, staff researcher at General Motors (GM) Research and Development. “The problem with diesel is that when you operate lean, you can’t use the conventional three-way catalysts to clean up all the emissions suitably, so you have to add a lot of complexity to the after-treatment system.”

    That complexity makes diesel engines heavier and more expensive upfront.

    Grover and GM colleagues Jian Gao, Venkatesh Gopalakrishnan, and Ramachandra Diwakar are using the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a US Department of Energy (DOE) Office of Science User Facility at DOE’s Oak Ridge National Laboratory (ORNL), to improve combustion models for diesel passenger car engines with an ultimate goal of accelerating innovative engine designs while meeting strict emissions standards.

    A multinational corporation that delivered 10 million vehicles to market last year, GM runs its research side of the house, Global R&D Laboratories, to develop new technologies for engines and powertrains.

    “We work from a clean sheet of paper, asking ‘What if?’” Grover said. From there, ideas move up to advanced engineering, then to product organization where technology is vetted before it goes into the production pipeline.

    For every engine design, GM must balance cost and performance for customers while working within the constraints of emissions regulations. The company also strives to develop exciting new ideas.

    “The customer is our compass. We’re always trying to design and improve the engine,” Grover said. “We see constraint, and we’re trying to push that boundary.”

    But testing innovative engine designs can run up a huge bill.

    “One option is to try some designs, make some hardware, go test it, make some more hardware, go test it, and you continue to do this iterative process until you eventually reach the design that you like,” he said. “But obviously, every design iteration costs money because you’re cutting new hardware.”

    Meanwhile, competitors might put their own new designs on the market. To reduce R&D costs, automakers use virtual engine models to computationally simulate and calibrate, or adjust, new designs so that only the best designs are built as prototypes for testing in the real world.

    Central to engine design is the combustion process, but studying the intricacies of combustion in a laboratory is difficult and significant computational resources are required to simulate it in a virtual environment.

    Combustion is critical to drivability and ensuring seamless operation on the road, but combustion also affects emissions production because emissions are chemical byproducts of combustion’s main ingredients: fuel, air, and heat.

    “There are hundreds of thousands of chemical species to be measured that you have to track and tens of thousands of reactions that you need to simulate,” Grover said. “We have to simplify the chemistry to the point that we can handle it for computational modeling, and to simplify it, sometimes you have to make assumptions. So sometimes we find the model works well in some areas and doesn’t work well in others.”

    The combustion process in a car engine—from burning the first drop of fuel to emitting the last discharge of exhaust—can create many thousands of chemical species, including regulated emissions. However, sensors used in experimental testing allow researchers to track only a limited number of species over the combustion process.

    “You’re missing a lot of detail in the middle,” Grover said.

    Grover’s team wanted to increase the number of species to better understand the chemical reactions taking place during combustion, but in-house computational resources could not compute such complex chemical changes with high accuracy within a reasonable time frame.

    To test the limits of their in-house resources, Grover’s team increased the number of chemical species to 766 and planned to simulate combustion across a span of 280 crank angle degrees, which is a measure of engine-cycle progress. An entire engine cycle, with one combustion event, equals 720 crank angle degrees.

    “It took 15 days just to compute 150 crank angle degrees. So, we didn’t even finish the calculation in over 2 weeks,” he said. “But we still wanted to model the highest fidelity chemistry package that we could.”

    To reduce computing time while increasing the complexity of the chemistry calculations, the GM team would need an extremely powerful computer and a new approach.

    A richer recipe for combustion

    Grover and the GM team turned to DOE for assistance. Through DOE’s Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC), a competitive peer-reviewed program, they successfully applied for and were awarded time on Titan during 2015 and 2016.

    A 27-petaflop Cray XK7 supercomputer with a hybrid CPU–GPU architecture, Titan is the nation’s most powerful computer for open scientific research. To make the most of the computing allocation, Grover’s team worked with Dean Edwards, Wael Elwasif, and Charles Finney at ORNL’s National Transportation Research Center to optimize combustion models for Titan’s architecture and add chemical species. They also partnered with Russell Whitesides at DOE’s Lawrence Livermore National Laboratory (LLNL). Whitesides is a developer of a chemical-kinetics solver called Zero-RK, which can use GPUs to accelerate computations. Both the ORNL and LLNL efforts are funded by DOE’s Vehicle Technologies Office (VTO).

    The team combined Zero-RK with the CONVERGE computational fluid dynamics (CFD) software that Grover uses in-house. CONVERGE is the product of a small-business CFD software company called Convergent Science.

    The GM team set out to accomplish three things: use Titan’s GPUs so they could increase the complexity of the chemistry in their combustion models, compare the results of Titan simulations with GM experimental data to measure accuracy, and identify other areas for improvement in the combustion model.

    “Their goal was to be able to better simulate what actually happens in the engine,” said Edwards, the ORNL principal investigator.

    ORNL’s goal was to help the GM team improve the accuracy of the combustion model, an exercise that could benefit other combustion research down the road. “The first step was to improve the emissions predictions by adding detail back into the simulation,” Edwards said.

    “And the bigger the recipe, the longer it takes the computer to solve it,” Finney said.

    This was also a computationally daunting step because chemistry does not happen in a vacuum.

    “On top of chemical kinetics, for our engine work, we have to model the movement of the piston, the movement of the valves, the spray injection, the turbulent flow—all of these things in addition to the chemistry,” Grover said.

    The combustion model also needed to accurately simulate the many different operating conditions created in the engine. To simulate combustion under realistic conditions, GM brought experimental data for about 600 operating conditions—points measuring the balance of engine load (a measure of work output from the engine) and engine speed (revolutions per minute) that mimic realistic driving conditions in which a driver is braking, accelerating, driving uphill or downhill, idling in traffic, and more.

    The team simulated a baseline model of 50 chemical species that matched what GM routinely computed in-house, then added 94 chemical species for a total of 144.

    “On Titan, we almost tripled our number of species,” Grover said. “We found that by using the Zero-RK GPU solver for chemistry, the chemistry computations ran about 33 percent faster.”

    These encouraging results led the team to increase the number of chemical species to 766. What had taken the team over 2 weeks to do in-house—modeling 766 species across 150 crank angle degrees—was completed in 5 days on Titan.

    In addition, the team was able to complete the calculations over the desired 280 crank angle degrees, something that wasn’t possible using in-house resources.

    “We gathered a lot of success here,” Grover said.

    With the first objective met—to see if they could increase simulation detail within a manageable compute time by using Titan’s GPUs—they moved on to compare accuracy against the experimental data.

    They measured emissions including nitrogen oxides, carbon monoxide, soot, and unburned hydrocarbons (fuel that did not burn completely).

    “Nitrogen oxide emissions in particular are tied to temperature and how a diesel engine combustion system operates,” Edwards said. “Diesel engines tend to operate at high temperatures and create a lot of nitrogen oxides.”

    Compared with the baseline Titan simulation, the refined Titan simulation with 766 species improved nitrogen oxide predictions by 10–20 percent.

    “That was one of our objectives: Can we model bigger chemistry and learn anything? Yes, we can,” Grover said, noting that the team saw some improvements for soot predictions as well but still struggled with increasing predictive accuracy for carbon monoxide and unburned hydrocarbon emissions.

    “That’s not a bad result because we were able to see that maybe there’s something we’re missing other than chemistry,” Grover said.

    One suspect is the model’s heat transfer treatment, which relies on prescribed temperatures at the cylinder walls. “We need to spend more time evaluating the validity of those wall temperatures,” Grover said. “We’re actually going to compute the wall temperatures by simulating the effect of the coolant flow around the engine. We’re hoping better heat transfer predictions will give us a big jump in combination with better chemistry.”

    Another result was the demonstration of the GPUs’ ability to solve new problems.

    The parallelism provided by Titan’s GPUs supplied the throughput necessary to calculate hundreds of chemical species across hundreds of operating points. “Applying GPUs for computer-aided engineering could open up another benefit,” Grover said.

    If GPUs can help reduce design time, that could boost business.

    “That’s faster designs to market,” Grover said. “Usually a company will go through a vehicle development process from end-to-end that could take 4 or 5 years. If you could develop the powertrain faster, then you could get cars to market faster and more reliably.”
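
    The throughput Grover describes also reflects the structure of the problem: each of the roughly 600 operating points is an independent case, so the cases can be farmed out to parallel resources. The sketch below illustrates only that structure, using OpenMP threads on a CPU for simplicity; the simulate_point function, the load/speed grid, and the returned quantity are hypothetical placeholders, not the actual engine CFD cases with GPU-accelerated chemistry that ran on Titan.

        /* Sketch of throughput over independent engine operating points.
         * Hypothetical stand-in: each "simulation" is a placeholder function,
         * not the engine cases run on Titan. Compile with: cc -fopenmp -lm */
        #include <stdio.h>
        #include <math.h>
        #include <omp.h>

        #define N_POINTS 600   /* e.g., ~600 load/speed operating conditions */

        /* Placeholder for one engine-cycle simulation at a given load and speed. */
        static double simulate_point(double load_frac, double rpm)
        {
            return load_frac * exp(-1000.0 / (rpm + 1.0));   /* made-up "result" */
        }

        int main(void)
        {
            double result[N_POINTS];

            /* Each operating point is independent, so the loop parallelizes trivially. */
            #pragma omp parallel for schedule(dynamic)
            for (int i = 0; i < N_POINTS; ++i) {
                double load = (i % 20) / 19.0;            /* 0..1 of full load     */
                double rpm  = 800.0 + 150.0 * (i / 20);   /* idle up to high speed */
                result[i]   = simulate_point(load, rpm);
            }

            printf("first point: %.4f, last point: %.4f\n",
                   result[0], result[N_POINTS - 1]);
            return 0;
        }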

    Science paper:
    “Steady-State Calibration of a Diesel Engine in CFD Using a GPU-based Chemistry Solver” (ASME)

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid-architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to process an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 1:12 pm on January 19, 2018 Permalink | Reply
    Tags: , ORNL OLCF, , ,   

    From OLCF: “Optimizing Miniapps for Better Portability” 


    Oak Ridge National Laboratory

    OLCF

    January 17, 2018
    Rachel Harken

    When scientists run their scientific applications on massive supercomputers, the last thing they want to worry about is optimizing their codes for new architectures. Computer scientist Sunita Chandrasekaran at the University of Delaware is taking steps to make sure they don’t have a reason to worry.

    Chandrasekaran collaborates with a team at the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) to optimize miniapps, smaller pieces of large applications that can be extracted and fine-tuned to run on GPU architectures. Chandrasekaran and her PhD student, Robert Searles, have taken on the task of porting (adapting) one such miniapp, Minisweep, to OpenACC—a directive-based programming model that allows users to run a code on multiple computing platforms without having to change or rewrite it.
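
    For readers unfamiliar with directive-based programming, the toy loop below (not taken from Minisweep) shows the basic idea: the loop body remains ordinary C, and a single pragma tells an OpenACC compiler that it may offload and parallelize the loop, while a compiler without OpenACC support can simply ignore the directive and run the same code serially.

        /* Minimal OpenACC illustration (not Minisweep): a vector update whose
         * serial C code is untouched; the pragma alone requests parallelization. */
        #include <stdio.h>

        #define N 1000000

        int main(void)
        {
            static float x[N], y[N];

            for (int i = 0; i < N; ++i) {   /* initialize on the host */
                x[i] = 1.0f;
                y[i] = 2.0f;
            }

            /* With an OpenACC compiler (e.g., pgcc or nvc with -acc) this loop
             * can be offloaded to a GPU; otherwise the directive is ignored. */
            #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
            for (int i = 0; i < N; ++i)
                y[i] = 2.0f * x[i] + y[i];

            printf("y[0] = %f\n", y[0]);    /* expect 4.0 */
            return 0;
        }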

    Minisweep performs a “sweep” computation across a grid (pictured)—representative of a 3D volume in space—to calculate the positions, energies, and flows of neutrons in a nuclear reactor. The yellow cube marks the beginning location of the sweep. The green cubes are dependent upon information from the yellow cube, the blue cubes are dependent upon information from the green cubes, and so forth. In practice, sweeps are performed from each of the eight corners of the cube simultaneously.

    Minisweep is particularly important because it represents approximately 80–99 percent of the computation time of Denovo, a 3D code for radiation transport in nuclear reactors being used in a current DOE Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, project. Minisweep is also being used in benchmarking for the Oak Ridge Leadership Computing Facility’s (OLCF’s) new Summit supercomputer.

    ORNL IBM Summit supercomputer depiction

    Summit is scheduled to be in full production in 2019 and will be the next leadership-class system at the OLCF, a DOE Office of Science User Facility located at ORNL.

    Created from Denovo by OLCF computational scientist Wayne Joubert, Minisweep works by “sweeping” diagonally across grid cells that represent points in space, allowing it to track the positions, flows, and energies of neutrons in a nuclear reactor. Each cube in the grid holds a set of these quantities and depends on information from the previously swept cubes upwind of it.
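
    The dependency pattern in such a sweep is often organized as a “wavefront”: all cells on the same diagonal plane i + j + k = w depend only on cells from earlier planes, so each plane can be processed in parallel once its predecessors are done. The sketch below is a schematic of that ordering for a single sweep direction on a small grid; the update rule and grid size are hypothetical, and none of it is Minisweep’s actual data layout or physics.

        /* Schematic wavefront sweep over a small 3D grid (one sweep direction).
         * Cells on the same diagonal plane i+j+k = w are mutually independent,
         * so each plane can be processed in parallel once earlier planes finish. */
        #include <stdio.h>

        #define NX 8
        #define NY 8
        #define NZ 8

        static double cell[NX][NY][NZ];

        int main(void)
        {
            cell[0][0][0] = 1.0;   /* sweep starts at one corner of the grid */

            for (int w = 1; w <= (NX - 1) + (NY - 1) + (NZ - 1); ++w) {
                /* Every cell on plane w depends only on its three upwind
                 * neighbors, all of which lie on planes < w. */
                for (int i = 0; i < NX; ++i)
                    for (int j = 0; j < NY; ++j) {
                        int k = w - i - j;
                        if (k < 0 || k >= NZ) continue;
                        double upwind = 0.0;
                        if (i > 0) upwind += cell[i - 1][j][k];
                        if (j > 0) upwind += cell[i][j - 1][k];
                        if (k > 0) upwind += cell[i][j][k - 1];
                        cell[i][j][k] = upwind / 3.0;   /* placeholder update */
                    }
            }

            printf("far corner value: %g\n", cell[NX - 1][NY - 1][NZ - 1]);
            return 0;
        }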

    “Scientists need to know how neutrons are flowing in a reactor because it can help them figure out how to build the radiation shield around it,” Chandrasekaran said. “Using Denovo, physicists can simulate this flow of neutrons, and with a faster code, they can compute many different configurations quickly and get their work done faster.”

    Minisweep has already been ported to multicore platforms using the OpenMP programming interface and to GPU accelerators using the lower-level programming language CUDA. ORNL computer scientists and ORNL Miniapps Port Collaboration organizers Tiffany Mintz and Oscar Hernandez knew that porting these kinds of codes to OpenACC would equip them for use on different high-performance computing architectures.

    Chandrasekaran and Searles have been using the Summit early access system, Summitdev, and the Cray XK7 Titan supercomputer at the OLCF to test Minisweep since mid-2017.

    ORNL Cray XK7 Titan Supercomputer

    Visualization of a nuclear reactor simulation on Titan.

    Now they’ve successfully enabled Minisweep to run on parallel architectures using OpenACC, achieving fast execution on the targeted computer. Previously, no option existed for porting the code to these kinds of systems without compromising performance.

    Although the full code sweeps in eight directions at once, from the corners of the cube inward, the team tested a single sweep and found that the OpenACC version performed on par with CUDA.

    “We saw OpenACC performing as well as CUDA on an NVIDIA Volta GPU, which is a state-of-the-art GPU card,” Searles said. “That’s huge for us to take away, because we are normally lucky to get performance that’s even 85 percent of CUDA. That one sweep consistently showed us about 0.3 or 0.4 seconds faster, which is significant at the problem size we used for measuring performance.”

    Chandrasekaran and the team at ORNL will continue optimizing Minisweep to get the application up and “sweeping” from all eight corners of a grid cell. Other radiation transport applications, and one for DNA sequencing, may be able to take advantage of Minisweep on multiple GPU architectures such as Summit’s—and even exascale systems—in the future.

    “I’m constantly trying to look at how I can package these kinds of tools from a user’s perspective,” Chandrasekaran said. “I take applications that are essential for these scientists’ research and try to find out how to make them more accessible. I always say: write once, reuse multiple times.”

    See the full article here.


     
  • richardmitnick 11:57 am on November 8, 2017 Permalink | Reply
    Tags: At higher temperature snapshots at different times show the moments pointing in different random directions, , Magnetic moments, , , ORNL OLCF, , , Rules of attraction   

    From ORNL OLCF via D.O.E.: “Rules of attraction” 


    Oak Ridge National Laboratory

    OLCF

    November 8, 2017
    No writer credit

    A depiction of magnetic moments obtained using the hybrid WL-LSMS modeling technique inside nickel (Ni) as the temperature is increased from left to right. At low temperature (left), the magnetic moments of the Ni atoms all point in one direction and align. At higher temperature (right), snapshots at different times show the moments pointing in different, random directions, and the individual atoms no longer perfectly align. Image courtesy of Oak Ridge National Laboratory.

    The atoms inside materials are not always perfectly ordered, as usually depicted in models. In magnetic, ferroelectric (or showing electric polarity) and alloy materials, there is competition between random arrangement of the atoms and their desire to align in a perfect pattern. The change between these two states, called a phase transition, happens at a specific temperature.

    Markus Eisenbach, a computational scientist at the Department of Energy’s Oak Ridge National Laboratory, heads a group of researchers who’ve set out to model the behavior of these materials from first principles – from fundamental physics, without parameters fitted to external data.

    “We’re just scratching the surface of comprehending the underlying physics of these three classes of materials, but we have an excellent start,” Eisenbach says. “The three are actually overlapping in that their modes of operation involve disorder, thermal excitations and resulting phase transitions – from disorder to order – to express their behavior.”

    Eisenbach says he’s fascinated by “how magnetism appears and then disappears at varying temperatures. Controlling magnetism from one direction to another has implications for magnetic recording, for instance, and all sorts of electric machines – for example, motors in automobiles or generators in wind turbines.”

    The researchers’ models also could help find strong, versatile magnets that don’t use rare earth elements as an ingredient. Located at the bottom of the periodic table, these 17 elements come almost exclusively from China and, because of their limited supply, are considered critical. They are a mainstay in the composition of many strong magnets.

    Eisenbach and his collaborators, who include his ORNL team and Yang Wang of the Pittsburgh Supercomputing Center, are in the second year of a DOE INCITE (Innovative and Novel Computational Impact on Theory and Experiment) award to model all three materials at the atomic level. They’ve been awarded 100 million processor hours on ORNL’s Titan supercomputer and already have impressive results in magnetics and alloys. Titan is housed at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science user facility.

    The researchers tease out atomic-scale behavior using, at times, a hybrid code that combines Wang-Landau (WL) Monte Carlo and locally self-consistent multiple scattering (LSMS) methods. WL is a statistical approach that samples the energy landscape of atomic configurations to capture finite-temperature effects; LSMS supplies the first-principles energy of each sampled configuration. With LSMS alone, they’ve calculated the ground-state magnetic properties of an iron-platinum particle. And without making any assumption beyond the chemical composition, they’ve determined the temperature at which a copper-zinc alloy goes from a disordered state to an ordered one.
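
    As a rough illustration of the Wang-Landau half of that hybrid, the sketch below runs a flat-histogram WL random walk on a small two-dimensional Ising model to estimate the density of states g(E). In the real WL-LSMS code, the toy Ising energy below would be replaced by a first-principles LSMS energy evaluation; the lattice size, stage length, and modification-factor schedule are illustrative choices, and production runs end each stage with a histogram-flatness check rather than a fixed move count.

        /* Schematic Wang-Landau (WL) sampling on a small 2D Ising model.
         * WL performs a random walk in energy, accepting moves with probability
         * min(1, g(E_old)/g(E_new)) and refining an estimate of the density of
         * states g(E). In the WL-LSMS hybrid described in the article, the toy
         * Ising energy here would instead come from an LSMS calculation. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>

        #define L     8
        #define N     (L * L)
        #define NBINS (N + 1)            /* allowed energies: -2N..2N in steps of 4 */

        static int spin[L][L];

        /* -s(i,j) times the sum of its four neighbors (periodic boundaries). */
        static int site_energy(int i, int j)
        {
            int s = spin[i][j];
            return -s * (spin[(i + 1) % L][j] + spin[(i + L - 1) % L][j] +
                         spin[i][(j + 1) % L] + spin[i][(j + L - 1) % L]);
        }

        int main(void)
        {
            double lng[NBINS] = {0.0};   /* running estimate of ln g(E)           */
            double lnf = 1.0;            /* ln of the WL modification factor      */

            srand(12345);
            for (int i = 0; i < L; ++i)
                for (int j = 0; j < L; ++j)
                    spin[i][j] = (rand() % 2) ? 1 : -1;

            int E = 0;                   /* total energy, counting each bond once */
            for (int i = 0; i < L; ++i)
                for (int j = 0; j < L; ++j)
                    E -= spin[i][j] * (spin[(i + 1) % L][j] + spin[i][(j + 1) % L]);

            while (lnf > 1.0e-4) {       /* successive WL refinement stages       */
                /* Fixed-length stage for brevity; real codes check histogram flatness. */
                for (long move = 0; move < 20000L * N; ++move) {
                    int i  = rand() % L, j = rand() % L;
                    int dE = -2 * site_energy(i, j);   /* energy change of a flip */
                    int bo = (E + 2 * N) / 4;          /* current energy bin      */
                    int bn = (E + dE + 2 * N) / 4;     /* proposed energy bin     */
                    /* Accept with probability min(1, g(E_old)/g(E_new)). */
                    if (lng[bo] - lng[bn] >=
                        log((rand() + 1.0) / ((double)RAND_MAX + 2.0))) {
                        spin[i][j] = -spin[i][j];
                        E  += dE;
                        bo  = bn;
                    }
                    lng[bo] += lnf;      /* visited bin gets its ln g nudged up   */
                }
                lnf *= 0.5;              /* tighten the modification factor       */
            }

            printf("ln g(E=0) - ln g(ground state) = %.2f\n", lng[N / 2] - lng[0]);
            return 0;
        }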

    Moreover, Eisenbach has co-authored two materials science papers in the past year, one in Leadership Computing, the other a letter in Nature, in which he and colleagues reported using the three-dimensional coordinates of a real iron-platinum nanoparticle with 6,560 iron and 16,627 platinum atoms to find its magnetic properties.

    “We’re combining the efficiency of WL sampling, the speed of the LSMS and the computing power of Titan to provide a solid first-principles thermodynamics description of magnetism,” Eisenbach says. “The combination also is giving us a realistic treatment of alloys and functional materials.”

    Alloys are composed of at least two metals. Brass, for instance, is an alloy of copper and zinc. Magnets, of course, are used in everything from credit cards to MRI machines and in electric motors. Ferroelectric materials, such as barium titanate and zirconium titanate, form what’s known as an electric moment in a phase transition, when temperatures drop beneath the ferroelectric Curie temperature – the point where the atoms align, triggering a spontaneous electric polarization (in ferromagnets, a spontaneous magnetization). The term – named after the French physicist Pierre Curie, who in the late 19th century described how magnetic materials respond to temperature changes – applies to both ferroelectric and ferromagnetic transitions. Eisenbach and his collaborators are interested in both phenomena.

    Eisenbach is particularly intrigued by high-entropy alloys, a relatively new sub-class discovered a decade ago that may hold useful mechanical properties. Conventional alloys have a dominant element – for instance, iron in stainless steel. High-entropy alloys, on the other hand, evenly spread out their elements on a crystal lattice. They don’t get brittle when chilled, remaining pliable at extremely low temperatures.

    To understand the configuration of high-entropy alloys, Eisenbach uses the analogy of a chess board sprinkled with black and white beads. In an ordered material, black beads occupy black squares and white beads, white squares. In high-entropy alloys, however, the beads are scattered randomly across the lattice regardless of color until the material reaches a low temperature, much lower than normal alloys, when it almost grudgingly orders itself.

    Eisenbach and his colleagues have modeled a material as large as 100,000 atoms using the Wang-Landau/LSMS method. “If I want to represent disorder, I want a simulation that calculates for hundreds if not thousands of atoms, rather than just two or three,” he says.

    To model an alloy, the researchers first deploy the Schrödinger equation to determine the state of the electrons in the atoms. “Solving the equation lets you understand the electrons and their interactions, which is the glue that holds the material together and determines their physical properties,” Eisenbach says.
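
    For reference, the equation in question is the time-independent Schrödinger eigenvalue problem,

        \[
        \hat{H}\,\Psi = E\,\Psi ,
        \]

    where the Hamiltonian \(\hat{H}\) contains the electrons’ kinetic energy and their interactions with one another and with the atomic nuclei. In practice, first-principles codes such as LSMS solve an effective one-electron form of this problem within density functional theory rather than the full many-electron equation.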

    A material’s properties and energies are calculated through many hundreds of thousands of calculations spanning many possible configurations and a range of temperatures. From that picture, modelers can determine at what temperature a material loses or gains its magnetism, or at what temperature an alloy goes from a disordered state to a perfectly ordered one.
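
    In the Wang-Landau picture, those calculations amount to estimating the density of states g(E); once g(E) is in hand, finite-temperature properties follow from standard Boltzmann-weighted averages, for example

        \[
        \langle O \rangle(T) \;=\; \frac{\sum_{E} O(E)\, g(E)\, e^{-E/k_{\mathrm B}T}}{\sum_{E} g(E)\, e^{-E/k_{\mathrm B}T}} ,
        \]

    so a single set of first-principles energy evaluations can be reused across the entire temperature range, including through the phase transition.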

    Eisenbach eagerly awaits the arrival of the Summit supercomputer – five to six times more powerful than Titan – at the OLCF in late 2018.

    Two views of Summit:

    ORNL IBM Summit Supercomputer

    ORNL IBM Summit supercomputer depiction

    “Ultimately, we can do larger simulations and possibly look at even more complex disordered materials with more components and widely varying compositions, where the chemical disorder might lead to qualitatively new physical behaviors.”

    See the full article here.


     