Tagged: ANL-ALCF

  • richardmitnick 4:03 pm on August 25, 2021 Permalink | Reply
    Tags: ANL-ALCF, "NVIDIA and HPE to Deliver 2240-GPU Polaris Supercomputer for DOE's Argonne National Laboratory", "The era of exascale AI will enable scientific breakthroughs with massive scale to bring incredible benefits for society."

    From insideHPC: “NVIDIA and HPE to Deliver 2240-GPU Polaris Supercomputer for DOE’s Argonne National Laboratory”

    From insideHPC

    August 25, 2021

    NVIDIA and DOE’s Argonne National Laboratory (US) this morning announced Polaris, a GPU-based supercomputer with 2,240 NVIDIA A100 Tensor Core GPUs delivering 1.4 exaflops of theoretical AI performance and about 44 petaflops of peak double-precision performance.

    The Polaris system will be hosted at the laboratory’s Argonne Leadership Computing Facility (ALCF) in support of R&D with extreme scale for users’ algorithms and science. Polaris, to be built by Hewlett Packard Enterprise, will combine simulation and machine learning by tackling data-intensive and AI high performance computing workloads, powered by 560 total nodes, each with four A100 GPUs, the organizations said.
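    As a rough cross-check of the headline figures, the sketch below shows how 560 nodes with four A100s each lead to roughly 44 petaflops of peak double precision and about 1.4 exaflops of theoretical AI performance. It is a back-of-envelope estimate; the per-GPU peak numbers are taken from NVIDIA's public A100 spec sheet and are assumptions of this sketch, not values quoted in the announcement.

```python
# Back-of-envelope check of the headline Polaris numbers from per-GPU peaks.
# The A100 figures are taken from NVIDIA's public spec sheet and are
# assumptions of this sketch, not values quoted in the announcement.

NODES = 560
GPUS_PER_NODE = 4
A100_FP64_TENSOR_TFLOPS = 19.5    # peak double precision (Tensor Core)
A100_AI_TFLOPS_SPARSE = 624.0     # peak FP16/BF16 Tensor Core, with sparsity

gpus = NODES * GPUS_PER_NODE                           # 2,240 GPUs
peak_fp64_pflops = gpus * A100_FP64_TENSOR_TFLOPS / 1e3
peak_ai_eflops = gpus * A100_AI_TFLOPS_SPARSE / 1e6

print(f"GPUs:      {gpus}")
print(f"Peak FP64: ~{peak_fp64_pflops:.0f} petaflops")   # ~44 PF
print(f"Peak AI:   ~{peak_ai_eflops:.1f} exaflops")      # ~1.4 EF
```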

    ANL ALCF HPE NVIDIA Polaris supercomputer depiction.

    “The era of exascale AI will enable scientific breakthroughs with massive scale to bring incredible benefits for society,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “NVIDIA’s GPU-accelerated computing platform provides pioneers like the ALCF breakthrough performance for next-generation supercomputers such as Polaris that let researchers push the boundaries of scientific exploration.”

    “Polaris is a powerful platform that will allow our users to enter the era of exascale AI,” said ALCF Director Michael E. Papka. “Harnessing the huge number of NVIDIA A100 GPUs will have an immediate impact on our data-intensive and AI HPC workloads, allowing Polaris to tackle some of the world’s most complex scientific problems.”

    The system will accelerate transformative scientific exploration, such as advancing cancer treatments, exploring clean energy and propelling particle collision research to discover new approaches to physics. And it will transport the ALCF into the era of exascale AI by enabling researchers to update their scientific workloads for Aurora, Argonne’s forthcoming exascale system.

    Polaris will also be available to researchers from academia, government agencies and industry through the ALCF’s peer-reviewed allocation and application programs. These programs provide the scientific community with access to the nation’s fastest supercomputers to address “grand challenges” in science and engineering.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 11:21 am on May 5, 2020 Permalink | Reply
    Tags: "Four years of calculations lead to new insights into muon anomaly", ANL-ALCF

    From Argonne National Laboratory: “Four years of calculations lead to new insights into muon anomaly” 

    Argonne Lab
    News from Argonne National Laboratory

    May 5, 2020
    Christina Nunez

    Using Argonne’s supercomputer Mira, researchers have come up with newly precise calculations aimed at understanding a key disconnect between physics theory and experimental measurements.

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Two decades ago, an experiment at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory pinpointed a mysterious mismatch between established particle physics theory and actual lab measurements. When researchers gauged the behavior of a subatomic particle called the muon, the results did not agree with theoretical calculations, posing a potential challenge to the Standard Model — our current understanding of how the universe works.

    Ever since then, scientists around the world have been trying to verify this discrepancy and determine its significance. The answer could either uphold the Standard Model, which defines all of the known subatomic particles and how they interact, or introduce the possibility of entirely new, undiscovered physics. A multi-institutional research team (including Brookhaven, Columbia University, the universities of Connecticut, Nagoya and Regensburg, and RIKEN) has used Argonne National Laboratory’s Mira supercomputer to help narrow down the possible explanations for the discrepancy, delivering a newly precise theoretical calculation that refines one piece of this very complex puzzle. The work, funded in part by the DOE’s Office of Science through its High Energy Physics and Advanced Scientific Computing Research programs, has been published in the journal Physical Review Letters.

    A muon is a heavier version of the electron and has the same electric charge. The measurement in question is of the muon’s magnetic moment, which defines how the particle wobbles when it interacts with an external magnetic field. The earlier Brookhaven experiment, known as Muon g-2 [since moved to FNAL], examined muons as they interacted with an electromagnet storage ring 50 feet in diameter. The experimental results diverged from the value predicted by theory by an extremely small amount measured in parts per million, but in the realm of the Standard Model, such a difference is big enough to be notable.

    FNAL Muon g-2 studio

    Standard Model of Particle Physics, Quantum Diaries

    “If you account for uncertainties in both the calculations and the measurements, we can’t tell if this is a real discrepancy or just a statistical fluctuation,” said Thomas Blum, a physicist at the University of Connecticut who co-authored the paper. ​“So both experimentalists and theorists are trying to improve the sharpness of their results.”
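    To make the “real discrepancy or statistical fluctuation” question concrete, the sketch below combines the experimental and theoretical uncertainties in quadrature and expresses the difference in standard deviations. The input values are approximate published numbers for the muon anomalous magnetic moment (in units of 1e-11) and are assumptions of this illustration, not figures taken from the article.

```python
import math

# Illustrative: quantify "real discrepancy or statistical fluctuation" by
# combining the experimental and theoretical uncertainties in quadrature.
# The values below (in units of 1e-11) are approximate published numbers for
# the muon anomalous magnetic moment and are assumptions of this sketch, not
# figures quoted in the article.

a_mu_exp, sigma_exp = 116_592_089, 63   # measurement (BNL), approximate
a_mu_thy, sigma_thy = 116_591_810, 43   # Standard Model prediction, approximate

delta = a_mu_exp - a_mu_thy
sigma = math.hypot(sigma_exp, sigma_thy)   # combined uncertainty

print(f"difference:   {delta} x 1e-11")
print(f"combined err: {sigma:.0f} x 1e-11")
print(f"significance: {delta / sigma:.1f} sigma")   # roughly 3-4 sigma
```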

    As Taku Izubuchi, a physicist at Brookhaven Lab who is a co-author on the paper, noted, ​“Physicists have been trying to understand the anomalous magnetic moment of the muon by comparing precise theoretical calculations and accurate experiments since the 1940s. This sequence of work has led to many discoveries in particle physics and continues to expand the limits of our knowledge and capabilities in both theory and experiment.”

    If the discrepancy between experimental results and theoretical predictions is indeed real, that would mean some other factor — perhaps some yet-to-be discovered particle — is causing the muon to behave differently than expected, and the Standard Model would need to be revised.

    The team’s work centered on a notoriously difficult aspect of the anomaly involving the strong interaction, which is one of four basic forces in nature that govern how particles interact, along with weak, electromagnetic, and gravitational interactions. The biggest uncertainties in the muon calculations come from particles that interact through the strong force, known as hadronic contributions. These hadronic contributions are defined by a theory called quantum chromodynamics (QCD).

    The researchers used a method called lattice QCD to analyze a type of hadronic contribution, light-by-light scattering. ​“To do the calculation, we simulate the quantum field in a small cubic box that contains the light-by-light scattering process we are interested in,” said Luchang Jin, a physicist at the University of Connecticut and paper co-author. ​“We can easily end up with millions of points in time and space in the simulation.”
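    A minimal sketch of that bookkeeping, with an entirely hypothetical lattice size, shows how quickly "millions of points in time and space" arise:

```python
# Rough illustration of how lattice-QCD problem sizes grow. The lattice
# dimensions below are hypothetical, chosen only to show the scale implied by
# "millions of points in time and space".

L, T = 64, 128                 # spatial sites per direction, time extent
sites = L ** 3 * T             # total space-time lattice points
dof_per_site = 4 * 3 * 2       # spin x color x (real, imaginary) for one quark field

print(f"lattice sites:              {sites:,}")                    # 33,554,432
print(f"quark-field numbers stored: {sites * dof_per_site:,}")
```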

    That’s where Mira came in. The team used the supercomputer, housed at the Argonne Leadership Computing Facility (ALCF), to solve the complex mathematical equations of QCD, which encode all possible strong interactions with the muon. The ALCF, a DOE Office of Science User Facility, recently retired Mira to make room for the more powerful Aurora supercomputer, an exascale system scheduled to arrive in 2021.

    “Mira was ideally suited for this work,” said James Osborn, a computational scientist with the ALCF and Argonne’s Computational Science division. ​“With nearly 50,000 nodes connected by a very fast network, our massively parallel system enabled the team to run large simulations very efficiently.”

    After four years of running calculations on Mira, the researchers produced the first-ever result for the hadronic light-by-light scattering contribution to the muon anomalous magnetic moment, controlling for all errors.

    “For a long time, many people thought this contribution, because it was so challenging, would explain the discrepancy,” Blum said. ​“But we found previous estimates were not far off, and that the real value cannot explain the discrepancy.”

    Meanwhile, a new version of the Muon g-2 experiment is underway at Fermi National Accelerator Laboratory [above], aiming to reduce uncertainty on the experimental side by a factor of four. Those results will add more insight to the theoretical work being done now.

    “As far as we know, the discrepancy still stands,” Blum said. ​“We are waiting to see whether the results together point to new physics, or whether the current Standard Model is still the best theory we have to explain nature.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit http://science.energy.gov/user-facilities/basic-energy-sciences/.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 5:52 pm on January 30, 2020 Permalink | Reply
    Tags: ALCF will deploy the new Cray ClusterStor E1000 as its parallel storage solution., ALCF’s two new storage systems which it has named “Grand” (150 PB of center-wide storage) and “Eagle” (50 PB community file system) are using the Cray ClusterStor E1000 system., ANL-ALCF, This is in preparation for the pending Aurora exascale supercomputer.

    From insideHPC: “Argonne to Deploy Cray ClusterStor E1000 Storage System for Exascale” 

    From insideHPC

    January 30, 2020
    Rich Brueckner

    Cray ClusterStor E1000

    Today HPE announced that ALCF will deploy the new Cray ClusterStor E1000 as its parallel storage solution.

    The new collaboration supports ALCF’s scientific research in areas such as earthquake seismic activity, aerospace turbulence and shock-waves, physical genomics and more.

    The latest deployment will expand storage capacity for ALCF workloads that require converged modeling, simulation, AI and analytics, in preparation for the pending Aurora exascale supercomputer.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer

    Powered by HPE and Intel, Aurora is a Cray Shasta system planned for delivery in 2021.

    The Cray ClusterStor E1000 system utilizes purpose-built software and hardware features to meet high-performance storage requirements of any size with significantly fewer drives. Designed to support the Exascale Era, which is characterized by the explosion of data and converged workloads, the Cray ClusterStor E1000 will power ALCF’s future Aurora supercomputer to target a multitude of data-intensive workloads required to make breakthrough discoveries at unprecedented speed.

    “ALCF is leveraging Exascale Era technologies by deploying infrastructure required for converged workloads in modeling, simulation, AI and analytics,” said Peter Ungaro, senior vice president and general manager, HPC and AI, at HPE. “Our recent introduction of the Cray ClusterStor E1000 is delivering ALCF unmatched scalability and performance to meet next-generation HPC storage needs to support emerging, data-intensive workloads. We look forward to continuing our collaboration with ALCF and empowering its research community to unlock new value.”

    ALCF’s two new storage systems, which it has named “Grand” and “Eagle,” are using the Cray ClusterStor E1000 system to gain a completely new, cost-effective high-performance computing (HPC) storage solution to effectively and efficiently manage growing converged workloads that today’s offerings cannot support.

    “When Grand launches, it will benefit ALCF’s legacy petascale machines, providing increased capacity for the Theta compute system and enabling new levels of performance for not just traditional checkpoint-restart workloads, but also for complex workflows and metadata-intensive work,” said Mark Fahey, director of operations, ALCF.

    “Eagle will help support the ever-increasing importance of data in the day-to-day activities of science,” said Michael E. Papka, director, ALCF. “By leveraging our experience with our current data-sharing system, Petrel, this new storage will help eliminate barriers to productivity and improve collaborations throughout the research community.”

    The two new systems will gain a total of 200 petabytes (PB) of storage capacity, and through the Cray ClusterStor E1000’s intelligent software and hardware designs, will more accurately align data flows with target workloads. ALCF’s Grand and Eagle systems will help researchers accelerate a range of scientific discoveries across disciplines, and are each assigned to address the following:

    Computational capacity – ALCF’s “Grand” provides 150 PB of center-wide storage and new levels of input/output (I/O) performance to support massive computational needs for its users.
    Simplified data-sharing – ALCF’s “Eagle” provides a 50 PB community file system to make data-sharing easier than ever among ALCF users, their collaborators and with third parties.

    ALCF plans to deliver its Grand and Eagle storage systems in early 2020. The systems will initially connect to existing ALCF supercomputers powered by HPE HPC systems: Theta, based on the Cray XC40-AC, and Cooley, based on the Cray CS-300. ALCF’s Grand, which is capable of 1 terabyte per second (TB/s) bandwidth, will be optimized to support converged simulation science and data-intensive workloads once the Aurora exascale supercomputer is operational.
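    A little arithmetic puts the quoted figures in perspective: the combined capacity of the two systems and what Grand's roughly 1 TB/s bandwidth means in practice. The 100 TB checkpoint used below is a hypothetical example, not a number from the article.

```python
# Quick arithmetic on the quoted storage figures: combined capacity of the two
# systems and what Grand's ~1 TB/s bandwidth means in practice. The 100 TB
# checkpoint used below is a hypothetical example, not a number from the article.

GRAND_PB, EAGLE_PB = 150, 50       # quoted capacities
GRAND_BW_TB_PER_S = 1.0            # quoted peak bandwidth for Grand

total_pb = GRAND_PB + EAGLE_PB                         # 200 PB combined
checkpoint_tb = 100                                    # hypothetical application checkpoint
write_time_s = checkpoint_tb / GRAND_BW_TB_PER_S
fill_time_days = (GRAND_PB * 1000 / GRAND_BW_TB_PER_S) / 86400

print(f"combined capacity:   {total_pb} PB")
print(f"100 TB checkpoint:   ~{write_time_s:.0f} s at peak bandwidth")
print(f"filling Grand:       ~{fill_time_days:.1f} days of continuous peak writes")
```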

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 1:46 pm on January 22, 2020 Permalink | Reply
    Tags: "Beyond the tunnel", large-eddy simulation (LES), ANL-ALCF, How turbulence affects aircraft during flight, Stanford-led team turns to Argonne’s Mira to fine-tune a computational route around aircraft wind-tunnel testing.

    From ASCR Discovery: “Beyond the tunnel” 

    From ASCR Discovery

    Stanford-led team turns to Argonne’s Mira to fine-tune a computational route around aircraft wind-tunnel testing.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    The white lines represent simulated air-flow on a wing surface, including an eddy (the circular pattern at the tip). Engineers can use supercomputing, in particular large-eddy simulation (LES), to study how turbulence affects flight. LES techniques applied to commercial aircraft promise a cost-effective alternative to wind-tunnel testing. Image courtesy of Stanford University and Cascade Technologies.

    For aircraft designers, modeling and wind-tunnel testing one iteration after another consumes time and may inadequately recreate the conditions planes encounter during free flight – especially take-off and landing. “Prototyping that aircraft every time you change something in the design would be infeasible and expensive,” says Parviz Moin, a Stanford University professor of mechanical engineering.

    Over the past five years, researchers have explored the use of high-fidelity numerical simulations to predict unsteady airflow and forces such as lift and drag on commercial aircraft, particularly under challenging operating conditions such as takeoff and landing.

    Moin has led much of this research as founding director of the Center for Turbulence Research at Stanford. With help from the Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) grants, he and colleagues at Stanford and nearby Cascade Technologies in Palo Alto, California, have used supercomputing to see whether large-eddy simulation (LES) of commercial aircraft is both cost effective and sufficiently accurate to help designers. They’ve used 240 million core-hours on Mira, the Blue Gene/Q at the Argonne Leadership Computing Facility, a DOE user facility at Argonne National Laboratory, to conduct these simulations. The early results are “very encouraging,” Moin says.

    Specifically, Moin and colleagues – including INCITE co-investigators George Park of the University of Pennsylvania and Cascade Technologies’ Sanjeeb Bose, a DOE Computational Science Graduate Fellowship (DOE CSGF) alumnus – study how turbulence affects aircraft during flight. Flow of air about a plane in flight is always turbulent, wherein patches of swirling fluid – eddies – move seemingly at random.

    Because it happens on multiple scales, engineers and physicists find aircraft turbulence difficult to describe mathematically. The Navier-Stokes equations are known to govern all flows of engineering interest, Moin explains, including those involving gases and liquids and flows inside or outside a given object. Eddies can be several meters long in the atmosphere but only microns across near the aircraft’s surface. This means computationally solving the Navier-Stokes equations to describe all the fluid-motion scales would be prohibitively expensive. For years, engineers have used Reynolds-averaged Navier-Stokes (RANS) equations to predict averaged quantities of engineering interest such as lift and drag forces. RANS equations, however, contain certain modeling assumptions that are not based on first principles, which can result in inaccurate predictions of complex flows.

    LES, on the other hand, offers a compromise, Moin says, between capturing the spectrum of eddy motions or ignoring them all. With LES, researchers can compute the effect of energy-containing eddies on an aircraft while modeling small-scale motions. Although LES predictions are more accurate than RANS approaches, the computational cost of LES has so far been a barrier to widespread use. But supercomputers and recent algorithm advances have rendered LES computations and specialized variations of them more feasible recently, Moin says.

    Eddies get smaller and smaller as they approach a wall – or a surface like an aircraft wing – and capturing these movements has historically been computationally challenging. To avoid these issues, Moin and his colleagues instead model the small-scale near-wall turbulence using a technique they call wall-modeled LES. In wall-modeled LES, the effect of the near-wall eddies on the large-scale motions away from the wall is accounted for by a simpler model.
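    A rough sketch of why this matters: the grid-point count for wall-resolved LES is commonly estimated in the turbulence literature to grow roughly like Re^(13/7), while wall-modeled LES grows roughly linearly with Re. The exponents are approximate literature estimates (an assumption of this sketch), and the Reynolds numbers below are hypothetical, chosen only to contrast a model-scale flow with a flight-scale one.

```python
# Illustrative comparison of how grid-point counts grow with Reynolds number
# for wall-resolved vs. wall-modeled LES. The scaling exponents are approximate
# estimates from the turbulence literature (an assumption of this sketch), and
# the Reynolds numbers are hypothetical.

re_model, re_flight = 1.0e6, 5.0e7   # hypothetical chord Reynolds numbers

def growth_factor(re_lo, re_hi, exponent):
    """Factor by which grid points grow when Re increases, given N ~ Re**exponent."""
    return (re_hi / re_lo) ** exponent

print(f"wall-resolved LES (N ~ Re^(13/7)): x{growth_factor(re_model, re_flight, 13/7):,.0f}")
print(f"wall-modeled  LES (N ~ Re^1):      x{growth_factor(re_model, re_flight, 1.0):,.0f}")
```

    Under these assumptions, moving from model scale to flight scale multiplies the wall-resolved grid by over a thousand, but the wall-modeled grid by only about fifty, which is why the wall-modeled approach is tractable on today's machines.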

    Moin and his colleagues have used two commercial aircraft models to validate their large-eddy simulation results: NASA’s Common Research Model and the Japan Aerospace Exploration Agency’s (JAXA’s) Standard Model. They’ve studied each at about a dozen operating conditions to see how the simulations agreed with physical measurements in wind tunnels.

    These early results show that the large-eddy simulations are capable of predicting quantities of engineering interest resulting from turbulent flow around an aircraft. This proof of concept, Moin says, is the first step. “We can compute these flows without tuning model parameters and predict experimental results. Once we have confidence as we compute many cases, then we can start looking into manipulating the flow using passive or active flow-control strategies.” The speed and accuracy of the computations, Moin notes, have been surprising. Researchers commonly thought the calculations could not have been realized until 2030, he says.

    Ultimately, these simulations will help engineers to make protrusions or other modifications of airplane wing surfaces to increase lift during take-off conditions or to design more efficient engines.

    Moin is eager to see more engineers use large eddy simulations and supercomputing to study the effect of turbulence on commercial aircraft and other applications.

    “The future of aviation is bright and needs more development,” he says. “I think with time – and hopefully it won’t take too long – aerospace engineers will start to see the advantage of these high-fidelity computations in engineering analysis and design.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of the U.S. Department of Energy.

     
  • richardmitnick 1:24 pm on October 2, 2019 Permalink | Reply
    Tags: ANL-ALCF, Argonne will deploy the Cerebras CS-1 to enhance scientific AI models for cancer; cosmology; brain imaging and materials science among others., Bigger and better telescopes and accelerators and of course supercomputers on which they could run larger multiscale simulations, The influx of massive data sets and the computing power to sift sort and analyze it., The size of the simulations we are running is so big the problems that we are trying to solve are getting bigger so that these AI methods can no longer be seen as a luxury but as must-have technology.

    From Argonne Leadership Computing Facility: “Artificial Intelligence: Transforming science, improving lives” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    September 30, 2019
    Mary Fitzpatrick
    John Spizzirri

    Commitment to developing artificial intelligence (AI) as a national research strategy in the United States may have unequivocally defined 2019 as the Year of AI — particularly at the federal level, more specifically throughout the U.S. Department of Energy (DOE) and its national laboratory complex.

    In February, the White House established the Executive Order on Maintaining American Leadership in Artificial Intelligence (American AI Initiative) to expand the nation’s leadership role in AI research. Its goals are to fuel economic growth, enhance national security and improve quality of life.

    The initiative injects substantial and much-needed research dollars into federal facilities across the United States, promoting technology advances and innovation and enhancing collaboration with nongovernment partners and allies abroad.

    In response, DOE has made AI — along with exascale supercomputing and quantum computing — a major element of its $5.5 billion scientific R&D budget and established the Artificial Intelligence and Technology Office, which will serve to coordinate AI work being done across the DOE.

    At DOE facilities like Argonne National Laboratory, researchers have already begun using AI to design better materials and processes, safeguard the nation’s power grid, accelerate treatments in brain trauma and cancer and develop next-generation microelectronics for applications in AI-enabled devices.

    Over the last two years, Argonne has made significant strides toward implementing its own AI initiative. Leveraging the Laboratory’s broad capabilities and world-class facilities, it has set out to explore and expand new AI techniques, encourage collaboration, automate traditional research methods and lab facilities, and drive discovery.

    In July, it hosted an AI for Science town hall, the first of four such events that also included Oak Ridge and Lawrence Berkeley national laboratories and DOE’s Office of Science.


    Engaging nearly 350 members of the AI community, the town hall served to stimulate conversation around expanding the development and use of AI, while addressing critical challenges by using the initiative framework called AI for Science.

    “AI for Science requires new research and infrastructure, and we have to move a lot of data around and keep track of thousands of models,” says Rick Stevens, Associate Laboratory Director for Argonne’s Computing, Environment and Life Sciences (CELS) Directorate and a professor of computer science at the University of Chicago.

    Rick Stevens, Associate Laboratory Director for Computing, Environment and Life Sciences, is helping to develop the CANDLE computer architecture on the patient level, which is meant to help guide drug treatment choices for tumors based on a much wider assortment of data than currently used.

    “How do we distribute this production capability to thousands of people? We need to have system software with different capabilities for AI than for simulation software to optimize workflows. And these are just a few of the issues we have to begin to consider.”

    The conversation has just begun and continues through Laboratory-wide talks and events, such as a recent AI for Science workshop aimed at growing interest in AI capabilities through technical hands-on sessions.

    Argonne also will host DOE’s Innovation XLab Artificial Intelligence Summit in Chicago, meant to showcase the assets and capabilities of the national laboratories and facilitate an exchange of information and ideas between industry, universities, investors and end-use customers with Lab innovators and experts.

    What exactly is AI?

    Ask any number of researchers to define AI and you’re bound to get — well, first, a long pause and perhaps a chuckle — a range of answers from the more conventional ​“utilizing computing to mimic the way we interpret data but at a scale not possible by human capability” to ​“a technology that augments the human brain.”

    Taken together, AI might well be viewed as a multi-component toolbox that enables computers to learn, recognize patterns, solve problems, explore complex datasets and adapt to changing conditions — much like humans, but one day, maybe better.

    While the definitions and the tools may vary, the goals remain the same: utilize or develop the most advanced AI technologies to more effectively address the most pressing issues in science, medicine and technology, and accelerate discovery in those areas.

    At Argonne, AI has become a critical tool for modeling and prediction across almost all areas where the Laboratory has significant domain expertise: chemistry, materials, photon science, environmental and manufacturing sciences, biomedicine, genomics and cosmology.

    A key component of Argonne’s AI toolbox is a technique called machine learning and its derivatives, such as deep learning. The latter is built on neural networks comprising many layers of artificial neurons that learn internal representations of data, mimicking human information-gathering-processing systems like the brain.

    “Deep learning is the use of multi-layered neural networks to do machine learning, a program that gets smarter or more accurate as it gets more data to learn from. It’s very successful at learning to solve problems,” says Stevens.
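    As a purely illustrative sketch of that idea (not any Argonne or CANDLE code), the following trains a small multi-layered network by gradient descent on a toy regression problem; the architecture and hyperparameters are arbitrary.

```python
import numpy as np

# Minimal multi-layer ("deep") neural network trained by gradient descent on a
# toy regression task: fit y = sin(x). Architecture and hyperparameters are
# arbitrary; this is an illustration only, not any Argonne or CANDLE code.

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(256, 1))   # training inputs
y = np.sin(x)                                   # training targets

# two hidden layers of 32 tanh units each
W1, b1 = rng.normal(0.0, 0.5, (1, 32)), np.zeros(32)
W2, b2 = rng.normal(0.0, 0.5, (32, 32)), np.zeros(32)
W3, b3 = rng.normal(0.0, 0.5, (32, 1)), np.zeros(1)
lr = 0.1

for step in range(5000):
    # forward pass through the layered network
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    pred = h2 @ W3 + b3

    # backward pass: gradient of mean-squared error, layer by layer
    g3 = 2.0 * (pred - y) / len(x)
    g2 = (g3 @ W3.T) * (1.0 - h2 ** 2)
    g1 = (g2 @ W2.T) * (1.0 - h1 ** 2)

    W3 -= lr * h2.T @ g3; b3 -= lr * g3.sum(axis=0)
    W2 -= lr * h1.T @ g2; b2 -= lr * g2.sum(axis=0)
    W1 -= lr * x.T @ g1;  b1 -= lr * g1.sum(axis=0)

print("final training MSE:", float(np.mean((pred - y) ** 2)))
```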

    A staunch supporter of AI, particularly deep learning, Stevens is principal investigator on a multi-institutional effort that is developing the deep neural network application CANDLE (CANcer Distributed Learning Environment), that integrates deep learning with novel data, modeling and simulation techniques to accelerate cancer research.

    Coupled with the power of Argonne’s forthcoming exascale computer Aurora — which has the capacity to deliver a billion billion calculations per second — the CANDLE environment will enable a more personalized and effective approach to cancer treatment.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer

    And that is just a small sample of AI’s potential in science. Currently, all across Argonne, researchers are involved in more than 60 AI-related investigations, many of them driven by machine learning.

    Argonne Distinguished Fellow Valerie Taylor’s work looks at how applications execute on computers and large-scale, high-performance computing systems. Using machine learning, she and her colleagues model an execution’s behavior and then use that model to provide feedback on how to best modify the application for better performance.

    “Better performance may be shorter execution time or, using generated metrics such as energy, it may be reducing the average power,” says Taylor, director of Argonne’s Mathematics and Computer Science (MCS) division. ​“We use statistical analysis to develop the models and identify hints on how to modify the application.”
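    A minimal sketch of that general workflow, with hypothetical features and synthetic data (this is not the MCS group's actual tooling): fit a statistical model that maps measured execution characteristics to runtime, then inspect which features the model says matter most.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# General flavor of performance modeling via statistical analysis: fit a model
# that maps measured execution features to runtime, then read off which
# features dominate. The feature names and synthetic data are hypothetical;
# this is not the actual tooling described in the article.

rng = np.random.default_rng(1)
n_runs = 200
features = np.column_stack([
    rng.uniform(1e9, 1e11, n_runs),   # floating-point operations per run
    rng.uniform(1e8, 1e10, n_runs),   # bytes moved to/from memory
    rng.uniform(1e6, 1e8, n_runs),    # bytes sent over the network
])
# synthetic "measured" runtimes, dominated here by memory traffic
runtime = features @ np.array([1e-12, 5e-10, 2e-9]) + rng.normal(0, 0.05, n_runs)

model = LinearRegression().fit(features, runtime)
for name, coef in zip(["flops", "memory bytes", "network bytes"], model.coef_):
    print(f"{name:>14}: {coef:.2e} seconds per unit")
```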

    Material scientists are exploring the use of machine learning to optimize models of complex material properties in the discovery and design of new materials that could benefit energy storage, electronics, renewable energy resources and additive manufacturing, to name just a few areas.

    And still more projects address complex transportation and vehicle efficiency issues by enhancing engine design, minimizing road congestion, increasing energy efficiency and improving safety.

    Beyond the deep

    Beyond deep learning, there are many sub-ranges of AI that people have been working on for years, notes Stevens. ​“And while machine learning now dominates, something else might emerge as a strength.”

    Natural language processing, for example, is commercially recognizable as voice-activated technologies — think Siri — and on-the-fly language translators. Exceeding those capabilities is its ability to review, analyze and summarize information about a given topic from journal articles, reports and other publications, and extract and coalesce select information from massive and disparate datasets.

    Immersive visualization can place us into 3D worlds of our own making, interject objects or data into our current reality or improve upon human pattern recognition. Argonne researchers have found application for virtual and augmented reality in the 3D visualization of complicated data sets and the detection of flaws or instabilities in mechanical systems.

    And of course, there is robotics — a program started at Argonne in the late 1940s and rebooted in 1999 — that is just beginning to take advantage of Argonne’s expanding AI toolkit, whether to conduct research in a specific domain or improve upon its more utilitarian use in decommissioning nuclear power plants.

    Until recently, according to Stevens, AI has been a loose collection of methods using very different underlying mechanisms, and the people using them weren’t necessarily communicating their progress or potentials with one another.

    But with a federal initiative in hand and a Laboratory-wide vision, that is beginning to change.

    Among those trying to find new ways to collaborate and combine these different AI methods is Marius Stan, a computational scientist in Argonne’s Applied Materials division (AMD) and a senior fellow at both the University of Chicago’s Consortium for Advanced Science and Engineering and the Northwestern-Argonne Institute for Science and Engineering.

    Stan leads a research area called Intelligent Materials Design that focuses on combining different elements of AI to discover and design new materials and to optimize and control complex synthesis and manufacturing processes.

    Work on the latter has created a collaboration between Stan and colleagues in the Applied Materials and Energy Systems divisions, and the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.

    Merging machine learning and computer vision with the Flame Spray Pyrolysis technology at Argonne’s Materials Engineering Research Facility, the team has developed an AI ​“intelligent software” that can optimize, in real time, the manufacturing process.

    “Our idea was to use the AI to better understand and control in real time — first in a virtual, experimental setup, then in reality — a complex synthesis process,” says Stan.

    Automating the process leads to a safer and much faster process compared to those led by humans. But even more intriguing is the potential that the AI process might observe materials with better properties than did the researchers.

    What drove us to AI?

    Whether or not they concur on a definition, most researchers will agree that the impetus for the escalation of AI in scientific research was the influx of massive data sets and the computing power to sift, sort and analyze it.

    Not only was the push coming from big corporations brimming with user data, but the tools that drive science were getting more expansive — bigger and better telescopes and accelerators and of course supercomputers, on which they could run larger, multiscale simulations.

    “The size of the simulations we are running is so big, the problems that we are trying to solve are getting bigger, so that these AI methods can no longer be seen as a luxury, but as must-have technology,” notes Prasanna Balaprakash, a computer scientist in MCS and ALCF.

    Data and compute size also drove the convergence of more traditional techniques, such as simulation and data analysis, with machine and deep learning. Where analysis of data generated by simulation would eventually lead to changes in an underlying model, that data is now being fed back into machine learning models and used to guide more precise simulations.

    “More or less anybody who is doing large-scale computation is adopting an approach that puts machine learning in the middle of this complex computing process and AI will continue to integrate with simulation in new ways,” says Stevens.

    “And where the majority of users are in theory-modeling-simulation, they will be integrated with experimentalists on data-intense efforts. So the population of people who will be part of this initiative will be more diverse.”

    But while AI is leading to faster time-to-solution and more precise results, the number of data points, parameters and iterations required to get to those results can still prove monumental.

    Focused on the automated design and development of scalable algorithms, Balaprakash and his Argonne colleagues are developing new types of AI algorithms and methods to more efficiently solve large-scale problems that deal with different ranges of data. These additions are intended to make existing systems scale better on supercomputers, like those housed at the ALCF; a necessity in the light of exascale computing.

    “We are developing an automated machine learning system for a wide range of scientific applications, from analyzing cancer drug data to climate modeling,” says Balaprakash. ​“One way to speed up a simulation is to replace the computationally expensive part with an AI-based predictive model that can make the simulation faster.”
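    A toy sketch of that surrogate idea, with a stand-in "expensive" function and an arbitrary model choice: train a cheap predictive model on recorded simulation inputs and outputs, then call it in place of the costly component.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Surrogate-model sketch: learn a cheap stand-in for an expensive piece of a
# simulation from recorded inputs and outputs. "expensive_kernel" is a toy
# placeholder for the costly component; the model and settings are arbitrary.

def expensive_kernel(x):
    """Pretend each call of this costs minutes inside the real simulation."""
    return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

rng = np.random.default_rng(2)
X_train = rng.uniform(-1.0, 1.0, size=(2000, 2))   # sampled simulation inputs
y_train = expensive_kernel(X_train)                # recorded simulation outputs

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

X_new = rng.uniform(-1.0, 1.0, size=(5, 2))
print("surrogate prediction:  ", np.round(surrogate.predict(X_new), 3))
print("true (expensive) value:", np.round(expensive_kernel(X_new), 3))
```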

    Industry support

    The AI techniques that are expected to drive discovery are only as good as the tech that drives them, making collaboration between industry and the national labs essential.

    “Industry is investing a tremendous amount in building up AI tools,” says Taylor. ​“Their efforts shouldn’t be duplicated, but they should be leveraged. Also, industry comes in with a different perspective, so by working together, the solutions become more robust.”

    Argonne has long had relationships with computing manufacturers to deliver a succession of ever-more powerful machines to handle the exponential growth in data size and simulation scale. Its most recent partnership is with semiconductor chip manufacturer Intel and supercomputer manufacturer Cray to develop the exascale machine Aurora.

    But the Laboratory is also collaborating with a host of other industrial partners in the development or provision of everything from chip design to deep learning-enabled video cameras.

    One of these, Cerebras, is working with Argonne to test a first-of-its-kind AI accelerator that provides a 100–500 times improvement over existing AI accelerators. As its first U.S. customer, Argonne will deploy the Cerebras CS-1 to enhance scientific AI models for cancer, cosmology, brain imaging and materials science, among others.

    The National Science Foundation-funded Array of Things, a partnership between Argonne, the University of Chicago and the City of Chicago, actively seeks commercial vendors to supply technologies for its edge computing network of programmable, multi-sensor devices.

    But Argonne and the other national labs are not the only ones to benefit from these collaborations. Companies understand the value in working with such organizations, recognizing that the AI tools developed by the labs, combined with the kinds of large-scale problems they seek to solve, offer industry unique benefits in terms of business transformation and economic growth, explains Balaprakash.

    “Companies are interested in working with us because of the type of scientific applications that we have for machine learning,” he adds. “What we have is so diverse, it makes them think a lot harder about how to architect a chip or design software for these types of workloads and science applications. It’s a win-win for both of us.”

    AI’s future, our future

    “There is one area where I don’t see AI surpassing humans any time soon, and that is hypotheses formulation,” says Stan, ​“because that requires creativity. Humans propose interesting projects and for that you need to be creative, make correlations, propose something out of the ordinary. It’s still human territory but machines may soon take the lead.

    “It may happen,” he says, and adds that he’s working on it.

    In the meantime, Argonne researchers continue to push the boundaries of existing AI methods and forge new components for the AI toolbox. Deep learning techniques like neuromorphic algorithms, which exhibit the adaptive nature of insects in an equally small computational space, can be used at the “edge,” where there are few computing resources, as in cell phones or urban sensors.

    An optimizing neural network called a neural architecture search, where one neural network system improves another, is helping to automate deep-learning-based predictive model development in several scientific and engineering domains, such as cancer drug discovery and weather forecasting using supercomputers.

    Just as big data and better computational tools drove the convergence of simulation, data analysis and visualization, the introduction of the exascale computer Aurora into the Argonne complex of leadership-class tools and experts will only serve to accelerate the evolution of AI and witness its full assimilation into traditional techniques.

    The tools may change, the definitions may change, but AI is here to stay as an integral part of the scientific method and our lives.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 12:01 pm on August 5, 2019 Permalink | Reply
    Tags: "Large cosmological simulation to run on Mira", ANL-ALCF

    From Argonne Leadership Computing Facility: “Large cosmological simulation to run on Mira” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    An extremely large cosmological simulation—among the five most extensive ever conducted—is set to run on Mira this fall and exemplifies the scope of problems addressed on the leadership-class supercomputer at the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Argonne physicist and computational scientist Katrin Heitmann leads the project. Heitmann was among the first to leverage Mira’s capabilities when, in 2013, the IBM Blue Gene/Q system went online at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. Among the largest cosmological simulations ever performed at the time, the Outer Rim Simulation she and her colleagues carried out enabled further scientific research for many years.

    For the new effort, Heitmann has been allocated approximately 800 million core-hours to perform a simulation that reflects cutting-edge observational advances from satellites and telescopes and will form the basis for sky maps used by numerous surveys. Evolving a massive number of particles, the simulation is designed to help resolve mysteries of dark energy and dark matter.

    “By transforming this simulation into a synthetic sky that closely mimics observational data at different wavelengths, this work can enable a large number of science projects throughout the research community,” Heitmann said. “But it presents us with a big challenge.” That is, in order to generate synthetic skies across different wavelengths, the team must extract relevant information and perform analysis either on the fly or after the fact in post-processing. Post-processing requires the storage of massive amounts of data—so much, in fact, that merely reading the data becomes extremely computationally expensive.
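    A rough sense of scale makes the point: even raw particle output quickly reaches petabytes, which is why reading it back for post-processing is so costly. The particle count, bytes per particle, and number of stored snapshots below are all hypothetical, chosen only to convey scale.

```python
# Rough sense of why post-processing full outputs is costly: raw particle
# snapshots alone quickly reach petabytes. The particle count, bytes per
# particle, and number of stored snapshots are all hypothetical.

n_particles = 1.0e12        # a trillion tracer particles (hypothetical)
bytes_per_particle = 36     # e.g., 3 position + 3 velocity floats plus IDs/weights
n_snapshots = 100           # hypothetical number of stored time slices

snapshot_pb = n_particles * bytes_per_particle / 1e15
print(f"one snapshot:   ~{snapshot_pb:.3f} PB")
print(f"{n_snapshots} snapshots: ~{snapshot_pb * n_snapshots:.1f} PB")
```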

    Since Mira was launched, Heitmann and her team have implemented in their Hardware/Hybrid Accelerated Cosmology Code (HACC) more sophisticated analysis tools for on-the-fly processing. “Moreover, compared to the Outer Rim Simulation, we’ve effected three major improvements,” she said. “First, our cosmological model has been updated so that we can run a simulation with the best possible observational inputs. Second, as we’re aiming for a full-machine run, volume will be increased, leading to better statistics. Most importantly, we set up several new analysis routines that will allow us to generate synthetic skies for a wide range of surveys, in turn allowing us to study a wide range of science problems.”

    The team’s simulation will address numerous fundamental questions in cosmology and is essential for refining existing predictive tools and aiding the development of new models, impacting both ongoing and upcoming cosmological surveys, including the Dark Energy Spectroscopic Instrument (DESI), the Large Synoptic Survey Telescope (LSST), SPHEREx, and the “Stage-4” ground-based cosmic microwave background experiment (CMB-S4).

    LBNL/DESI spectroscopic instrument on the Mayall 4-meter telescope at Kitt Peak National Observatory, starting in 2018

    NOAO/Mayall 4-meter telescope at Kitt Peak, Arizona, USA, altitude 2,120 m (6,960 ft)

    LSST Camera, built at SLAC

    LSST telescope, currently under construction on the El Peñón peak at Cerro Pachón, a 2,682-meter-high mountain in Coquimbo Region in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes

    LSST Data Journey, illustration by Sandbox Studio, Chicago, with Ana Kova

    NASA’s SPHEREx (Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer) depiction

    The value of the simulation derives from its tremendous volume (which is necessary to cover substantial portions of survey areas) and from attaining levels of mass and force resolution sufficient to capture the small structures that host faint galaxies.

    The volume and resolution pose steep computational requirements, and because they are not easily met, few large-scale cosmological simulations are carried out. Contributing to the difficulty of their execution is the fact that the memory footprints of supercomputers have not advanced proportionally with processing speed in the years since Mira’s introduction. This makes that system, despite its relative age, rather optimal for a large-scale campaign when harnessed in full.

    “A calculation of this scale is just a glimpse at what the exascale resources in development now will be capable of in 2021/22,” said Katherine Riley, ALCF Director of Science. “The research community will be taking advantage of this work for a very long time.”

    Funding for the simulation is provided by DOE’s High Energy Physics program. Use of ALCF computing resources is supported by DOE’s Advanced Scientific Computing Research program.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 1:49 pm on July 10, 2019 Permalink | Reply
    Tags: ANL-ALCF, Computational materials science, Coupled cluster theory, Kelvin probe force microscopy

    From Argonne Leadership Computing Facility: “Predicting material properties with quantum Monte Carlo” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    July 9, 2019
    Nils Heinonen

    For one of their efforts, the team used diffusion Monte Carlo to compute how doping affects the energetics of nickel oxide. Their simulations revealed the spin density difference between bulks of potassium-doped nickel oxide and pure nickel oxide, showing the effects of substituting a potassium atom (center atom) for a nickel atom on the spin density of the bulk. Credit: Anouar Benali, Olle Heinonen, Joseph A. Insley, and Hyeondeok Shin, Argonne National Laboratory.

    Recent advances in quantum Monte Carlo (QMC) methods have the potential to revolutionize computational materials science, a discipline traditionally driven by density functional theory (DFT). While DFT—an approach that uses quantum-mechanical modeling to examine the electronic structure of complex systems—provides convenience to its practitioners and has unquestionably yielded a great many successes throughout the decades since its formulation, it is not without shortcomings, which have placed a ceiling on the possibilities of materials discovery. QMC is poised to break this ceiling.

    The key challenge is to solve the quantum many-body problem accurately and reliably enough for a given material. QMC solves these problems via stochastic sampling—that is, by using random numbers to sample all possible solutions. The use of stochastic methods allows the full many-body problem to be treated while circumventing large approximations. Compared to traditional methods, they offer extraordinary potential accuracy, strong suitability for high-performance computing, and—with few known sources of systematic error—transparency. For example, QMC satisfies a mathematical principle that allows it to set a bound for a given system’s ground state energy (the lowest-energy, most stable state).
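    As a toy illustration of those two ingredients, stochastic sampling and the variational bound, the sketch below runs a tiny variational Monte Carlo calculation for the one-dimensional harmonic oscillator. It is not QMCPACK; the trial wavefunction and all parameters are chosen purely for illustration.

```python
import numpy as np

# Toy variational Monte Carlo for the 1D harmonic oscillator (hbar = m = w = 1),
# illustrating stochastic (Metropolis) sampling and the variational bound
# <E> >= E_ground = 0.5. This is not QMCPACK; the trial wavefunction
# psi(x) = exp(-alpha * x^2) and all parameters are illustrative.

def local_energy(x, alpha):
    # E_L = -(1/2) psi''/psi + x^2/2 = alpha + x^2 * (1/2 - 2 alpha^2)
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def vmc_energy(alpha, n_steps=200_000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance using |psi|^2 = exp(-2 alpha x^2)
        if rng.random() < np.exp(-2.0 * alpha * (x_new ** 2 - x ** 2)):
            x = x_new
        energies.append(local_energy(x, alpha))
    return np.mean(energies[n_steps // 10:])   # discard early (equilibration) steps

for alpha in (0.3, 0.5, 0.8):
    print(f"alpha = {alpha:.1f}  ->  <E> ~ {vmc_energy(alpha):.4f}   (exact ground state: 0.5)")
```

    Whatever the trial parameter, the estimated energy stays at or above the exact ground-state value of 0.5, which is the variational bound mentioned above.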

    QMC’s accurate treatment of quantum mechanics is very computationally demanding, necessitating the use of leadership-class computational resources and thus limiting its application. Access to the computing systems at the Argonne Leadership Computing Facility (ALCF) and the Oak Ridge Leadership Computing Facility (OLCF)—U.S. Department of Energy (DOE) Office of Science User Facilities—has enabled a team of researchers led by Paul Kent of Oak Ridge National Laboratory (ORNL) to meet the steep demands posed by QMC. Supported by DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, the team’s goal is to simulate promising materials that elude DFT’s investigative and predictive powers.

    To conduct their work, the researchers employ QMCPACK, an open-source QMC code developed by the team. It is written specifically for high-performance computers and runs on all the DOE machines. It has been run at the ALCF since 2011.

    Functional materials

    The team’s efforts are focused on studies of materials combining transition metal elements with oxygen. Many of these transition metal oxides are functional materials that have striking and useful properties. Small perturbations in the make-up or structure of these materials can cause them to switch from metallic to insulating, and greatly change their magnetic properties and ability to host and transport other atoms. Such attributes make the materials useful for technological applications while posing fundamental scientific questions about how these properties arise.

    The computational challenge has been to simulate the materials with sufficient accuracy: the materials’ properties are sensitive to small changes due to complex quantum mechanical interactions, which make them very difficult to model.

    The computational performance and large memory of the ALCF’s Theta system have been particularly helpful to the team. Theta’s storage capacity has enabled studies of material changes caused by small perturbations such as additional elements or vacancies. Over three years the team developed a new technique to more efficiently store the quantum mechanical wavefunctions used by QMC, greatly increasing the range of materials that could be run on Theta.

    ANL ALCF Theta Cray XC40 supercomputer

    Experimental validation

    Kent noted that experimental validation is a key component of the INCITE project. “The team is leveraging facilities located at Argonne and Oak Ridge National Laboratories to grow high-quality thin films of transition-metal oxides,” he said, including vanadium oxide (VO2) and variants of nickel oxide (NiO) that have been modified with other compounds.

    For VO2, the team combined atomic force microscopy, Kelvin probe force microscopy, and time-of-flight secondary ion mass spectroscopy on VO2 grown at ORNL’s Center for Nanophase Materials Science (CNMS) to demonstrate how oxygen vacancies suppress the transition from metallic to insulating VO2. A combination of QMC, dynamical mean field theory, and DFT modeling was deployed to identify the mechanism by which this phenomenon occurs: oxygen vacancies leave positively charged holes that are localized around the vacancy site and end up distorting the structure of certain vanadium orbitals.

    For NiO, the challenge was to understand how a small quantity of dopant atoms, in this case potassium, modifies the structure and optical properties. Molecular beam epitaxy at Argonne’s Materials Science Division was used to create high quality films that were then probed via techniques such as x-ray scattering and x-ray absorption spectroscopy at Argonne’s Advanced Photon Source (APS) [below] for direct comparison with computational results. These experimental results were subsequently compared against computational models employing QMC and DFT. The APS and CNMS are DOE Office of Science User Facilities.

    So far the team has been able to compute, understand, and experimentally validate how the band gap of materials containing a single transition metal element varies with composition. Band gaps determine a material’s usefulness as a semiconductor—a substance that can alternately conduct or cease the flow of electricity (which is important for building electronic sensors or devices). The next steps of the study will be to tackle more complex materials, with additional elements and more subtle magnetic properties. While more challenging, these materials could lead to greater discoveries.

    New chemistry applications

    Many of the features that make QMC attractive for materials also make it attractive for chemistry applications. An outside colleague, quantum chemist Kieron Burke of the University of California, Irvine, provided the impetus for a paper published in the Journal of Chemical Theory and Computation. Burke approached the team’s collaborators with a problem he had encountered while trying to formulate a new method for DFT: moving forward required benchmarks against which to test his method’s accuracy. Because QMC was the only means of obtaining sufficiently precise benchmarks, the team produced a series of calculations for him.

    The reputed gold standard among many-body numerical techniques in quantum chemistry is coupled cluster theory. While it is extremely accurate for many molecules, some molecules are so strongly correlated quantum mechanically that they are best thought of as existing in a superposition of quantum states, which the conventional coupled cluster method cannot handle. Co-principal investigator Anouar Benali, a computational scientist at the ALCF and Argonne’s Computational Sciences Division, spent some three years collaborating on efforts to expand QMC’s capabilities to include low-cost, highly efficient support for these superposition states, which will also be needed for future materials problems. That support was required to analyze the system for which Burke needed benchmarks; he then verified his newly developed DFT approach against the calculations generated with Benali’s expanded QMC. The two were in close agreement with each other, but not with the results of conventional coupled cluster, which, for one particular compound, contained significant errors.

    “This collaboration and its results have therefore identified a potential new area of research for the team and QMC,” Kent said. “That is, tackling challenging quantum chemical problems.”
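    Schematically, the kind of superposition (“multideterminant”) trial wavefunction referred to above can be written as a Jastrow factor multiplying a sum of Slater determinants; the notation below is generic and not tied to any particular QMCPACK input.

```latex
% Generic multideterminant Slater-Jastrow trial wavefunction used in QMC to
% describe states that are superpositions of several electronic configurations.
\Psi_T(\mathbf{R}) \;=\; e^{J(\mathbf{R})}
  \sum_{i=1}^{N_{\mathrm{det}}} c_i \,
  D_i^{\uparrow}\!\bigl(\mathbf{r}^{\uparrow}\bigr)\,
  D_i^{\downarrow}\!\bigl(\mathbf{r}^{\downarrow}\bigr)
```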

    The research was supported by DOE’s Office of Science. ALCF and OLCF computing time and resources were allocated through the INCITE program.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus


     
  • richardmitnick 9:45 am on June 2, 2019 Permalink | Reply
    Tags: "Tapping the power of AI and high-performance computing to extend evolution to superconductors", ANL-ALCF, ,   

    From Argonne Leadership Computing Facility: “Tapping the power of AI and high-performance computing to extend evolution to superconductors” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    May 29, 2019
    Jared Sagoff

    1
    This image depicts the algorithmic evolution of a defect structure in a superconducting material. Each iteration serves as the basis for a new defect structure. Redder colors indicate a higher current-carrying capacity. Credit: Argonne National Laboratory/Andreas Glatz

    Owners of thoroughbred stallions carefully breed prizewinning horses over generations to eke out fractions of a second in million-dollar races. Materials scientists have taken a page from that playbook, turning to the power of evolution and artificial selection to develop superconductors that can transmit electric current as efficiently as possible.

    Perhaps counterintuitively, most applied superconductors can operate at high magnetic fields because they contain defects. The number, size, shape and position of the defects within a superconductor work together to enhance its current-carrying capacity in the presence of a magnetic field. Too many defects, however, can block the electric current pathway or cause a breakdown of the superconducting material, so scientists need to be selective in how they incorporate defects into a material.

    In a new study from the U.S. Department of Energy’s (DOE) Argonne National Laboratory, researchers used the power of artificial intelligence and high-performance supercomputers to introduce and assess the impact of different configurations of defects on the performance of a superconductor.

    The researchers developed a computer algorithm that treated each defect like a biological gene. Different combinations of defects yielded superconductors able to carry different amounts of current. Once the algorithm identified a particularly advantageous set of defects, it re-initialized with that set of defects as a ​“seed,” from which new combinations of defects would emerge.

    “Each run of the simulation is equivalent to the formation of a new generation of defects that the algorithm seeks to optimize,” said Argonne distinguished fellow and senior materials scientist Wai-Kwong Kwok, an author of the study. ​“Over time, the defect structures become progressively refined, as we intentionally select for defect structures that will allow for materials with the highest critical current.”

    The reason defects form such an essential part of a superconductor lies in their ability to trap and anchor magnetic vortices that form in the presence of a magnetic field. These vortices can move freely within a pure superconducting material when a current is applied, and when they do so they generate resistance, negating the superconducting effect. Keeping vortices pinned while still allowing current to travel through the material represents a holy grail for scientists seeking ways to transmit electricity without loss in applied superconductors.

    To find the right combination of defects to arrest the motion of the vortices, the researchers initialized their algorithm with defects of random shape and size. While the researchers knew this would be far from the optimal setup, it gave the model a set of neutral initial conditions from which to work. As the researchers ran through successive generations of the model, they saw the initial defects transform into a columnar shape and ultimately a periodic arrangement of planar defects.
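    The following is a toy Python sketch of that evolutionary loop, not the authors’ code: defect configurations play the role of genomes, a placeholder fitness function stands in for the full superconductor simulation used in the study, and the best configuration of each generation seeds the next.

```python
# Toy "defects as genes" evolutionary loop. Each defect is an (x, y, radius)
# tuple; critical_current() is a stand-in fitness, not a physics simulation.
import random

def critical_current(defects):
    """Placeholder fitness: rewards an intermediate defect density."""
    density = len(defects) / 100.0
    return density * (1.0 - density)

def mutate(defects):
    """Randomly add, remove, or move one defect."""
    child = list(defects)
    op = random.choice(("add", "remove", "move"))
    if op == "add" or not child:
        child.append((random.random(), random.random(), random.uniform(0.01, 0.1)))
    elif op == "remove":
        child.pop(random.randrange(len(child)))
    else:
        i = random.randrange(len(child))
        x, y, r = child[i]
        child[i] = (x + random.uniform(-0.05, 0.05),
                    y + random.uniform(-0.05, 0.05), r)
    return child

# Start from random defects and evolve: each generation is seeded by the best
# configuration found in the previous one (plus the seed itself, for elitism).
seed = [(random.random(), random.random(), 0.05) for _ in range(20)]
for generation in range(50):
    population = [mutate(seed) for _ in range(32)] + [seed]
    seed = max(population, key=critical_current)
print(f"best toy critical current score: {critical_current(seed):.3f}")
```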

    “When people think of targeted evolution, they might think of people who breed dogs or horses,” said Argonne materials scientist Andreas Glatz, the corresponding author of the study. ​“Ours is an example of materials by design, where the computer learns from prior generations the best possible arrangement of defects.”

    One potential drawback to the process of artificial defect selection lies in the fact that certain defect patterns can become entrenched in the model, leading to a kind of calcification of the genetic data. ​“In a certain sense, you can kind of think of it like inbreeding,” Kwok said. ​“Conserving most information in our defect ​‘gene pool’ between generations has both benefits and limitations as it does not allow for drastic systemwide transformations. However, our digital ​‘evolution’ can be repeated with different initial seeds to avoid these problems.”

    In order to run their model, the researchers required high-performance computing facilities at Argonne and Oak Ridge National Laboratory. The Argonne Leadership Computing Facility and Oak Ridge Leadership Computing Facility are both DOE Office of Science User Facilities.

    An article based on the study, “Targeted evolution of pinning landscapes for large superconducting critical currents,” appeared in the May 21 edition of PNAS. In addition to Kwok and Glatz, Argonne’s Ivan Sadovskyy, Alexei Koshelev and Ulrich Welp also collaborated.

    Funding for the research came from the DOE’s Office of Science.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 12:35 pm on February 12, 2019 Permalink | Reply
    Tags: ANL-ALCF, CANDLE (CANcer Distributed Learning Environment) framework, , ,   

    From insideHPC: “Argonne ALCF Looks to Singularity for HPC Code Portability” 

    From insideHPC

    February 10, 2019

    Over at Argonne, Nils Heinonen writes that researchers are using the open-source Singularity framework as a kind of Rosetta Stone for running supercomputing code most anywhere.

    Scaling code for massively parallel architectures is a common challenge the scientific community faces. When moving from a system used for development—a personal laptop, for instance, or even a university’s computing cluster—to a large-scale supercomputer like those housed at the Argonne Leadership Computing Facility [see below], researchers traditionally would only migrate the target application: the underlying software stack would be left behind.

    To help alleviate this problem, the ALCF has deployed the service Singularity. Singularity, an open-source framework originally developed by Lawrence Berkeley National Laboratory (LBNL) and now supported by Sylabs Inc., is a tool for creating and running containers (platforms designed to package code and its dependencies so as to facilitate fast and reliable switching between computing environments)—albeit one intended specifically for scientific workflows and high-performance computing resources.

    “There is a definite need for increased reproducibility and flexibility when a user is getting started here, and containers can be tremendously valuable in that regard,” said Katherine Riley, Director of Science at the ALCF. “Supporting emerging technologies like Singularity is part of a broader strategy to provide users with services and tools that help advance science by eliminating barriers to productive use of our supercomputers.”

    2
    This plot shows the number of ATLAS events simulated (solid lines) with and without containerization. Linear scaling is shown (dotted lines) for reference.

    The demand for such services has grown at the ALCF as a direct result of the HPC community’s diversification.

    When the ALCF first opened, it was catering to a smaller user base representative of the handful of domains conventionally associated with scientific computing (high energy physics and astrophysics, for example).

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    HPC is now a principal research tool in new fields such as genomics, which perhaps lack some of the computing culture ingrained in certain older disciplines. Moreover, researchers tackling problems in machine learning, for example, constitute a new community. This creates a strong incentive to make HPC more immediately approachable to users so as to reduce the amount of time spent preparing code and establishing migration protocols, and thus hasten the start of research.

    Singularity, to this end, promotes mobility of compute and reproducibility through its distributable image format, which incorporates the entire software stack and runtime environment of an application into a single monolithic file. Users thereby gain the ability to define, create, and maintain an application across different hosts and operating environments. Once a containerized workflow is defined, its image can be snapshotted, archived, and preserved for future use. The snapshot itself represents a boon for scientific provenance by detailing the exact conditions under which given data were generated: in theory, by providing the machine, the software stack, and the parameters, one’s work can be completely reproduced. Because reproducibility is so crucial to the scientific process, this capability can be seen as one of the primary assets of container technology.
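    As a concrete (hypothetical) illustration of that workflow, the snippet below pulls a container image from a public registry and then executes a command inside it; the image name and payload are placeholders, and it assumes the Singularity command-line tool is installed on the host.

```python
# Minimal container workflow driven from Python. "analysis.sif" and the
# payload command are placeholders; only the standard `singularity pull`
# and `singularity exec` CLI commands are used.
import subprocess

IMAGE = "analysis.sif"  # hypothetical image file

# Pull a container image (here, a public Python base) into a single .sif file
# that captures the whole software stack.
subprocess.run(
    ["singularity", "pull", IMAGE, "docker://python:3.10-slim"], check=True
)

# Run a workload inside that image on any host with Singularity installed,
# which is what provides the mobility and reproducibility described above.
subprocess.run(
    ["singularity", "exec", IMAGE, "python3", "-c", "print('hello from the container')"],
    check=True,
)
```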

    ALCF users have already begun to take advantage of the service. Argonne computational scientist Taylor Childers (in collaboration with a team of researchers from Brookhaven National Laboratory, LBNL, and the Large Hadron Collider’s ATLAS experiment) led ASCR Leadership Computing Challenge and ALCF Data Science Program projects to improve the performance of ATLAS software and workflows on DOE supercomputers.

    CERN/ATLAS detector

    Every year ATLAS generates petabytes of raw data, the interpretation of which requires even larger simulated datasets, making recourse to leadership-scale computing resources an attractive option. The ATLAS software itself—a complex collection of algorithms with many different authors—is terabytes in size and features manifold dependencies, making manual installation a cumbersome task.

    The researchers were able to run the ATLAS software on Theta inside a Singularity container via Yoda, an MPI-enabled Python application the team developed to communicate between CERN and ALCF systems and ensure all nodes in the latter are supplied with work throughout execution. The use of Singularity resulted in linear scaling on up to 1024 of Theta’s nodes, with event processing improved by a factor of four.
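    Yoda itself is not shown in the article; the sketch below is a generic mpi4py coordinator/worker pattern of the kind described, in which one rank hands out event-simulation work units so that every node stays busy. The simulate_events() function and batch counts are placeholders, not ATLAS code.

```python
# Generic MPI work-distribution sketch (mpi4py). Rank 0 hands out batch IDs;
# the other ranks simulate events for each batch until no work remains.
from mpi4py import MPI

def simulate_events(batch_id, n_events=1000):
    """Placeholder for the containerized ATLAS simulation payload."""
    return {"batch": batch_id, "events": n_events}

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
N_BATCHES = 64  # placeholder number of work units

if rank == 0:
    next_batch, finished_workers = 0, 0
    status = MPI.Status()
    while finished_workers < size - 1:
        comm.recv(source=MPI.ANY_SOURCE, tag=1, status=status)  # work request
        worker = status.Get_source()
        if next_batch < N_BATCHES:
            comm.send(next_batch, dest=worker, tag=2)  # assign a batch
            next_batch += 1
        else:
            comm.send(None, dest=worker, tag=2)        # no work left: stop
            finished_workers += 1
else:
    results = []
    while True:
        comm.send(rank, dest=0, tag=1)   # ask the coordinator for work
        batch = comm.recv(source=0, tag=2)
        if batch is None:
            break
        results.append(simulate_events(batch))
```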

    “All told, with this setup we were able to deliver to ATLAS 65 million proton collisions simulated on Theta using 50 million core-hours,” said John Taylor Childers from ALCF.

    Containerization also effectively circumvented the software’s relative “unfriendliness” toward distributed shared file systems by accelerating metadata access calls; tests performed without the ATLAS software suggested that containerization could speed up such access calls by a factor of seven.

    While Singularity can present a tradeoff between immediacy and computational performance (because the containerized software stacks, generally speaking, are not written to exploit massively parallel architectures), the data-intensive ATLAS project demonstrates the potential value in such a compromise for some scenarios, given the impracticality of retooling the code at its center.

    Because containers afford users the ability to switch between software versions without risking incompatibility, the service has also been a mechanism to expand research and try out new computing environments. Rick Stevens—Argonne’s Associate Laboratory Director for Computing, Environment, and Life Sciences (CELS)—leads the Aurora Early Science Program project Virtual Drug Response Prediction. The machine learning-centric project, whose workflow is built from the CANDLE (CANcer Distributed Learning Environment) framework, enables billions of virtual drugs to be screened singly and in numerous combinations while predicting their effects on tumor cells. With their distribution made possible by Singularity containerization, CANDLE workflows are shared among a multitude of users whose interests span basic cancer research, deep learning, and exascale computing. Accordingly, different subsets of CANDLE users are concerned with experimental alterations to different components of the software stack.

    “CANDLE users at health institutes, for instance, may have no need for exotic code alterations intended to harness the bleeding-edge capabilities of new systems, instead requiring production-ready workflows primed to address realistic problems,” explained Tom Brettin, Strategic Program Manager for CELS and a co-principal investigator on the project. Meanwhile, through the support of DOE’s Exascale Computing Project, CANDLE is being prepared for exascale deployment.

    Containers are relatively new technology for HPC, and their role may well continue to grow. “I don’t expect this to be a passing fad,” said Riley. “It’s functionality that, within five years, will likely be utilized in ways we can’t even anticipate yet.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     