From Argonne Leadership Computing Facility: “Artificial Intelligence: Transforming science, improving lives”

Argonne Lab
News from Argonne National Laboratory

From Argonne Leadership Computing Facility

September 30, 2019
Mary Fitzpatrick
John Spizzirri

Commitment to developing artificial intelligence (AI) as a national research strategy in the United States may have unequivocally defined 2019 as the Year of AI — particularly at the federal level, more specifically throughout the U.S. Department of Energy (DOE) and its national laboratory complex.

In February, the White House issued the Executive Order on Maintaining American Leadership in Artificial Intelligence (American AI Initiative) to expand the nation’s leadership role in AI research. Its goals are to fuel economic growth, enhance national security and improve quality of life.

The initiative injects substantial and much-needed research dollars into federal facilities across the United States, promoting technology advances and innovation and enhancing collaboration with nongovernment partners and allies abroad.

In response, DOE has made AI — along with exascale supercomputing and quantum computing — a major element of its $5.5 billion scientific R&D budget and established the Artificial Intelligence and Technology Office, which will serve to coordinate AI work being done across the DOE.

At DOE facilities like Argonne National Laboratory, researchers have already begun using AI to design better materials and processes, safeguard the nation’s power grid, accelerate treatments for brain trauma and cancer and develop next-generation microelectronics for applications in AI-enabled devices.

Over the last two years, Argonne has made significant strides toward implementing its own AI initiative. Leveraging the Laboratory’s broad capabilities and world-class facilities, it has set out to explore and expand new AI techniques; encourage collaboration; automate traditional research methods as well as lab facilities; and drive discovery.

In July, it hosted an AI for Science town hall, the first of four such events that also included Oak Ridge and Lawrence Berkeley national laboratories and DOE’s Office of Science.


Engaging nearly 350 members of the AI community, the town hall stimulated conversation around expanding the development and use of AI while addressing critical challenges through the framework of the AI for Science initiative.

“AI for Science requires new research and infrastructure, and we have to move a lot of data around and keep track of thousands of models,” says Rick Stevens, Associate Laboratory Director for Argonne’s Computing, Environment and Life Sciences (CELS) Directorate and a professor of computer science at the University of Chicago.

Rick Stevens, Associate Laboratory Director for Computing, Environment and Life Sciences, is helping to develop the CANDLE computer architecture on the patient level, which is meant to help guide drug treatment choices for tumors based on a much wider assortment of data than currently used.

“How do we distribute this production capability to thousands of people? We need to have system software with different capabilities for AI than for simulation software to optimize workflows. And these are just a few of the issues we have to begin to consider.”

The conversation has just begun and continues through Laboratory-wide talks and events, such as a recent AI for Science workshop aimed at growing interest in AI capabilities through technical hands-on sessions.

Argonne also will host DOE’s Innovation XLab Artificial Intelligence Summit in Chicago, meant to showcase the assets and capabilities of the national laboratories and facilitate an exchange of information and ideas among industry, universities, investors, end-use customers and Lab innovators and experts.

What exactly is AI?

Ask any number of researchers to define AI and you’re bound to get — well, first, a long pause and perhaps a chuckle — a range of answers from the more conventional ​“utilizing computing to mimic the way we interpret data but at a scale not possible by human capability” to ​“a technology that augments the human brain.”

Taken together, AI might well be viewed as a multi-component toolbox that enables computers to learn, recognize patterns, solve problems, explore complex datasets and adapt to changing conditions — much like humans, but one day, maybe better.

While the definitions and the tools may vary, the goals remain the same: utilize or develop the most advanced AI technologies to more effectively address the most pressing issues in science, medicine and technology, and accelerate discovery in those areas.

At Argonne, AI has become a critical tool for modeling and prediction across almost all areas where the Laboratory has significant domain expertise: chemistry, materials, photon science, environmental and manufacturing sciences, biomedicine, genomics and cosmology.

A key component of Argonne’s AI toolbox is a technique called machine learning and its derivatives, such as deep learning. The latter is built on neural networks comprising many layers of artificial neurons that learn internal representations of data, mimicking human information-gathering-processing systems like the brain.

“Deep learning is the use of multi-layered neural networks to do machine learning, a program that gets smarter or more accurate as it gets more data to learn from. It’s very successful at learning to solve problems,” says Stevens.
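
To make that description concrete, here is a minimal sketch, in plain Python with NumPy, of a “multi-layered neural network” of the kind Stevens describes: a tiny two-layer network whose fit improves as it trains on more data. The toy data and layer sizes are hypothetical and are not Argonne or CANDLE code.

```python
# Minimal illustration of "multi-layered neural networks": a two-layer
# perceptron trained by gradient descent to fit a toy function.
# Hypothetical toy data; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(256, 2))            # toy inputs
y = np.sin(3 * X[:, :1]) * np.cos(2 * X[:, 1:])  # toy target

W1, b1 = rng.normal(0, 0.5, (2, 16)), np.zeros(16)   # hidden layer
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)    # output layer

lr = 0.1
for step in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    err = pred - y                    # shrinks as training proceeds
    # backpropagate the mean-squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final training MSE:", float((err ** 2).mean()))
```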

A staunch supporter of AI, particularly deep learning, Stevens is principal investigator on a multi-institutional effort developing the deep neural network application CANDLE (CANcer Distributed Learning Environment), which integrates deep learning with novel data, modeling and simulation techniques to accelerate cancer research.

Coupled with the power of Argonne’s forthcoming exascale computer Aurora — which has the capacity to deliver a billion billion calculations per second — the CANDLE environment will enable a more personalized and effective approach to cancer treatment.

Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer

And that is just a small sample of AI’s potential in science. Currently, all across Argonne, researchers are involved in more than 60 AI-related investigations, many of them driven by machine learning.

Argonne Distinguished Fellow Valerie Taylor’s work looks at how applications execute on computers and large-scale, high-performance computing systems. Using machine learning, she and her colleagues model an execution’s behavior and then use that model to provide feedback on how to best modify the application for better performance.

“Better performance may be shorter execution time or, using generated metrics such as energy, it may be reducing the average power,” says Taylor, director of Argonne’s Mathematics and Computer Science (MCS) division. ​“We use statistical analysis to develop the models and identify hints on how to modify the application.”
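
The following is a minimal sketch of that general approach, not the MCS team’s actual tooling: a statistical model (here an ordinary least-squares fit from scikit-learn) is trained on hypothetical application features and a synthetic runtime, and its coefficients are read as hints about which features most affect performance.

```python
# Sketch of the general idea: fit a statistical model mapping application
# features (hypothetical names) to a performance metric, then inspect the
# model for hints on what to modify. Assumes scikit-learn; illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# columns: threads, cache miss rate, fraction of vectorized loops
features = rng.uniform([1, 0.01, 0.1], [64, 0.30, 0.9], size=(200, 3))
# synthetic "execution time": worse with cache misses, better with threads
runtime = 100 / features[:, 0] + 400 * features[:, 1] - 20 * features[:, 2]
runtime += rng.normal(0, 1.0, 200)                 # measurement noise

model = LinearRegression().fit(features, runtime)
for name, coef in zip(["threads", "cache_miss_rate", "vectorized_frac"],
                      model.coef_):
    print(f"{name:16s} sensitivity: {coef:+.2f}")
# Large positive sensitivities flag features to reduce (e.g. cache misses);
# negative ones flag features worth increasing.
```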

Material scientists are exploring the use of machine learning to optimize models of complex material properties in the discovery and design of new materials that could benefit energy storage, electronics, renewable energy resources and additive manufacturing, to name just a few areas.

And still more projects address complex transportation and vehicle efficiency issues by enhancing engine design, minimizing road congestion, increasing energy efficiency and improving safety.

Beyond the deep

Beyond deep learning, there are many subfields of AI that people have been working on for years, notes Stevens. “And while machine learning now dominates, something else might emerge as a strength.”

Natural language processing, for example, is commercially recognizable as voice-activated technologies — think Siri — and on-the-fly language translators. Exceeding those capabilities is its ability to review, analyze and summarize information about a given topic from journal articles, reports and other publications, and extract and coalesce select information from massive and disparate datasets.

Immersive visualization can place us into 3D worlds of our own making, interject objects or data into our current reality or improve upon human pattern recognition. Argonne researchers have found application for virtual and augmented reality in the 3D visualization of complicated data sets and the detection of flaws or instabilities in mechanical systems.

And of course, there is robotics — a program started at Argonne in the late 1940s and rebooted in 1999 — that is just beginning to take advantage of Argonne’s expanding AI toolkit, whether to conduct research in a specific domain or improve upon its more utilitarian use in decommissioning nuclear power plants.

Until recently, according to Stevens, AI has been a loose collection of methods using very different underlying mechanisms, and the people using them weren’t necessarily communicating their progress or potentials with one another.

But with a federal initiative in hand and a Laboratory-wide vision, that is beginning to change.

Among those trying to find new ways to collaborate and combine these different AI methods is Marius Stan, a computational scientist in Argonne’s Applied Materials division (AMD) and a senior fellow at both the University of Chicago’s Consortium for Advanced Science and Engineering and the Northwestern-Argonne Institute for Science and Engineering.

Stan leads a research area called Intelligent Materials Design that focuses on combining different elements of AI to discover and design new materials and to optimize and control complex synthesis and manufacturing processes.

Work on the latter has created a collaboration between Stan and colleagues in the Applied Materials and Energy Systems divisions, and the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.

Merging machine learning and computer vision with the Flame Spray Pyrolysis technology at Argonne’s Materials Engineering Research Facility, the team has developed “intelligent software” that can optimize the manufacturing process in real time.

“Our idea was to use the AI to better understand and control in real time — first in a virtual, experimental setup, then in reality — a complex synthesis process,” says Stan.

Automating the process makes it safer and much faster than when it is led by humans. But even more intriguing is the potential for the AI-driven process to identify materials with better properties than the researchers themselves did.

What drove us to AI?

Whether or not they concur on a definition, most researchers will agree that the impetus for the escalation of AI in scientific research was the influx of massive data sets and the computing power to sift, sort and analyze them.

Not only was the push coming from big corporations brimming with user data, but the tools that drive science were getting more expansive — bigger and better telescopes and accelerators and of course supercomputers, on which they could run larger, multiscale simulations.

“The size of the simulations we are running is so big, the problems that we are trying to solve are getting bigger, so that these AI methods can no longer be seen as a luxury, but as must-have technology,” notes Prasanna Balaprakash, a computer scientist in MCS and ALCF.

Data and compute size also drove the convergence of more traditional techniques, such as simulation and data analysis, with machine and deep learning. Where analysis of data generated by simulation would eventually lead to changes in an underlying model, that data is now being fed back into machine learning models and used to guide more precise simulations.

“More or less anybody who is doing large-scale computation is adopting an approach that puts machine learning in the middle of this complex computing process and AI will continue to integrate with simulation in new ways,” says Stevens.

“And where the majority of users are in theory-modeling-simulation, they will be integrated with experimentalists on data-intense efforts. So the population of people who will be part of this initiative will be more diverse.”

But while AI is leading to faster time-to-solution and more precise results, the number of data points, parameters and iterations required to get to those results can still prove monumental.

Focused on the automated design and development of scalable algorithms, Balaprakash and his Argonne colleagues are developing new types of AI algorithms and methods to more efficiently solve large-scale problems that deal with different ranges of data. These additions are intended to make existing systems scale better on supercomputers like those housed at the ALCF, a necessity in light of exascale computing.

“We are developing an automated machine learning system for a wide range of scientific applications, from analyzing cancer drug data to climate modeling,” says Balaprakash. ​“One way to speed up a simulation is to replace the computationally expensive part with an AI-based predictive model that can make the simulation faster.”
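
A generic sketch of the surrogate idea Balaprakash describes might look like the following, with an invented stand-in for the expensive kernel and scikit-learn as the learning library; it is illustrative only, not the team’s automated machine learning system.

```python
# Generic sketch of a simulation surrogate: learn a cheap approximation of an
# expensive model component from sampled input/output pairs, then use it
# inside the larger calculation. Hypothetical functions; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expensive_kernel(x):
    """Stand-in for a costly physics routine (e.g. a chemistry solve)."""
    return np.sin(5 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

rng = np.random.default_rng(2)
X_train = rng.uniform(-1, 1, size=(500, 2))      # sampled inputs
y_train = expensive_kernel(X_train)              # run the costly code once

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)

# Inside the outer simulation loop, the surrogate replaces the kernel:
X_new = rng.uniform(-1, 1, size=(5, 2))
print("surrogate:", surrogate.predict(X_new))
print("reference:", expensive_kernel(X_new))
```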

Industry support

The AI techniques that are expected to drive discovery are only as good as the tech that drives them, making collaboration between industry and the national labs essential.

“Industry is investing a tremendous amount in building up AI tools,” says Taylor. ​“Their efforts shouldn’t be duplicated, but they should be leveraged. Also, industry comes in with a different perspective, so by working together, the solutions become more robust.”

Argonne has long had relationships with computing manufacturers to deliver a succession of ever-more powerful machines to handle the exponential growth in data size and simulation scale. Its most recent partnership is with semiconductor chip manufacturer Intel and supercomputer manufacturer Cray to develop the exascale machine Aurora.

But the Laboratory is also collaborating with a host of other industrial partners in the development or provision of everything from chip design to deep learning-enabled video cameras.

One of these, Cerebras, is working with Argonne to test a first-of-its-kind AI accelerator that provides a 100–500 times improvement over existing AI accelerators. As its first U.S. customer, Argonne will deploy the Cerebras CS-1 to enhance scientific AI models for cancer, cosmology, brain imaging and materials science, among others.

The National Science Foundation-funded Array of Things, a partnership between Argonne, the University of Chicago and the City of Chicago, actively seeks commercial vendors to supply technologies for its edge computing network of programmable, multi-sensor devices.

But Argonne and the other national labs are not the only ones to benefit from these collaborations. Companies understand the value in working with such organizations, recognizing that the AI tools developed by the labs, combined with the kinds of large-scale problems they seek to solve, offer industry unique benefits in terms of business transformation and economic growth, explains Balaprakash.

“Companies are interested in working with us because of the type of scientific applications that we have for machine learning,” he adds. “What we have is so diverse, it makes them think a lot harder about how to architect a chip or design software for these types of workloads and science applications. It’s a win-win for both of us.”

AI’s future, our future

“There is one area where I don’t see AI surpassing humans any time soon, and that is hypotheses formulation,” says Stan, ​“because that requires creativity. Humans propose interesting projects and for that you need to be creative, make correlations, propose something out of the ordinary. It’s still human territory but machines may soon take the lead.

“It may happen,” he says, and adds that he’s working on it.

In the meantime, Argonne researchers continue to push the boundaries of existing AI methods and forge new components for the AI toolbox. Deep learning techniques like neuromorphic algorithms, which exhibit the adaptive nature of insects in an equally small computational space, can be used at the “edge,” where there are few computing resources, as in cell phones or urban sensors.

A technique called neural architecture search, in which one neural network system improves another, is helping to automate the development of deep-learning-based predictive models in several scientific and engineering domains, such as cancer drug discovery and weather forecasting using supercomputers.
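
In its simplest form, the search can be sketched as a loop that proposes candidate architectures, scores them, and keeps the best, as in the deliberately simplified example below; the scoring function is a stand-in for an actual training run, and nothing here reflects the production systems the article refers to.

```python
# Deliberately simplified sketch of neural architecture search: sample
# candidate layer configurations, score each (here with a stand-in score
# instead of a real training run), and keep the best. Everything is
# hypothetical and illustrative only.
import random

random.seed(3)

def score(architecture):
    """Stand-in for 'train the candidate network and return validation
    accuracy'; in practice this is the expensive step run on a supercomputer."""
    depth_penalty = 0.02 * len(architecture)
    width_bonus = 0.001 * sum(architecture)
    return 0.80 + width_bonus - depth_penalty + random.gauss(0, 0.01)

best_arch, best_score = None, float("-inf")
for trial in range(50):
    depth = random.randint(2, 8)                       # number of layers
    candidate = [random.choice([32, 64, 128, 256]) for _ in range(depth)]
    s = score(candidate)
    if s > best_score:
        best_arch, best_score = candidate, s

print("best architecture (units per layer):", best_arch)
print("estimated score:", round(best_score, 3))
```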

Just as big data and better computational tools drove the convergence of simulation, data analysis and visualization, the introduction of the exascale computer Aurora into the Argonne complex of leadership-class tools and experts will only accelerate the evolution of AI and its full assimilation into traditional techniques.

The tools may change, the definitions may change, but AI is here to stay as an integral part of the scientific method and our lives.

See the full article here.


Please help promote STEM in your local schools.

Stem Education Coalition

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit www.anl.gov.

About ALCF
The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

Discover new materials for batteries
Predict the impacts of global climate change
Unravel the origins of the universe
Develop renewable energy technologies

Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

Argonne Lab Campus

#ai-artificial-intelligence, #anl-alcf, #argonne-will-deploy-the-cerebras-cs-1-to-enhance-scientific-ai-models-for-cancer-cosmology-brain-imaging-and-materials-science-among-others, #aurora-exascale-supercomputer, #bigger-and-better-telescopes-and-accelerators-and-of-course-supercomputers-on-which-they-could-run-larger-multiscale-simulations, #machine-learning, #robotics, #the-influx-of-massive-data-sets-and-the-computing-power-to-sift-sort-and-analyze-it, #the-size-of-the-simulations-we-are-running-is-so-big-the-problems-that-we-are-trying-to-solve-are-getting-bigger-so-that-these-ai-methods-can-no-longer-be-seen-as-a-luxury-but-as-must-have-technology

From Argonne Leadership Computing Facility: “Large cosmological simulation to run on Mira”

Argonne Lab
News from Argonne National Laboratory

From Argonne Leadership Computing Facility

An extremely large cosmological simulation—among the five most extensive ever conducted—is set to run on Mira this fall and exemplifies the scope of problems addressed on the leadership-class supercomputer at the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory.

MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

Argonne physicist and computational scientist Katrin Heitmann leads the project. Heitmann was among the first to leverage Mira’s capabilities when, in 2013, the IBM Blue Gene/Q system went online at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. Among the largest cosmological simulations ever performed at the time, the Outer Rim Simulation she and her colleagues carried out enabled further scientific research for many years.

For the new effort, Heitmann has been allocated approximately 800 million core-hours to perform a simulation that reflects cutting-edge observational advances from satellites and telescopes and will form the basis for sky maps used by numerous surveys. Evolving a massive number of particles, the simulation is designed to help resolve mysteries of dark energy and dark matter.

“By transforming this simulation into a synthetic sky that closely mimics observational data at different wavelengths, this work can enable a large number of science projects throughout the research community,” Heitmann said. “But it presents us with a big challenge.” That is, in order to generate synthetic skies across different wavelengths, the team must extract relevant information and perform analysis either on the fly or after the fact in post-processing. Post-processing requires the storage of massive amounts of data—so much, in fact, that merely reading the data becomes extremely computationally expensive.

Since Mira was launched, Heitmann and her team have implemented in their Hardware/Hybrid Accelerated Cosmology Code (HACC) more sophisticated analysis tools for on-the-fly processing. “Moreover, compared to the Outer Rim Simulation, we’ve effected three major improvements,” she said. “First, our cosmological model has been updated so that we can run a simulation with the best possible observational inputs. Second, as we’re aiming for a full-machine run, volume will be increased, leading to better statistics. Most importantly, we set up several new analysis routines that will allow us to generate synthetic skies for a wide range of surveys, in turn allowing us to study a wide range of science problems.”

The team’s simulation will address numerous fundamental questions in cosmology and is essential for enabling the refinement of existing predictive tools and aiding the development of new models, impacting both ongoing and upcoming cosmological surveys, including the Dark Energy Spectroscopic Instrument (DESI), the Large Synoptic Survey Telescope (LSST), SPHEREx, and the “Stage-4” ground-based cosmic microwave background experiment (CMB-S4).

LBNL/DESI spectroscopic instrument on the Mayall 4-meter telescope at Kitt Peak National Observatory, starting in 2018

NOAO/Mayall 4-meter telescope at Kitt Peak, Arizona, USA, altitude 2,120 m (6,960 ft)

LSST Camera, built at SLAC

LSST telescope, currently under construction on the El Peñón peak at Cerro Pachón, a 2,682-meter-high mountain in Coquimbo Region in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research telescopes

LSST Data Journey, illustration by Sandbox Studio, Chicago, with Ana Kova

NASA’s SPHEREx (Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer) depiction

The value of the simulation derives from its tremendous volume (which is necessary to cover substantial portions of survey areas) and from attaining levels of mass and force resolution sufficient to capture the small structures that host faint galaxies.

The volume and resolution pose steep computational requirements, and because they are not easily met, few large-scale cosmological simulations are carried out. Contributing to the difficulty of their execution is the fact that the memory footprints of supercomputers have not grown in proportion to processing speed in the years since Mira’s introduction. This makes that system, despite its relative age, well suited to a large-scale campaign when harnessed in full.

“A calculation of this scale is just a glimpse at what the exascale resources in development now will be capable of in 2021/22,” said Katherine Riley, ALCF Director of Science. “The research community will be taking advantage of this work for a very long time.”

Funding for the simulation is provided by DOE’s High Energy Physics program. Use of ALCF computing resources is supported by DOE’s Advanced Scientific Computing Research program.

See the full article here.


#large-cosmological-simulation-to-run-on-mira, #anl-alcf, #astronomy, #astrophysics, #basic-research, #cosmology, #dark-energy-and-dark-matter, #supercomputing

From Argonne Leadership Computing Facility: “Predicting material properties with quantum Monte Carlo”

Argonne Lab
News from Argonne National Laboratory

From Argonne Leadership Computing Facility

July 9, 2019
Nils Heinonen

For one of their efforts, the team used diffusion Monte Carlo to compute how doping affects the energetics of nickel oxide. Their simulations revealed the spin density difference between bulks of potassium-doped nickel oxide and pure nickel oxide, showing the effects of substituting a potassium atom (center atom) for a nickel atom on the spin density of the bulk. Credit: Anouar Benali, Olle Heinonen, Joseph A. Insley, and Hyeondeok Shin, Argonne National Laboratory.

Recent advances in quantum Monte Carlo (QMC) methods have the potential to revolutionize computational materials science, a discipline traditionally driven by density functional theory (DFT). While DFT—an approach that uses quantum-mechanical modeling to examine the electronic structure of complex systems—provides convenience to its practitioners and has unquestionably yielded a great many successes throughout the decades since its formulation, it is not without shortcomings, which have placed a ceiling on the possibilities of materials discovery. QMC is poised to break this ceiling.

The key challenge is to solve the quantum many-body problem accurately and reliably enough for a given material. QMC solves these problems via stochastic sampling—that is, by using random numbers to sample all possible solutions. The use of stochastic methods allows the full many-body problem to be treated while circumventing large approximations. Compared to traditional methods, they offer extraordinary potential accuracy, strong suitability for high-performance computing, and—with few known sources of systematic error—transparency. For example, QMC satisfies a mathematical principle that allows it to set a bound for a given system’s ground state energy (the lowest-energy, most stable state).
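
That mathematical principle is the variational principle, and a textbook-scale sketch shows how it plays out: for the one-dimensional harmonic oscillator with a Gaussian trial wavefunction, Metropolis sampling of |psi|^2 yields an energy estimate that sits at or above the exact ground-state energy of 0.5 for every trial parameter. This is not QMCPACK, only the underlying idea in a few lines of Python.

```python
# Textbook-scale illustration of the variational principle behind QMC's
# energy bound: 1-D harmonic oscillator (hbar = m = omega = 1) with a
# Gaussian trial wavefunction psi_a(x) = exp(-a * x**2). Metropolis sampling
# of |psi|^2 gives an energy estimate that is >= the exact ground-state
# energy of 0.5 for every trial parameter a. Illustrative only.
import numpy as np

def local_energy(x, a):
    # E_L(x) = -(1/2) * psi''(x)/psi(x) + (1/2) * x**2 for psi = exp(-a x^2)
    return a + x * x * (0.5 - 2.0 * a * a)

def vmc_energy(a, n_steps=100_000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance for the probability density |psi|^2
        if rng.random() < np.exp(-2.0 * a * (x_new ** 2 - x ** 2)):
            x = x_new
        energies.append(local_energy(x, a))
    return np.mean(energies[n_steps // 10:])  # discard an equilibration span

for a in (0.3, 0.5, 0.8):
    print(f"a = {a:.1f}   E_trial = {vmc_energy(a):.4f}   (exact E0 = 0.5)")
```

The estimate reaches the exact value only at a = 0.5, where the trial wavefunction happens to be the true ground state; for any other parameter the stochastic average sits above it, which is precisely the bound described above.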

QMC’s accurate treatment of quantum mechanics is very computationally demanding, necessitating the use of leadership-class computational resources and thus limiting its application. Access to the computing systems at the Argonne Leadership Computing Facility (ALCF) and the Oak Ridge Leadership Computing Facility (OLCF)—U.S. Department of Energy (DOE) Office of Science User Facilities—has enabled a team of researchers led by Paul Kent of Oak Ridge National Laboratory (ORNL) to meet the steep demands posed by QMC. Supported by DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, the team’s goal is to simulate promising materials that elude DFT’s investigative and predictive powers.

To conduct their work, the researchers employ QMCPACK, an open-source QMC code developed by the team. It is written specifically for high-performance computers and runs on all the DOE machines. It has been run at the ALCF since 2011.

Functional materials

The team’s efforts are focused on studies of materials combining transition metal elements with oxygen. Many of these transition metal oxides are functional materials that have striking and useful properties. Small perturbations in the make-up or structure of these materials can cause them to switch from metallic to insulating, and greatly change their magnetic properties and ability to host and transport other atoms. Such attributes make the materials useful for technological applications while posing fundamental scientific questions about how these properties arise.

The computational challenge has been to simulate the materials with sufficient accuracy: the materials’ properties are sensitive to small changes due to complex quantum mechanical interactions, which make them very difficult to model.

The computational performance and large memory of the ALCF’s Theta system have been particularly helpful to the team. Theta’s storage capacity has enabled studies of material changes caused by small perturbations such as additional elements or vacancies. Over three years the team developed a new technique to more efficiently store the quantum mechanical wavefunctions used by QMC, greatly increasing the range of materials that could be run on Theta.

ANL ALCF Theta Cray XC40 supercomputer

Experimental validation

Kent noted that experimental validation is a key component of the INCITE project. “The team is leveraging facilities located at Argonne and Oak Ridge National Laboratories to grow high-quality thin films of transition-metal oxides,” he said, including vanadium oxide (VO2) and variants of nickel oxide (NiO) that have been modified with other compounds.

For VO2, the team combined atomic force microscopy, Kelvin probe force microscopy, and time-of-flight secondary ion mass spectroscopy on VO2 grown at ORNL’s Center for Nanophase Materials Science (CNMS) to demonstrate how oxygen vacancies suppress the transition from metallic to insulating VO2. A combination of QMC, dynamical mean field theory, and DFT modeling was deployed to identify the mechanism by which this phenomenon occurs: oxygen vacancies leave positively charged holes that are localized around the vacancy site and end up distorting the structure of certain vanadium orbitals.

For NiO, the challenge was to understand how a small quantity of dopant atoms, in this case potassium, modifies the structure and optical properties. Molecular beam epitaxy at Argonne’s Materials Science Division was used to create high quality films that were then probed via techniques such as x-ray scattering and x-ray absorption spectroscopy at Argonne’s Advanced Photon Source (APS) [below] for direct comparison with computational results. These experimental results were subsequently compared against computational models employing QMC and DFT. The APS and CNMS are DOE Office of Science User Facilities.

So far the team has been able to compute, understand, and experimentally validate how the band gap of materials containing a single transition metal element varies with composition. Band gaps determine a material’s usefulness as a semiconductor—a substance that can alternately conduct or cease the flow of electricity (which is important for building electronic sensors or devices). The next steps of the study will be to tackle more complex materials, with additional elements and more subtle magnetic properties. While more challenging, these materials could lead to greater discoveries.

New chemistry applications

Many of the features that make QMC attractive for materials also make it attractive for chemistry applications. An outside colleague—quantum chemist Kieron Burke of the University of California, Irvine—provided the impetus for a paper published in Journal of Chemical Theory and Computation. Burke approached the team’s collaborators with a problem he had encountered while trying to formulate a new method for DFT. Moving forward with his attempt required benchmarks against which to test his method’s accuracy. As QMC was the only means by which sufficiently precise benchmarks could be obtained, the team produced a series of calculations for him.

The reputed gold standard among many-body numerical techniques in quantum chemistry is coupled cluster theory. While it is extremely accurate for many molecules, some are so strongly correlated quantum-mechanically that they can be thought of as existing in a superposition of quantum states, and the conventional coupled cluster method cannot handle something so complicated. Co-principal investigator Anouar Benali, a computational scientist at the ALCF and Argonne’s Computational Sciences Division, spent some three years collaborating on efforts to expand QMC’s capability to include low-cost, highly efficient support for these states, support that will also be needed for future materials problems. Performing analysis on the system for which Burke needed benchmarks required this superposition support; Burke verified the results of his newly developed DFT approach against the calculations generated with Benali’s expanded QMC. The two sets of results were in close agreement with each other, but not with the results conventional coupled cluster had generated, which, for one particular compound, contained significant errors.

“This collaboration and its results have therefore identified a potential new area of research for the team and QMC,” Kent said. “That is, tackling challenging quantum chemical problems.”

The research was supported by DOE’s Office of Science. ALCF and OLCF computing time and resources were allocated through the INCITE program.

See the full article here.



#anl-alcf, #applied-research-technology, #atomic-force-microscopy, #computational-materials-science, #coupled-cluster-theory, #dft-density-functional-theory, #kelvin-probe-force-microscopy, #material-sciences-2, #molecular-beam-epitaxy, #quantum-mechanics, #quantum-monte-carlo-qmc-modeling

From Argonne Leadership Computing Facility: “Tapping the power of AI and high-performance computing to extend evolution to superconductors”

Argonne Lab
News from Argonne National Laboratory

From Argonne Leadership Computing Facility

May 29, 2019
Jared Sagoff

This image depicts the algorithmic evolution of a defect structure in a superconducting material. Each iteration serves as the basis for a new defect structure. Redder colors indicate a higher current-carrying capacity. Credit: Argonne National Laboratory/Andreas Glatz

Owners of thoroughbred stallions carefully breed prizewinning horses over generations to eke out fractions of a second in million-dollar races. Materials scientists have taken a page from that playbook, turning to the power of evolution and artificial selection to develop superconductors that can transmit electric current as efficiently as possible.

Perhaps counterintuitively, most applied superconductors can operate at high magnetic fields because they contain defects. The number, size, shape and position of the defects within a superconductor work together to enhance the electric current carrying capacity in the presence of a magnetic field. Too many defects, however, can block the electric current pathway or cause a breakdown of the superconducting material, so scientists need to be selective in how they incorporate defects into a material.

In a new study from the U.S. Department of Energy’s (DOE) Argonne National Laboratory, researchers used the power of artificial intelligence and high-performance supercomputers to introduce and assess the impact of different configurations of defects on the performance of a superconductor.

The researchers developed a computer algorithm that treated each defect like a biological gene. Different combinations of defects yielded superconductors able to carry different amounts of current. Once the algorithm identified a particularly advantageous set of defects, it re-initialized with that set of defects as a ​“seed,” from which new combinations of defects would emerge.

“Each run of the simulation is equivalent to the formation of a new generation of defects that the algorithm seeks to optimize,” said Argonne distinguished fellow and senior materials scientist Wai-Kwong Kwok, an author of the study. ​“Over time, the defect structures become progressively refined, as we intentionally select for defect structures that will allow for materials with the highest critical current.”
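
A schematic sketch of that evolutionary loop, with defects treated as genes, might look like the following; the fitness function here is a stand-in (in the actual study it would be the critical current computed by large-scale vortex-dynamics simulations), and the encoding and numbers are purely illustrative.

```python
# Schematic sketch of the "defects as genes" evolution loop. The fitness
# function is a stand-in for the simulated critical current; all names and
# numbers are illustrative, not the study's actual code.
import random

random.seed(4)
N_DEFECTS = 20           # each "gene" is one defect: (x, y, radius)

def random_defect():
    return (random.random(), random.random(), random.uniform(0.01, 0.05))

def fitness(genome):
    """Stand-in score: reward moderate defect coverage, penalize crowding."""
    coverage = sum(r for _, _, r in genome)
    crowding = sum(1 for a in genome for b in genome if a is not b
                   and (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 < (a[2] + b[2]) ** 2)
    return coverage - 0.05 * crowding

population = [[random_defect() for _ in range(N_DEFECTS)] for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    seeds = population[:5]                       # best layouts reseed the run
    population = [
        [d if random.random() > 0.2 else random_defect()   # mutate some genes
         for d in random.choice(seeds)]
        for _ in range(30)
    ]

best = max(population, key=fitness)
print("best layout fitness:", round(fitness(best), 3))
```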

The reason defects form such an essential part of a superconductor lies in their ability to trap and anchor magnetic vortices that form in the presence of a magnetic field. These vortices can move freely within a pure superconducting material when a current is applied. When they do so, they start to generate a resistance, negating the superconducting effect. Keeping vortices pinned, while still allowing current to travel through the material, represents a holy grail for scientists seeking to find ways to transmit electricity without loss in applied superconductors.

To find the right combination of defects to arrest the motion of the vortices, the researchers initialized their algorithm with defects of random shape and size. While the researchers knew this would be far from the optimal setup, it gave the model a set of neutral initial conditions from which to work. As the researchers ran through successive generations of the model, they saw the initial defects transform into a columnar shape and ultimately a periodic arrangement of planar defects.

“When people think of targeted evolution, they might think of people who breed dogs or horses,” said Argonne materials scientist Andreas Glatz, the corresponding author of the study. ​“Ours is an example of materials by design, where the computer learns from prior generations the best possible arrangement of defects.”

One potential drawback to the process of artificial defect selection lies in the fact that certain defect patterns can become entrenched in the model, leading to a kind of calcification of the genetic data. ​“In a certain sense, you can kind of think of it like inbreeding,” Kwok said. ​“Conserving most information in our defect ​‘gene pool’ between generations has both benefits and limitations as it does not allow for drastic systemwide transformations. However, our digital ​‘evolution’ can be repeated with different initial seeds to avoid these problems.”

In order to run their model, the researchers required high-performance computing facilities at Argonne and Oak Ridge National Laboratory. The Argonne Leadership Computing Facility and Oak Ridge Leadership Computing Facility are both DOE Office of Science User Facilities.

An article based on the study, “Targeted evolution of pinning landscapes for large superconducting critical currents,” appeared in the May 21 edition of PNAS. In addition to Kwok and Glatz, Argonne’s Ivan Sadovskyy, Alexei Koshelev and Ulrich Welp also collaborated.

Funding for the research came from the DOE’s Office of Science.

See the full article here.


#tapping-the-power-of-ai-and-high-performance-computing-to-extend-evolution-to-superconductors, #anl-alcf, #applied-research-technology, #basic-research

From insideHPC: “Argonne ALCF Looks to Singularity for HPC Code Portability”

From insideHPC

February 10, 2019

Over at Argonne, Nils Heinonen writes that researchers are using the open-source Singularity framework as a kind of Rosetta Stone for running supercomputing code most anywhere.

Scaling code for massively parallel architectures is a common challenge the scientific community faces. When moving from a system used for development—a personal laptop, for instance, or even a university’s computing cluster—to a large-scale supercomputer like those housed at the Argonne Leadership Computing Facility [see below], researchers traditionally would only migrate the target application: the underlying software stack would be left behind.

To help alleviate this problem, the ALCF has deployed the service Singularity. Singularity, an open-source framework originally developed by Lawrence Berkeley National Laboratory (LBNL) and now supported by Sylabs Inc., is a tool for creating and running containers (platforms designed to package code and its dependencies so as to facilitate fast and reliable switching between computing environments)—albeit one intended specifically for scientific workflows and high-performance computing resources.

“There is a definite need for increased reproducibility and flexibility when a user is getting started here, and containers can be tremendously valuable in that regard,” said Katherine Riley, Director of Science at the ALCF. “Supporting emerging technologies like Singularity is part of a broader strategy to provide users with services and tools that help advance science by eliminating barriers to productive use of our supercomputers.”

This plot shows the number of ATLAS events simulated (solid lines) with and without containerization. Linear scaling is shown (dotted lines) for reference.

The demand for such services has grown at the ALCF as a direct result of the HPC community’s diversification.

When the ALCF first opened, it was catering to a smaller user base representative of the handful of domains conventionally associated with scientific computing (high energy physics and astrophysics, for example).

ANL ALCF Cetus IBM supercomputer

ANL ALCF Theta Cray supercomputer

ANL ALCF Cray Aurora supercomputer

ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

HPC is now a principal research tool in new fields such as genomics, which perhaps lack some of the computing culture ingrained in certain older disciplines. Moreover, researchers tackling problems in machine learning, for example, constitute a new community. This creates a strong incentive to make HPC more immediately approachable to users so as to reduce the amount of time spent preparing code and establishing migration protocols, and thus hasten the start of research.

Singularity, to this end, promotes strong mobility of compute and reproducibility due to the framework’s employment of a distributable image format. This image format incorporates the entire software stack and runtime environment of the application into a single monolithic file. Users thereby gain the ability to define, create, and maintain an application on different hosts and operating environments. Once a containerized workflow is defined, its image can be snapshotted, archived, and preserved for future use. The snapshot itself represents a boon for scientific provenance by detailing the exact conditions under which given data were generated: in theory, by providing the machine, the software stack, and the parameters, one’s work can be completely reproduced. Because reproducibility is so crucial to the scientific process, this capability can be seen as one of the primary assets of container technology.

ALCF users have already begun to take advantage of the service. Argonne computational scientist Taylor Childers (in collaboration with a team of researchers from Brookhaven National Laboratory, LBNL, and the Large Hadron Collider’s ATLAS experiment) led ASCR Leadership Computing Challenge and ALCF Data Science Program projects to improve the performance of ATLAS software and workflows on DOE supercomputers.

CERN/ATLAS detector

Every year ATLAS generates petabytes of raw data, the interpretation of which requires even larger simulated datasets, making recourse to leadership-scale computing resources an attractive option. The ATLAS software itself—a complex collection of algorithms with many different authors—is terabytes in size and features manifold dependencies, making manual installation a cumbersome task.

The researchers were able to run the ATLAS software on Theta inside a Singularity container via Yoda, an MPI-enabled Python application the team developed to communicate between CERN and ALCF systems and ensure all nodes in the latter are supplied with work throughout execution. The use of Singularity resulted in linear scaling on up to 1024 of Theta’s nodes, with event processing improved by a factor of four.
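
For readers unfamiliar with the pattern, a generic mpi4py sketch of a rank-0 process keeping worker ranks supplied with work is shown below. It is not the Yoda code, only an illustration of the kind of manager/worker loop described; the function names and toy workload are invented.

```python
# Generic mpi4py work-distribution pattern of the kind described above
# (rank 0 keeps all other ranks supplied with work until none remains).
# Illustrative only; run with e.g. `mpirun -n 4 python this_script.py`.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def simulate_events(chunk):
    """Stand-in for the real per-node event simulation."""
    return sum(chunk)

if rank == 0:
    work = [list(range(i, i + 10)) for i in range(0, 200, 10)]  # toy "events"
    results, active = [], size - 1
    status = MPI.Status()
    while active:
        result = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        if result is not None:
            results.append(result)
        if work:
            comm.send(work.pop(), dest=status.Get_source())   # keep node busy
        else:
            comm.send(None, dest=status.Get_source())         # no more work
            active -= 1
    print("processed chunks:", len(results))
else:
    comm.send(None, dest=0)               # announce readiness
    while True:
        chunk = comm.recv(source=0)
        if chunk is None:
            break
        comm.send(simulate_events(chunk), dest=0)
```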

“All told, with this setup we were able to deliver to ATLAS 65 million proton collisions simulated on Theta using 50 million core-hours,” said John Taylor Childers from ALCF.

Containerization also effectively circumvented the software’s relative “unfriendliness” toward distributed shared file systems by accelerating metadata access calls; tests performed without the ATLAS software suggested that containerization could speed up such access calls by a factor of seven.

While Singularity can present a tradeoff between immediacy and computational performance (because the containerized software stacks, generally speaking, are not written to exploit massively parallel architectures), the data-intensive ATLAS project demonstrates the potential value in such a compromise for some scenarios, given the impracticality of retooling the code at its center.

Because containers afford users the ability to switch between software versions without risking incompatibility, the service has also been a mechanism to expand research and try out new computing environments. Rick Stevens—Argonne’s Associate Laboratory Director for Computing, Environment, and Life Sciences (CELS)—leads the Aurora Early Science Program project Virtual Drug Response Prediction. The machine learning-centric project, whose workflow is built from the CANDLE (CANcer Distributed Learning Environment) framework, enables billions of virtual drugs to be screened singly and in numerous combinations while predicting their effects on tumor cells. With their distribution made possible by Singularity containerization, CANDLE workflows are shared among a multitude of users whose interests span basic cancer research, deep learning, and exascale computing. Accordingly, different subsets of CANDLE users are concerned with experimental alterations to different components of the software stack.

“CANDLE users at health institutes, for instance, may have no need for exotic code alterations intended to harness the bleeding-edge capabilities of new systems, instead requiring production-ready workflows primed to address realistic problems,” explained Tom Brettin, Strategic Program Manager for CELS and a co-principal investigator on the project. Meanwhile, through the support of DOE’s Exascale Computing Project, CANDLE is being prepared for exascale deployment.

Containers are relatively new technology for HPC, and their role may well continue to grow. “I don’t expect this to be a passing fad,” said Riley. “It’s functionality that, within five years, will likely be utilized in ways we can’t even anticipate yet.”

See the full article here.


Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

insideHPC
2825 NW Upshur
Suite G
Portland, OR 97239

Phone: (503) 877-5048

#anl-alcf, #candle-cancer-distributed-learning-environment-framework, #exascale-computing-project, #insidehpc, #singularity

From Argonne National Laboratory ALCF: “Argonne’s pioneering computing program pivots to exascale”

Argonne Lab
News from Argonne National Laboratory

From Argonne National Laboratory ALCF

November 12, 2018

Laura Wolf
Gail Pieper

ANL ALCF Cetus IBM supercomputer

ANL ALCF Theta Cray supercomputer

ANL ALCF Cray Aurora supercomputer

ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

When it comes to the breadth and range of the U.S. Department of Energy’s (DOE) Argonne National Laboratory’s contributions to the field of high-performance computing (HPC), few if any other organizations come close. Argonne has been building advanced parallel computing environments and tools since the 1970s. Today, the laboratory serves as both an expertise center and a world-renowned source of cutting-edge computing resources used by researchers to tackle the most pressing challenges in science and engineering.

Since its digital automatic computer days in the early 1950s, Argonne has been interested in designing and developing algorithms and mathematical software for scientific purposes, such as the Argonne Subroutine Library in the 1960s and the so-called ​“PACKs” – e.g., EISPACK, LINPACK, MINPACK and FUNPACK – as well as Basic Linear Algebra Subprograms (BLAS) in the 1970s. In the 1980s, Argonne established a parallel computing program – nearly a decade before computational science was explicitly recognized as the new paradigm for scientific investigation and the government inaugurated the first major federal program to develop the hardware, software and workforce needed to solve ​“grand challenge” problems.

A place for experimenting and community building

By the late 1980s, the Argonne Computing Research Facility (ACRF) housed as many as 10 radically different parallel computer designs – nearly every emerging parallel architecture – on which applied mathematicians and computer scientists could explore algorithm interaction, program portability and parallel programming tools and languages. By 1987, Argonne was hosting a regular series of hands-on training courses on ACRF systems for attendees from universities, industry and research labs.

In 1992, at DOE’s request, the laboratory acquired an IBM SP – the first scalable, parallel system to offer multiple levels of input/output (I/O) capability essential for increasingly complex scientific applications – and, with that system, embarked on a new focus on experimental production machines. Argonne’s High-Performance Computing Research Center (1992–1997) focused on production-oriented parallel computing for grand challenges in addition to computer science and emphasized collaborative research with computational scientists. By 1997, Argonne’s supercomputing center was recognized by the DOE as one of the nation’s four high-end resource providers.

Becoming a leadership computing center

In 2002, Argonne established the Laboratory Computing Resource Center and in 2004 formed the Blue Gene Consortium with IBM and other national laboratories to design, evaluate and develop code for a series of massively parallel computers. The laboratory installed a 5-teraflop IBM Blue Gene/L in 2005, a prototype and proving ground for what in 2006 would become the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. Along with another leadership computing facility at Oak Ridge National Laboratory, the ALCF was chartered to operate some of the fastest supercomputing resources in the world dedicated to scientific discovery.

In 2007, the ALCF installed a 100-teraflop Blue Gene/P and began to support projects under the Innovative and Novel Computational Impact on Theory and Experiment program. In 2008, ALCF’s 557-teraflop IBM Blue Gene/P, Intrepid, was named the fastest supercomputer in the world for open science (and third fastest machine overall) on the TOP500 list and, in 2009, entered production operation.

ALCF’s 557-teraflop IBM Blue Gene/P, Intrepid

Intrepid also topped the first Graph 500 list in 2010 and again in 2011. In 2012, ALCF’s 10-petaflop IBM Blue Gene/Q, Mira [above], ranked third on the June TOP500 list and entered production operation in 2013.

Next on the horizon: exascale

Argonne is part of a broader community working to achieve a capable exascale computing ecosystem for scientific discoveries. The benefits of exascale computing – computing capability that can achieve at least a billion billion operations per second – lie primarily in the applications it will enable. To take advantage of this immense computing power, Argonne researchers are contributing to the emerging convergence of simulation, big data analytics and machine learning across a wide variety of science and engineering domains and disciplines.

In 2016, the laboratory launched an initiative to explore new ways to foster data-driven discoveries, with an eye to growing a new community of HPC users. The ALCF Data Science Program, the first of its kind in the leadership computing space, targets users with ​“big data” science problems and provides time on ALCF resources, staff support and training to improve computational methods across all scientific disciplines.

In 2017, Argonne launched an Intel/Cray machine, Theta [above], doubling the ALCF’s capacity to do impactful science. The facility currently is operating at the frontier of data-centric and high-performance supercomputing.

Argonne researchers are also getting ready for the ALCF’s future exascale system, Aurora [depicted above], expected in 2021. Using innovative technologies from Intel and Cray, Aurora will provide over 1,000 petaflops for research and development in three areas: simulation-based computational science; data-centric and data-intensive computing; and learning – including machine learning, deep learning, and other artificial intelligence techniques.

The ALCF has already inaugurated an Early Science Program to prepare key applications and libraries for the innovative architecture. Moreover, ALCF computational scientists and performance engineers are working closely with Argonne’s Mathematics and Computer Science (MCS) division as well as its Computational Science and Data Science and Learning divisions with the aim of advancing the boundaries of HPC technologies ahead of Aurora. (The MCS division is the seedbed for such groundbreaking software as BLAS3, p4, Automatic Differentiation of Fortran Codes (ADIFOR), the PETSc toolkit of parallel computing software, and a version of the Message Passing Interface known as MPICH.)

The ALCF also continues to add new services, helping researchers near and far to manage workflow execution of large experiments and to co-schedule jobs between ALCF systems, thereby extending Argonne’s reach even further as a premier provider of computing and data analysis resources for the scientific research community.

See the full article here.


#anl-alcf, #applied-research-technology, #by-the-late-1980s-the-argonne-computing-research-facility-acrf-housed-as-many-as-10-radically-different-parallel-computer-designs, #next-on-the-horizon-exascale, #supercomputing