Tagged: ANL-ALCF

  • richardmitnick 11:02 am on March 14, 2017
    Tags: ANL-ALCF, Vector boson plus jet event

    From ALCF: “High-precision calculations help reveal the physics of the universe” 

    Argonne Lab
    News from Argonne National Laboratory

    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    March 9, 2017
    Joan Koka

    With the theoretical framework developed at Argonne, researchers can more precisely predict particle interactions such as this simulation of a vector boson plus jet event. Credit: Taylor Childers, Argonne National Laboratory

    On their quest to uncover what the universe is made of, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are harnessing the power of supercomputers to make predictions about particle interactions that are more precise than ever before.

    Argonne researchers have developed a new theoretical approach, ideally suited for high-performance computing systems, that is capable of making predictive calculations about particle interactions that conform almost exactly to experimental data. This new approach could give scientists a valuable tool for describing new physics and particles beyond those currently identified.

    The framework makes predictions based on the Standard Model, the theory that describes the physics of the universe to the best of our knowledge. Researchers are now able to compare experimental data with predictions generated through this framework, to potentially uncover discrepancies that could indicate the existence of new physics beyond the Standard Model. Such a discovery would revolutionize our understanding of nature at the smallest measurable length scales.

    “So far, the Standard Model of particle physics has been very successful in describing the particle interactions we have seen experimentally, but we know that there are things that this model doesn’t describe completely. We don’t know the full theory,” said Argonne theorist Radja Boughezal, who developed the framework with her team.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    “The first step in discovering the full theory and new models involves looking for deviations with respect to the physics we know right now. Our hope is that there is deviation, because it would mean that there is something that we don’t understand out there,” she said.

    The theoretical method developed by the Argonne team is currently being deployed on Mira, one of the fastest supercomputers in the world, which is housed at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility.

    Using Mira, researchers are applying the new framework to analyze the production of missing energy in association with a jet, a particle interaction of particular interest to researchers at the Large Hadron Collider (LHC) in Switzerland.




    LHC at CERN

    Physicists at the LHC are attempting to produce new particles that are known to exist in the universe but have yet to be seen in the laboratory, such as the dark matter that comprises a quarter of the mass and energy of the universe.


    Dark matter cosmic web and the large-scale structure it forms. Credit: The Millennium Simulation, V. Springel et al.

    Although scientists have no way today of observing dark matter directly — hence its name — they believe that dark matter could leave a “missing energy footprint” in the wake of a collision that could indicate the presence of new particles not included in the Standard Model. These particles would interact very weakly and therefore escape detection at the LHC. The presence of a “jet”, a spray of Standard Model particles arising from the break-up of the protons colliding at the LHC, would tag the presence of the otherwise invisible dark matter.

    In the LHC detectors, however, the production of a particular kind of interaction — called the Z-boson plus jet process — can mimic the same signature as the potential signal that would arise from as-yet-unknown dark matter particles. Boughezal and her colleagues are using their new framework to help LHC physicists distinguish the Z-boson plus jet signature predicted by the Standard Model from other potential signals.

    Previous attempts using less precise calculations to distinguish the two processes carried so much uncertainty that they could not draw the fine mathematical distinctions needed to identify a potential new dark matter signal.

    “It is only by calculating the Z-boson plus jet process very precisely that we can determine whether the signature is indeed what the Standard Model predicts, or whether the data indicates the presence of something new,” said Frank Petriello, another Argonne theorist who helped develop the framework. “This new framework opens the door to using Z-boson plus jet production as a tool to discover new particles beyond the Standard Model.”

    Applications for this method go well beyond studies of the Z-boson plus jet. The framework will impact not only research at the LHC, but also studies at future colliders which will have increasingly precise, high-quality data, Boughezal and Petriello said.

    “These experiments have gotten so precise, and experimentalists are now able to measure things so well, that it’s become necessary to have these types of high-precision tools in order to understand what’s going on in these collisions,” Boughezal said.

    “We’re also so lucky to have supercomputers like Mira because now is the moment when we need these powerful machines to achieve the level of precision we’re looking for; without them, this work would not be possible.”

    Funding and resources for this work were previously allocated through the Argonne Leadership Computing Facility’s (ALCF’s) Director’s Discretionary program; the ALCF is supported by the DOE Office of Science’s Advanced Scientific Computing Research program. Support for this work will continue through allocations from the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.

    The INCITE program promotes transformational advances in science and technology through large allocations of time on state-of-the-art supercomputers.

    See the full article here.

    Please help promote STEM in your local schools.
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 2:33 pm on January 23, 2017
    Tags: ANL-ALCF, Stable versions of synthetic peptides, Tailor-made drug molecules

    From ALCF: “A rising peptide: Supercomputing helps scientists come closer to tailoring drug molecules” 

    Argonne Lab
    News from Argonne National Laboratory

    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    January 23, 2017
    Robert Grant

    An artificial peptide made from a mixture of natural L-amino acids (the right half of the molecule) and non-natural, mirror-image D-amino acids (the left half of the molecule), designed computationally using INCITE resources. This peptide is designed to fold into a stable structure with a topology not found in nature, featuring a canonical right-handed alpha-helix packing against a non-canonical left-handed alpha-helix. Since structure imparts function, the ability to design non-natural structures permits scientists to create exciting new functions never explored by natural proteins. This peptide was synthesized chemically, and its structure was solved by nuclear magnetic resonance spectroscopy to confirm that it does indeed adopt this fold. The peptide backbone is shown as a translucent gold ribbon, and amino acid side-chains are shown as dark sticks. The molecular surface is shown as a transparent outline. Credit: Vikram Mulligan, University of Washington

    A team of researchers led by biophysicists at the University of Washington has come one step closer to designing tailor-made drug molecules that are more precise and carry fewer side effects than most existing therapeutic compounds.

    With the help of the Mira supercomputer, located at the Argonne Leadership Computing Facility at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, the scientists have successfully designed and verified stable versions of synthetic peptides, components that join together to form proteins.

    They published their work in a recent issue of Nature.

    The computational protocol, which was validated by assembling physical peptides in the chemistry lab and comparing them to the computer models, may one day enable drug developers to craft novel, therapeutic peptides that precisely target specific disease-causing molecules within the body. And the insights the researchers gleaned constitute a significant advance in the fundamental understanding of protein folding.

    “That you can design molecules from scratch that fold up into structures, some of which are quite unlike what you see in nature, demonstrates a pretty fundamental understanding of what goes on at the molecular level,” said David Baker, the University of Washington biophysicist who led the research. “That’s certainly one of the more exciting things about this work.”

    Baker Lab

    David Baker

    The majority of drugs that humans use to treat the variety of ailments we suffer are so-called “small molecules.” These tiny compounds easily pass through different body systems to target receptor proteins studded in the membranes of our cells.

    Most do their job well, but they come with a major drawback: “Most drugs in use right now are small molecules, which are very tiny and nonspecific. They bind to lots of different things, which produces lots of side effects,” said Vikram Mulligan, a postdoctoral researcher in Baker’s lab and coauthor on the paper.

    More complex protein drugs ameliorate this problem, but they disperse less readily throughout the body because the bulkier molecules have a harder time passing through blood vessels, the linings of the digestive tract and other barriers.

    And proteins, which are giant on the molecular scale, have several overlapping layers of structure that make them less static and more dynamic, which makes predicting their binding behavior tricky.

    But between the extremes of small but imprecise molecules and floppy but highly specific proteins, there exists a middle ground – peptides. These short chains of amino acids, which normally link together to make complex proteins, can target specific receptors, diffuse easily throughout the body and also sustain a rigid structure.

    Some naturally-occurring peptides are already used as drugs, such as the immunosuppressant ciclosporin, but researchers could open up a world of pharmaceutical opportunity if they could design and synthesize peptides.

    That’s precisely what Baker and his team did, tweaking the Rosetta software package that they built for the design of protein structures to accommodate synthetic amino acids that do not exist in nature, in addition to the 20 natural amino acids.

    After designing the chemical building blocks of peptides, the researchers used the supercomputer Mira, with its 10 petaflops of processing power and more than 780,000 cores, to model scores of potential shapes, or conformations, that specific backbone sequences of amino acids might take.

    “We basically sample millions and millions of these conformations,” said Yuri Alexeev, a project specialist in computational science at the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility. “At the same time you also improve the energy functions,” which are measurements to describe the efficiency and stability of each possible folding arrangement.
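
    For readers who want a concrete picture of what “sampling conformations and scoring them with an energy function” looks like, here is a minimal sketch of Metropolis Monte Carlo sampling over backbone torsion angles. It is not Rosetta or the team’s actual code; the torsion-angle representation, the toy energy function and all parameter values are illustrative assumptions.

```python
import math
import random

def toy_energy(torsions):
    # Illustrative stand-in for a real score function: favors angles near an
    # ideal alpha-helical (phi, psi) pair and penalizes deviations from it.
    ideal_phi, ideal_psi = math.radians(-60), math.radians(-45)
    return sum((1 - math.cos(phi - ideal_phi)) + (1 - math.cos(psi - ideal_psi))
               for phi, psi in torsions)

def sample_conformations(n_residues=10, n_steps=100_000, kT=0.6, seed=0):
    """Metropolis Monte Carlo over backbone (phi, psi) torsion angles."""
    rng = random.Random(seed)
    torsions = [(rng.uniform(-math.pi, math.pi), rng.uniform(-math.pi, math.pi))
                for _ in range(n_residues)]
    energy = toy_energy(torsions)
    best = (energy, list(torsions))
    for _ in range(n_steps):
        i = rng.randrange(n_residues)                   # perturb one residue at a time
        old = torsions[i]
        torsions[i] = (old[0] + rng.gauss(0, 0.2), old[1] + rng.gauss(0, 0.2))
        new_energy = toy_energy(torsions)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if new_energy <= energy or rng.random() < math.exp((energy - new_energy) / kT):
            energy = new_energy
            if energy < best[0]:
                best = (energy, list(torsions))
        else:
            torsions[i] = old                           # reject: restore previous angles
    return best

if __name__ == "__main__":
    score, conformation = sample_conformations()
    print(f"lowest toy energy found: {score:.3f}")
```

    Production design runs differ in scale and detail (far richer energy functions, design of side-chain identities, and massive parallelism across a machine like Mira), but the accept-or-reject sampling loop above is the basic pattern being described.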

    Though he was not a coauthor on the Nature paper, Alexeev helped Baker’s team scale up the programs it had previously used for protein design so that they could be applied to peptide modeling on Mira.

    Executing so many calculations simultaneously would be virtually impossible without Mira’s computing power, according to Mulligan.

    “The big challenge with designing peptides that fold is that you have a chain of amino acids that can exist in an astronomical number of conformations,” he said.

    Baker and his colleagues had tasked Mira with modeling millions of potential peptide conformations before, but this study stands out for two reasons.

    First, the researchers arrived at a handful of peptides with specific conformations that the computations predicted would be stable.

    Second, when Baker’s lab created seven of these peptides in their physical wet lab, the reality of the peptides’ conformations and stability corresponded remarkably well with the computer models.

    “At best, what comes out of a computer is a prediction, and at worst what comes out of a computer is a fantasy. So we never really consider it a result until we’ve actually made the molecule in the wet lab and confirmed that it actually has the structure that we designed it to have,” said Mulligan.

    “That’s exactly what we did in this paper,” he said. “We made a panel of these peptides that were designed to fold into very specific shapes, diverse shapes, and we experimentally confirmed that all of them folded into the shapes that we designed.”

    While this experiment sought to create totally new peptides in stable conformations as a proof of concept, Mulligan says that the Baker lab is now moving on to design functional peptides with specific targets in mind.

    Further research may bring the team closer to a protocol that could actually be used to design peptide drugs that target a specific receptor, such as those that make viruses like Ebola or HIV susceptible to attack.

    Computer time was awarded by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program; the project also used resources of the Environmental Molecular Sciences Laboratory, a DOE Office of Science User Facility at Pacific Northwest National Laboratory.

    See the full article here.

    Please help promote STEM in your local schools.
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 3:29 pm on January 4, 2017
    Tags: ANL-ALCF, Wind studies

    From ALCF: “Supercomputer simulations helping to improve wind predictions” 

    Argonne Lab
    News from Argonne National Laboratory

    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    January 3, 2017
    Katie Jones

    Station locations and lists of instruments deployed within the Columbia River Gorge, Columbia River Basin, and surrounding region. Credit: James Wilczak, NOAA

    A research team led by the National Oceanic and Atmospheric Administration (NOAA) is performing simulations at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, to develop numerical weather prediction models that can provide more accurate wind forecasts in regions with complex terrain. The team, funded by DOE in support of its Wind Forecast Improvement Project II (WFIP 2), is testing and validating the computational models with data being collected from a network of environmental sensors in the Columbia River Gorge region.

    Wind turbines dotting the Columbia River Gorge in Washington and Oregon can collectively generate about 4,500 megawatts (MW) of power, more than the output of five 800-MW nuclear power plants. However, the gorge region and its dramatic topography create highly variable wind conditions, posing a challenge for utility operators who use weather forecast models to predict when wind power will be available on the grid.

    If predictions are unreliable, operators must depend on steady power sources like coal and nuclear plants to meet demand. Because they take a long time to fuel and heat, conventional power plants operate on less flexible timetables and can generate power that is then wasted if wind energy unexpectedly floods the grid.

    To produce accurate wind predictions over complex terrain, researchers are using Mira, the ALCF’s 10-petaflops IBM Blue Gene/Q supercomputer, to increase resolution and improve physical representations to better simulate wind features in national forecast models. In a unique intersection of field observation and computer simulation, the research team has installed and is collecting data from a network of environmental instruments in the Columbia River Gorge region that is being used to test and validate model improvements.

    This research is part of the Wind Forecast Improvement Project II (WFIP 2), an effort sponsored by DOE in collaboration with NOAA, Vaisala—a manufacturer of environmental and meteorological equipment—and a number of national laboratories and universities. DOE aims to increase U.S. wind energy from five to 20 percent of total energy use by 2020, which means optimizing how wind is used on the grid.

    “Our goal is to give utility operators better forecasts, which could ultimately help make the cost of wind energy a little cheaper,” said lead model developer Joe Olson of NOAA. “For example, if the forecast calls for a windy day but operators don’t trust the forecast, they won’t be able to turn off coal plants, which are releasing carbon dioxide when maybe there was renewable wind energy available.”

    The complicated physics of wind

    For computational efficiency, existing forecast models assume the Earth’s surface is relatively flat—which works well at predicting wind on flat terrain in the central United States, where states like Texas and Iowa generate many thousands of megawatts of wind power. Yet, as the Columbia River Gorge region demonstrates, some of the ripest locations for harnessing wind energy could be along mountains and coastlines where conditions are difficult to predict.

    “There are a lot of complications predicting wind conditions for terrain with a high degree of complexity at a variety of spatial scales,” Olson said.

    Two major challenges are a model resolution that is too low to resolve wind features in sharp valleys and mountain gaps, and a lack of observational data.

    At the NOAA National Centers for Environmental Prediction, two atmospheric models run around the clock to provide national weather forecasts: the 13-km Rapid Refresh (RAP) and the 3-km High-Resolution Rapid Refresh (HRRR). Only a couple of years old, the HRRR model has improved storm and winter weather predictions by resolving atmospheric features at 9 km²—or about 2.5 times the size of Central Park in New York City.

    At a resolution of a few kilometers, HRRR can capture processes at the mesoscale—about the size of storms—but cannot resolve features at the microscale, which is a few hundred feet. Some phenomena important to wind prediction that cannot be modeled in RAP or HRRR include mountain wakes (the splitting of airflow obstructed by the side of a mountain); mountain waves (the oscillation of air flow on the side of the mountain that affects cloud formation and turbulence); and gap flow (small-scale winds that can blow strongly through gaps in mountains and gorge ridges).

    The 750-meter leap

    To make wind predictions that are sufficiently accurate for utility operators, Olson said they need to model physical parameters at a 750-m resolution—about one-sixth the size of Central Park, or an average wind farm. This 16-times increase in resolution will require a lot of real-world data for model testing and validation, which is why the WFIP 2 team outfitted the Columbia River Gorge region with more than 20 environmental sensor stations.
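
    A quick back-of-the-envelope check of those numbers, sketched below; the only inputs taken from the article are the 3-km and 750-m grid spacings, and the true compute cost grows faster still because time steps must shrink along with the grid.

```python
# Back-of-the-envelope check of the resolution figures quoted above.
hrrr_dx_km = 3.0      # current HRRR horizontal grid spacing
target_dx_km = 0.75   # WFIP 2 target grid spacing (750 m)

refinement = hrrr_dx_km / target_dx_km      # 4x finer in each horizontal direction
columns_per_hrrr_cell = refinement ** 2     # 16x more grid columns over the same area

print(f"linear refinement: {refinement:.0f}x")
print(f"grid columns per old cell: {columns_per_hrrr_cell:.0f}")
print(f"new cell area: {target_dx_km**2:.4f} km^2 vs {hrrr_dx_km**2:.0f} km^2")
```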

    “We haven’t been able to identify all the strengths and weaknesses for wind predictions in the model because we haven’t had a complete, detailed dataset,” Olson said. “Now we have an expansive network of wind profilers and other weather instruments. Some are sampling wind in mountain gaps and valleys, others are on ridges. It’s a multiscale network that can capture the high-resolution aspects of the flow, as well as the broader mesoscale flows.”

    Many of the sensors send data every 10 minutes. Considering data will be collected for an 18-month period that began in October 2015 and ends March 2017, this steady stream will ultimately amount to about half a petabyte. The observational data is initially sent to Pacific Northwest National Laboratory where it is stored until it is used to test model parameters on Mira at Argonne.

    The WFIP 2 research team needed Mira’s highly parallel architecture to simulate an ensemble of about 20 models with varied parameterizations. ALCF researchers Ray Loy and Ramesh Balakrishnan worked with the team to optimize the HRRR architectural configuration and craft a strategy that allowed them to run the necessary ensemble jobs.

    “We wanted to run on Mira because ALCF has experience using HRRR for climate simulations and running ensemble jobs that would allow us to compare the models’ physical parameters,” said Rao Kotamarthi, chief scientist and department head of Argonne’s Climate and Atmospheric Science Department. “The ALCF team helped to scale the model to Mira and instructed us on how to bundle jobs so they avoid interrupting workflow, which is important for a project that often has new data coming in.”

    The ensemble approach allowed the team to create case studies that are used to evaluate how each simulation compared to observational data.

    “We pick certain case studies where the model performs very poorly, and we go back and change the physics in the model until it improves, and we keep doing that for each case study so that we have significant improvement across many scenarios,” Olson said.

    At the end of the field data collection, the team will simulate an entire year of weather conditions with an emphasis on wind in the Columbia River Gorge region using the control model—the 3-km HRRR model before any modifications were made—and a modified model with the improved physical parameterizations.

    “That way, we’ll be able to get a good measure of how much has improved overall,” Olson said.

    Computing time on Mira was awarded through the ASCR Leadership Computing Challenge (ALCC). Collaborating institutions include DOE’s Wind Energy Technologies Office, NOAA, Argonne, Pacific Northwest National Laboratory, Lawrence Livermore National Laboratory, the National Renewable Energy Laboratory, the University of Colorado, the University of Notre Dame, Texas Tech University, and Vaisala.

    See the full article here.

    Please help promote STEM in your local schools.
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 3:28 pm on November 23, 2016
    Tags: ANL-ALCF, Computerworld

    From ALCF via Computerworld: “U.S. sets plan to build two exascale supercomputers” 

    Argonne Lab
    News from Argonne National Laboratory

    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    COMPUTERWORLD

    Nov 21, 2016
    Patrick Thibodeau

    The U.S. believes it will be ready to seek vendor proposals to build two exascale supercomputers — costing roughly $200 million to $300 million each — by 2019.

    The two systems will be built at the same time and will be ready for use by 2023, although it’s possible one of the systems could be ready a year earlier, according to U.S. Department of Energy officials.

    But the scientists and vendors developing exascale systems do not yet know whether President-Elect Donald Trump’s administration will change directions. The incoming administration is a wild card. Supercomputing wasn’t a topic during the campaign, and Trump’s dismissal of climate change as a hoax, in particular, has researchers nervous that science funding may suffer.

    At the annual supercomputing conference SC16 last week in Salt Lake City, a panel of government scientists outlined the exascale strategy developed by President Barack Obama’s administration. When the session was opened to questions, the first two were about Trump. One attendee quipped that “pointed-head geeks are not going to be well appreciated.”

    Another person in the audience, John Sopka, a high-performance computing software consultant, asked how the science community will defend itself from claims that “you are taking the money from the people and spending it on dreams,” referring to exascale systems.

    Paul Messina, a computer scientist and distinguished fellow at Argonne National Laboratory who heads the Exascale Computing Project, appeared sanguine. “We believe that an important goal of the exascale computing project is to help economic competitiveness and economic security,” said Messina. “I could imagine that the administration would think that those are important things.”

    Politically, there ought to be a lot in HPC’s favor. A broad array of industries rely on government supercomputers to conduct scientific research, improve products, attack disease, create new energy systems and understand climate, among many other fields. Defense and intelligence agencies also rely on large systems.

    The ongoing exascale research funding (the U.S. budget is $150 million this year) will help with advances in software, memory, processors and other technologies that ultimately filter out to the broader commercial market.

    This is very much a global race, which is something the Trump administration will have to be mindful of. China, Europe and Japan are all developing exascale systems.

    China plans to have an exascale system ready by 2020. These nations see exascale — and the computing advances required to achieve it — as a pathway to challenging America’s tech dominance.

    “I’m not losing sleep over it yet,” said Messina, of the possibility that the incoming Trump administration may have different supercomputing priorities. “Maybe I will in January.”

    The U.S. will award the exascale contracts to vendors with two different architectures. This is not a new approach and is intended to help keep competition at the highest end of the market. Recent supercomputer procurements include systems built on the IBM Power architecture, Nvidia’s Volta GPU and Cray-built systems using Intel chips.

    The timing of these exascale systems — ready for 2023 — is also designed to take advantage of the upgrade cycles at the national labs. The large systems that will be installed in the next several years will be ready for replacement by the time exascale systems arrive.

    The last big performance milestone in supercomputing occurred in 2008 with the development of a petaflop system. An exaflop is 1,000 petaflops, and building a system that fast is challenging because of the limits of Moore’s Law, a 1960s-era observation that the number of transistors on a chip doubles about every two years.

    “Now we’re at the point where Moore’s Law is just about to end,” said Messina in an interview. That means the key to building something faster “is by having much more parallelism, and many more pieces. That’s how you get the extra speed.”

    An exascale system will solve a problem 50 times faster than the 20-petaflop systems in use in government labs today.

    Development work has begun on the systems and applications that can utilize hundreds of millions of simultaneous parallel events. “How do you manage it — how do you get it all to work smoothly?” said Messina.

    Another major problem is energy consumption. An exascale machine can be built today using current technology, but such a system would likely need its own power plant. The U.S. wants an exascale system that can operate on 20 megawatts and certainly no more than 30 megawatts.

    Scientists will have to come up with a way “to vastly reduce the amount of energy it takes to do a calculation,” said Messina. The applications and software development are critical because most of the energy is used to move data. And new algorithms will be needed.
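
    To put that power target in perspective, a rough sketch of the arithmetic follows; it assumes the machine runs flat out at the full 20 megawatts, and the only inputs taken from the article are the exaflop rate and the power figure.

```python
# Rough energy budget implied by the 20-megawatt target for an exascale machine.
exaflops = 1e18          # operations per second at one exaflop
power_watts = 20e6       # 20 MW power envelope mentioned above

joules_per_op = power_watts / exaflops
print(f"energy budget per operation: {joules_per_op * 1e12:.0f} picojoules")

# For comparison: annual energy use at that constant draw, in gigawatt-hours.
hours_per_year = 24 * 365
print(f"annual consumption: {power_watts * hours_per_year / 1e9:.0f} GWh")
```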

    About 500 people are working at universities and national labs on the DOE’s coordinated effort to develop the software and other technologies exascale will need.

    Aside from the cost of building the systems, the U.S. will spend millions funding the preliminary work. Vendors want to maintain the intellectual property of what they develop. If it costs, for instance, $50 million to develop a certain aspect of a system, the U.S. may ask the vendor to pay 40% of that cost if it wants to keep the intellectual property.

    A key goal of the U.S. research funding is to avoid creation of one-off technologies that can only be used in these particular exascale systems.

    “We have to be careful,” Terri Quinn, a deputy associate director for HPC at Lawrence Livermore National Laboratory, said at the SC16 panel session. “We don’t want them (vendors) to give us capabilities that are not sustainable in a business market.”

    The work under way will help ensure that the technology research is far enough along to enable the vendors to respond to the 2019 request for proposals.

    Supercomputers can deliver advances in modeling and simulation. Instead of building physical prototypes of something, a supercomputer can allow modeling virtually. This can speed the time it takes something to get to market, whether a new drug or car engine. Increasingly, HPC is used in big data and is helping improve cybersecurity through rapid analysis; artificial intelligence and robotics are other fields with strong HPC demand.

    China will likely beat the U.S. in developing an exascale system, but the real test will be how useful those systems turn out to be.

    Messina said the U.S. approach is to develop an exascale ecosystem involving vendors, universities and the government. The hope is that the exascale systems will not only have a wide range of applications ready for them, but applications that are relatively easy to program. Messina wants to see these systems quickly put to immediate and broad use.

    “Economic competitiveness does matter to a lot of people,” said Messina.

    See the full article here.

    Please help promote STEM in your local schools.
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 12:59 pm on November 12, 2016
    Tags: ANL-ALCF

    From Argonne Leadership Computing Facility: “Exascale Computing Project announces $48 million to establish four exascale co-design centers” 

    ANL Lab
    News from Argonne National Laboratory

    Argonne Leadership Computing Facility
    A DOE Office of Science user facility

    November 11, 2016
    Mike Bernhardt

    Co-design and integration of hardware, software, applications and platforms is essential to deploying exascale-class systems that will meet the future requirements of the scientific communities these systems will serve. Credit: Andre Schleife, UIUC

    The U.S. Department of Energy’s (DOE’s) Exascale Computing Project (ECP) today announced that it has selected four co-design centers as part of a 4-year, $48 million funding award. The first year is funded at $12 million, to be allocated evenly among the four award recipients.

    The ECP is responsible for the planning, execution and delivery of technologies necessary for a capable exascale ecosystem to support the nation’s exascale imperative, including software, applications, hardware and early testbed platforms.

    Exascale refers to computing systems at least 50 times faster than the nation’s most powerful supercomputers in use today.

    According to Doug Kothe, ECP Director of Application Development: “Co-design lies at the heart of the Exascale Computing Project. ECP co-design, an intimate interchange of the best that hardware technologies, software technologies and applications have to offer each other, will be a catalyst for delivery of exascale-enabling science and engineering solutions for the U.S.”

    “By targeting common patterns of computation and communication, known as ‘application motifs,’ we are confident that these ECP co-design centers will knock down key performance barriers and pave the way for applications to exploit all that capable exascale has to offer,” he said.

    The development of capable exascale systems requires an interdisciplinary engineering approach in which the developers of the software ecosystem, the hardware technology and a new generation of computational science applications are collaboratively involved in a participatory design process referred to as co-design.

    The co-design process is paramount to ensuring that future exascale applications adequately reflect the complex interactions and trade-offs associated with the many new—and sometimes conflicting—design options, enabling these applications to tackle problems they currently can’t address.

    According to ECP Director Paul Messina, “The establishment of these and future co-design centers is foundational to the creation of an integrated, usable and useful exascale ecosystem. After a lengthy review, we are pleased to announce that we have initially selected four proposals for funding. The establishment of these co-design centers, following on the heels of our recent application development awards, signals the momentum and direction of ECP as we bring together the necessary ecosystem and infrastructure to drive the nation’s exascale imperative.”

    The four selected co-design proposals and their principal investigators are as follows:

    CODAR: Co-Design Center for Online Data Analysis and Reduction at the Exascale

    Principal Investigator: Ian Foster, Argonne National Laboratory Distinguished Fellow

    This co-design center will focus on overcoming the rapidly growing gap between compute speed and storage input/output rates by evaluating, deploying and integrating novel online data analysis and reduction methods for the exascale. Working closely with Exascale Computing Project applications, CODAR will undertake a focused co-design process that targets both common and domain-specific data analysis and reduction methods, with the goal of allowing application developers to choose and configure methods to output just the data needed by the application. CODAR will engage directly with providers of ECP hardware, system software, programming models, data analysis and reduction algorithms and applications in order to better understand and guide tradeoffs in the development of exascale systems, applications and software frameworks, given constraints relating to application development costs, application fidelity, performance portability, scalability and power efficiency.
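
    As a hedged illustration of the general idea of online data reduction — analyze or thin simulation output in memory and write only the reduced form, rather than dumping every raw value to disk — here is a minimal, hypothetical sketch. The function names, the stride-based downsampling and the quantize-then-compress step are illustrative assumptions, not CODAR’s actual interfaces.

```python
import zlib

def downsample(frame, stride):
    """Keep every `stride`-th value of a simulation output frame."""
    return frame[::stride]

def reduce_and_pack(frame, stride=4, level=6):
    """Apply a cheap reduction, then lossless compression, before any I/O."""
    reduced = downsample(frame, stride)
    # Quantize each value to a 4-byte integer (coarse, lossy, but compact).
    payload = b"".join(int(v * 1000).to_bytes(4, "little", signed=True) for v in reduced)
    return zlib.compress(payload, level)

# Simulated loop: analyze each step in memory, write only the reduced form.
full_bytes = reduced_bytes = 0
for step in range(10):
    frame = [((i * step) % 97) / 97.0 for i in range(4096)]   # stand-in for field data
    packed = reduce_and_pack(frame)
    full_bytes += len(frame) * 8          # what a raw double-precision dump would cost
    reduced_bytes += len(packed)

print(f"raw output: {full_bytes} bytes, written after reduction: {reduced_bytes} bytes")
```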

    “Argonne is pleased to be leading CODAR efforts in support of the Exascale Computing Project,” said Argonne Distinguished Fellow Ian Foster. “We aim in CODAR to co-optimize applications, data services and exascale platforms to deliver the right bits in the right place at the right time.”

    Block-Structured AMR Co-Design Center

    Principal Investigator: John Bell, Lawrence Berkeley National Laboratory

    The Block-Structured Adaptive Mesh Refinement Co-Design Center will be led by Lawrence Berkeley National Laboratory with support from Argonne National Laboratory and the National Renewable Energy Laboratory. The goal is to develop a new framework, AMReX, to support the development of block-structured adaptive mesh refinement algorithms for solving systems of partial differential equations with complex boundary conditions on exascale architectures. Block-structured adaptive mesh refinement provides a natural framework in which to focus computing power on the most critical parts of the problem in the most computationally efficient way possible. Block-structured AMR is already widely used to solve many problems relevant to DOE. Specifically, at least six of the 22 exascale application projects announced last month—in the areas of accelerators, astrophysics, combustion, cosmology, multiphase flow and subsurface flow—will rely on block-structured AMR as part of the ECP.
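
    The refinement idea itself is simple to illustrate; the toy, one-dimensional sketch below flags cells where the solution changes sharply and subdivides only those. It is an illustration of the concept under assumed inputs, not the AMReX framework or its API.

```python
def refine(cells, values, threshold):
    """Split any cell whose jump relative to its left neighbor exceeds `threshold`."""
    new_cells = []
    for i, (lo, hi) in enumerate(cells):
        jump = abs(values[i] - values[i - 1]) if i > 0 else 0.0
        if jump > threshold:
            mid = 0.5 * (lo + hi)
            new_cells.extend([(lo, mid), (mid, hi)])   # refine: two finer cells
        else:
            new_cells.append((lo, hi))                 # keep the coarse cell
    return new_cells

# Coarse 1D grid with a sharp front near x = 0.5; refinement clusters there.
n = 16
cells = [(i / n, (i + 1) / n) for i in range(n)]
values = [0.0 if (lo + hi) / 2 < 0.5 else 1.0 for lo, hi in cells]
fine = refine(cells, values, threshold=0.5)
print(f"{len(cells)} coarse cells -> {len(fine)} cells after one refinement pass")
```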

    “This co-design center reflects the important role of adaptive mesh refinement in accurately simulating problems at scales ranging from the edges of flames to global climate to the makeup of the universe, and how AMR will be critical to tackling problems at the exascale,” said David Brown, director of Berkeley Lab’s Computational Research Division. “It’s also important to note that AMR will be a critical component in a third of the 22 exascale application projects announced in September, which will help ensure that researchers can make productive use of exascale systems when they are deployed.”

    Center for Efficient Exascale Discretizations (CEED)

    Principal Investigator: Tzanio Kolev, Lawrence Livermore National Laboratory

    Fully exploiting future exascale architectures will require a rethinking of the algorithms used in the large-scale applications that advance many science areas vital to DOE and the National Nuclear Security Administration (NNSA), such as global climate modeling, turbulent combustion in internal combustion engines, nuclear reactor modeling, additive manufacturing, subsurface flow and national security applications. The newly established Center for Efficient Exascale Discretizations aims to help these DOE and NNSA applications to take full advantage of exascale hardware by using state-of-the-art ‘high-order discretizations’ that provide an order of magnitude performance improvement over traditional methods.

    In simple mathematical terms, discretization denotes the process of dividing a geometry into finite elements, or building blocks, in preparation for analysis. This process, which can dramatically improve application performance, involves making simplifying assumptions to reduce demands on the computer, but with minimal loss of accuracy. Recent developments in supercomputing make it increasingly clear that the high-order discretizations, which CEED is focused on, have the potential to achieve optimal performance and deliver fast, efficient and accurate simulations on exascale systems.

    The CEED Co-Design Center is a research partnership of two DOE labs and five universities. Partners include Lawrence Livermore National Laboratory; Argonne National Laboratory; the University of Illinois Urbana-Champaign; Virginia Tech; the University of Tennessee, Knoxville; the University of Colorado Boulder; and Rensselaer Polytechnic Institute.

    “The CEED team I have the privilege to lead is dedicated to the development of next-generation discretization software and algorithms that will enable a wide range of applications to run efficiently on future hardware,” said CEED director Tzanio Kolev of Lawrence Livermore National Laboratory. “Our co-design center is focused first and foremost on applications. We bring to this enterprise a collaborative team of application scientists, computational mathematicians and computer scientists with a strong track record of delivering performant software on leading-edge platforms. Collectively, we support hundreds of users in national labs, industry and academia, and we are committed to pushing simulation capabilities to new levels across an ever-widening range of applications.”

    Co-design center for Particle Applications (CoPA)

    Principal Investigator: Tim Germann, Los Alamos National Laboratory

    This co-design center will serve as a centralized clearinghouse for particle-based ECP applications, communicating their requirements and evaluating potential uses and benefits of ECP hardware and software technologies using proxy applications. Particle-based simulation approaches are ubiquitous in computational science and engineering, and they involve the interaction of each particle with its environment by direct particle-particle interactions at shorter ranges and/or by particle-mesh interactions with a local field that is set up by longer-range effects. Best practices in code portability, data layout and movement, and performance optimization will be developed and disseminated via sustainable, productive and interoperable co-designed numerical recipes for particle-based methods that meet the application requirements within the design space of software technologies and subject to exascale hardware constraints. The ultimate goal is the creation of scalable open exascale software platforms suitable for use by a variety of particle-based simulations.
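
    For a concrete sense of the “direct particle-particle interactions at shorter ranges” mentioned above, here is a minimal sketch using a Lennard-Jones pair potential with a cutoff. It is illustrative only, not CoPA code: production codes replace the O(N²) loop with cell or neighbor lists and add a particle-mesh term for long-range effects.

```python
import math

def pair_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential, a common short-range interaction model."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def total_short_range_energy(positions, cutoff=2.5):
    """Direct particle-particle sum, skipping pairs beyond the cutoff."""
    energy = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            dz = positions[i][2] - positions[j][2]
            r = math.sqrt(dx * dx + dy * dy + dz * dz)
            if r < cutoff:
                energy += pair_energy(r)
    return energy

# Small cubic lattice of particles as a stand-in for a real configuration.
spacing = 1.1
positions = [(i * spacing, j * spacing, k * spacing)
             for i in range(3) for j in range(3) for k in range(3)]
print(f"short-range energy of {len(positions)} particles: "
      f"{total_short_range_energy(positions):.3f}")
```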

    “Los Alamos is delighted to be leading the Co-Design Center for Particle-Based Methods: From Quantum to Classical, Molecular to Cosmological, which builds on the success of ExMatEx, the Exascale CoDesign Center for Materials in Extreme Environments,” said John Sarrao, Associate Director for Theory, Simulation, and Computation at Los Alamos. “Advancing deterministic particle-based methods is essential for simulations at the exascale, and Los Alamos has long believed that co-design is the right approach for advancing these frontiers. We look forward to partnering with our colleague laboratories in successfully executing this important element of the Exascale Computing Project.”

    About ECP

    The ECP is a collaborative effort of two DOE organizations — the Office of Science and the National Nuclear Security Administration. As part of President Obama’s National Strategic Computing initiative, ECP was established to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures and workforce development to meet the scientific and national security mission needs of DOE in the mid-2020s timeframe.

    About the Office of Science

    DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov/.

    About NNSA

    Established by Congress in 2000, NNSA is a semi-autonomous agency within DOE, responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security and effectiveness of the U.S. nuclear weapons stockpile without nuclear explosive testing; works to reduce the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the United States and abroad.

    See the full article here.

    Please help promote STEM in your local schools.
    Stem Education Coalition
    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit http://science.energy.gov/user-facilities/basic-energy-sciences/.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 7:56 am on September 8, 2016
    Tags: ANL-ALCF, Two Argonne-led projects among $39.8 million in first-round Exascale Computing Project awards

    From ALCF at ANL: “Two Argonne-led projects among $39.8 million in first-round Exascale Computing Project awards” 

    ANL Lab

    News from Argonne National Laboratory

    September 7, 2016
    Brian Grabowski and Jared Sagoff

    The U.S. Department of Energy’s (DOE’s) Exascale Computing Project (ECP) today announced its first round of funding with the selection of 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations.

    Exascale refers to high-performance computing systems capable of at least a billion billion calculations per second, or a factor of 50 to 100 times faster than the nation’s most powerful supercomputers in use today.

    The 15 awards being announced total $39.8 million, targeting advanced modeling and simulation solutions to specific challenges supporting key DOE missions in science, clean energy and national security, as well as collaborations such as the Precision Medicine Initiative with the National Institutes of Health’s National Cancer Institute.

    Of the proposals announced that are receiving full funding, two are being led by principal investigators at the DOE’s Argonne National Laboratory:

    Computing the Sky at Extreme Scales equips cosmologists with the ability to design foundational simulations to create “virtual universes” on demand at the extreme fidelities demanded by future multi-wavelength sky surveys. The new discoveries that will emerge from the combination of sky surveys and advanced simulation provided by the ECP will shed more light on three key ingredients of our universe: dark energy, dark matter and inflation. All three of these concepts reach beyond the known boundaries of the Standard Model of particle physics.

    Salman Habib, Principal Investigator, Argonne National Laboratory, with Los Alamos National Laboratory and Lawrence Berkeley National Laboratory.

    Exascale Deep Learning and Simulation Enabled Precision Medicine for Cancer focuses on building a scalable deep neural network code called the CANcer Distributed Learning Environment (CANDLE) that addresses three top challenges of the National Cancer Institute: understanding the molecular basis of key protein interactions, developing predictive models for drug response and automating the analysis and extraction of information from millions of cancer patient records to determine optimal cancer treatment strategies.

    Rick Stevens, Principal Investigator, Argonne National Laboratory, with Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Oak Ridge National Laboratory and the National Cancer Institute.

    Additionally, a third project led by Argonne will be receiving seed funding:

    Multiscale Coupled Urban Systems will create an integrated modeling framework comprising data curation, analytics, modeling and simulation components that will equip city designers, planners and managers to scientifically develop and evaluate solutions to issues that affect cities now and in the future. The framework will focus first on integrating urban atmosphere and infrastructure heat exchange and air flow; building energy demand at district or city-scale, generation and use; urban dynamics and socioeconomic models; population mobility and transportation; and hooks to expand to include energy systems (biofuels, electricity and natural gas) and water resources.

    Charlie Catlett, Principal Investigator, Argonne National Laboratory, with Lawrence Berkeley National Laboratory, National Renewable Energy Laboratory, Oak Ridge National Laboratory and Pacific Northwest National Laboratory.

    The application efforts will help guide DOE’s development of a U.S. exascale ecosystem as part of President Obama’s National Strategic Computing Initiative. DOE, the U.S. Department of Defense and the National Science Foundation have been designated as lead agencies, and ECP is the primary DOE contribution to the initiative.

    The ECP’s multiyear mission is to maximize the benefits of high-performance computing for U.S. economic competitiveness, national security and scientific discovery. In addition to applications, the DOE project addresses hardware, software, platforms and workforce development needs critical to the effective development and deployment of future exascale systems.

    Leadership of the ECP comes from six DOE national laboratories: the Office of Science’s Oak Ridge, Argonne and Lawrence Berkeley national labs and the National Nuclear Security Administration’s (NNSA’s) Lawrence Livermore, Los Alamos and Sandia national labs.

    The Exascale Computing Project is a collaborative effort of two DOE organizations — the Office of Science and the NNSA. As part of President Obama’s National Strategic Computing initiative, ECP was established to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development to meet the scientific and national security mission needs of DOE in the mid-2020s timeframe.

    Established by Congress in 2000, NNSA is a semi-autonomous agency within DOE responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear explosive testing; works to reduce the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the U.S. and abroad.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition
    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit http://science.energy.gov/user-facilities/basic-energy-sciences/.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 1:12 pm on August 5, 2016 Permalink | Reply
    Tags: ANL-ALCF, , , Self-Healing Diamond-Like Carbon   

    From ANL: “Argonne Discovery Yields Self-Healing Diamond-Like Carbon” 

    ANL Lab

    News from Argonne National Laboratory

    1

    Argonne Leadership Computing Facility

    August 5, 2016
    Katie Jones
    Greg Cunningham

    1
    Mira simulations also allowed researchers to look beyond the current study by virtually testing other potential catalysts (other metals and hydrocarbons in coatings and oils) for their “self-healing” properties in a high-temperature, high-pressure engine environment.
    Joseph Insley, Argonne National Laboratory

    Large-scale reactive molecular dynamics simulations carried out on the Mira supercomputer at the Argonne Leadership Computing Facility, along with experiments conducted by researchers in Argonne’s Energy Systems Division, enabled the design of a “self-healing,” anti-wear coating that dramatically reduces friction and related degradation in engines and moving machinery. Now, the computational work advanced for this purpose is being used to identify the friction-fighting potential of other catalysts.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Fans of Superman surely recall how the Man of Steel used immense heat and pressure generated by his bare hands to form a diamond out of a lump of coal.

    The tribologists – scientists who study friction, wear and lubrication – and computational materials scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory will probably never be mistaken for superheroes. However, they recently applied the same principles and discovered a revolutionary diamond-like film of their own that is generated by the heat and pressure typical of an automotive engine.

    The discovery of this ultra-durable, self-lubricating tribofilm – a film that forms between moving surfaces – was first reported today in the journal Nature. It could have profound implications for the efficiency and durability of future engines and other moving metal parts that can be made to develop self-healing, diamond-like carbon (DLC) tribofilms.

    “This is a very unique discovery, and one that was a little unexpected,” said Ali Erdemir, the Argonne Distinguished Fellow who leads the team. “We have developed many types of diamond-like carbon coatings of our own, but we’ve never found one that generates itself by breaking down the molecules of the lubricating oil and can actually regenerate the tribofilm as it is worn away.”

    The phenomenon was first discovered several years ago by Erdemir and his colleague Osman Levent Eryilmaz in the Tribology and Thermal-Mechanics Department in Argonne’s Center for Transportation Research. But it took theoretical insight enhanced by the massive computing resources available at Argonne to fully understand what was happening at the molecular level in the experiments. The theoretical understanding was provided by lead theoretical researcher Subramanian Sankaranarayanan and postdoctoral researcher Badri Narayanan from the Center for Nanoscale Materials (CNM), while the computing power was provided by the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. CNM, ALCF and NERSC are all DOE Office of Science User Facilities.

    The original discovery occurred when Erdemir and Eryilmaz decided to see what would happen when a small steel ring was coated with a catalytically active nanocoating – tiny molecules of metals that promote chemical reactions to break down other materials – then subjected to high pressure and heat using a base oil without the complex additives of modern lubricants. When they looked at the ring after the endurance test, they didn’t see the expected rust and surface damage, but an intact ring with an odd blackish deposit on the contact area.

    “This test creates extreme contact pressure and temperatures, which are supposed to cause the ring to wear and eventually seize,” said Eryilmaz. “But this ring didn’t significantly wear and this blackish deposit was visible. We said ‘this material is strange. Maybe this is what is causing this unusual effect.’”

    Looking at the deposit using high-powered optical and laser Raman microscopes, the experimentalists realized the deposit was a tribofilm of diamond-like carbon, similar to several other DLCs developed at Argonne in the past. But it worked even better. Tests revealed that the DLC tribofilm reduced friction by 25 to 40 percent and that wear was reduced to unmeasurable values.

    Further experiments, led by postdoctoral researcher Giovanni Ramirez, revealed that multiple types of catalytic coatings can yield DLC tribofilms. The experiments showed the coatings interact with the oil molecules to create the DLC film, which adheres to the metal surfaces. When the tribofilm is worn away, the catalyst in the coating is re-exposed to the oil, causing the catalysis to restart and develop new layers of tribofilm. The process is self-regulating, keeping the film at consistent thickness. The scientists realized the film was developing spontaneously between the sliding surfaces and was replenishing itself, but they needed to understand why and how.
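    One way to picture that self-regulation is as a balance between catalytic growth, which slows as the thickening film covers the catalyst, and steady mechanical wear. The toy Python model below is an illustrative assumption of this summary, not a result from the study; it simply integrates that balance and settles at a constant thickness.

    import numpy as np

    # Toy balance: growth happens only where the catalyst shows through a thin film; wear is steady.
    k_grow, h_scale, k_wear, dt = 2.0, 10.0, 1.0, 0.01  # arbitrary units
    h = 0.0                                             # film thickness
    for _ in range(5000):
        growth = k_grow * np.exp(-h / h_scale)    # catalyst exposure, and hence growth, drops as the film thickens
        h = max(0.0, h + dt * (growth - k_wear))  # wear removes material at a roughly constant rate

    # The thickness settles where growth balances wear: h* = h_scale * ln(k_grow / k_wear).
    print(f"simulated steady thickness ~ {h:.2f}, analytic balance point ~ {h_scale * np.log(k_grow / k_wear):.2f}")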

    To provide the theoretical understanding of what the tribology team was seeing in its experiments, they turned to Sankaranarayanan and Narayanan, who used the immense computing power of ALCF’s 10-petaflop supercomputer, Mira. They ran large-scale simulations to understand what was happening at the atomic level, and determined that the catalyst metals in the nanocomposite coatings were stripping hydrogen atoms from the hydrocarbon chains of the lubricating oil, then breaking the chains down into smaller segments. The smaller chains joined together under pressure to create the highly durable DLC tribofilm.
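    The dehydrogenation described above is the kind of event one would count when post-processing such a trajectory. The Python sketch below tallies how many hydrogens remain within a C-H bonding distance of any carbon, frame by frame; the random coordinates, the cutoff value and the neglect of periodic boundaries are all simplifying assumptions for illustration, and this is not the team’s analysis code.

    import numpy as np

    def count_ch_bonds(c_pos, h_pos, cutoff=1.2):
        """Count hydrogens lying within a C-H bonding distance (cutoff in angstroms) of some carbon."""
        d = np.linalg.norm(h_pos[:, None, :] - c_pos[None, :, :], axis=-1)  # pairwise H-C distances
        return int(np.count_nonzero(d.min(axis=1) < cutoff))

    # Hypothetical trajectory: per-frame (carbon, hydrogen) coordinate arrays of shape (n_atoms, 3).
    rng = np.random.default_rng(1)
    trajectory = [(rng.uniform(0, 20, (50, 3)), rng.uniform(0, 20, (120, 3))) for _ in range(5)]

    bonded_h = [count_ch_bonds(c, h) for c, h in trajectory]
    print("C-H bonded hydrogens per frame:", bonded_h)  # a downward trend would signal dehydrogenation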

    “This is an example of catalysis under extreme conditions created by friction. It is opening up a new field where you are merging catalysis and tribology, which has never been done before,” said Sankaranarayanan. “This new field of tribocatalysis has the potential to change the way we look at lubrication.”

    The theorists explored the origins of the catalytic activity to understand how catalysis operates under the extreme heat and pressure in an engine. By gaining this understanding, they were able to predict which catalysts would work, and which would create the most advantageous tribofilms.

    “Interestingly, we found several metals or composites that we didn’t think would be catalytically active, but under these circumstances, they performed quite well,” said Narayanan. “This opens up new pathways for scientists to use extreme conditions to enhance catalytic activity.”

    Using the LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) code, Sankaranarayanan and Narayanan modeled as many as two million atoms per simulation, making this one of the few atomistic studies of friction — of any kind, not just tribocatalysis — at this scale. Millions of time steps per simulation enabled researchers to identify the initial catalytic processes that occur within nanoseconds of machine operation and cannot be readily observed under experimental conditions.
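    As a quick sanity check on those scales, and assuming a typical sub-femtosecond reactive-MD timestep of 0.25 fs (the article does not state the value), a few million steps does correspond to a few nanoseconds of simulated time:

    # Back-of-envelope conversion; the 0.25 fs timestep is an assumption, not from the article.
    timestep_fs = 0.25
    steps = 4_000_000
    print(f"{steps:,} steps x {timestep_fs} fs = {steps * timestep_fs / 1e6:.1f} ns of simulated time")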

    “Tribology has traditionally been an applied field, but the advent of supercomputers like Mira is now allowing us to gain fundamental insights into the complex reactions that are at play at the tribological interfaces,” Sankaranarayanan said.

    Mira simulations also allowed researchers to look beyond the current study by virtually testing other potential catalysts (other metals and hydrocarbons in coatings and oils) for their “self-healing” properties in a high-temperature, high-pressure engine environment.

    “This study has profound implications for pushing the frontiers of atomistic modeling towards rapid, predictive design and discovery of next-generation, anti-wear lubricants,” Narayanan said.

    With the help of ALCF staff in 2015, a team of domain and computational scientists worked to improve LAMMPS performance. The improvements targeted several parts of the code, including the ReaxFF module, an add-on package used to model the chemical reactions occurring in the system.

    In collaboration with researchers from IBM, Lawrence Berkeley National Laboratory (LBNL), and Sandia National Laboratories, ALCF optimized LAMMPS by replacing Message Passing Interface (MPI) point-to-point communication with MPI collectives in key algorithms, making use of MPI I/O, and adding OpenMP threading to the ReaxFF module. These enhancements doubled the code’s performance.
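    The point-to-point-to-collective change is easiest to see in a small example. The Python/mpi4py sketch below, which is only an illustration in a different language than the C++ LAMMPS code, computes the same global sum twice: first with explicit send/recv calls funneled through rank 0 plus a broadcast, then with a single allreduce, the collective that replaces that hand-rolled pattern.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    local = float(rank + 1)  # stand-in for a per-rank partial result

    # Point-to-point pattern: every rank sends to rank 0, which accumulates and broadcasts back.
    if rank == 0:
        total = local
        for src in range(1, size):
            total += comm.recv(source=src, tag=0)
    else:
        comm.send(local, dest=0, tag=0)
    total = comm.bcast(total if rank == 0 else None, root=0)

    # Collective pattern: one call does the same reduction with an algorithm chosen by the MPI library.
    total_collective = comm.allreduce(local, op=MPI.SUM)

    if rank == 0:
        print(f"point-to-point total = {total}, allreduce total = {total_collective}")

    Run under an MPI launcher (for example, mpiexec -n 4 python on a script containing the sketch), both totals agree; the collective version also gives the library room to pick a reduction algorithm suited to the machine, which is part of why the swap pays off at scale.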

    Contributors to the code optimization work included Paul Coffman, Wei Jiang, Chris Knight, and Nichols A. Romero from the ALCF; Hasan Metin Aktulga from LBNL (now at Michigan State University); and Tzu-Ray Shan from Sandia (now at Materials Design, Inc.).

    The implications of the new tribofilm for efficiency and reliability of engines are huge. Manufacturers already use many different types of coatings – some developed at Argonne – for metal parts in engines and other applications. The problem is those coatings are expensive and difficult to apply, and once they are in use, they only last until the coating wears through. The new catalyst allows the tribofilm to be continually renewed during operation.

    Additionally, because the tribofilm develops in the presence of base oil, it could allow manufacturers to reduce, or possibly eliminate, some of the modern anti-friction and anti-wear additives in oil. These additives can decrease the efficiency of vehicle catalytic converters and can be harmful to the environment because of their heavy metal content.

    The results are published in Nature in a study titled “Carbon-based Tribofilms from Lubricating Oils.” The research was funded by DOE’s Office of Energy Efficiency & Renewable Energy.

    The team also includes microscopy expert Yifeng Liao and computational scientist Ganesh Kamath.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition
    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit http://science.energy.gov/user-facilities/basic-energy-sciences/.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 10:48 am on June 15, 2016 Permalink | Reply
    Tags: 3-D simulations illuminate supernova explosions, ANL-ALCF, , , ,   

    From Argonne: “3-D simulations illuminate supernova explosions” 

    Argonne Lab

    News from Argonne National Laboratory

    June 2, 2016
    Jim Collins

    1

    2
    Magnetohydrodynamic turbulence powered by neutrino-driven convection behind the stalled shock of a core-collapse supernova simulation. This simulation shows that the presence of rotation and weak magnetic fields dramatically impacts the development of the supernova mechanism as compared to nonrotating, nonmagnetic stars. The nascent neutron star is just barely visible in the center below the turbulent convection. (Image credit: Sean M. Couch, Michigan State University)

    In the landmark television series “Cosmos,” astronomer Carl Sagan famously proclaimed, “We are made of star stuff,” in reference to the ubiquitous impact of supernovas.

    Supernova in Messier 101, 2011. Image credit: NASA / Swift.

    Supernova remnant Crab nebula. NASA/ESA Hubble

    At the end of their life cycles, these massive stars explode in spectacular fashion, scattering their guts — which consist of carbon, iron and basically all other natural elements — across the cosmos. These elements go on to form new stars, solar systems and everything else in the universe — including the building blocks for life on Earth.

    Despite this fundamental role in cosmology, the mechanisms that drive supernova explosions are still not well understood.

    “If we want to understand the chemical evolution of the entire universe and how the stuff that we’re made of was processed and distributed throughout the universe, we have to understand the supernova mechanism,” said Sean Couch, assistant professor of physics and astronomy at Michigan State University.

    To shed light on this complex phenomenon, Couch is leading an effort to use Mira, the Argonne Leadership Computing Facility’s (ALCF’s) 10-petaflops supercomputer, to carry out some of the largest and most detailed three-dimensional (3-D) simulations ever performed of core-collapse supernovas.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    The ALCF is a U.S. Department of Energy (DOE) Office of Science User Facility.

    After millions of years of burning ever-heavier elements, these super-giant stars (at least eight solar masses, or eight times the mass of the sun) eventually run out of nuclear fuel and develop an iron core. No longer able to support themselves against their own immense gravitational pull, they start to collapse. But a process, not yet fully understood, intervenes that reverses the collapse and causes the star to explode.

    “What theorists like me are trying to understand is that in-between step,” Couch said. “How do we go from this collapsing iron core to an explosion?”

    Through his work at the ALCF, Couch and his team are developing and demonstrating a high-fidelity 3-D simulation approach that is providing a more realistic look at this “in-between step” than previous supernova simulations.

    While this 3-D method is still in its infancy, Couch’s early results have been promising. In 2015, his team published a paper in the Astrophysical Journal Letters, detailing their 3-D simulations of the final three minutes of iron core growth in a 15 solar-mass star. They found that more accurate representations of the star’s structure and the motion generated by turbulent convection (measured at several hundred kilometers per second) play a substantial role at the point of collapse.

    “Not surprisingly, we’re showing that more realistic initial conditions have a significant impact on the results,” Couch said.

    Adding another dimension

    Despite the fact that stars rotate, have magnetic fields and are not perfect spheres, most one- and two-dimensional supernova simulations to date have modeled nonrotating, nonmagnetic, spherically symmetrical stars. Scientists were forced to take this simplified approach because modeling supernovas is an extremely computationally demanding task. Such simulations involve highly complex multiphysics calculations and extreme timescales: the stars evolve over millions of years, yet the supernova mechanism occurs in a second.

    According to Couch, working with unrealistic initial conditions has led to difficulties in triggering robust and consistent explosions in simulations — a long-standing challenge in computational astrophysics.

    However, thanks to recent advances in computing hardware and software, Couch and his peers are making significant strides toward more accurate supernova simulations by employing the 3-D approach.

    The emergence of petascale supercomputers like Mira has made it possible to include high-fidelity treatments of rotation, magnetic fields and other complex physics processes that were not feasible in the past.

    “Generally when we’ve done these kinds of simulations in the past, we’ve ignored the fact that magnetic fields exist in the universe because when you add them into a calculation, it increases the complexity by about a factor of two,” Couch said. “But with our simulations on Mira, we’re finding that magnetic fields can add a little extra kick at just the right time to help push the supernova toward explosion.”

    On the software side, Couch continues to collaborate with ALCF computational scientists to improve the open-source FLASH code and its ability to simulate supernovas.

    But even with today’s high-performance computing hardware and software, it is not yet feasible to include high-fidelity treatments of all the relevant physics in a single simulation; that would require a future exascale system, Couch said.

    “Our simulations are only a first step toward truly realistic 3-D simulations of supernova,” he said. “But they are already providing a proof-of-principle that the final minutes of a massive star evolution can and should be simulated in 3-D.”

    The team’s results were published in Astrophysical Journal Letters in a 2015 paper titled “The Three-Dimensional Evolution to Core Collapse of a Massive Star.” The study also used computing resources at the Texas Advanced Computing Center at the University of Texas at Austin.

    Couch’s supernova research began at the ALCF with a Director’s Discretionary award and now continues with computing time awarded through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. This work is being funded by the DOE Office of Science (Advanced Scientific Computing Research) and the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition
    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit http://science.energy.gov/user-facilities/basic-energy-sciences/.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 3:53 pm on June 6, 2016 Permalink | Reply
    Tags: ANL-ALCF, Colleen Bertoni,   

    From Argonne: “ALCF announces its next Margaret Butler Fellow” Women in Science 

    News from Argonne National Laboratory

    June 6, 2016
    Jim Collins

    1
    Colleen Bertoni

    The Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility, has named Iowa State University doctoral student Colleen Bertoni as the next recipient of its Margaret Butler Fellowship in Computational Science.

    Bertoni, who was introduced today at Argonne National Laboratory’s annual Margaret Butler Celebration event, will join the ALCF this fall to advance quantum chemistry studies of liquid water and ion solvation by employing and optimizing ab initio-based fragmentation methods on the facility’s supercomputers.

    As a graduate student in Professor Mark Gordon’s quantum theory group, Bertoni has derived the expression for the analytical gradient of the effective fragment molecular orbital method, coded it in GAMESS, and applied the method to demonstrate its energy-conservation properties in dynamical simulations. She is expected to graduate this fall with a PhD in chemistry.
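    Demonstrating energy conservation in dynamical simulations usually comes down to tracking the total energy along a trajectory and checking how much it drifts. The generic Python check below runs on a hypothetical energy series; it is not GAMESS code or Bertoni’s implementation, and the units and numbers are invented.

    import numpy as np

    def relative_energy_drift(kinetic, potential, dt_fs):
        """Relative drift of total energy over a run, estimated from a linear fit."""
        total = np.asarray(kinetic) + np.asarray(potential)
        t = np.arange(total.size) * dt_fs
        slope = np.polyfit(t, total, 1)[0]           # energy units per fs
        return slope * t[-1] / np.abs(total.mean())  # drift over the whole run, relative to |E_total|

    # Hypothetical trajectory: kinetic and potential energies oscillate, but their sum should not drift.
    rng = np.random.default_rng(2)
    steps, dt_fs = 2000, 0.5
    ke = 0.05 + 0.001 * np.sin(np.linspace(0, 60, steps)) + 1e-5 * rng.normal(size=steps)
    pe = -76.3 - 0.001 * np.sin(np.linspace(0, 60, steps)) + 1e-5 * rng.normal(size=steps)
    print(f"relative energy drift over the run: {relative_energy_drift(ke, pe, dt_fs):.2e}")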

    The ALCF fellowship, which began in 2014, honors the lifetime achievements of Margaret Butler, a pioneering researcher in both computer science and nuclear energy. Butler served as the director of Argonne’s National Energy Software Center and was the first female Fellow of the American Nuclear Society.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition
    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit http://science.energy.gov/user-facilities/basic-energy-sciences/.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 11:14 am on June 3, 2016 Permalink | Reply
    Tags: ANL-ALCF, , ,   

    From ALCF: “3D simulations illuminate supernova explosions” 

    ANL Lab
    News from Argonne National Laboratory

    June 1, 2016
    Jim Collins

    1
    Top: This visualization is a volume rendering of a massive star’s radial velocity. In previous 1D simulations, none of the structure seen here would be present.

    2

    Bottom: Magnetohydrodynamic turbulence powered by neutrino-driven convection behind the stalled shock of a core-collapse supernova simulation. This simulation shows that the presence of rotation and weak magnetic fields dramatically impacts the development of the supernova mechanism as compared to non-rotating, non-magnetic stars. The nascent neutron star is just barely visible in the center below the turbulent convection.

    Credit:
    Sean M. Couch, Michigan State University

    Researchers from Michigan State University are using Mira to perform large-scale 3D simulations of the final moments of a supernova’s life cycle.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    While the 3D simulation approach is still in its infancy, early results indicate that the models are providing a clearer picture of the mechanisms that drive supernova explosions than ever before.

    In the landmark television series “Cosmos,” astronomer Carl Sagan famously proclaimed, “we are made of star stuff,” in reference to the ubiquitous impact of supernovas.

    At the end of their life cycles, these massive stars explode in spectacular fashion, scattering their guts—which consist of carbon, iron, and basically all other natural elements—across the cosmos. These elements go on to form new stars, solar systems, and everything else in the universe (including the building blocks for life on Earth).

    Despite this fundamental role in cosmology, the mechanisms that drive supernova explosions are still not well understood.

    “If we want to understand the chemical evolution of the entire universe and how the stuff that we’re made of was processed and distributed throughout the universe, we have to understand the supernova mechanism,” said Sean Couch, assistant professor of physics and astronomy at Michigan State University.

    To shed light on this complex phenomenon, Couch is leading an effort to use Mira, the Argonne Leadership Computing Facility’s (ALCF’s) 10-petaflops supercomputer, to carry out some of the largest and most detailed 3D simulations ever performed of core-collapse supernovas. The ALCF is a U.S. Department of Energy (DOE) Office of Science User Facility.

    After millions of years of burning ever-heavier elements, these super-giant stars (at least eight solar masses, or eight times the mass of the sun) eventually run out of nuclear fuel and develop an iron core. No longer able to support themselves against their own immense gravitational pull, they start to collapse. But a process, not yet fully understood, intervenes that reverses the collapse and causes the star to explode.

    “What theorists like me are trying to understand is that in-between step,” Couch said. “How do we go from this collapsing iron core to an explosion?”

    Through his work at the ALCF, Couch and his team are developing and demonstrating a high-fidelity 3D simulation approach that is providing a more realistic look at this “in-between step” than previous supernova simulations.

    While this 3D method is still in its infancy, Couch’s early results have been promising. In 2015, his team published a paper* in the Astrophysical Journal Letters, detailing their 3D simulations of the final three minutes of iron core growth in a 15 solar-mass star. They found that more accurate representations of the star’s structure and the motion generated by turbulent convection (measured at several hundred kilometers per second) play a substantial role at the point of collapse.

    “Not surprisingly, we’re showing that more realistic initial conditions have a significant impact on the results,” Couch said.

    Adding another dimension

    Despite the fact that stars rotate, have magnetic fields, and are not perfect spheres, most 1D and 2D supernova simulations to date have modeled non-rotating, non-magnetic, spherically symmetric stars. Scientists were forced to take this simplified approach because modeling supernovas is an extremely computationally demanding task. Such simulations involve highly complex multiphysics calculations and extreme timescales (the stars evolve over millions of years, yet the supernova mechanism occurs in a second).

    According to Couch, working with unrealistic initial conditions has led to difficulties in triggering robust and consistent explosions in simulations—a long-standing challenge in computational astrophysics.

    However, thanks to recent advances in computing hardware and software, Couch and his peers are making significant strides toward more accurate supernova simulations by employing the 3D approach.

    The emergence of petascale supercomputers like Mira has made it possible to include high-fidelity treatments of rotation, magnetic fields, and other complex physics processes that were not feasible in the past.

    “Generally when we’ve done these kinds of simulations in the past, we’ve ignored the fact that magnetic fields exist in the universe because when you add them into a calculation, it increases the complexity by about a factor of two,” Couch said. “But with our simulations on Mira, we’re finding that magnetic fields can add a little extra kick at just the right time to help push the supernova toward explosion.”

    Advances to the team’s open-source FLASH hydrodynamics code have also aided simulation efforts. Couch, a co-developer of FLASH, was involved in porting and optimizing the code for Mira as part of the ALCF’s Early Science Program in 2012. For his current project, Couch continues to collaborate with ALCF computational scientists to enhance the performance, scalability, and capabilities of FLASH for specific simulation tasks. For example, ALCF staff modified the code’s routines for writing Hierarchical Data Format (HDF5) files, speeding up I/O performance by about a factor of 10.
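    The I/O work mentioned here concerns FLASH’s HDF5 checkpoint and plot files. As a generic illustration only (FLASH itself is written in Fortran, and these dataset names are invented), writing one dump’s worth of state with h5py looks like the sketch below; a parallel run would route the same calls through HDF5’s MPI-IO driver, which is where collective I/O tuning pays off.

    import numpy as np
    import h5py

    # Stand-ins for one checkpoint's worth of simulation state (names and sizes are invented).
    rho = np.random.rand(128, 128, 128).astype("float32")     # density field
    vel = np.random.rand(3, 128, 128, 128).astype("float32")  # velocity components

    with h5py.File("checkpoint_0001.h5", "w") as f:
        f.attrs["time"] = 0.042  # simulation time of this dump
        f.create_dataset("density", data=rho, compression="gzip")
        f.create_dataset("velocity", data=vel, compression="gzip")
    # A parallel build of h5py accepts driver="mpio" and an MPI communicator in h5py.File(),
    # so many ranks can write collectively to the same file.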

    But even with today’s high-performance computing hardware and software, it is not yet feasible to include high-fidelity treatments of all the relevant physics in a single simulation; that would require a future exascale system, Couch said. For their ongoing simulations, Couch and his team have been forced to make a number of approximations, including a reduced nuclear network and simulating only one eighth of the full star.

    “Our simulations are only a first step toward truly realistic 3D simulations of supernova,” Couch said. “But they are already providing a proof-of-principle that the final minutes of a massive star evolution can and should be simulated in 3D.”

    The team’s results were published in Astrophysical Journal Letters in a 2015 paper titled “The Three-Dimensional Evolution to Core Collapse of a Massive Star.” The study also used computing resources at the Texas Advanced Computing Center at the University of Texas at Austin.

    Couch’s supernova research began at the ALCF with a Director’s Discretionary award and now continues with computing time awarded through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. This work is being funded by the DOE Office of Science and the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition
    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit http://science.energy.gov/user-facilities/basic-energy-sciences/.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     