Tagged: ANL-ALCF

  • richardmitnick 11:23 am on October 9, 2017
    Tags: ANL-ALCF

    From Science Node: “US Coalesces Plans for First Exascale Supercomputer: Aurora in 2021” 

    Science Node

    September 27, 2017
    Tiffany Trader

    ANL ALCF Cray Aurora supercomputer

    At the Advanced Scientific Computing Advisory Committee (ASCAC) meeting, in Arlington, Va., yesterday (Sept. 26), it was revealed that the “Aurora” supercomputer is on track to be the United States’ first exascale system. Aurora, originally named as the third pillar of the CORAL “pre-exascale” project, will still be built by Intel and Cray for Argonne National Laboratory, but the delivery date has shifted from 2018 to 2021 and target capability has been expanded from 180 petaflops to 1,000 petaflops (1 exaflop).
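
    For readers tracking the units, here is a quick back-of-the-envelope sketch of what that change in target capability means, using only the figures quoted above (plain Python, illustrative only):

        # 1 petaflop = 1e15 floating-point operations per second; 1 exaflop = 1e18.
        PETAFLOP = 10**15

        original_target = 180 * PETAFLOP    # the original Aurora contract
        new_target = 1000 * PETAFLOP        # the retargeted system: 1 exaflop

        print(f"Original target: {original_target:.2e} flop/s")         # 1.80e+17
        print(f"New target:      {new_target:.2e} flop/s")              # 1.00e+18
        print(f"Scale-up needed: {new_target / original_target:.1f}x")  # ~5.6x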


    The fate of the Argonne Aurora “CORAL” supercomputer has been in limbo since the system failed to make it into the U.S. DOE budget request, while the same budget proposal called for an exascale machine “of novel architecture” to be deployed at Argonne in 2021.

    Until now, the only official word from the U.S. Exascale Computing Project was that Aurora was being “reviewed for changes and would go forward under a different timeline.”

    Officially, the contract has been “extended,” and not cancelled, but the fact remains that the goal of the Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (CORAL) initiative to stand up two distinct pre-exascale architectures was not met.

    According to sources we spoke with, a number of people at the DOE are not pleased with the Intel/Cray (Intel is the prime contractor, Cray is the subcontractor) partnership. It’s understood that the two companies could not deliver on the 180-200 petaflops system by next year, as the original contract called for. Now Intel/Cray will push forward with an exascale system that is some 50x larger than any they have stood up.

    It’s our understanding that the cancellation of Aurora is not a DOE budgetary measure as has been speculated, and that the DOE and Argonne wanted Aurora. Although it was referred to as an “interim,” or “pre-exascale” machine, the scientific and research community was counting on that system, was eager to begin using it, and they regarded it as a valuable system in its own right. The non-delivery is regarded as disruptive to the scientific/research communities.

    Another question we have: since Intel/Cray failed to deliver Aurora and have moved on to a larger exascale system contract, why hasn’t their original CORAL contract been cancelled and put out again for bid?

    With increased global competitiveness, it seems that the DOE stakeholders did not want to further delay the non-IBM/Nvidia side of the exascale track. Conceivably, they could have done a rebid for the Aurora system, but that would leave them with an even bigger gap if they had to spin up a new vendor/system supplier to replace Intel and Cray.

    Starting the bidding process over again would have delayed progress toward exascale – it might even have been the death knell for exascale by 2021. Instead, Intel and Cray now have a giant performance leap to make and three years in which to make it. There is also an open question on the processor front, as the retooled Aurora will not be powered by the Xeon Phi (Knights Hill) processors originally proposed.

    These events raise the question of whether the IBM-led partnership of IBM, Nvidia and Mellanox is now looking very good by comparison. The other CORAL thrusts — Summit at Oak Ridge and Sierra at Lawrence Livermore — are on track, with Summit several weeks ahead of Sierra, although it is looking like neither will make the cut-off for entry onto the November Top500 list as many had speculated.

    ORNL IBM Summit supercomputer depiction

    LLNL IBM Sierra supercomputer

    We reached out to representatives from Cray, Intel and the Exascale Computing Project (ECP) seeking official comment on the revised Aurora contract. Cray and Intel declined to comment and we did not hear back from ECP by press time. We will update the story as we learn more.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 8:17 am on October 7, 2017
    Tags: ANL-ALCF, Leaning into the supercomputing learning curve

    From ALCF: “Leaning into the supercomputing learning curve” 

    Argonne Lab
    News from Argonne National Laboratory

    ALCF

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Recently, 70 scientists — graduate students, computational scientists, and postdoctoral and early-career researchers — attended the fifth annual Argonne Training Program on Extreme-Scale Computing (ATPESC) in St. Charles, Illinois. Over two weeks, they learned how to seize opportunities offered by the world’s fastest supercomputers. Credit: Image by Argonne National Laboratory

    October 6, 2017
    Andrea Manning

    What would you do with a supercomputer that is at least 50 times faster than today’s fastest machines? For scientists and engineers, the emerging age of exascale computing opens a universe of possibilities to simulate experiments and analyze reams of data — potentially enabling, for example, models of atomic structures that lead to cures for disease.

    But first, scientists need to learn how to seize this opportunity, which is the mission of the Argonne Training Program on Extreme-Scale Computing (ATPESC). The training is part of the Exascale Computing Project, a collaborative effort of the U.S. Department of Energy’s (DOE) Office of Science and its National Nuclear Security Administration.

    Starting in late July, 70 participants — graduate students, computational scientists, and postdoctoral and early-career researchers — gathered at the Q Center in St. Charles, Illinois, for the program’s fifth annual training session. This two-week course is designed to teach scientists key skills and tools and the most effective ways to use leading-edge supercomputers to further their research aims.

    This year’s ATPESC agenda once again was packed with technical lectures, hands-on exercises and dinner talks.

    “Supercomputers are extremely powerful research tools for a wide range of science domains,” said ATPESC program director Marta García, a computational scientist at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility at the department’s Argonne National Laboratory.

    “But using them efficiently requires a unique skill set. With ATPESC, we aim to touch on all of the key skills and approaches a researcher needs to take advantage of the world’s most powerful computing systems.”

    To address all angles of high-performance computing, the training focuses on programming methodologies that are effective across a variety of supercomputers — and that are expected to apply to exascale systems. Renowned scientists, high-performance computing experts and other leaders in the field served as lecturers and guided the hands-on sessions.

    This year, experts covered:

    Hardware architectures
    Programming models and languages
    Data-intensive computing, input/output (I/O) and machine learning
    Numerical algorithms and software for extreme-scale science
    Performance tools and debuggers
    Software productivity
    Visualization and data analysis

    In addition, attendees tapped hundreds of thousands of cores of computing power on some of today’s most powerful supercomputing resources, including the ALCF’s Mira, Cetus, Vesta, Cooley and Theta systems; the Oak Ridge Leadership Computing Facility’s Titan system; and the National Energy Research Scientific Computing Center’s Cori and Edison systems – all DOE Office of Science User Facilities.

    “I was looking at how best to optimize what I’m currently using on these new architectures and also figure out where things are going,” said Justin Walker, a Ph.D. student in the University of Wisconsin-Madison’s Physics Department. “ATPESC delivers on instructing us on a lot of things.”

    Shikhar Kumar, a Ph.D. candidate in nuclear science and engineering at the Massachusetts Institute of Technology, elaborated: “On the issue of I/O, data processing, data visualization and performance tools, there isn’t a single option that is regarded as the ‘industry standard.’ Instead, we learned about many of the alternatives, which encourages learning high-performance computing from the ground up.”

    “You can’t get this material out of a textbook,” said Eric Nielsen, a research scientist at NASA’s Langley Research Center. Added Johann Dahm of IBM Research, “I haven’t had this material presented to me in this sort of way ever.”

    Jonathan Hoy, a Ph.D. student at the University of Southern California, pointed to the larger, “ripple effect” role of this type of gathering: “It is good to have all these people sit down together. In a way, we’re setting standards here.”

    Lisa Goodenough, a postdoctoral researcher in high energy physics at Argonne, said: “The theme has been about barriers coming down.” Goodenough referred to both barriers to entry and training barriers hindering scientists from realizing scientific objectives.

    “The program was of huge benefit for my postdoctoral researcher,” said Roseanna Zia, assistant professor of chemical engineering at Stanford University. “Without the financial assistance, it would have been out of my reach,” she said, highlighting the covered tuition fees, domestic airfare, meals and lodging.

    Now anyone can learn from the program’s broad curriculum online, underscoring the organizers’ efforts to extend its reach beyond the classroom. Slides and videos of the lectures captured at ATPESC 2017, delivered by some of the world’s foremost experts in extreme-scale computing, are available at http://extremecomputingtraining.anl.gov/2017-slides and http://extremecomputingtraining.anl.gov/2017-videos, respectively.

    For more information on ATPESC, including on applying for selection to attend next year’s program, visit http://extremecomputingtraining.anl.gov.

    The Exascale Computing Project is a collaborative effort of two DOE organizations — the Office of Science and the National Nuclear Security Administration. As part of President Obama’s National Strategic Computing Initiative, ECP was established to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development, to meet the scientific and national security mission needs of DOE in the mid-2020s timeframe.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 2:39 pm on July 3, 2017
    Tags: ANL-ALCF, Argonne’s Theta supercomputer goes online

    From ALCF: “Argonne’s Theta supercomputer goes online” 

    Argonne Lab
    News from Argonne National Laboratory

    ALCF

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    July 3, 2017
    Laura Wolf

    Theta, a new production supercomputer located at the U.S. Department of Energy’s Argonne National Laboratory, is officially open to the research community. The new machine’s massively parallel, many-core architecture continues Argonne’s leadership computing program toward its future Aurora system.

    Theta was built onsite at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility, where it will operate alongside Mira, an IBM Blue Gene/Q supercomputer. Both machines are fully dedicated to supporting a wide range of scientific and engineering research campaigns. Theta, an Intel-Cray system, entered production on July 1.

    The new supercomputer will immediately begin supporting several 2017-2018 DOE Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC) projects. The ALCC is a major allocation program that supports scientists from industry, academia, and national laboratories working on advancements in targeted DOE mission areas. Theta will also support projects from the ALCF Data Science Program, ALCF’s discretionary award program, and, eventually, the DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program—the major means by which the scientific community gains access to the DOE’s fastest supercomputers dedicated to open science.

    Designed in collaboration with Intel and Cray, Theta is a 9.65-petaflops system based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta will enable researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.

    “Theta’s unique architectural features represent a new and exciting era in simulation science capabilities,” said ALCF Director of Science Katherine Riley. “These same capabilities will also support data-driven and machine-learning problems, which are increasingly becoming significant drivers of large-scale scientific computing.”

    Now that Theta is available as a production resource, researchers can apply for computing time through the facility’s various allocation programs. Although the INCITE and ALCC calls for proposals recently closed, researchers can apply for Director’s Discretionary awards at any time.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 12:37 pm on July 1, 2017
    Tags: Advanced Scientific Computing Research (ASCR), ALCC program awards ALCF computing time to 24 projects, ANL-ALCF, Theta Early Science Program   

    From ALCF: “ALCC program awards ALCF computing time to 24 projects” 

    Argonne Lab
    News from Argonne National Laboratory

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    ALCC program awards ALCF computing time to 24 projects

    June 26, 2017
    No writer credit found

    For one of the 2017-2018 ALCC projects, Argonne physicist Katrin Heitmann will use ALCF computing resources to continue work to build a suite of multi-wavelength, multi-cosmology synthetic sky maps. The left image (red) shows the baryonic density in a large cluster of galaxies, while the right image (blue) shows the dark matter content in the same cluster.

    The U.S. Department of Energy’s (DOE’s) ASCR Leadership Computing Challenge (ALCC) has awarded 24 projects a total of 2.1 billion core-hours at the Argonne Leadership Computing Facility (ALCF). The one-year awards are set to begin July 1.

    Several of the 2017-2018 ALCC projects will be the first to run on the ALCF’s new 9.65 petaflops Intel-Cray supercomputer, Theta, when it opens to the full user community July 1.

    Projects in the Theta Early Science Program performed science simulations on the system, but those runs served a dual purpose of helping to stress-test and evaluate Theta’s capabilities.

    Each year, the ALCC program selects projects with an emphasis on high-risk, high-payoff simulations in areas directly related to the DOE mission and for broadening the community of researchers capable of using leadership computing resources.

    Managed by the Advanced Scientific Computing Research (ASCR) program within DOE’s Office of Science, the ALCC program provides awards of computing time that range from a few million to several hundred million core-hours to researchers from industry, academia, and government agencies. These allocations support work at the ALCF, the Oak Ridge Leadership Computing Facility (OLCF), and the National Energy Research Scientific Computing Center (NERSC), all DOE Office of Science User Facilities.
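
    For context, a core-hour is one processor core running for one wall-clock hour, so an award is just the product of job size and run time. A minimal sketch of the arithmetic (the job size is hypothetical, and Mira’s core count of roughly 786,000 is used only to give a sense of scale):

        # A job on N cores that runs for H wall-clock hours consumes N * H core-hours.
        def core_hours(cores, wallclock_hours):
            return cores * wallclock_hours

        # A hypothetical 16,384-core job running for 12 hours:
        print(f"{core_hours(16_384, 12):,} core-hours")                  # 196,608

        # The 2.1 billion core-hours awarded at the ALCF, if run flat-out on all
        # of Mira's ~786,432 cores, would amount to roughly:
        total_award = 2_100_000_000
        hours = total_award / 786_432
        print(f"~{hours:.0f} hours (~{hours / 24:.0f} days) of the full machine")
        # ~2670 hours, about 111 days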

    In 2017, the ALCC program awarded 40 projects totaling 4.1 billion core-hours across the three ASCR facilities. Additional projects may be announced at a later date as ALCC proposals can be submitted throughout the year.

    The 24 projects awarded time at the ALCF are noted below. Some projects received additional computing time at OLCF and/or NERSC.

    Thomas Blum from University of Connecticut received 220 million core-hours for “Hadronic Light-by-Light Scattering and Vacuum Polarization Contributions to the Muon Anomalous Magnetic Moment from Lattice QCD with Chiral Fermions.”
    Choong-Seock Chang from Princeton Plasma Physics Laboratory [PPPL] received 80 million core-hours for “High-Fidelity Gyrokinetic Study of Divertor Heat-Flux Width and Pedestal Structure.”
    John T. Childers from Argonne National Laboratory received 58 million core-hours for “Simulating Particle Interactions and the Resulting Detector Response at the LHC and Fermilab.”
    Frederico Fiuza from SLAC National Accelerator Laboratory [SLAC] received 50 million core-hours for “Studying Astrophysical Particle Acceleration in HED Plasmas.”
    Marco Govoni from Argonne National Laboratory received 60 million core-hours for “Computational Engineering of Electron-Vibration Coupling Mechanisms.”
    William Gustafson from Pacific Northwest National Laboratory [PNNL] received 74 million core-hours for “Large-Eddy Simulation Component of the Mesoscale Convective System Climate Model Development and Validation (CMDV-MCS) Project.”
    Olle Heinonen from Argonne National Laboratory received 5 million core-hours for “Quantum Monte Carlo Computations of Chemical Systems.”
    Katrin Heitmann from Argonne National Laboratory received 40 million core-hours for “Extreme-Scale Simulations for Multi-Wavelength Cosmology Investigations.”
    Phay Ho from Argonne National Laboratory received 68 million core-hours for “Imaging Transient Structures in Heterogeneous Nanoclusters in Intense X-ray Pulses.”
    George Karniadakis from Brown University received 20 million core-hours for “Multiscale Simulations of Hematological Disorders.”
    Daniel Livescu from Los Alamos National Laboratory [LANL] received 60 million core-hours for “Non-Boussinesq Effects on Buoyancy-Driven Variable Density Turbulence.”
    Alessandro Lovato from Argonne National Laboratory received 35 million core-hours for “Nuclear Spectra with Chiral Forces.”
    Elia Merzari from Argonne National Laboratory received 85 million core-hours for “High-Fidelity Numerical Simulation of Wire-Wrapped Fuel Assemblies.”
    Paul Messina from Argonne National Laboratory received 530 million core-hours for “ECP Consortium for Exascale Computing.”
    Aleksandr Obabko from Argonne National Laboratory received 50 million core-hours for “Numerical Simulation of Turbulent Flows in Advanced Steam Generators – Year 3.”
    Mark Petersen from Los Alamos National Laboratory received 25 million core-hours for “Understanding the Role of Ice Shelf-Ocean Interactions in a Changing Global Climate.”
    Benoit Roux from the University of Chicago received 80 million core-hours for “Protein-Protein Recognition and HPC Infrastructure.”
    Emily Shemon from Argonne National Laboratory received 44 million core-hours for “Elimination of Modeling Uncertainties through High-Fidelity Multiphysics Simulation to Improve Nuclear Reactor Safety and Economics.”
    J. Ilja Siepmann from University of Minnesota received 130 million core-hours for “Predictive Modeling of Functional Nanoporous Materials, Nanoparticle Assembly, and Reactive Systems.”
    Tjerk Straatsma from Oak Ridge National Laboratory [ORNL] received 20 million core-hours for “Portable Application Development for Next-Generation Supercomputer Architectures.”
    Sergey Syritsyn from RIKEN BNL Research Center received 135 million core-hours for “Nucleon Structure and Electric Dipole Moments with Physical Chirally-Symmetric Quarks.”
    Sergey Varganov from University of Nevada, Reno received 42 million core-hours for “Spin-Forbidden Catalysis on Metal-Sulfur Proteins.”
    Robert Voigt from Leidos received 110 million core-hours for “Demonstration of the Scalability of Programming Environments By Simulating Multi-Scale Applications.”
    Brian Wirth from Oak Ridge National Laboratory received 98 million core-hours for “Modeling Helium-Hydrogen Plasma Mediated Tungsten Surface Response to Predict Fusion Plasma Facing Component Performance.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 8:33 pm on June 2, 2017
    Tags: ANL-ALCF

    From ALCF: “ALCF workshop prepares researchers for Theta, Mira” 

    Argonne Lab
    News from Argonne National Laboratory

    ANL Cray Aurora supercomputer
    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    June 1, 2017
    Jim Collins

    More than 60 researchers attended the 2017 ALCF Computational Performance Workshop to work directly with ALCF staff and invited experts to test, debug, and optimize their applications on the facility’s supercomputers.

    For most supercomputer users, running science simulations on a leading-edge system for the first time requires more than just a how-to guide.

    “There are special tools and techniques you need to know to take full advantage of these massive supercomputers,” said Sean Dettrick, lead computational scientist at Tri Alpha Energy, a California-based company pursuing the development of clean fusion energy technology.

    Dettrick was one of the more than 60 researchers who attended the Argonne Leadership Computing Facility’s (ALCF) Computational Performance Workshop from May 2-5, 2017, for guidance on preparing and improving their codes for ALCF supercomputers, including Theta, the facility’s new 9.65 petaflops Intel-Cray system.

    ANL ALCF Theta 9.65 petaflop Intel-Cray supercomputer

    Every year, the ALCF hosts an intensive, hands-on workshop to connect both current and prospective users with the experts who know the systems inside out—ALCF computational scientists, performance engineers, data scientists, and visualization experts, as well as invited guests from Intel, Cray, Allinea (now part of ARM), ParaTools (TAU), and Rice University (HPCToolkit). With dedicated access to ALCF computing resources, the workshop provides an opportunity for attendees to work directly with these experts to test, debug, and optimize their applications on leadership-class supercomputers.

    “The workshop is designed to help participants take their code performance to a higher level and get them computationally ready to pursue large-scale science projects on our systems,” said Ray Loy, the ALCF’s lead for training, debuggers, and math libraries. “This year, we had the added attraction of Theta, which previously had only been available to users in the Early Science Program.”

    Theta will be opened up to the broader user community when it enters production mode on July 1, 2017. The new system will be available to researchers awarded projects through the 2017-2018 ASCR Leadership Computing Challenge (ALCC) and the 2018 Innovative and Novel Computational Impact on Theory and Experiment (INCITE) programs. One of the ALCF workshop’s goals is to help researchers demonstrate code scalability for INCITE and ALCC project proposals, which are required to convey both scientific merit and computational readiness.

    For Dettrick and his colleagues, the workshop presented an opportunity to begin preparing for Theta. The team currently has a small Director’s Discretionary project at the ALCF, but they have their sights set on applying for a larger allocation through the INCITE program in the future.

    “Our company has an in-house computing cluster that is like training wheels for the large supercomputers available here,” said Dettrick. “By moving some of our modeling work to ALCF systems, our goal is to inform and expedite our experimental research efforts by carrying out larger simulations more quickly.”

    Working with ALCF staff members, the Tri Alpha Energy researchers were able to compile and run two plasma simulation codes on Theta. In the process, they worked with an Intel representative to use the Intel VTune performance profiler to identify and address some performance and scalability issues. The ALCF team suggested a number of strategies to improve threading of the codes and reduce I/O time on Theta.
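
    The article does not detail the specific changes, but the motivation behind such optimizations is captured by Amdahl’s law: any poorly threaded or serialized portion of a code, I/O included, caps the speedup available on a many-core machine like Theta. A minimal sketch with hypothetical numbers:

        # Amdahl's law: if a fraction p of the runtime parallelizes perfectly and
        # the rest stays serial (e.g., unthreaded loops or serialized I/O), the
        # best possible speedup on n cores is 1 / ((1 - p) + p / n).

        def amdahl_speedup(parallel_fraction, n_cores):
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

        n = 65_536  # hypothetical core count for a large Theta run
        for p in (0.95, 0.99, 0.999):
            print(f"parallel fraction {p:.3f}: max speedup ~{amdahl_speedup(p, n):,.0f}x")
        # 0.950 -> ~20x, 0.990 -> ~100x, 0.999 -> ~985x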

    “This experience definitely planted some seeds in my mind about how we can improve productivity moving forward,” Dettrick said.

    Mark Kostuk, a mathematical modeler and optimizer from General Atomics’ Magnetic Fusion Energy Division, also brought a plasma code to the workshop to prepare for a future INCITE award. Initially, Kostuk encountered several intermittent run failures on Theta.

    He was able to overcome the issue by working with several of the on-site experts. Using the Allinea DDT Debugger, they identified one of the issues—memory errors that were appearing in calls to a math library. The collaborative effort continued into the following week, allowing Kostuk to pinpoint and fix the bug causing the run failures.

    “It really worked out great. I received a lot of hands-on help with the code,” Kostuk said. “Once we resolved the issues, I was able to run a significant set of benchmarks and scaling tests as part of our preparations for INCITE.”

    In addition to the hands-on sessions, the ALCF workshop featured talks on the facility’s system architectures, performance tools, optimization techniques, and data science capabilities (view the full agenda and presentation slides here).

    For Juan Pedro Mendez Granado, a postdoc at Caltech, the workshop provided a crash course in how to take advantage of leadership computing resources to advance his research into lithium-based batteries. Granado, a graduate of the 2016 Argonne Training Program on Extreme-Scale Computing, has been modeling the process of lithiation and delithiation for silicon anodes using a computing cluster that allows him to simulate hundreds of thousands of atoms at a time.

    “With the ALCF’s supercomputers, I could simulate millions of atoms over much larger time scales,” he said. “Simulations at this scale would give us a much better understanding of the process at an atomistic level.”

    Granado came to the workshop to explore his options for accessing ALCF computing systems. He left with intentions to apply for a Director’s Discretionary award to begin preparing for a more substantial award in the future.

    “Not only does the workshop help participants improve their code performance, it also allows us to bring new researchers into the leadership computing pipeline,” said Loy of the ALCF. “Ultimately, we’re looking to grow the community of researchers who can use our systems for science.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 9:51 pm on May 26, 2017
    Tags: ANL-ALCF, Laser-driven magnetic reconnection

    From ALCF: “Fields and flows fire up cosmic accelerators” 

    Argonne Lab
    News from Argonne National Laboratory

    ANL Cray Aurora supercomputer
    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    May 15, 2017
    John Spizzirri

    A visualization from a 3D OSIRIS simulation of particle acceleration in laser-driven magnetic reconnection. The trajectories of the most energetic electrons (colored by energy) are shown as the two magnetized plasmas (grey isosurfaces) interact. Electrons are accelerated by the reconnection electric field at the interaction region and escape in a fan-like profile. Credit: Frederico Fiuza, SLAC National Accelerator Laboratory/OSIRIS

    Every day, with little notice, the Earth is bombarded by energetic particles that shower its inhabitants in an invisible dusting of radiation, observed only by the random detector, or astronomer, or physicist duly noting their passing. These particles constitute, perhaps, the galactic residue of some far distant supernova, or the tangible echo of a pulsar. These are cosmic rays.

    But how are these particles produced? And where do they find the energy to travel unchecked by immense distances and interstellar obstacles?

    These are the questions Frederico Fiuza has pursued over the last three years, through ongoing projects at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility.

    High-power lasers, such as those available at the University of Rochester’s Laboratory for Laser Energetics or at the National Ignition Facility in the Lawrence Livermore National Laboratory, can produce peak powers in excess of 1,000 trillion watts. At these high powers, lasers can instantly ionize matter and create energetic plasma flows for the desired studies of particle acceleration.

    U Rochester Omega Laser

    LLNL/NIF

    A physicist at the SLAC National Accelerator Laboratory in California, Fiuza and his team are conducting thorough investigations of plasma physics to discern the fundamental processes that accelerate particles. The answers could provide an understanding of how cosmic rays gain their energy and how similar acceleration mechanisms could be probed in the laboratory and used for practical applications.

    Because the range in scales is so dramatic, they turned to the petascale power of Mira, the ALCF’s Blue Gene/Q supercomputer, to run the first-ever 3D simulations of these laboratory scenarios.

    To drive the simulation, they used OSIRIS, a state-of-the-art, particle-in-cell code for modeling plasmas, developed by UCLA and the Instituto Superior Técnico, in Portugal, where Fiuza earned his PhD.

    In the first phase of this project, Fiuza’s team showed that a plasma instability, the Weibel instability, is able to convert a large fraction of the energy in plasma flows to magnetic fields. They have shown a strong agreement in a one-to-one comparison of the experimental data with the 3D simulation data, which was published in Nature Physics, in 2015. This helped them understand how the strong fields required for particle acceleration can be generated in astrophysical environments.

    The team’s results, which were published in Physical Review Letters, in 2016, show that laser-driven reconnection leads to strong particle acceleration. As two expanding plasma plumes interact with each other, they form a thin current sheet, or reconnection layer, which becomes unstable, breaking into smaller sheets. During this process, the magnetic field is annihilated and a strong electric field is excited in the reconnection region, efficiently accelerating electrons as they enter the region.

    This research is supported by the DOE Office of Science. Computing time at the ALCF was allocated through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 5:21 am on May 16, 2017
    Tags: ANL-ALCF, Particle acceleration, Rochester’s Laboratory for Laser Energetics

    From ALCF: “Fields and flows fire up cosmic accelerators” 

    Argonne Lab
    News from Argonne National Laboratory

    ANL Cray Aurora supercomputer
    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    May 15, 2017
    John Spizzirri

    A visualization from a 3D OSIRIS simulation of particle acceleration in laser-driven magnetic reconnection. The trajectories of the most energetic electrons (colored by energy) are shown as the two magnetized plasmas (grey isosurfaces) interact. Electrons are accelerated by the reconnection electric field at the interaction region and escape in a fan-like profile. Credit: Frederico Fiuza, SLAC National Accelerator Laboratory/OSIRIS

    Every day, with little notice, the Earth is bombarded by energetic particles that shower its inhabitants in an invisible dusting of radiation, observed only by the random detector, or astronomer, or physicist duly noting their passing. These particles constitute, perhaps, the galactic residue of some far distant supernova, or the tangible echo of a pulsar. These are cosmic rays.

    But how are these particles produced? And where do they find the energy to travel unchecked by immense distances and interstellar obstacles?

    These are the questions Frederico Fiuza has pursued over the last three years, through ongoing projects at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility.

    A physicist at the SLAC National Accelerator Laboratory in California, Fiuza and his team are conducting thorough investigations of plasma physics to discern the fundamental processes that accelerate particles.

    The answers could provide an understanding of how cosmic rays gain their energy and how similar acceleration mechanisms could be probed in the laboratory and used for practical applications.

    While the “how” of particle acceleration remains a mystery, the “where” is slightly better understood. “The radiation emitted by electrons tells us that these particles are accelerated by plasma processes associated with energetic astrophysical objects,” says Fiuza.

    The visible universe is filled with plasma, ionized matter formed when gas is super-heated, separating electrons from ions. More than 99 percent of the observable universe is made of plasmas, and the radiation emitted from them creates the beautiful, eerie colors that accentuate nebulae and other astronomical wonders.

    The motivation for these projects came from asking whether it was possible to reproduce similar plasma conditions in the laboratory and study how particles are accelerated.

    High-power lasers, such as those available at the University of Rochester’s Laboratory for Laser Energetics or at the National Ignition Facility in the Lawrence Livermore National Laboratory, can produce peak powers in excess of 1,000 trillion watts.

    Rochester’s Laboratory for Laser Energetics


    At these high powers, lasers can instantly ionize matter and create energetic plasma flows for the desired studies of particle acceleration.

    Intimate Physics

    To determine what processes can be probed and how to conduct experiments efficiently, Fiuza’s team recreates the conditions of these laser-driven plasmas using large-scale simulations. Computationally, he says, it becomes very challenging to simultaneously solve for the large scale of the experiment and the very small-scale physics at the level of individual particles, where these flows produce fields that in turn accelerate particles.

    Because the range in scales is so dramatic, they turned to the petascale power of Mira, the ALCF’s Blue Gene/Q supercomputer, to run the first-ever 3D simulations of these laboratory scenarios. To drive the simulation, they used OSIRIS, a state-of-the-art, particle-in-cell code for modeling plasmas, developed by UCLA and the Instituto Superior Técnico, in Portugal, where Fiuza earned his PhD.

    Part of the complexity involved in modeling plasmas is derived from the intimate coupling between particles and electromagnetic radiation — particles emit radiation and the radiation affects the motion of the particles.

    In the first phase of this project, Fiuza’s team showed that a plasma instability, the Weibel instability, is able to convert a large fraction of the energy in plasma flows to magnetic fields. They have shown a strong agreement in a one-to-one comparison of the experimental data with the 3D simulation data, which was published in Nature Physics, in 2015. This helped them understand how the strong fields required for particle acceleration can be generated in astrophysical environments.

    Fiuza uses tennis as an analogy to explain the role these magnetic fields play in accelerating particles within shock waves. The net represents the shockwave and the racquets of the two players are akin to magnetic fields. If the players move towards the net as they bounce the ball between each other, the ball, or particles, rapidly accelerate.

    “The bottom line is, we now understand how magnetic fields are formed that are strong enough to bounce these particles back and forth to be energized. It’s a multi-step process: you need to start by generating strong fields — and we found an instability that can generate strong fields from nothing or from very small fluctuations — and then these fields need to efficiently scatter the particles,” says Fiuza.
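
    The tennis picture corresponds to what is usually called first-order Fermi acceleration. As a rough, textbook-level sketch (not a result from this work), each round trip across the shock gives a particle a fractional energy gain of order the shock speed divided by the speed of light, so the energy compounds over many crossings:

        E_n \approx E_0 (1 + \xi)^n, \qquad \xi \sim u_{\mathrm{sh}} / c

    A small gain per bounce therefore builds into very large energies, provided the magnetic fields keep scattering the particle back across the shock.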

    Reconnecting

    NASA magnetic reconnection image. Credit: M. Aschwanden et al. (LMSAL), TRACE, NASA

    But particles can be energized in another way if the system provides the strong magnetic fields from the start.

    “In some scenarios, like pulsars, you have extraordinary magnetic field amplitudes,” notes Fiuza. “There, you want to understand how the enormous amount of energy stored in these fields can be directly transferred to particles. In this case, we don’t tend to think of flows or shocks as the dominant process, but rather magnetic reconnection.”

    Magnetic reconnection, a fundamental process in astrophysical and fusion plasmas, is believed to be the cause of solar flares, coronal mass ejections, and other volatile cosmic events. When magnetic fields of opposite polarity are brought together, their topologies are changed. The magnetic field lines rearrange in such a way as to convert magnetic energy into heat and kinetic energy, causing an explosive reaction that drives the acceleration of particles. This was the focus of Fiuza’s most recent project at the ALCF.
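
    For a sense of the energy reservoir involved (a standard expression, not taken from the article), the magnetic energy density available for conversion is

        u_B = \frac{B^2}{2\mu_0}

    so regions threaded by very strong fields, such as pulsar magnetospheres, store enormous energy per unit volume that reconnection can release as heat and fast particles.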

    Again, Fiuza’s team modeled the possibility of studying this process in the laboratory with laser-driven plasmas. To conduct 3D, first-principles simulations (simulations derived from fundamental theoretical assumptions/predictions), Fiuza needed to model tens of billions of particles to represent the laser-driven magnetized plasma system. They modeled the motion of every particle and then selected the thousand most energetic ones. The motion of those particles was individually tracked to determine how they were accelerated by the magnetic reconnection process.
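
    As an illustration of that selection step, the sketch below shows how the most energetic particles might be picked out of a snapshot in Python/NumPy. The array layout, units, and the use of random momenta are assumptions made for the example; they do not reflect OSIRIS’s actual output format.

        import numpy as np

        # Hypothetical per-particle momenta from a PIC snapshot, in units of m_e*c,
        # shape (n_particles, 3). A real analysis would read these from simulation
        # output rather than generating them randomly.
        p = np.random.standard_normal((1_000_000, 3)) * 5.0

        # Relativistic kinetic energy per particle, in units of m_e*c^2:
        # E_kin = sqrt(1 + |p|^2) - 1   (with p in units of m_e*c).
        gamma = np.sqrt(1.0 + np.sum(p * p, axis=1))
        e_kin = gamma - 1.0

        # Indices of the 1,000 most energetic particles, sorted by decreasing energy;
        # these could then be used to follow the same particles through later snapshots.
        top = np.argpartition(e_kin, -1000)[-1000:]
        top = top[np.argsort(e_kin[top])[::-1]]

        print(f"Top particle energy: {e_kin[top[0]]:.1f} m_e c^2")
        print(f"Energy fraction carried by the top 1,000: {e_kin[top].sum() / e_kin.sum():.2%}")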

    “What is quite amazing about these cosmic accelerators is that a very, very small number of particles carry a large fraction of the energy in the system, let’s say 20 percent. So you have this enormous energy in this astrophysical system, and from some miraculous process, it all goes to a few lucky particles,” he says. “That means that the individual motion of particles and the trajectory of particles are very important.”

    The team’s results, which were published in Physical Review Letters, in 2016, show that laser-driven reconnection leads to strong particle acceleration. As two expanding plasma plumes interact with each other, they form a thin current sheet, or reconnection layer, which becomes unstable, breaking into smaller sheets. During this process, the magnetic field is annihilated and a strong electric field is excited in the reconnection region, efficiently accelerating electrons as they enter the region.

    Fiuza expects that, like his previous project, these simulation results can be confirmed experimentally and open a window into these mysterious cosmic accelerators.

    This research is supported by the DOE Office of Science. Computing time at the ALCF was allocated through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 11:02 am on March 14, 2017
    Tags: ANL-ALCF, Vector boson plus jet event

    From ALCF: “High-precision calculations help reveal the physics of the universe” 

    Argonne Lab
    News from Argonne National Laboratory

    ANL Cray Aurora supercomputer
    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    March 9, 2017
    Joan Koka

    With the theoretical framework developed at Argonne, researchers can more precisely predict particle interactions such as this simulation of a vector boson plus jet event. Credit: Taylor Childers, Argonne National Laboratory

    On their quest to uncover what the universe is made of, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are harnessing the power of supercomputers to make predictions about particle interactions that are more precise than ever before.

    Argonne researchers have developed a new theoretical approach, ideally suited for high-performance computing systems, that is capable of making predictive calculations about particle interactions that conform almost exactly to experimental data. This new approach could give scientists a valuable tool for describing new physics and particles beyond those currently identified.

    The framework makes predictions based on the Standard Model, the theory that describes the physics of the universe to the best of our knowledge. Researchers are now able to compare experimental data with predictions generated through this framework, to potentially uncover discrepancies that could indicate the existence of new physics beyond the Standard Model. Such a discovery would revolutionize our understanding of nature at the smallest measurable length scales.

    “So far, the Standard Model of particle physics has been very successful in describing the particle interactions we have seen experimentally, but we know that there are things that this model doesn’t describe completely. We don’t know the full theory,” said Argonne theorist Radja Boughezal, who developed the framework with her team.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    “The first step in discovering the full theory and new models involves looking for deviations with respect to the physics we know right now. Our hope is that there is deviation, because it would mean that there is something that we don’t understand out there,” she said.

    The theoretical method developed by the Argonne team is currently being deployed on Mira, one of the fastest supercomputers in the world, which is housed at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility.

    Using Mira, researchers are applying the new framework to analyze the production of missing energy in association with a jet, a particle interaction of particular interest to researchers at the Large Hadron Collider (LHC) in Switzerland.




    LHC at CERN

    Physicists at the LHC are attempting to produce new particles that are known to exist in the universe but have yet to be seen in the laboratory, such as the dark matter that comprises a quarter of the mass and energy of the universe.


    Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al.

    Although scientists have no way today of observing dark matter directly — hence its name — they believe that dark matter could leave a “missing energy footprint” in the wake of a collision that could indicate the presence of new particles not included in the Standard Model. These particles would interact very weakly and therefore escape detection at the LHC. The presence of a “jet”, a spray of Standard Model particles arising from the break-up of the protons colliding at the LHC, would tag the presence of the otherwise invisible dark matter.
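
    Concretely, the “missing energy footprint” is quantified as missing transverse momentum. Because the colliding protons carry essentially no momentum transverse to the beam line, any transverse momentum that the visible particles fail to balance is attributed to particles that escaped the detector unseen. In standard notation (a textbook definition, not specific to this work):

        \vec{E}_T^{\,\mathrm{miss}} = -\sum_{i \in \mathrm{visible}} \vec{p}_{T,i}, \qquad E_T^{\mathrm{miss}} = \left| \vec{E}_T^{\,\mathrm{miss}} \right|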

    In the LHC detectors, however, the production of a particular kind of interaction — called the Z-boson plus jet process — can mimic the same signature as the potential signal that would arise from as-yet-unknown dark matter particles. Boughezal and her colleagues are using their new framework to help LHC physicists distinguish between the Z-boson plus jet signature predicted in the Standard Model from other potential signals.

    Previous attempts using less precise calculations to distinguish the two processes had so much uncertainty that they were simply not useful for being able to draw the fine mathematical distinctions that could potentially identify a new dark matter signal.
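
    The role of theoretical precision can be made concrete with a toy significance estimate: a measured excess only stands out if it is large compared with the experimental and theoretical uncertainties combined in quadrature. All numbers below are invented for illustration; none come from the article or the actual analysis.

        import math

        def significance(measured, predicted, sigma_exp, sigma_theory):
            """Naive significance of a deviation, in standard deviations."""
            return abs(measured - predicted) / math.hypot(sigma_exp, sigma_theory)

        measured, predicted, sigma_exp = 1100.0, 1000.0, 20.0   # hypothetical event counts

        # The same 10% excess, judged against a 5% versus a 1% theory uncertainty:
        for rel_theory_unc in (0.05, 0.01):
            s = significance(measured, predicted, sigma_exp, predicted * rel_theory_unc)
            print(f"theory uncertainty {rel_theory_unc:.0%}: deviation is {s:.1f} sigma")
        # 5% -> ~1.9 sigma (inconclusive), 1% -> ~4.5 sigma (compelling)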

    “It is only by calculating the Z-boson plus jet process very precisely that we can determine whether the signature is indeed what the Standard Model predicts, or whether the data indicates the presence of something new,” said Frank Petriello, another Argonne theorist who helped develop the framework. “This new framework opens the door to using Z-boson plus jet production as a tool to discover new particles beyond the Standard Model.”
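
    To give a sense of what “calculating very precisely” means here (a schematic sketch, not the specific equations used by the Argonne team), collider cross sections are computed as a series in the strong coupling constant, and each additional order in the expansion shrinks the remaining theoretical uncertainty:

    % Schematic perturbative expansion of a cross section in the strong coupling alpha_s.
    % Truncating after the first term is leading order (LO); keeping the next terms gives
    % NLO and NNLO accuracy, each step reducing the residual theoretical uncertainty.
    \sigma = \sigma^{(0)} + \alpha_s\,\sigma^{(1)} + \alpha_s^{2}\,\sigma^{(2)} + \mathcal{O}(\alpha_s^{3})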

    Applications for this method go well beyond studies of the Z-boson plus jet. The framework will impact not only research at the LHC, but also studies at future colliders which will have increasingly precise, high-quality data, Boughezal and Petriello said.

    “These experiments have gotten so precise, and experimentalists are now able to measure things so well, that it’s become necessary to have these types of high-precision tools in order to understand what’s going on in these collisions,” Boughezal said.

    “We’re also so lucky to have supercomputers like Mira because now is the moment when we need these powerful machines to achieve the level of precision we’re looking for; without them, this work would not be possible.”

    Funding and resources for this work were previously allocated through the Argonne Leadership Computing Facility’s (ALCF’s) Director’s Discretionary program; the ALCF is supported by the DOE Office of Science’s Advanced Scientific Computing Research program. Support for this work will continue through allocations from the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.

    The INCITE program promotes transformational advances in science and technology through large allocations of time on state-of-the-art supercomputers.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 2:33 pm on January 23, 2017 Permalink | Reply
    Tags: ANL-ALCF, Stable versions of synthetic peptides, Tailor-made drug molecules

    From ALCF: “A rising peptide: Supercomputing helps scientists come closer to tailoring drug molecules” 

    Argonne Lab
    News from Argonne National Laboratory

    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    January 23, 2017
    Robert Grant

    An artificial peptide made from a mixture of natural L-amino acids (the right half of the molecule) and non-natural, mirror-image D-amino acids (the left half of the molecule), designed computationally using INCITE resources. This peptide is designed to fold into a stable structure with a topology not found in nature, featuring a canonical right-handed alpha-helix packing against a non-canonical left-handed alpha-helix. Since structure imparts function, the ability to design non-natural structures permits scientists to create exciting new functions never explored by natural proteins. This peptide was synthesized chemically, and its structure was solved by nuclear magnetic resonance spectroscopy to confirm that it does indeed adopt this fold. The peptide backbone is shown as a translucent gold ribbon, and amino acid side-chains are shown as dark sticks. The molecular surface is shown as a transparent outline. Credit: Vikram Mulligan, University of Washington

    A team of researchers led by biophysicists at the University of Washington has come one step closer to designing tailor-made drug molecules that are more precise and carry fewer side effects than most existing therapeutic compounds.

    With the help of the Mira supercomputer, located at the Argonne Leadership Computing Facility at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, the scientists have successfully designed and verified stable versions of synthetic peptides, components that join together to form proteins.

    They published their work in a recent issue of Nature.

    The computational protocol, which was validated by assembling physical peptides in the chemistry lab and comparing them to the computer models, may one day enable drug developers to craft novel therapeutic peptides that precisely target specific disease-causing molecules within the body. And the insights the researchers gleaned constitute a significant advance in the fundamental understanding of protein folding.

    “That you can design molecules from scratch that fold up into structures, some of which are quite unlike what you see in nature, demonstrates a pretty fundamental understanding of what goes on at the molecular level,” said David Baker, the University of Washington biophysicist who led the research. “That’s certainly one of the more exciting things about this work.”

    Baker Lab

    David Baker

    The majority of drugs that humans use to treat the variety of ailments we suffer are so-called “small molecules.” These tiny compounds easily pass through different body systems to target receptor proteins studded in the membranes of our cells.

    Most do their job well, but they come with a major drawback: “Most drugs in use right now are small molecules, which are very tiny and nonspecific. They bind to lots of different things, which produces lots of side effects,” said Vikram Mulligan, a postdoctoral researcher in Baker’s lab and coauthor on the paper.

    More complex protein drugs ameliorate this problem, but they disperse less readily throughout the body because the bulkier molecules have a harder time passing through blood vessels, the linings of the digestive tract and other barriers.

    And proteins, which are giant on the molecular scale, have several overlapping layers of structure that make them more dynamic than static, so predicting their binding behavior is a tricky prospect.

    But between the extremes of small but imprecise molecules and floppy but highly specific proteins, there exists a middle ground: peptides. These short chains of amino acids, which normally link together to make complex proteins, can target specific receptors, diffuse easily throughout the body and also sustain a rigid structure.

    Some naturally-occurring peptides are already used as drugs, such as the immunosuppressant ciclosporin, but researchers could open up a world of pharmaceutical opportunity if they could design and synthesize peptides.

    That’s precisely what Baker and his team did, tweaking the Rosetta software package that they built for the design of protein structures to accommodate synthetic amino acids that do not exist in nature, in addition to the 20 natural amino acids.

    After designing the chemical building blocks of peptides, the researchers used the supercomputer Mira, with its 10 petaflops of processing power and more than 780,000 cores, to model scores of potential shapes, or conformations, that specific backbone sequences of amino acids might take.

    “We basically sample millions and millions of these conformations,” said Yuri Alexeev, a project specialist in computational science at the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility. “At the same time you also improve the energy functions,” which are measurements to describe the efficiency and stability of each possible folding arrangement.
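
    To make the idea of sampling conformations against an energy function concrete, here is a toy Metropolis Monte Carlo sketch in Python. It is an illustration only: it is not Rosetta, and the “energy” function below is invented, whereas real design calculations use detailed physical score functions and rotamer libraries.

    import math
    import random

    def toy_energy(torsions):
        # Invented placeholder energy: favors torsion angles near -60 degrees (helix-like).
        return sum((t + 60.0) ** 2 / 1000.0 for t in torsions)

    def sample_conformations(n_residues=10, n_steps=100_000, temperature=1.0):
        # Start from a random set of backbone torsion angles.
        torsions = [random.uniform(-180.0, 180.0) for _ in range(n_residues)]
        energy = toy_energy(torsions)
        best = (energy, list(torsions))
        for _ in range(n_steps):
            i = random.randrange(n_residues)
            old = torsions[i]
            # Propose a small random change to one torsion angle.
            torsions[i] = max(-180.0, min(180.0, old + random.gauss(0.0, 10.0)))
            new_energy = toy_energy(torsions)
            # Metropolis criterion: always accept downhill moves, sometimes accept uphill ones.
            if new_energy <= energy or random.random() < math.exp((energy - new_energy) / temperature):
                energy = new_energy
                if energy < best[0]:
                    best = (energy, list(torsions))
            else:
                torsions[i] = old  # reject the move
        return best

    if __name__ == "__main__":
        lowest_energy, conformation = sample_conformations()
        print(f"lowest toy energy found: {lowest_energy:.3f}")

    In the actual runs described in the article, many such searches run in parallel across Mira’s cores, each starting from different conformations, with far more sophisticated energy functions doing the physical work.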

    Though he was not a coauthor on the Nature paper, Alexeev helped Baker’s team scale up previous programs it had used to design proteins for modeling peptides on Mira.

    Executing so many calculations simultaneously would be virtually impossible without Mira’s computing power, according to Mulligan.

    “The big challenge with designing peptides that fold is that you have a chain of amino acids that can exist in an astronomical number of conformations,” he said.
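
    A rough count shows why the number is astronomical (the figures here are illustrative assumptions, not taken from the paper): even if each residue’s backbone could adopt only a handful of discrete states, the possibilities multiply with every residue added.

    % Illustrative combinatorics: k discrete backbone states per residue, N residues
    N_{\mathrm{conformations}} \approx k^{N}
    % e.g. k = 3 and N = 40 already gives 3^{40} \approx 1.2 \times 10^{19} conformations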

    Baker and his colleagues had tasked Mira with modeling millions of potential peptide conformations before, but this study stands out for two reasons.

    First, the researchers arrived at a handful of peptides with specific conformations that the computations predicted would be stable.

    Second, when Baker’s lab created seven of these peptides in their physical wet lab, the peptides’ actual conformations and stability corresponded remarkably well with the computer models.

    “At best, what comes out of a computer is a prediction, and at worst what comes out of a computer is a fantasy. So we never really consider it a result until we’ve actually made the molecule in the wet lab and confirmed that it actually has the structure that we designed it to have,” said Mulligan.

    “That’s exactly what we did in this paper,” he said. “We made a panel of these peptides that were designed to fold into very specific shapes, diverse shapes, and we experimentally confirmed that all of them folded into the shapes that we designed.”

    While this experiment sought to create totally new peptides in stable conformations as a proof of concept, Mulligan says that the Baker lab is now moving on to design functional peptides with specific targets in mind.

    Further research may bring the team closer to a protocol that could actually be used to design peptide drugs that target a specific receptor, such as those that make viruses like Ebola or HIV susceptible to attack.

    Computer time was awarded by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program; the project also used resources of the Environmental Molecular Sciences Laboratory, a DOE Office of Science User Facility at Pacific Northwest National Laboratory.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 3:29 pm on January 4, 2017 Permalink | Reply
    Tags: ANL-ALCF, Wind studies

    From ALCF: “Supercomputer simulations helping to improve wind predictions” 

    Argonne Lab
    News from Argonne National Laboratory

    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    January 3, 2017
    Katie Jones

    Station locations and lists of instruments deployed within the Columbia River Gorge, Columbia River Basin, and surrounding region. Credit: James Wilczak, NOAA

    A research team led by the National Oceanic and Atmospheric Administration (NOAA) is performing simulations at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, to develop numerical weather prediction models that can provide more accurate wind forecasts in regions with complex terrain. The team, funded by DOE in support of its Wind Forecast Improvement Project II (WFIP 2), is testing and validating the computational models with data being collected from a network of environmental sensors in the Columbia River Gorge region.

    Wind turbines dotting the Columbia River Gorge in Washington and Oregon can collectively generate about 4,500 megawatts (MW) of power, more than the output of five 800-MW nuclear power plants. However, the gorge region and its dramatic topography create highly variable wind conditions, posing a challenge for utility operators who use weather forecast models to predict when wind power will be available on the grid.

    If predictions are unreliable, operators must depend on steady power sources like coal and nuclear plants to meet demand. Because they take a long time to fuel and heat, conventional power plants operate on less flexible timetables and can generate power that is then wasted if wind energy unexpectedly floods the grid.

    To produce accurate wind predictions over complex terrain, researchers are using Mira, the ALCF’s 10-petaflops IBM Blue Gene/Q supercomputer, to increase resolution and improve physical representations to better simulate wind features in national forecast models. In a unique intersection of field observation and computer simulation, the research team has installed and is collecting data from a network of environmental instruments in the Columbia River Gorge region that is being used to test and validate model improvements.

    This research is part of the Wind Forecast Improvement Project II (WFIP 2), an effort sponsored by DOE in collaboration with NOAA, Vaisala—a manufacturer of environmental and meteorological equipment—and a number of national laboratories and universities. DOE aims to increase U.S. wind energy from five to 20 percent of total energy use by 2020, which means optimizing how wind is used on the grid.

    “Our goal is to give utility operators better forecasts, which could ultimately help make the cost of wind energy a little cheaper,” said lead model developer Joe Olson of NOAA. “For example, if the forecast calls for a windy day but operators don’t trust the forecast, they won’t be able to turn off coal plants, which are releasing carbon dioxide when maybe there was renewable wind energy available.”

    The complicated physics of wind

    For computational efficiency, existing forecast models assume the Earth’s surface is relatively flat, which works well for predicting wind over the flat terrain of the central United States, where states like Texas and Iowa generate many thousands of megawatts of wind power. Yet, as the Columbia River Gorge region demonstrates, some of the ripest locations for harnessing wind energy could be along mountains and coastlines where conditions are difficult to predict.

    “There are a lot of complications predicting wind conditions for terrain with a high degree of complexity at a variety of spatial scales,” Olson said.

    Two major challenges are a model resolution too low to resolve wind features in sharp valleys and mountain gaps, and a lack of observational data.

    At the NOAA National Centers for Environmental Prediction, two atmospheric models run around the clock to provide national weather forecasts: the 13-km Rapid Refresh (RAP) and the 3-km High-Resolution Rapid Refresh (HRRR). Only a couple of years old, the HRRR model has improved storm and winter weather predictions by resolving atmospheric features at 9 km², or about 2.5 times the size of Central Park in New York City.

    At a resolution of a few kilometers, HRRR can capture processes at the mesoscale—about the size of storms—but cannot resolve features at the microscale, which is a few hundred feet. Some phenomena important to wind prediction that cannot be modeled in RAP or HRRR include mountain wakes (the splitting of airflow obstructed by the side of a mountain); mountain waves (the oscillation of air flow on the side of the mountain that affects cloud formation and turbulence); and gap flow (small-scale winds that can blow strongly through gaps in mountains and gorge ridges).

    The 750-meter leap

    To make wind predictions that are sufficiently accurate for utility operators, Olson said, they need to model physical parameters at 750-m resolution, a grid cell about one-sixth the size of Central Park, or roughly the size of an average wind farm. This 16-fold increase in resolution will require a lot of real-world data for model testing and validation, which is why the WFIP 2 team outfitted the Columbia River Gorge region with more than 20 environmental sensor stations.
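
    The resolution figures hang together arithmetically; a quick check (Central Park’s area of roughly 3.4 square kilometers is an outside figure, not from the article):

    % Grid-cell area at each horizontal resolution
    (3\ \mathrm{km})^{2} = 9\ \mathrm{km}^{2}, \qquad (0.75\ \mathrm{km})^{2} = 0.5625\ \mathrm{km}^{2}
    % The step from 3-km to 750-m cells is a 16-fold refinement in area
    9\ \mathrm{km}^{2} / 0.5625\ \mathrm{km}^{2} = 16
    % Relative to Central Park (about 3.4 km^2)
    9 / 3.4 \approx 2.6, \qquad 0.5625 / 3.4 \approx 1/6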

    “We haven’t been able to identify all the strengths and weaknesses for wind predictions in the model because we haven’t had a complete, detailed dataset,” Olson said. “Now we have an expansive network of wind profilers and other weather instruments. Some are sampling wind in mountain gaps and valleys, others are on ridges. It’s a multiscale network that can capture the high-resolution aspects of the flow, as well as the broader mesoscale flows.”

    Many of the sensors send data every 10 minutes. Because data will be collected over an 18-month period that began in October 2015 and ends in March 2017, this steady stream will ultimately amount to about half a petabyte. The observational data is initially sent to Pacific Northwest National Laboratory, where it is stored until it is used to test model parameters on Mira at Argonne.

    The WFIP 2 research team needed Mira’s highly parallel architecture to simulate an ensemble of about 20 models with varied parameterizations. ALCF researchers Ray Loy and Ramesh Balakrishnan worked with the team to optimize the HRRR configuration for Mira’s architecture and craft a strategy that allowed them to run the necessary ensemble jobs.

    “We wanted to run on Mira because ALCF has experience using HRRR for climate simulations and running ensemble jobs that would allow us to compare the models’ physical parameters,” said Rao Kotamarthi, chief scientist and department head of Argonne’s Climate and Atmospheric Science Department. “The ALCF team helped to scale the model to Mira and instructed us on how to bundle jobs so they avoid interrupting workflow, which is important for a project that often has new data coming in.”

    The ensemble approach allowed the team to create case studies that are used to evaluate how each simulation compares to observational data.

    “We pick certain case studies where the model performs very poorly, and we go back and change the physics in the model until it improves, and we keep doing that for each case study so that we have significant improvement across many scenarios,” Olson said.
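
    A minimal sketch of what scoring each ensemble member against observations can look like (the numbers and member names below are hypothetical; the real workflow compares HRRR output with the WFIP 2 sensor archive):

    from math import sqrt

    def rmse(forecast, observed):
        # Root-mean-square error between paired forecast and observed values.
        return sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(observed))

    def bias(forecast, observed):
        # Mean forecast-minus-observation difference (positive means over-forecasting).
        return sum(f - o for f, o in zip(forecast, observed)) / len(observed)

    # Toy data: observed 80-m wind speeds (m/s) and three ensemble members with
    # different physics parameterizations; all values invented for illustration.
    observed = [6.2, 7.8, 9.1, 10.4, 8.7, 5.9]
    ensemble = {
        "member_A": [5.8, 7.1, 9.9, 11.2, 8.0, 6.4],
        "member_B": [6.5, 8.0, 8.8, 10.1, 9.0, 5.7],
        "member_C": [7.9, 9.2, 10.8, 12.5, 10.1, 7.3],
    }

    for name, forecast in ensemble.items():
        print(f"{name}: RMSE = {rmse(forecast, observed):.2f} m/s, bias = {bias(forecast, observed):+.2f} m/s")

    Members whose errors stay large across many such case studies point to physics parameterizations that need revisiting, which is the loop Olson describes.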

    At the end of the field data collection, the team will simulate an entire year of weather conditions with an emphasis on wind in the Columbia River Gorge region using the control model—the 3-km HRRR model before any modifications were made—and a modified model with the improved physical parameterizations.

    “That way, we’ll be able to get a good measure of how much has improved overall,” Olson said.

    Computing time on Mira was awarded through the ASCR Leadership Computing Challenge (ALCC). Collaborating institutions include DOE’s Wind Energy Technologies Office, NOAA, Argonne, Pacific Northwest National Laboratory, Lawrence Livermore National Laboratory, the National Renewable Energy Laboratory, the University of Colorado, the University of Notre Dame, Texas Tech University, and Vaisala.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     