Tagged: Supercomputing

  • richardmitnick 4:33 pm on August 25, 2014
    Tags: Supercomputing

    From Livermore Lab: “Calculating conditions at the birth of the universe” 


    Lawrence Livermore National Laboratory

    08/25/2014
    Anne M Stark, LLNL, (925) 422-9799, stark8@llnl.gov

    Using a calculation originally proposed seven years ago to be performed on a petaflop computer, Lawrence Livermore researchers computed conditions that simulate the birth of the universe.

    When the universe was less than one microsecond old and hotter than one trillion degrees, it transformed from a plasma of quarks and gluons into bound states of quarks – also known as protons and neutrons, the fundamental building blocks of ordinary matter that make up most of the visible universe.

    The theory of quantum chromodynamics (QCD), which governs the strong nuclear force, predicts that this transition should happen when such conditions occur.

    In a paper appearing in the Aug. 18 edition of Physical Review Letters, Lawrence Livermore scientists Chris Schroeder, Ron Soltz and Pavlos Vranas calculated the properties of the QCD phase transition using LLNL’s Vulcan, a five-petaflop machine. This work was done within the LLNL-led HotQCD Collaboration, involving Los Alamos National Laboratory, the Institute for Nuclear Theory, Columbia University, Central China Normal University, Brookhaven National Laboratory and Universität Bielefeld in Germany.

    Vulcan, a five-petaflop IBM Blue Gene/Q supercomputer

    This is the first time that this calculation has been performed in a way that preserves a fundamental symmetry of QCD, in which right- and left-handed quarks (a property scientists call chirality) can be interchanged without altering the equations. These symmetries are easy to describe, but they are computationally very challenging to implement.
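
    As a rough illustration (not drawn from the paper itself): for massless quarks the QCD Lagrangian splits into independent left- and right-handed pieces, so swapping the two chiralities leaves the equations unchanged.

```latex
% Illustrative only: chiral decomposition of a quark field and the massless quark Lagrangian
\psi_{L} = \tfrac{1}{2}(1-\gamma_{5})\,\psi , \qquad
\psi_{R} = \tfrac{1}{2}(1+\gamma_{5})\,\psi , \qquad
\mathcal{L}_{\text{quark}} = \bar{\psi}_{L}\, i\gamma^{\mu} D_{\mu}\, \psi_{L}
                           + \bar{\psi}_{R}\, i\gamma^{\mu} D_{\mu}\, \psi_{R},
% which is unchanged under the interchange psi_L <-> psi_R
```

    A quark mass term couples the two chiralities, which is part of why preserving this symmetry on a space-time lattice is so computationally demanding.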

    “But with the invention of petaflop computing, we were able to calculate the properties with a theory proposed years ago when petaflop-scale computers weren’t even around yet,” Soltz said.

    The research has implications for our understanding of the evolution of the universe during the first microsecond after the Big Bang, when the universe expanded and cooled to a temperature below 10 trillion degrees.

    Below this temperature, quarks and gluons are confined, existing only in hadronic bound states such as the familiar proton and neutron. Above this temperature, these bound states cease to exist and quarks and gluons instead form a plasma, which is strongly coupled near the transition and more and more weakly coupled as the temperature increases.

    “The result provides an important validation of our understanding of the strong interaction at high temperatures, and aids us in our interpretation of data collected at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory and the Large Hadron Collider at CERN,” Soltz said.

    RHIC at Brookhaven

    The LHC tunnel at CERN

    Soltz and Pavlos Vranas, along with former colleague Thomas Luu, wrote an essay predicting that if there were powerful enough computers, the QCD phase transition could be calculated. The essay was published in Computing in Science & Engineering in 2007, “back when a petaflop really did seem like a lot of computing,” Soltz said. “With the invention of petaflop computers, the calculation took us several months to complete, but the 2007 estimate turned out to be pretty close.”

    The extremely computationally intensive calculation was made possible through a Grand Challenge allocation of time on the Vulcan Blue Gene/Q Supercomputer at Lawrence Livermore National Laboratory.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.
     
  • richardmitnick 10:00 pm on August 19, 2014
    Tags: Supercomputing

    From Livermore Lab: “New project is the ACME of addressing climate change” 


    Lawrence Livermore National Laboratory

    08/19/2014
    Anne M Stark, LLNL, (925) 422-9799, stark8@llnl.gov

    High performance computing (HPC) will be used to develop and apply the most complete climate and Earth system model to address the most challenging and demanding climate change issues.

    Eight national laboratories, including Lawrence Livermore, are combining forces with the National Center for Atmospheric Research, four academic institutions and one private-sector company in the new effort. Other participating national laboratories include Argonne, Brookhaven, Lawrence Berkeley, Los Alamos, Oak Ridge, Pacific Northwest and Sandia.

    The project, called Accelerated Climate Modeling for Energy, or ACME, is designed to accelerate the development and application of fully coupled, state-of-the-science Earth system models for scientific and energy applications. The plan is to exploit advanced software and new high performance computing machines as they become available.


    The initial focus will be on three climate change science drivers and corresponding questions to be answered during the project’s initial phase:

    Water Cycle: How do the hydrological cycle and water resources interact with the climate system on local to global scales? How will more realistic portrayals of features important to the water cycle (resolution, clouds, aerosols, snowpack, river routing, land use) affect river flow and associated freshwater supplies at the watershed scale?
    Biogeochemistry: How do biogeochemical cycles interact with global climate change? How do carbon, nitrogen and phosphorus cycles regulate climate system feedbacks, and how sensitive are these feedbacks to model structural uncertainty?
    Cryosphere Systems: How do rapid changes in cryospheric systems, or areas of the earth where water exists as ice or snow, interact with the climate system? Could a dynamical instability in the Antarctic Ice Sheet be triggered within the next 40 years?

    Over a planned 10-year span, the project aim is to conduct simulations and modeling on the most sophisticated HPC machines as they become available, i.e., 100-plus petaflop machines and eventually exascale supercomputers. The team initially will use U.S. Department of Energy (DOE) Office of Science Leadership Computing Facilities at Oak Ridge and Argonne national laboratories.

    “The grand challenge simulations are not yet possible with current model and computing capabilities,” said David Bader, LLNL atmospheric scientist and chair of the ACME council. “But we developed a set of achievable experiments that make major advances toward answering the grand challenge questions using a modeling system, which we can construct to run on leading computing architectures over the next three years.”

    To address the water cycle, the project plan (link below) hypothesized that: 1) changes in river flow over the last 40 years have been dominated primarily by land management, water management and climate change associated with aerosol forcing; and 2) during the next 40 years, greenhouse gas (GHG) emissions in a business-as-usual scenario may drive changes to river flow.

    “A goal of ACME is to simulate the changes in the hydrological cycle, with a specific focus on precipitation and surface water in orographically complex regions such as the western United States and the headwaters of the Amazon,” the report states.

    To address biogeochemistry, ACME researchers will examine how more complete treatments of nutrient cycles affect carbon-climate system feedbacks, with a focus on tropical systems, and investigate the influence of alternative model structures for below-ground reaction networks on global-scale biogeochemistry-climate feedbacks.

    For cryosphere, the team will examine the near-term risks of initiating the dynamic instability and onset of the collapse of the Antarctic Ice Sheet due to rapid melting by warming waters adjacent to the ice sheet grounding lines.

    The experiment would be the first fully-coupled global simulation to include dynamic ice shelf-ocean interactions for addressing the potential instability associated with grounding line dynamics in marine ice sheets around Antarctica.

    Other LLNL researchers involved in the program leadership are atmospheric scientist Peter Caldwell (co-leader of the atmospheric model and coupled model task teams) and computer scientists Dean Williams (council member and workflow task team leader) and Renata McCoy (project engineer).

    Initial funding for the effort has been provided by DOE’s Office of Science.

    More information can be found in the Accelerated Climate Modeling For Energy: Project Strategy and Initial Implementation Plan.

    See the full article here.


     
  • richardmitnick 12:36 pm on July 21, 2014
    Tags: Supercomputing

    From Oak Ridge Lab: “‘Engine of Explosion’ Discovered at OLCF Now Observed in Nearby Supernova Remnant”


    Oak Ridge National Laboratory

    May 6, 2014
    Katie Elyce Jones

    Data gathered with a high-energy x-ray telescope support the SASI model—a decade later

    Back in 2003, researchers using the Oak Ridge Leadership Computing Facility’s (OLCF’s) first supercomputer, Phoenix, started out with a bang. Astrophysicists studying core-collapse [Type II] supernovae—dying massive stars that violently explode after running out of fuel—asked themselves what mechanism triggers the explosion and the fusion chain reaction that releases all the elements found in the universe, including those that make up the matter around us.

    “This is really one of the most important problems in science because supernovae give us all the elements in nature,” said Tony Mezzacappa of the University of Tennessee–Knoxville.

    Leading up to the 2003 simulations on Phoenix, one-dimensional supernova models simulated a shock wave that pushes stellar material outward, expanding to a certain radius before ultimately succumbing to gravity. The simulations did not predict that stellar material would push beyond the shock wave radius; instead, infalling matter from the fringes of the expanding star tamped down the anticipated explosion. Yet humans have recorded supernova explosions throughout history.

    “There have been a lot of supernovae observations,” Mezzacappa said. “But these observations can’t really provide information on the engine of explosion because you need to observe what is emitted from deep within the supernova, such as gravitational waves or neutrinos. It’s hard to do this from Earth.”

    Then simulations on Phoenix offered a solution: the SASI, or standing accretion shock instability, a sloshing of stellar material that destabilizes the expanding shock and helps lead to an explosion.

    “Once we discovered the SASI, it became very much a part of core-collapse supernova theory,” Mezzacappa said. “People feel it is an important missing ingredient.”

    The SASI provided a logical answer supported by other validated physics models, but it was still theoretical because it had only been demonstrated computationally.

    Now, more than a decade later, researchers mapping radiation signatures from the Cassiopeia A supernova with NASA’s NuSTAR high-energy x-ray telescope array have published observational evidence that supports the SASI model.

    NASA’s NuSTAR x-ray telescope

    Cas A
    A false-color image of Cassiopeia A using observations from both the Hubble and Spitzer telescopes as well as the Chandra X-ray Observatory (cropped).
    Courtesy NASA/JPL-Caltech

    “What they’re seeing are x-rays that come from the radioactive decay of Titanium-44 in Cas A,” Mezzacappa said.

    Because Cassiopeia A is only 11,000 light-years away within the Milky Way galaxy (relatively nearby by astronomical standards), NuSTAR is capable of detecting Ti-44 located deep in the supernova ejecta. Mapping the radiative signature of this titanium isotope provides information on the supernova’s engine of explosion.
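
    For context, using nothing more than the standard radioactive-decay law (not a calculation from the Nature letter): Ti-44 has a half-life of roughly 60 years, so a remnant a few centuries old still glows in x-rays wherever the titanium was synthesized.

```latex
% Standard decay law; the ~60-year half-life of Ti-44 sets the relevant timescale
N(t) = N_{0}\, e^{-\lambda t}, \qquad
\lambda = \frac{\ln 2}{t_{1/2}}, \qquad
t_{1/2}\!\left(^{44}\mathrm{Ti}\right) \approx 60~\text{yr},
```

    so the x-ray activity \(A(t) = \lambda N(t)\) that NuSTAR maps still traces where the titanium was produced in the explosion.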

    “The distribution of titanium is what suggests that the supernova ‘sloshes’ before it explodes, like the SASI predicts,” Mezzacappa said.

    This is a rare example of simulation predicting a physical phenomenon before it is observed experimentally.

    “Usually it’s the other way around. You observe something experimentally then try to model it,” said the OLCF’s Bronson Messer. “The SASI was discovered computationally and has now been confirmed observationally.”

    The authors of the Nature letter that discusses the NuSTAR results cite Mezzacappa’s 2003 paper introducing the SASI in The Astrophysical Journal, which was coauthored by John Blondin and Christine DeMarino, as a likely model to describe the Ti-44 distribution.

    Despite observational support for the SASI, researchers are uncertain whether the SASI is entirely responsible for triggering a supernova explosion or if it is just part of the explanation. To further explore the model, Mezzacappa’s team, including the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) project’s principal investigator Eric Lentz, are taking supernovae simulations to the next level on the OLCF’s 27-petaflop Titan supercomputer located at Oak Ridge National Laboratory.

    Titan at ORNL

    “The role of the SASI in generating explosion and whether or not the models are sufficiently complete to predict the course of explosion is the important question now,” Mezzacappa said. “The NuSTAR observation suggests it does aid in generating the explosion.”

    Although the terascale runs that predicted the SASI in 2003 were in three dimensions, they did not include much of the physics that can now be solved on Titan. Today, the team is using 85 million core hours and scaling to more than 60,000 cores to simulate a supernova in three dimensions with a fully physics-based model. The petascale Titan simulation, which will be completed later this year, could be the most revealing supernova explosion yet—inside our solar system anyway.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    See the full article here.


     
  • richardmitnick 2:58 pm on June 25, 2014
    Tags: Supercomputing

    From Fermilab: “Supercomputers help answer the big questions about the universe” 


    Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    Wednesday, June 25, 2014
    Jim Simone

    The proton is a complicated blob. It is composed of three particles, called quarks, which are surrounded by a roiling sea of gluons that “glue” the quarks together. In addition to interacting with its surrounding particles, each gluon can also turn itself temporarily into a quark-antiquark pair and then back into a gluon.

    A proton: two up quarks and one down quark

    A gluon, after Feynman

    This tremendously complicated subatomic dance affects measurements that are crucial to answering important questions about the universe, such as: What is the origin of mass in the universe? Why do the elementary particles we know come in three generations? Why is there so much more matter than antimatter in the universe?

    A large group of theoretical physicists at U.S. universities and DOE national laboratories, known as the USQCD collaboration, aims to help experimenters solve the mysteries of the universe by computing the effects of this tremendously complicated dance of quarks and gluons on experimental measurements. The collaboration members use powerful computers to solve the complex equations of the theory of quantum chromodynamics, or QCD, which govern the behavior of quarks and gluons.

    The USQCD computing needs are met through a combination of INCITE resources at the DOE Leadership Class Facilities at Argonne and Oak Ridge national laboratories; NSF facilities such as the NCSA Blue Waters; a small Blue Gene/Q supercomputer at Brookhaven National Laboratory; and dedicated computer clusters housed at Fermilab and Jefferson Lab. USQCD also exploits floating point accelerators such as Graphic Processing Units (GPUs) and Intel’s Xeon Phi architecture.

    With funding from the DOE Office of Science SciDAC program, the USQCD collaboration coordinates and oversees the development of community software that benefits all lattice QCD groups, enabling scientists to make the most efficient use of the latest supercomputer architectures and GPU clusters. Efficiency gains are achieved through new computing algorithms and techniques, such as communication avoidance, data compression and the use of mixed precision to represent numbers.
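
    As a loose illustration of the mixed-precision idea (a generic dense-matrix sketch, not USQCD's actual solvers, which work with a lattice Dirac operator): do the expensive solves in single precision and correct the answer with double-precision residuals.

```python
import numpy as np

def mixed_precision_solve(A, b, refinements=5):
    """Iterative refinement: cheap float32 solves, float64 residual corrections."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(refinements):
        r = b - A @ x                                    # residual in full (double) precision
        dx = np.linalg.solve(A32, r.astype(np.float32))  # correction in reduced precision
        x += dx.astype(np.float64)
    return x

# toy usage on a well-conditioned random system
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200)) + 200.0 * np.eye(200)
b = rng.normal(size=200)
print(np.max(np.abs(A @ mixed_precision_solve(A, b) - b)))  # residual close to double precision
```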

    The nature of lattice QCD calculations is very conducive to cooperation among collaborations, even among groups that focus on different scientific applications of QCD effects. Why? The most time-consuming and expensive computing in lattice QCD—the generation of gauge configuration files—is the basis for all lattice QCD calculations. (Gauge configurations represent the sea of gluons and virtual quarks that make up the QCD vacuum.) They are most efficiently generated on the largest leadership-class supercomputers. The MILC collaboration, a subgroup of the larger USQCD collaboration, is well known for the calculation of state-of-the-art gauge configurations and freely shares them with researchers worldwide.
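
    To make the workflow concrete (generate configurations once, then reuse them for many measurements), here is a deliberately tiny sketch using a toy two-dimensional compact U(1) lattice theory with a Metropolis update; real QCD ensembles use SU(3) link matrices in four dimensions and far more sophisticated update algorithms.

```python
import numpy as np

def plaquette(theta, x, y, L):
    """Plaquette angle at site (x, y) for link angles theta[mu, x, y] (mu = 0: x-link, 1: y-link)."""
    return (theta[0, x, y] + theta[1, (x + 1) % L, y]
            - theta[0, x, (y + 1) % L] - theta[1, x, y])

def action(theta, beta, L):
    """Wilson-style action (up to a constant): S = -beta * sum of cos(plaquette angle)."""
    return -beta * sum(np.cos(plaquette(theta, x, y, L))
                       for x in range(L) for y in range(L))

def generate_configs(L=6, beta=2.0, n_configs=5, updates_per_config=2000, seed=0):
    """Metropolis sampling of toy gauge configurations (slow textbook version: full-action recompute)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros((2, L, L))
    s = action(theta, beta, L)
    configs = []
    for _ in range(n_configs):
        for _ in range(updates_per_config):
            mu, x, y = rng.integers(2), rng.integers(L), rng.integers(L)
            old = theta[mu, x, y]
            theta[mu, x, y] = old + rng.uniform(-0.5, 0.5)
            s_new = action(theta, beta, L)
            if rng.random() < np.exp(min(0.0, s - s_new)):
                s = s_new                      # accept the proposed link change
            else:
                theta[mu, x, y] = old          # reject: restore the old link
        configs.append(theta.copy())           # store the configuration for later reuse
    return configs

# reuse the stored ensemble for a "measurement": the average plaquette
configs = generate_configs()
avg_plaq = np.mean([np.cos(plaquette(c, x, y, 6))
                    for c in configs for x in range(6) for y in range(6)])
print(f"average plaquette ~ {avg_plaq:.3f}")
```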

    Specific predictions require more specialized computations and rely on the gauge configurations as input. These calculations are usually performed on dedicated computer hardware at the labs, such as the clusters at Fermilab and Jefferson Lab and the small Blue Gene/Q at BNL, which are funded by the DOE Office of Science’s LQCD-ext Project for hardware infrastructure.

    With the powerful human and computer resources of USQCD, particle physicists working on many different experiments—from measurements at the Large Hadron Collider to neutrino experiments at Fermilab—have a chance to get to the bottom of the universe’s most pressing questions.

    See the full article here.


    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics.



     
  • richardmitnick 8:23 pm on June 23, 2014
    Tags: Supercomputing

    From DOE Pulse: “Supercomputer exposes enzyme’s secrets” 


    June 23, 2014
    Heather Lammers, 303.275.4084,
    heather.lammers@nrel.gov

    Thanks to newer and faster supercomputers, today’s computer simulations are opening hidden vistas to researchers in all areas of science. These powerful machines are used for everything from understanding how proteins work to answering questions about how galaxies began. Sometimes the data they create manage to surprise the very researchers staring back at the computer screen—that’s what recently happened to a researcher at DOE’s National Renewable Energy Laboratory (NREL).

    “What I saw was completely unexpected,” NREL engineer Gregg Beckham said.

    NREL biochemist Michael Resch (left) and NREL engineer Gregg Beckham discuss vials containing an enzymatic digestion assay of cellulose.
    Photo by Dennis Schroeder, NREL

    What startled Beckham was a computer simulation of an enzyme from the fungus Trichoderma reesei (Cel7A). The simulation showed that a part of an enzyme, the linker, may play a necessary role in breaking down biomass into the sugars used to make alternative transportation fuels.

    “A couple of years ago we decided to run a really long—well, really long being a microsecond—simulation of the entire enzyme on the surface of cellulose,” Beckham said. “We noticed the linker section of the enzyme started to bind to the cellulose—in fact, the entire linker binds to the surface of the cellulose.”

    The enzymes that the NREL researchers are examining have several different components that work together to break down biomass. The enzymes have a catalytic domain—which is the primary part of the enzyme that breaks down the material into the needed sugars. There is also a binding module, the sticky part that attaches the cellulose to the catalytic domain. The catalytic domain and the binding module are connected to each other by a linker.

    “For decades, many people have thought these linkers are quite boring,” Beckham said. “Indeed, we predicted that linkers alone act like wet noodles—they are really flexible, and unlike the catalytic domain or the binding module, they didn’t have a known, well-defined structure. But the computer simulation suggests that the linker has some function other than connecting the binding module to the catalytic domain; namely, it may have some cellulose binding function as well.”

    Cellulose is a long linear chain of glucose that makes up the main part of plant cell walls, but the bonds between the glucose molecules make it very tough to break apart. In fact, cellulose in fossilized leaves can remain intact for millions of years, but enzymes have evolved to break down this biomass into sugars by threading a single strand of cellulose up into the enzymes’ catalytic domain and cleaving the bonds that connect glucose molecules together. Scientists are interested in the enzymes in fungi like Trichoderma reesei because they are quite effective at breaking down biomass—and fungi can make a lot of protein, which is also important for biomass conversion.

    To make an alternative fuel like cellulosic ethanol or drop-in hydrocarbon fuels, biomass is pretreated with acid, hot water, or some other chemicals and heat to open up the plant cell wall. Next, enzymes are added to the biomass to break down the cellulose into glucose, which is then fermented and converted into fuel.

    While Beckham and his colleagues were excited by what the simulation showed, there was also some trepidation.

    “At first we didn’t believe it, and we thought that it must be wrong, so a colleague, Christina Payne [formerly at NREL, now an assistant professor in chemical and materials engineering at the University of Kentucky], ran another simulation on the second most abundant enzyme in Trichoderma reesei (Cel6A),” Beckham explained. “And we found exactly the same thing.

    “Many research teams have been engineering catalytic domains and binding modules, but this result perhaps suggests that we should also consider the functions of linkers. We now know they are important for binding, and we know binding is important for activity—but many unanswered questions remain that the team is working on now.”

    The NREL research team experimentally verified the computational predictions by working with researchers at the University of Colorado Boulder (CU Boulder), Swedish University of Agricultural Sciences, and Ghent University in Belgium. Using proteins made and characterized by the international project team, NREL’s Michael Resch showed that by measuring the binding affinity of the binding module and then comparing it to the binding module with the linker added, the linker imparted an order of magnitude in binding affinity to cellulose. These results were published in an article in the Proceedings of the National Academy of Sciences (PNAS). In addition to Beckham, Payne, and Resch, co-authors on the study include: Liqun Chen and Zhongping Tan (CU Boulder); Michael F. Crowley, Michael E. Himmel, and Larry E. Taylor II (NREL); Mats Sandgren and Jerry Ståhlberg (Swedish University of Agricultural Sciences); and Ingeborg Stals (University College Ghent).
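
    For scale (textbook thermodynamics, not a result from the PNAS paper), an order-of-magnitude change in binding affinity corresponds to a fairly modest change in binding free energy:

```latex
% Relation between the dissociation constant and the binding free energy
\Delta G_{\text{bind}} = RT \ln K_{d}, \qquad
\Delta\Delta G\ (\text{for a tenfold tighter } K_{d}) = RT \ln 10 \approx 1.4\ \text{kcal/mol at } T \approx 298\ \text{K}.
```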

    “In terms of fuels, if you make even a small improvement in these enzymes, you could then lower the enzyme loadings. On a commodities scale, there is potential for dramatic savings that will help make renewable fuels competitive with fuels derived from fossil resources,” Beckham said.

    According to Beckham, improving these enzymes is very challenging but incredibly important for cost-effective biofuels production, which the Energy Department has long recognized. “We are still unraveling a lot of the basic mechanisms about how they work. For instance, our recent paper suggests that this might be another facet of how these enzymes work and another target for improving them.”

    The research work at NREL is funded by the Energy Department’s Bioenergy Technologies Office, and the computer time was provided by both the Energy Department and the National Science Foundation (NSF). The original simulation was run on a supercomputer named Athena at the National Institute for Computational Sciences, part of the NSF Extreme Science and Engineering Discovery Environment (XSEDE). The Energy Department’s Red Mesa supercomputer at Sandia National Laboratories was used for the subsequent simulations.

    See the full article here.

    DOE Pulse highlights work being done at the Department of Energy’s national laboratories. DOE’s laboratories house world-class facilities where more than 30,000 scientists and engineers perform cutting-edge research spanning DOE’s science, energy, national security and environmental quality missions. DOE Pulse is distributed twice each month.


     
  • richardmitnick 12:48 pm on June 18, 2014
    Tags: Supercomputing

    From Princeton: “Familiar yet strange: Water’s ‘split personality’ revealed by computer model” 

    Princeton University

    June 18, 2014
    Catherine Zandonella, Office of the Dean for Research

    Seemingly ordinary, water has quite puzzling behavior. Why, for example, does ice float when most liquids crystallize into dense solids that sink?

    Using a computer model to explore water as it freezes, a team at Princeton University has found that water’s weird behaviors may arise from a sort of split personality: at very cold temperatures and above a certain pressure, water may spontaneously split into two liquid forms.

    The team’s findings were reported in the journal Nature.

    “Our results suggest that at low enough temperatures water can coexist as two different liquid phases of different densities,” said Pablo Debenedetti, the Class of 1950 Professor in Engineering and Applied Science and Princeton’s dean for research, and a professor of chemical and biological engineering.

    The two forms coexist a bit like oil and vinegar in salad dressing, except that the water separates from itself rather than from a different liquid. “Some of the molecules want to go into one phase and some of them want to go into the other phase,” said Jeremy Palmer, a postdoctoral researcher in the Debenedetti lab.

    The finding that water has this dual nature, if it can be replicated in experiments, could lead to better understanding of how water behaves at the cold temperatures found in high-altitude clouds where liquid water can exist below the freezing point in a “supercooled” state before forming hail or snow, Debenedetti said. Understanding how water behaves in clouds could improve the predictive ability of current weather and climate models, he said.

    Pressure–temperature phase diagram, including an illustration of the liquid–liquid transition line proposed for several polyamorphous materials. This liquid–liquid phase transition would be a first-order, discontinuous transition between low- and high-density liquids (labelled 1 and 2). This is analogous to polymorphism of crystalline materials, where different stable crystalline states (solid 1, 2 in diagram) of the same substance can exist (e.g. diamond and graphite are two polymorphs of carbon). Like the ordinary liquid–gas transition, the liquid–liquid transition is expected to end in a critical point. At temperatures beyond these critical points there is a continuous range of fluid states, i.e. the distinction between liquids and gases is lost. If crystallisation is avoided the liquid–liquid transition can be extended into the metastable supercooled liquid regime.

    The new finding serves as evidence for the “liquid-liquid transition” hypothesis, first suggested in 1992 by Eugene Stanley and co-workers at Boston University and the subject of recent debate. The hypothesis states that the existence of two forms of water could explain many of water’s odd properties — not just floating ice but also water’s high capacity to absorb heat and the fact that water becomes more compressible as it gets colder.

    Princeton University researchers conducted computer simulations to explore what happens to water as it is cooled to temperatures below freezing and found that the supercooled liquid separated into two liquids with different densities. The finding agrees with a two-decade-old hypothesis to explain water’s peculiar behaviors, such as becoming more compressible and less dense as it is cooled. The X axis above indicates the range of crystallinity (Q6) from liquid water (less than 0.1) to ice (greater than 0.5) plotted against density (ρ) on the Y axis. The figure is a two-dimensional projection of water’s calculated “free energy surface,” a measure of the relative stability of different phases, with orange indicating high free energy and blue indicating low free energy. The two large circles in the orange region reveal a high-density liquid at 1.15 g/cm3 and low-density liquid at 0.90 g/cm3. The blue area represents cubic ice, which in this model forms at a density of about 0.88 g/cm3. (Image courtesy of Jeremy Palmer)

    At cold temperatures, the molecules in most liquids slow to a sedate pace, eventually settling into a dense and orderly solid that sinks if placed in liquid. Ice, however, floats in water due to the unusual behavior of its molecules, which as they get colder begin to push away from each other. The result is regions of lower density — that is, regions with fewer molecules crammed into a given volume — amid other regions of higher density. As the temperature falls further, the low-density regions win out, becoming so prevalent that they take over the mixture and freeze into a solid that is less dense than the original liquid.

    The work by the Princeton team suggests that these low-density and high-density regions are remnants of the two liquid phases that can coexist in a fragile, or “metastable” state, at very low temperatures and high pressures. “The existence of these two forms could provide a unifying theory for how water behaves at temperatures ranging from those we experience in everyday life all the way to the supercooled regime,” Palmer said.

    Since the proposal of the liquid-liquid transition hypothesis, researchers have argued over whether it really describes how water behaves. Experiments would settle the debate, but capturing the short-lived, two-liquid state at such cold temperatures and under pressure has proved challenging to accomplish in the lab.

    Instead, the Princeton researchers used supercomputers to simulate the behavior of water molecules — the two hydrogens and the oxygen that make up “H2O” — as the temperature dipped below the freezing point.

    The team used computer code to represent several hundred water molecules confined to a box, surrounded by an infinite number of similar boxes. As they lowered the temperature in this virtual world, the computer tracked how the molecules behaved.
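
    The box “surrounded by an infinite number of similar boxes” is the standard periodic-boundary trick of molecular simulation. A minimal sketch of the bookkeeping (generic code, not the Princeton group’s):

```python
import numpy as np

def minimum_image(dr, box):
    """Map a displacement vector to its nearest periodic image (minimum-image convention)."""
    return dr - box * np.round(dr / box)

def pair_distance(r_i, r_j, box):
    """Distance between two particles in a cubic box of side `box` with periodic boundaries."""
    return np.linalg.norm(minimum_image(r_i - r_j, box))

# two molecules near opposite faces of a box of side 2.0 are actually close neighbors
print(pair_distance(np.array([0.05, 1.0, 1.0]), np.array([1.95, 1.0, 1.0]), box=2.0))  # ~0.1
```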

    The team found that under certain conditions — about minus 45 degrees Celsius and about 2,400-times normal atmospheric pressure — the virtual water molecules separated into two liquids that differed in density.

    The pattern of molecules in each liquid also was different, Palmer said. Although most other liquids are a jumbled mix of molecules, water has a fair amount of order to it. The molecules link to their neighbors via hydrogen bonds, which form between the oxygen of one molecule and a hydrogen of another. These molecules can link — and later unlink — in a constantly changing network. On average, each H2O links to four other molecules in what is known as a tetrahedral arrangement.

    The researchers found that the molecules in the low-density liquid also contained tetrahedral order, but that the high-density liquid was different. “In the high-density liquid, a fifth neighbor molecule was trying to squeeze into the pattern,” Palmer said.
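
    One common way to put a number on how tetrahedral a molecule’s neighborhood is, a standard order parameter in the water literature (the paper’s own analysis may differ in detail), uses the angles between the four nearest neighbors:

```latex
% Tetrahedral order parameter for a central oxygen and its four nearest-neighbor oxygens
q \;=\; 1 \;-\; \frac{3}{8}\sum_{j=1}^{3}\,\sum_{k=j+1}^{4}
        \left(\cos\psi_{jk} + \tfrac{1}{3}\right)^{2},
```

    where \(\psi_{jk}\) is the angle that neighbors \(j\) and \(k\) subtend at the central oxygen; \(q = 1\) for a perfect tetrahedron, and a crowding fifth neighbor in the high-density liquid pulls the distribution of \(q\) downward.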

    Normal ice (left) contains water molecules linked into ring-like structures via hydrogen bonds (dashed blue lines) between the oxygen atoms (red beads) and hydrogen atoms (white beads) of neighboring molecules, with six water molecules per ring. Each water molecule in ice also has four neighbors that form a tetrahedron (right), with a center molecule linked via hydrogen bonds to four neighboring molecules. The green lines indicate the edges of the tetrahedron. Water molecules in liquid water form distorted tetrahedrons and ring structures that can contain more or less than six molecules per ring. (Image courtesy of Jeremy Palmer)

    The researchers also looked at another facet of the two liquids: the tendency of the water molecules to form rings via hydrogen bonds. Ice consists of six water molecules per ring. Calculations by Fausto Martelli, a postdoctoral research associate advised by Roberto Car, the Ralph W. *31 Dornte Professor in Chemistry, found that in this computer model the average number of molecules per ring decreased from about seven in the high-density liquid to just above six in the low-density liquid, but then climbed slightly before declining again to six molecules per ring as ice, suggesting that there is more to be discovered about how water molecules behave during supercooling.

    A better understanding of water’s behavior at supercooled temperatures could lead to improvements in modeling the effect of high-altitude clouds on climate, Debenedetti said. Because water droplets reflect and scatter the sunlight coming into the atmosphere, clouds play a role in whether the sun’s energy is reflected away from the planet or is able to enter the atmosphere and contribute to warming. Additionally, because water goes through a supercooled phase before forming hail or snow, such research may aid strategies for preventing ice from forming on airplane wings.

    “The research is a tour de force of computational physics and provides a splendid academic look at a very difficult problem and a scholarly controversy,” said C. Austen Angell, professor of chemistry and biochemistry at Arizona State University, who was not involved in the research. “Using a particular computer model, the Debenedetti group has provided strong support for one of the theories that can explain the outstanding properties of real water in the supercooled region.”

    In their computer simulations, the team used an updated version of a model noted for its ability to capture many of water’s unusual behaviors. The model was first developed in 1974 by Frank Stillinger, then at Bell Laboratories in Murray Hill, N.J., and now a senior chemist at Princeton, and by Aneesur Rahman, then at Argonne National Laboratory. The same model was used to develop the liquid-liquid transition hypothesis.

    Collectively, the work took several million computer hours, which would take several human lifetimes using a typical desktop computer, Palmer said. In addition to the initial simulations, the team verified the results using six calculation methods. The computations were performed at Princeton’s High-Performance Computing Research Center’s Terascale Infrastructure for Groundbreaking Research in Science and Engineering (TIGRESS).

    The team included Yang Liu, who earned her doctorate at Princeton in 2012, and Athanassios Panagiotopoulos, the Susan Dod Brown Professor of Chemical and Biological Engineering.

    Support for the research was provided by the National Science Foundation (CHE 1213343) and the U.S. Department of Energy (DE-SC0002128 and DE-SC0008626).

    The article, Metastable liquid-liquid transition in a molecular model of water, by Jeremy C. Palmer, Fausto Martelli, Yang Liu, Roberto Car, Athanassios Z. Panagiotopoulos and Pablo G. Debenedetti, appeared in the journal Nature.

    See the full article here.

    About Princeton: Overview

    Princeton University is a vibrant community of scholarship and learning that stands in the nation’s service and in the service of all nations. Chartered in 1746, Princeton is the fourth-oldest college in the United States. Princeton is an independent, coeducational, nondenominational institution that provides undergraduate and graduate instruction in the humanities, social sciences, natural sciences and engineering.

    As a world-renowned research university, Princeton seeks to achieve the highest levels of distinction in the discovery and transmission of knowledge and understanding. At the same time, Princeton is distinctive among research universities in its commitment to undergraduate teaching.

    Today, more than 1,100 faculty members instruct approximately 5,200 undergraduate students and 2,600 graduate students. The University’s generous financial aid program ensures that talented students from all economic backgrounds can afford a Princeton education.


     
  • richardmitnick 5:49 am on June 3, 2014
    Tags: Supercomputing

    From The Kavli Institute at Stanford: “Solving big questions requires big computation” 


    The Kavli Foundation

    Understanding the origins of our solar system, the future of our planet or humanity requires complex calculations run on high-power computers.

    A common thread among research efforts across Stanford’s many disciplines is the growing use of sophisticated algorithms, run by brute computing power, to solve big questions.

    In Earth sciences, computer models of climate change or carbon sequestration help drive policy decisions, and in medicine computation is helping unravel the complex relationship between our DNA and disease risk. Even in the social sciences, computation is being used to identify relationships between social networks and behaviors, work that could influence educational programs.


    “There’s really very little research that isn’t dependent on computing,” says Ann Arvin, vice provost and dean of research. Arvin helped support the recently opened Stanford Research Computing Center (SRCC) located at SLAC National Accelerator Laboratory, which expands the available research computing space at Stanford. The building’s green technology also reduces the energy used to cool the servers, lowering the environmental costs of carrying out research.

    “Everyone we’re hiring is computational, and not at a trivial level,” says Stanford Provost John Etchemendy, who provided an initial set of servers at the facility. “It is time that we have this facility to support those faculty.”

    Here are just a few examples of how Stanford faculty are putting computers to work to crack the mysteries of our origins, our planet and ourselves.

    Myths once explained our origins. Now we have algorithms.

    Our Origins

    Q: How did the universe form?

    For thousands of years, humans have looked to the night sky and created myths to explain the origins of the planets and stars. The real answer could soon come from the elegant computer simulations conducted by Tom Abel, an associate professor of physics at Stanford.

    Cosmologists face an ironic conundrum. By studying the current universe, we have gained a tremendous understanding of what occurred in the fractions of a second after the Big Bang, and how the first 400,000 years created the ingredients – gases, energy, etc. – that would eventually become the stars, planets and everything else. But we still don’t know what happened after those early years to create what we see in the night sky.

    “It’s the perfect problem for a physicist, because we know the initial conditions very well,” says Abel, who is also director of the Kavli Institute for Particle Astrophysics and Cosmology at SLAC. “If you know the laws of physics correctly, you should be able to exactly calculate what will happen next.”

    Easier said than done. Abel’s calculations must incorporate the laws of chemistry, atomic physics, gravity, how atoms and molecules radiate, gas and fluid dynamics and interactions, the forces associated with dark matter and so on. Those processes must then be simulated out over the course of hundreds of millions, and eventually billions, of years. Further complicating matters, a single galaxy holds one billion moving stars, and the simulation needs to consider their interactions in order to create an accurate prediction of how the universe came to be.
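
    To give a flavor of computing “what will happen next” from known initial conditions, here is a hedged toy sketch of just the gravitational piece: a kick-drift-kick (leapfrog) integrator for a handful of point masses. It ignores the chemistry, radiation and dark matter listed above, and Abel’s production codes are adaptive, cosmological and vastly more elaborate.

```python
import numpy as np

def accelerations(pos, mass, G=1.0, soft=0.05):
    """Pairwise softened gravitational accelerations for point masses."""
    d = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]          # d[i, j] = r_j - r_i
    r2 = (d ** 2).sum(axis=-1) + soft ** 2
    np.fill_diagonal(r2, np.inf)                               # exclude self-interaction
    return G * (d * (mass[np.newaxis, :, np.newaxis] / r2[:, :, np.newaxis] ** 1.5)).sum(axis=1)

def leapfrog(pos, vel, mass, dt=0.01, n_steps=1000):
    """Kick-drift-kick time integration; returns final positions and velocities."""
    pos, vel = pos.astype(float), vel.astype(float)
    a = accelerations(pos, mass)
    for _ in range(n_steps):
        vel += 0.5 * dt * a          # half kick
        pos += dt * vel              # drift
        a = accelerations(pos, mass)
        vel += 0.5 * dt * a          # half kick
    return pos, vel

# toy usage: three bodies with random initial conditions
rng = np.random.default_rng(42)
p, v = leapfrog(rng.normal(size=(3, 3)), 0.1 * rng.normal(size=(3, 3)), mass=np.ones(3))
print(p)
```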

    “Any of the advances we make will come from writing smarter algorithms,” Abel says. “The key point of the new facility is it will allow for rapid turnaround, which will allow us to constantly develop and refine and validate new algorithms. And this will help us understand how the very first things were formed in the universe.” —Bjorn Carey //

    Q: How did we evolve?

    The human genome is essentially a gigantic data set. Deep within each person’s six billion data points are minute variations that tell the story of human evolution, and provide clues to how scientists can combat modern-day diseases.

    To better understand the causes and consequences of these genetic variations, Jonathan Pritchard, a professor of genetics and of biology, writes computer programs that can investigate those links. “Genetic variation affects how cells work, both in healthy variation and in response to disease,” Pritchard says. How that variation displays itself – in appearance or how cells work – and whether natural selection favors those changes within a population drives evolution.

    Consider, for example, variation in the gene that codes for lactase, an enzyme that allows mammals to digest milk. Most mammals turn off the lactase gene after they’ve been weaned from their mother’s milk. In populations that have historically revolved around dairy farming, however, Pritchard’s algorithms have helped to elucidate signals of strong selection since the advent of agriculture for variants that keep the lactase gene active throughout life, enabling people to digest milk as adults. There has been similarly strong selection on skin pigmentation variants in non-Africans that allow better synthesis of vitamin D in regions where people are exposed to less sunlight.

    The algorithms and machine learning methods Pritchard used have the potential to yield powerful medical insights. Studying variations in how genes are regulated within a population could reveal how and where particular proteins bind to DNA, or which genes are turned on in different cell types­ – information that could help design novel therapies. These inquiries can generate hundreds of thousands of data sets and can only be parsed with up to tens of thousands of hours of computer work.

    Pritchard is bracing for an even bigger explosion of data; as genome sequencing technologies become less expensive, he expects the number of individually sequenced genomes to jump by as much as a hundredfold in the next few years. “Storing and analyzing vast amounts of data is a fundamental challenge that all genomics groups are dealing with,” says Pritchard, who is a member of Stanford Bio-X.

    “Having access to SRCC will make our inquiries go easier and more quickly, and we can move on faster to making the next discovery.” —Bjorn Carey //

    7 billion people live on Earth. Computers might help us survive ourselves.

    Our Planet
    Q: How can we predict future climates?

    There is no lab large enough to conduct experiments on the global-scale interactions between air, water and land that control Earth’s climate, so Stanford’s Noah Diffenbaugh and his students use supercomputers.

    Computer simulations reveal that if human emissions of greenhouse gases continue at their current pace, global warming over the next century is likely to occur faster than any global-scale shift recorded in the past 65 million years. This will increase the likelihood and severity of droughts, heat waves, heavy downpours and other extreme weather events.

    Climate scientists must incorporate into their predictions a growing number of data streams – including direct measurements as well as remote-sensing observations from satellites, aircraft-based sensors, and ground-based arrays.

    “That takes a lot of computing power, especially as we try to figure out how to use newer unstructured forms of data, such as from mobile sensors,” says Diffenbaugh, an associate professor of environmental Earth system science and a senior fellow at the Stanford Woods Institute for the Environment.

    Diffenbaugh’s team plans to use the increased computing resources available at SRCC to simulate air circulation patterns at the kilometer-scale over multiple decades. This has rarely been attempted before, and could help scientists answer questions such as how the recurring El Niño ocean circulation pattern interacts with elevated atmospheric carbon dioxide levels to affect the occurrence of tornadoes in the United States.

    “We plan to use the new computing cluster to run very large high-resolution simulations of climate over regions like the U.S. and India,” Diffenbaugh says. One of the most important benefits of SRCC, however, is not one that can be measured in computing power or cycles.

    “Perhaps most importantly, the new center is bringing together scholars from across campus who are using similar methodologies to figure out new solutions to existing problems, and hopefully to tackle new problems that we haven’t imagined yet.” —Ker Than //

    Q: How can we predict if climate solutions work?

    The capture and trapping of carbon dioxide gas deep underground is one of the most viable options for mitigating the effects of global warming, but only if we can understand how that stored gas interacts with the surrounding structures.

    Hamdi Tchelepi, a professor of energy resources engineering, uses supercomputers to study interactions between injected CO2 gas and the complex rock-fluid system in the subsurface.

    “Carbon sequestration is not a simple reversal of the technology that allows us to extract oil and gas. The physics involved is more complicated, ranging from the micro-scale of sand grains to extremely large geological formations that may extend hundreds of kilometers, and the timescales are on the order of centuries, not decades,” says Tchelepi, who is also the co-director of the Stanford Center for Computational Earth and Environmental Sciences (CEES).

    For example, modeling how a large plume of CO2 injected into the ground migrates and settles within the subsurface, and whether it might escape from the injection site to affect the air quality of a faraway city, can require the solving of tens of millions of equations simultaneously. SRCC will help augment the high computing power already available to Stanford Earth scientists and students through CEES, and will serve as a testing ground for custom algorithms developed by CEES researchers to simulate complex physical processes.
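
    To give a sense of what solving millions of coupled equations looks like in miniature, here is a generic sketch (not CEES code): a sparse one-dimensional diffusion system, a stand-in for subsurface pressure, solved with an iterative Krylov method.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 10_000                                          # toy size; field-scale models reach tens of millions
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")  # 1-D diffusion operator
q = np.zeros(n)
q[0] = 1.0                                          # "injection" at one end of the domain
p, info = cg(A, q)                                  # conjugate-gradient iterative solve
print("converged" if info == 0 else f"cg info = {info}", p[:3])
```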

    Tchelepi, who is also affiliated with the Precourt Institute for Energy, says people are often surprised to learn the role that supercomputing plays in modern Earth sciences, but Earth scientists use more computer resources than almost anybody except the defense industry, and their computing needs can influence the designs of next-generation hardware.

    “Earth science is about understanding the complex and ever-changing dynamics of flowing air, water, oil, gas, CO2 and heat. That’s a lot of physics, requiring extensive computing resources to model.” —Ker Than //

    Q: How can we build more efficient energy networks?

    When folks crank their air conditioners during a heat wave, you can almost hear the electric grid moan. The sudden, larger-than-average demand for electricity can stress electric plants, and energy providers scramble to redistribute the load, or ask industrial users to temporarily shut down. To handle those sudden spikes in use more efficiently, Ram Rajagopal, an assistant professor of civil and environmental engineering, used supercomputers to analyze the energy usage patterns of 200,000 anonymous households and businesses in Northern California and from that develop a model that could tune consumer demand and lead to a more flexible “smart grid.”

    Today, utility companies base forecasts on a 24-hour cycle that aggregates millions of households. Not surprisingly, power use peaks in the morning and evening, when people are at home. But when Rajagopal looked at 1.6 billion hourly data points, he found dramatic variations.

    Some households conformed to the norm and others didn’t. This forms the statistical underpinning for a new way to price and purchase power – by aggregating as few as a thousand customers into a unit with a predictable usage pattern. “If we want to thwart global warming we need to give this technology to communities,” says Rajagopal. Some consumers might want to pay whatever it costs to stay cool on hot days, others might conserve or defer demand to get price breaks. “I’m talking about neighborhood power that could be aligned to your beliefs,” says Rajagopal.
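
    The statistical idea behind aggregating customers into predictable units can be sketched with purely synthetic data (this is not Rajagopal’s model or dataset): summing many noisy household profiles shrinks the relative variability roughly as one over the square root of the group size.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24)
daily_shape = 1.0 + 0.6 * np.sin((hours - 7) / 24.0 * 2.0 * np.pi) ** 2   # stylized daily load curve (kW)

def relative_variability(n_households, n_days=365):
    """Day-to-day coefficient of variation of a group's summed hourly load (synthetic households)."""
    noise = rng.gamma(2.0, 0.5, size=(n_days, n_households, 24))   # mean 1, individually noisy
    group = (daily_shape * noise).sum(axis=1)                      # total load of the group, per day and hour
    return float((group.std(axis=0) / group.mean(axis=0)).mean())

for n in (1, 10, 100, 1000):
    print(f"{n:>5} households: CV ~ {relative_variability(n):.3f}")
# the coefficient of variation falls roughly as 1/sqrt(N): the aggregate becomes forecastable
```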

    Establishing a responsive smart grid and creative energy economies will become even more important as solar and wind energy – which face hourly supply limitations due to Mother Nature – become a larger slice of the energy pie. —Tom Abate //

    Know thyself. Let computation help.

    Ourselves

    Q: How does our DNA make us who we are?

    Our DNA is sometimes referred to as our body’s blueprint, but it’s really more of a sketch. Sure, it determines a lot of things, but so do the viruses and bacteria swarming our bodies, our encounters with environmental chemicals that lodge in our tissues and the chemical stew that ensues when our immune system responds to disease states.

    All of this taken together – our DNA, the chemicals, the antibodies coursing through our veins and so much more – determines our physical state at any point in time. And all that information makes for a lot of data if, like genetics professor Michael Snyder, you collected it 75 times over the course of four years.

    Snyder is a proponent of what he calls “personal omics profiling,” or the study of all that makes up our person, and he’s starting with himself. “What we’re collecting is a detailed molecular portrait of a person throughout time,” he says.

    So far, he’s turning out to be a pretty interesting test case. In one round of assessment he learned that he was becoming diabetic and was able to control the condition long before it would have been detected through a periodic medical exam.

    If personal omics profiling is going to go mainstream, serious computing will be required to tease out which of the myriad tests Snyder’s team currently runs give meaningful information and should be part of routine screening. Snyder’s sampling alone has already generated a half of a petabyte of data – roughly enough raw information to fill about a dishwasher-size rack of servers.

    Right now, that data and the computer power required to understand it reside on campus, but new servers will be located at SRCC. “I think you are going to see a lot more projects like this,” says Snyder, who is also a Stanford Bio-X affiliate and a member of the Stanford Cancer Center.

    “Computing is becoming increasingly important in medicine.” —Amy Adams //

    Q: How do we learn to read?

    A love letter, with all of its associated emotions, conveys its message with the same set of squiggly letters as a newspaper, novel or an instruction manual. How our brains learn to interpret a series of lines and curves into language that carries meaning or imparts knowledge is something psychology Professor Brian Wandell has been trying to understand.

    Wandell hopes to tease out differences between the brain scans of kids learning to read normally and those who are struggling, and use that information to find the right support for kids who need help. “As we acquire information about the outcome of different reading interventions we can go back to our database to understand whether there is some particular profile in the child that works better with intervention 1, and a second profile that works better with intervention 2,” says Wandell, a Stanford Bio-X member who is also the Isaac and Madeline Stein Family Professor and professor, by courtesy, of electrical engineering.

    His team developed a way of scanning kids’ brains with magnetic resonance imaging, then knitting the million collected samples together with complex algorithms that reveal how the nerve fibers connect different parts of the brain. “If you try to do this on your laptop, it will take half a day or more for each child,” he says. Instead, he uses powerful computers to reveal specific brain changes as kids learn to read.

    Wandell is associate director of the Stanford Neurosciences Institute, where he is leading the effort to develop a computing strategy – one that involves making use of SRCC rather than including computing space in their planned new building. He says one advantage of having faculty share computing space and systems is to speed scientific progress.

    “Our hope for the new facility is that it gives us the chance to set the standards for a better environment for sharing computations and data, spreading knowledge rapidly through the community.”

    Q: How do we work effectively together?

    There comes a time in every person’s life when it becomes easy to settle for the known relationship, for better or for worse, rather than seek out new ties with those who better inspire creativity and ensure success.

    Or so finds Daniel McFarland, professor of education and, by courtesy, of organizational behavior, who has studied how academic collaborations form and persist. McFarland and his own collaborators tracked signs of academic ties such as when Stanford faculty co-authored a paper, cited the same publications or got a grant together. Armed with 15 years of collaboration output on 3,000 faculty members, they developed a computer model of how networks form and strengthen over time.

    “Social networks are large, interdependent forms of data that quickly confront limits of computing power, and especially so when we study network evolution,” says McFarland.

    Their work has shown that once academic relationships have been established, they tend to continue out of habit, regardless of whether they are the most productive fit. He argues that successful academic programs or businesses should work to bring new members into collaborations and also spark new ties to prevent more senior people from falling back on known but less effective relationships. At the same time, he comes down in favor of retreats and team building exercises to strengthen existing good collaborations.
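
    A toy version of the bookkeeping behind such an analysis, measuring how many of one year’s collaboration ties recur the next year, fits in a few lines (hypothetical author names; the real study covers 15 years and some 3,000 faculty):

```python
from itertools import combinations

# toy co-authorship records: year -> list of author teams (hypothetical names)
papers = {
    2012: [("ana", "ben"), ("ben", "chen", "dee")],
    2013: [("ana", "ben"), ("chen", "dee"), ("ana", "eva")],
}

def ties(teams):
    """Set of pairwise collaboration ties implied by co-authored papers."""
    pairs = set()
    for team in teams:
        pairs.update(frozenset(p) for p in combinations(sorted(team), 2))
    return pairs

old, new = ties(papers[2012]), ties(papers[2013])
print(f"tie persistence 2012 -> 2013: {len(old & new) / len(old):.2f}")   # fraction of ties that recur
```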

    McFarland’s work has implications for Stanford’s many interdisciplinary programs. He has found that collaborations across disciplines often fall apart due in part to the distant ties between researchers. “To form and sustain these ties, pairs of colleagues must interact frequently to share knowledge,” he writes. “This is perhaps why interdisciplinary centers may be useful organizational means of corralling faculty and promoting continued distant collaborations.” —Amy Adams //

    Q: What can computers tell us about how our body works?

    As you sip your morning cup of coffee, the caffeine makes its way to your cells, slots into a receptor site on the cells’ surface and triggers a series of reactions that jolt you awake. A similar process takes place when Zantac provides relief for stomach ulcers, or when chemical signals produced in the brain travel cell-to-cell through your nervous system to your heart, telling it to beat.

    In each of these instances, a drug or natural chemical is activating a cell’s G-protein coupled receptor (GPCR), the cellular target of roughly half of all known drugs, says Vijay Pande, a professor of chemistry and, by courtesy, of structural biology and of computer science at Stanford. This exchange is a complex one, though. In order for caffeine or any other molecule to influence a cell, it must fit snugly into the receptor site, which consists of 4,000 atoms and shifts between active and inactive configurations. Current imaging technologies are unable to view that transformation, so Pande has been simulating it using his Folding@Home distributed computer network.

    So far, Pande’s group has simulated a few hundred microseconds of the receptor’s transformation. Although that is an extraordinarily long stretch of time compared with what similar techniques can capture, Pande is looking forward to accessing the SRCC to investigate the basic biophysics of GPCRs and other proteins. Greater computing power, he says, will allow his team to simulate larger molecules in greater detail, simulate folding sequences for longer periods of time and visualize multiple molecules as they interact. It might even lead to atom-level simulations of processes at the scale of an entire cell. All of this knowledge could be applied to computationally design novel drugs and therapies.
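    For a sense of what “simulating” means here, the toy sketch below shows the basic loop that molecular dynamics codes repeat billions of times: compute forces, then advance positions and velocities by a tiny time step (velocity Verlet, with a Lennard-Jones potential standing in for a real force field). Folding@Home farms out vast numbers of such trajectories to volunteer machines. Everything below is in arbitrary reduced units and is only a schematic illustration, not the force fields or software Pande’s group uses.

    import numpy as np

    def lj_forces(pos, eps=1.0, sigma=1.0):
        """Pairwise Lennard-Jones forces for a small cluster of particles."""
        forces = np.zeros_like(pos)
        n = len(pos)
        for i in range(n):
            for j in range(i + 1, n):
                r = pos[i] - pos[j]
                d2 = np.dot(r, r)
                inv6 = (sigma * sigma / d2) ** 3
                f = 24 * eps * (2 * inv6 * inv6 - inv6) / d2 * r   # -dV/dr along r
                forces[i] += f
                forces[j] -= f
        return forces

    def velocity_verlet(pos, vel, dt=1e-3, steps=1000):
        """Advance positions and velocities by `steps` time steps of size `dt`."""
        f = lj_forces(pos)
        for _ in range(steps):
            pos = pos + vel * dt + 0.5 * f * dt * dt
            f_new = lj_forces(pos)
            vel = vel + 0.5 * (f + f_new) * dt
            f = f_new
        return pos, vel

    # Three particles in reduced units; a real GPCR simulation tracks thousands
    # of atoms plus solvent for billions of such steps.
    positions = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
    velocities = np.zeros_like(positions)
    positions, velocities = velocity_verlet(positions, velocities)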

    “Having more computer power can dramatically change every aspect of what we can do in my lab,” says Pande, who is also a Stanford Bio-X affiliate. “Much like having more powerful rockets could radically change NASA, access to greater computing power will let us go way beyond where we can go routinely today.” —Bjorn Carey //

    See the full article here.

    The Kavli Foundation, based in Oxnard, California, is dedicated to the goals of advancing science for the benefit of humanity and promoting increased public understanding and support for scientists and their work.

    The Foundation’s mission is implemented through an international program of research institutes, professorships, and symposia in the fields of astrophysics, nanoscience, neuroscience, and theoretical physics as well as prizes in the fields of astrophysics, nanoscience, and neuroscience.


  • richardmitnick 5:44 pm on May 7, 2014 Permalink | Reply
    Tags: , , , , , Supercomputing   

    From Oak Ridge: “World’s Most Powerful Accelerator Comes to Titan with a High-Tech Scheduler” 


    Oak Ridge National Laboratory

    May 6, 2014
    Leo Williams

    The people who found the Higgs boson have serious data needs, and they’re meeting some of them on the Oak Ridge Leadership Computing Facility’s (OLCF’s) flagship Titan system.

    titan
    Titan, the OLCF’s flagship supercomputer

    Researchers with the ATLAS experiment at Europe’s Large Hadron Collider (LHC) have been using Titan since December, according to Ken Read, a physicist at Oak Ridge National Laboratory and the University of Tennessee. Read, who works with another LHC experiment, known as ALICE, noted that much of the challenge has been in integrating ATLAS’s advanced scheduling and analysis tool, PanDA, with Titan.

    CERN ATLAS New
    ATLAS

    CERN LHC particles
    LHC

    PanDA (for Production and Distributed Analysis) manages all of ATLAS’s data tasks from a server located at CERN, the European Organization for Nuclear Research. The job is daunting, with the workflow including 1.8 million computing jobs each day distributed among 100 or so computing centers spread across the globe.

    PanDA is able to match ATLAS’s computing needs seamlessly with disparate systems in its network, making efficient use of resources as they become available.

    In all, PanDA manages 150 petabytes of data (enough to hold about 75 million hours of high-definition video), and its needs are growing rapidly—so rapidly that it needs access to a supercomputer with the muscle of Titan, the United States’ most powerful system.

    “For ATLAS, access to the leadership computing facilities will help it manage a hundredfold increase in the amount of data to be processed,” said ATLAS developer Alexei Klimentov of Brookhaven National Laboratory. PanDA was developed in the United States under the guidance of Kaushik De of the University of Texas at Arlington and Torre Wenaus from Brookhaven National Laboratory.

    “Our grid resources are overutilized,” Klimentov said. “It’s a question of where we can find resources and use them opportunistically. We cannot scale the grid 100 times.”

    To integrate PanDA with Titan, team developers Sergey Panitkin from BNL and Danila Oleynik from UTA redesigned the parts of the PanDA system responsible for job submission to remote sites (known as “Pilot”) and gave PanDA a new capability to collect information about unused worker nodes on Titan. This allows PanDA to define precisely the size and duration of jobs submitted to Titan according to the free resources available. This work was done in collaboration with OLCF technical staff.
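    The essence of that capability is backfill: learn how many nodes are idle and for how long, then pick queued tasks whose size and duration fit the gap. The sketch below illustrates only that matching logic in Python; it is a conceptual toy, not PanDA’s Pilot code or its actual interfaces.

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        nodes: int       # nodes the task needs
        minutes: int     # wall time the task needs

    @dataclass
    class IdleWindow:
        nodes: int       # nodes currently idle
        minutes: int     # time until they are needed for a leadership job

    def backfill(windows, queue):
        """Greedily place queued tasks into idle windows they fit inside."""
        placed = []
        for window in windows:
            free = window.nodes
            for task in list(queue):
                if task.nodes <= free and task.minutes <= window.minutes:
                    placed.append((task.name, window))
                    free -= task.nodes
                    queue.remove(task)
        return placed, queue

    windows = [IdleWindow(nodes=300, minutes=90), IdleWindow(nodes=50, minutes=240)]
    queue = [Task("atlas_sim_1", nodes=256, minutes=60), Task("atlas_sim_2", nodes=64, minutes=120)]
    placed, waiting = backfill(windows, queue)
    # atlas_sim_1 fits the first window; atlas_sim_2 is too wide for what remains and keeps waiting.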

    The collaboration holds potential benefits for OLCF as well as for ATLAS.

    In the first place, PanDA’s ability to efficiently match available computing time with high-priority tasks holds great promise for a leadership system such as Titan. While the OLCF focuses on projects that can use most, if not all, of Titan’s 18,000-plus computing nodes, there is occasionally a relatively small number of nodes sitting idle for one or several hours. They sit idle because there are not enough of them, or not enough time remaining, to handle a leadership computing job. A scheduler that can occupy those nodes with high-priority tasks would be very valuable.

    “Today, if we use 90 or 92 percent of available hours, we think that is high utilization,” said Jack Wells, director of science at the OLCF. “That’s because of inefficiencies in scheduling big jobs. If we have a flexible workflow to schedule jobs for backfill, it would mean higher utilization of Titan for science.”

    PanDA is also highly skilled at finding needles in haystacks, as it showed during the search for the Higgs boson.

    According to the Standard Model of particle physics, the field associated with the Higgs is necessary for other particles to have mass. The boson is also very massive itself and decays almost instantly; this means it can be created and detected only by a very high-energy facility. In fact, it has, so far, been found definitively only at the LHC, which is the world’s most powerful particle accelerator.

    sm
    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    But while high energy was necessary for identifying the Higgs, it was not sufficient. The LHC creates 800 million collisions between protons each second, yet it creates a Higgs boson only once every one to two hours. In other words, it takes roughly 4 trillion collisions to create a Higgs. And it takes PanDA to manage the ATLAS data-processing workflow that sifts through those collisions and finds it.
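    A quick back-of-the-envelope check of those figures, using only the numbers quoted above:

    collisions_per_second = 800e6
    for hours in (1, 2):
        collisions_per_higgs = collisions_per_second * hours * 3600
        print(f"{hours} h per Higgs -> {collisions_per_higgs:.1e} collisions")
    # 1 h -> 2.9e12, 2 h -> 5.8e12: a few trillion, consistent with "4 trillion, more or less."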

    PanDA’s value to high-performance computing is widely recognized. The Department of Energy’s offices of Advanced Scientific Computing Research and High Energy Physics are, in fact, funding a project known as Big PanDA to expand the tool beyond high-energy physics to be used by other communities.

    See the full article here.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.



  • richardmitnick 1:04 pm on April 30, 2014 Permalink | Reply
    Tags: , , , Supercomputing   

    From NERSC: “NERSC, Cray, Intel to Collaborate on Next-Generation Supercomputer” 

    NERSC Logo
    NERSC

    April 29, 2014
    Contact: Jon Bashor, jbashor@lbl.gov, 510-486-5849

    The U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing (NERSC) Center and Cray Inc. announced today that they have signed a contract for a next-generation supercomputer to enable scientific discovery at the DOE’s Office of Science (DOE SC).

    Lawrence Berkeley National Laboratory (Berkeley Lab), which manages NERSC, collaborated with Los Alamos National Laboratory and Sandia National Laboratories to develop the technical requirements for the system.

    The new, next-generation Cray XC supercomputer will use Intel’s next-generation Intel® Xeon Phi™ processor, code-named “Knights Landing,” a self-hosted, manycore processor with on-package high-bandwidth memory that delivers more than 3 teraflops of double-precision peak performance per single-socket node. Scheduled for delivery in mid-2016, the new system will deliver 10 times the sustained computing capability of NERSC’s Hopper system, a Cray XE6 supercomputer.

    NERSC serves as the DOE SC’s primary high performance computing (HPC) facility, supporting more than 5,000 scientists annually on over 700 projects. The $70 million plus contract represents the DOE SC’s ongoing commitment to enabling extreme-scale science to address challenges such as developing new energy sources, improving energy efficiency, understanding climate change, developing new materials and analyzing massive data sets from experimental facilities around the world.

    “This agreement is a significant step in advancing supercomputing design toward the kinds of computing systems we expect to see in the next decade as we advance to exascale,” said Steve Binkley, Associate Director of the Office of Advanced Scientific Computing Research. “U.S. leadership in HPC, both in the technology and in the scientific research that can be accomplished with such powerful systems, is essential to maintaining economic and intellectual leadership. This project was strengthened by a great partnership with DOE’s National Nuclear Security Administration.”

    To highlight its commitment to advancing research, NERSC names its supercomputers after noted scientists. The new system will be named “Cori” in honor of biochemist and Nobel laureate Gerty Cori, the first American woman to receive a Nobel Prize in science.

    Technical Highlights

    The Cori supercomputer will have more than 9,300 Knights Landing compute nodes and will provide over 400 gigabytes per second of I/O bandwidth and 28 petabytes of disk space. The contract also includes an option for a “Burst Buffer,” a layer of NVRAM that would move data more quickly between processor and disk, allowing users to make the most efficient use of the system while saving energy. The Cray XC system features the Aries high-performance interconnect linking the processors, which also increases efficiency. Cori will be installed directly into the new Computational Research and Theory facility currently being constructed on the main Berkeley Lab campus.
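    Multiplying the headline figures gives a rough sense of scale; this is back-of-the-envelope arithmetic from the numbers above, not a quoted specification.

    nodes = 9300                 # "more than 9,300 Knights Landing compute nodes"
    tflops_per_node = 3.0        # "more than 3 teraflops ... per single-socket node"
    print(f"aggregate peak > {nodes * tflops_per_node / 1000:.0f} petaflops")   # roughly 28 PF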

    “NERSC is one of the premier high performance computing centers in the world, and we are proud that the close partnership we have built with NERSC over the years will continue with the delivery of Cori – the next-generation of our flagship Cray XC supercomputer,” said Peter Ungaro, president and CEO of Cray. “Accelerating scientific discovery lies at the foundation of the NERSC mission, and it’s also a key element of our own supercomputing roadmap and vision. Our focus is creating new, advanced supercomputing technologies that ultimately put more powerful tools in the hands of scientists and researchers. It is a focus we share with NERSC and its user community, and we are pleased our partnership is moving forward down this shared path.”

    The Knights Landing processor used in Cori will have more than 60 cores, each with multiple hardware threads, and improved single-thread performance over the current-generation Xeon Phi coprocessor. The Knights Landing processor is “self-hosted,” meaning that it is not an accelerator or dependent on a host processor. With this model, users will be able to retain the MPI/OpenMP programming model they have been using on NERSC’s previous-generation Hopper and Edison systems. The Knights Landing processor also features on-package high-bandwidth memory that can be used either as a cache or explicitly managed by the user.
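    The sketch below illustrates the message-passing half of that MPI/OpenMP model using mpi4py: each rank integrates its own slice of a function and a collective operation combines the partial results, with the OpenMP-style threading understood to happen inside each rank. It is a generic example run under any MPI launcher (for instance, mpirun), not NERSC or Cray code.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Midpoint-rule integration of f(x) = 4/(1+x^2) over [0, 1), which equals pi.
    # Each rank handles every `size`-th subinterval; allreduce sums the pieces.
    n = 1_000_000
    x = (np.arange(rank, n, size) + 0.5) / n
    local = np.sum(4.0 / (1.0 + x * x)) / n
    pi = comm.allreduce(local, op=MPI.SUM)
    if rank == 0:
        print(f"pi ~ {pi:.6f} computed on {size} ranks")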

    “NERSC’s selection of Intel’s next-generation Intel® Xeon Phi™ product family – codenamed Knights Landing – as the compute engine for their next generation Cray system marks a significant milestone for the broad Office of Science community as well as for Intel Corporation,” said Raj Hazra, Vice President and General Manager of High Performance Computing at Intel. “Knights Landing is the first true manycore CPU that breaks through the memory wall while leveraging existing codes through existing programming models. This combination of performance and programmability in the Intel Xeon Phi product family enables breakthrough performance on a wide set of applications. The Knights Landing processor, memory and programming model advantages make it the first significant step to resolving the challenges of exascale.”

    Application Readiness

    To help users transition to the Knights Landing manycore processor, NERSC has created a robust Application Readiness program that will provide user training, access to early development systems and application kernel deep dives with Cray and Intel specialists.

    “We are excited to partner with Cray and Intel to ensure that Cori meets the computational needs of DOE’s science community,” said NERSC Director Sudip Dosanjh. “Cori will provide a significant increase in capability for our users and will provide a platform for transitioning our very broad user community to energy-efficient, manycore architectures. It will also let users analyze large quantities of data being transferred to NERSC from DOE’s experimental facilities.”

    As part of the Application Readiness effort, NERSC plans to create teams composed of NERSC principal investigators along with NERSC staff and newly hired postdoctoral researchers. Together they will ensure that applications and software running on Cori are ready to produce important research results for the Office of Science. NERSC also plans to work closely with Cray, Intel, DOE laboratories and other members of the HPC community who are facing the same transition to manycore architectures.

    “We are committed to helping our users, who represent the broad scientific workload of the DOE SC community, make the transition to manycore architectures so they can maintain their research momentum,” said Katie Antypas, NERSC’s Services Department Head. “We recognize some applications may need significant optimization to achieve high performance on the Knights Landing processor. Our goal is to enable performance that is portable across systems and will be sustained in future supercomputing architectures.”

    See the full article here.

    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system is a Cray XE6 with a peak theoretical performance of 1.29 petaflops. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user friendly.

    gh
    (Historical photo of Grace Hopper courtesy of the Hagley Museum & Library, PC20100423_201. Design: Caitlin Youngquist/LBNL Photo: Roy Kaltschmidt/LBNL)

    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.


  • richardmitnick 9:08 pm on March 31, 2014 Permalink | Reply
    Tags: , , , , , Supercomputing,   

    From Argonne Lab via PPPL: “Plasma Turbulence Simulations Reveal Promising Insight for Fusion Energy” 

    March 31, 2014
    By Argonne National Laboratory

    With the potential to provide clean, safe, and abundant energy, nuclear fusion has been called the “holy grail” of energy production. But harnessing energy from fusion, the process that powers the sun, has proven to be an extremely difficult challenge.

    turb
    Simulation of microturbulence in a tokamak fusion device. (Credit: Chad Jones and Kwan-Liu Ma, University of California, Davis; Stephane Ethier, Princeton Plasma Physics Laboratory)

    Scientists have been working to accomplish efficient, self-sustaining fusion reactions for decades, and significant research and development efforts continue in several countries today.

    For one such effort, researchers from the Princeton Plasma Physics Laboratory (PPPL), a DOE collaborative national center for fusion and plasma research in New Jersey, are running large-scale simulations at the Argonne Leadership Computing Facility (ALCF) to shed light on the complex physics of fusion energy. Their most recent simulations on Mira, the ALCF’s 10-petaflops Blue Gene/Q supercomputer, revealed that turbulent losses in the plasma are not as large as previously estimated.

    MIRA
    Mira, the ALCF’s 10-petaflops IBM Blue Gene/Q supercomputer

    Good news

    This is good news for the fusion research community as plasma turbulence presents a major obstacle to attaining an efficient fusion reactor in which light atomic nuclei fuse together and produce energy. The balance between fusion energy production and the heat losses associated with plasma turbulence can ultimately determine the size and cost of an actual reactor.

    “Understanding and possibly controlling the underlying physical processes is key to achieving the efficiency needed to ensure the practicality of future fusion reactors,” said William Tang, PPPL principal research physicist and project lead.

    Tang’s work at the ALCF is focused on advancing the development of magnetically confined fusion energy systems, especially ITER, a multi-billion dollar international burning plasma experiment supported by seven governments including the United States.

    Currently under construction in France, ITER will be the world’s largest tokamak system, a device that uses strong magnetic fields to contain the burning plasma in a doughnut-shaped vacuum vessel. In tokamaks, unavoidable variations in the plasma’s ion temperature drive microturbulence, which can significantly increase the transport rate of heat, particles, and momentum across the confining magnetic field.

    “Simulating tokamaks of ITER’s physical size could not be done with sufficient accuracy until supercomputers as powerful as Mira became available,” said Tang.

    To prepare for the architecture and scale of Mira, Tim Williams of the ALCF worked with Tang and colleagues to benchmark and optimize their Gyrokinetic Toroidal Code – Princeton (GTC-P) on the ALCF’s new supercomputer. This allowed the research team to perform the first simulations of multiscale tokamak plasmas with very high phase-space resolution and long temporal duration. They are simulating a sequence of tokamak sizes up to and beyond the scale of ITER to validate the turbulent losses for large-scale fusion energy systems.

    Decades of experiments

    Decades of experimental measurements and theoretical estimates have shown turbulent losses to increase as the size of the experiment increases; this phenomenon occurs in the so-called Bohm regime. However, when tokamaks reach a certain size, it has been predicted that there will be a turnover point into a Gyro-Bohm regime, where the losses level off and become independent of size. For ITER and other future burning plasma experiments, it is important that the systems operate in this Gyro-Bohm regime.

    The recent simulations on Mira led the PPPL researchers to discover that the magnitude of turbulent losses in the Gyro-Bohm regime is up to 50% lower than indicated by earlier simulations carried out at much lower resolution and significantly shorter duration. The team also found that the transition from the Bohm regime to the Gyro-Bohm regime is much more gradual as the plasma size increases. With a clearer picture of the shape of the transition curve, scientists can better understand the basic plasma physics involved in this phenomenon.
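    For intuition only, the toy curve below has the qualitative shape described: transport that grows roughly linearly with system size at small size (Bohm-like) and levels off at large size (Gyro-Bohm). The functional form and the constants chi_inf and x0 are illustrative assumptions, not the team’s fitted results.

    import numpy as np

    def chi_over_chi_gb(size, chi_inf=2.0, x0=200.0):
        """Toy normalized heat diffusivity vs. system size a/rho_i."""
        size = np.asarray(size, dtype=float)
        return chi_inf * size / (size + x0)    # ~linear for small size, saturates for large

    for a_over_rho in (100, 400, 1000):        # from mid-size tokamaks toward ITER scale
        print(a_over_rho, round(float(chi_over_chi_gb(a_over_rho)), 2))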

    “Determining how turbulent transport and associated confinement characteristics will scale to the much larger ITER-scale plasmas is of great interest to the fusion research community,” said Tang. “The results will help accelerate progress in worldwide efforts to harness the power of nuclear fusion as an alternative to fossil fuels.”

    This project has received computing time at the ALCF through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. The effort was also awarded pre-production time on Mira through the ALCF’s Early Science Program, which allowed researchers to pursue science goals while preparing their GTC-P code for Mira.

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.

