Tagged: Supercomputing

  • richardmitnick 2:59 pm on October 22, 2014 Permalink | Reply
    Tags: , , , Supercomputing   

    From isgtw: “Laying the groundwork for data-driven science” 


    international science grid this week

    October 22, 2014
    Amber Harmon

The ability to collect and analyze massive amounts of data is rapidly transforming science, industry, and everyday life — but many of the benefits of big data have yet to surface. Interoperability, tools, and hardware are still evolving to meet the needs of diverse scientific communities.

    data
    Image courtesy istockphoto.com.

    One of the US National Science Foundation’s (NSF’s) goals is to improve the nation’s capacity in data science by investing in the development of infrastructure, building multi-institutional partnerships to increase the number of data scientists, and augmenting the usefulness and ease of using data.

    As part of that effort, the NSF announced $31 million in new funding to support 17 innovative projects under the Data Infrastructure Building Blocks (DIBBs) program. Now in its second year, the 2014 DIBBs awards support research in 22 states and touch on research topics in computer science, information technology, and nearly every field of science supported by the NSF.

    “Developed through extensive community input and vetting, NSF has an ambitious vision and strategy for advancing scientific discovery through data,” says Irene Qualters, division director for Advanced Cyberinfrastructure. “This vision requires a collaborative national data infrastructure that is aligned to research priorities and that is efficient, highly interoperable, and anticipates emerging data policies.”

    Of the 17 awards, two support early implementations of research projects that are more mature; the others support pilot demonstrations. Each is a partnership between researchers in computer science and other science domains.

    One of the two early implementation grants will support a research team led by Geoffrey Fox, a professor of computer science and informatics at Indiana University, US. Fox’s team plans to create middleware and analytics libraries that enable large-scale data science on high-performance computing systems. Fox and his team plan to test their platform with several different applications, including geospatial information systems (GIS), biomedicine, epidemiology, and remote sensing.

“Our innovative architecture integrates key features of open source cloud computing software with supercomputing technology,” Fox said. “And our outreach involves ‘data analytics as a service’ with training and curricula set up in a Massive Open Online Course or MOOC.”

US institutions collaborating on the project include, among others, Arizona State University in Phoenix; Emory University in Atlanta, Georgia; and Rutgers University in New Brunswick, New Jersey.

    Ken Koedinger, professor of human computer interaction and psychology at Carnegie Mellon University in Pittsburgh, Pennsylvania, US, leads the other early implementation project. Koedinger’s team concentrates on developing infrastructure that will drive innovation in education.

    The team will develop a distributed data infrastructure, LearnSphere, that will make more educational data accessible to course developers, while also motivating more researchers and companies to share their data with the greater learning sciences community.

    “We’ve seen the power that data has to improve performance in many fields, from medicine to movie recommendations,” Koedinger says. “Educational data holds the same potential to guide the development of courses that enhance learning while also generating even more data to give us a deeper understanding of the learning process.”

    The DIBBs program is part of a coordinated strategy within NSF to advance data-driven cyberinfrastructure. It complements other major efforts like the DataOne project, the Research Data Alliance, and Wrangler, a groundbreaking data analysis and management system for the national open science community.

    See the full article here.

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

    ScienceSprings relies on technology from

    MAINGEAR computers

Lenovo

Dell

     
  • richardmitnick 2:47 pm on October 22, 2014 Permalink | Reply
    Tags: , , , Supercomputing   

    From BNL: “Brookhaven Lab Launches Computational Science Initiative” 

    Brookhaven Lab

    October 22, 2014
    Karen McNulty Walsh, (631) 344-8350 or Peter Genzer, (631) 344-3174

    Leveraging computational science expertise and investments across the Laboratory to tackle “big data” challenges

    Building on its capabilities in computational science and data management, the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory is embarking upon a major new Computational Science Initiative (CSI). This program will leverage computational science expertise and investments across multiple programs at the Laboratory—including the flagship facilities that attract thousands of scientific users each year—further establishing Brookhaven as a leader in tackling the “big data” challenges at the frontiers of scientific discovery. Key partners in this endeavor include nearby universities such as Columbia, Cornell, New York University, Stony Brook, and Yale, and IBM Research.

    blue
    Blue Gene/Q Supercomputer at Brookhaven National Laboratory

    “The CSI will bring together under one umbrella the expertise that drives [the success of Brookhaven's scientific programs] to foster cross-disciplinary collaboration and make optimal use of existing technologies, while also leading the development of new tools and methods that will benefit science both within and beyond the Laboratory.”
    — Robert Tribble

    “Advances in computational science and management of large-scale scientific data developed at Brookhaven Lab have been a key factor in the success of the scientific programs at the Relativistic Heavy Ion Collider (RHIC), the National Synchrotron Light Source (NSLS), the Center for Functional Nanomaterials (CFN), and in biological, atmospheric, and energy systems science, as well as our collaborative participation in international research endeavors, such as the ATLAS experiment at Europe’s Large Hadron Collider,” said Robert Tribble, Brookhaven Lab’s Deputy Director for Science and Technology, who is leading the development of the new initiative. “The CSI will bring together under one umbrella the expertise that drives this success to foster cross-disciplinary collaboration and make optimal use of existing technologies, while also leading the development of new tools and methods that will benefit science both within and beyond the Laboratory.”

    BNL RHIC
    BNL RHIC Campus
    RHIC at BNL

    BNL NSLS
    BNL NSLS Interior
    NSLS at BNL

    A centerpiece of the initiative will be a new Center for Data-Driven Discovery (C3D) that will serve as a focal point for this activity. Within the Laboratory it will drive the integration of intellectual, programmatic, and data/computational infrastructure with the goals of accelerating and expanding discovery by developing critical mass in key disciplines, enabling nimble response to new opportunities for discovery or collaboration, and ultimately integrating the tools and capabilities across the entire Laboratory into a single scientific resource. Outside the Laboratory C3D will serve as a focal point for recruiting, collaboration, and communication.

    The people and capabilities of C3D are also integral to the success of Brookhaven’s key scientific facilities, including those named above, the new National Synchrotron Light Source II (NSLS-II), and a possible future electron ion collider (EIC) at Brookhaven. Hundreds of scientists from Brookhaven and thousands of facility users from universities, industry, and other laboratories around the country and throughout the world will benefit from the capabilities developed by C3D personnel to make sense of the enormous volumes of data produced at these state-of-the-art research facilities.

    BNL NSLS II Photo
    BNL NSLS-II Interior
    NSLS II at BNL

The CSI, in conjunction with C3D, will also host a series of workshops, conferences, and training sessions in high-performance computing—including annual workshops on extreme-scale data and scientific knowledge discovery, extreme-scale networking, and extreme-scale workflow for integrated science. These workshops will explore topics at the frontier of data-centric, high-performance computing, such as the combination of efficient methodologies and innovative computer systems and concepts to manage and analyze scientific data generated at high volumes and rates.

    “The missions of C3D and the overall CSI are well aligned with the broad missions and goals of many agencies and industries, especially those of DOE’s Office of Science and its Advanced Scientific Computing Research (ASCR) program,” said Robert Harrison, who holds a joint appointment as director of Brookhaven Lab’s Computational Science Center (CSC) and Stony Brook University’s Institute for Advanced Computational Science (IACS) and is leading the creation of C3D.

The CSI at Brookhaven will specifically address the challenge of developing new tools and techniques to deliver on the promise of exascale science—the ability to compute at a rate of 10^18 floating point operations per second (exaflops), to handle the copious amounts of data created by computational models and simulations, and to employ exascale computation to interpret and analyze the exascale data anticipated from experiments in the near future.
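To put 10^18 operations per second in perspective, here is a rough back-of-the-envelope comparison in Python; the total operation count and the desktop speed are invented round numbers for illustration, not figures from Brookhaven:

ops = 1e21                     # hypothetical workload: 10^21 floating point operations
exaflop_machine = 1e18         # exascale: 10^18 operations per second
desktop_node = 1e11            # ~100 gigaflops, a generous single-node estimate

print(f"exascale machine: {ops / exaflop_machine:,.0f} seconds (about 17 minutes)")
print(f"100-gigaflop desktop: {ops / desktop_node / 3.15e7:,.0f} years")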

    “Without these tools, scientific results would remain hidden in the data generated by these simulations,” said Brookhaven computational scientist Michael McGuigan, who will be working on data visualization and simulation at C3D. “These tools will enable researchers to extract knowledge and share key findings.”

    Through the initiative, Brookhaven will establish partnerships with leading universities, including Columbia, Cornell, Stony Brook, and Yale to tackle “big data” challenges.

    “Many of these institutions are already focusing on data science as a key enabler to discovery,” Harrison said. “For example, Columbia University has formed the Institute for Data Sciences and Engineering with just that mission in mind.”

Computational scientists at Brookhaven will also seek to establish partnerships with industry. “As an example, partnerships with IBM have been successful in the past with co-design of the QCDOC and Blue Gene computer architectures,” McGuigan said. “We anticipate more success with data-centric computer designs in the future.”

    An area that may be of particular interest to industrial partners is how to interface big-data experimental problems (such as those that will be explored at NSLS-II, or in the fields of high-energy and nuclear physics) with high-performance computing using advanced network technologies. “The reality of ‘computing system on a chip’ technology opens the door to customizing high-performance network interface cards and application program interfaces (APIs) in amazing ways,” said Dantong Yu, a group leader and data scientist in the CSC.

    “In addition, the development of asynchronous data access and transports based on remote direct memory access (RDMA) techniques and improvements in quality of service for network traffic could significantly lower the energy footprint for data processing while enhancing processing performance. Projects in this area would be highly amenable to industrial collaboration and lead to an expansion of our contributions beyond system and application development and designing programming algorithms into the new arena of exascale technology development,” Yu said.

    “The overarching goal of this initiative will be to bring under one umbrella all the major data-centric activities of the Lab to greatly facilitate the sharing of ideas, leverage knowledge across disciplines, and attract the best data scientists to Brookhaven to help us advance data-centric, high-performance computing to support scientific discovery,” Tribble said. “This initiative will also greatly increase the visibility of the data science already being done at Brookhaven Lab and at its partner institutions.”

    See the full article here.

    BNL Campus

One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.


     
  • richardmitnick 3:41 pm on September 27, 2014 Permalink | Reply
    Tags: , CO2 studies, , , Supercomputing   

    From LBL: “Pore models track reactions in underground carbon capture” 

    Berkeley Logo

    Berkeley Lab

    September 25, 2014

    Using tailor-made software running on top-tier supercomputers, a Lawrence Berkeley National Laboratory team is creating microscopic pore-scale simulations that complement or push beyond laboratory findings.

    image
    Computed pH on calcite grains at 1 micron resolution. The iridescent grains mimic crushed material geoscientists extract from saline aquifers deep underground to study with microscopes. Researchers want to model what happens to the crystals’ geochemistry when the greenhouse gas carbon dioxide is injected underground for sequestration. Image courtesy of David Trebotich, Lawrence Berkeley National Laboratory.

    The models of microscopic underground pores could help scientists evaluate ways to store carbon dioxide produced by power plants, keeping it from contributing to global climate change.

    The models could be a first, says David Trebotich, the project’s principal investigator. “I’m not aware of any other group that can do this, not at the scale at which we are doing it, both in size and computational resources, as well as the geochemistry.” His evidence is a colorful portrayal of jumbled calcite crystals derived solely from mathematical equations.

    The iridescent menagerie is intended to act just like the real thing: minerals geoscientists extract from saline aquifers deep underground. The goal is to learn what will happen when fluids pass through the material should power plants inject carbon dioxide underground.

    Lab experiments can only measure what enters and exits the model system. Now modelers would like to identify more of what happens within the tiny pores that exist in underground materials, as chemicals are dissolved in some places but precipitate in others, potentially resulting in preferential flow paths or even clogs.

    Geoscientists give Trebotich’s group of modelers microscopic computerized tomography (CT, similar to the scans done in hospitals) images of their field samples. That lets both camps probe an anomaly: reactions in the tiny pores happen much more slowly in real aquifers than they do in laboratories.

    Going deep

    Deep saline aquifers are underground formations of salty water found in sedimentary basins all over the planet. Scientists think they’re the best deep geological feature to store carbon dioxide from power plants.

But experts need to know whether the greenhouse gas will stay bottled up as more and more of it is injected, spreading a fluid plume and building up pressure. “If it’s not going to stay there, (geoscientists) will want to know where it is going to go and how long that is going to take,” says Trebotich, who is a computational scientist in Berkeley Lab’s Applied Numerical Algorithms Group.

    He hopes their simulation results ultimately will translate to field scale, where “you’re going to be able to model a CO2 plume over a hundred years’ time and kilometers in distance.” But for now his group’s focus is at the microscale, with attention toward the even smaller nanoscale.

    At such tiny dimensions, flow, chemical transport, mineral dissolution and mineral precipitation occur within the pores where individual grains and fluids commingle, says a 2013 paper Trebotich coauthored with geoscientists Carl Steefel (also of Berkeley Lab) and Sergi Molins in the journal Reviews in Mineralogy and Geochemistry.

    These dynamics, the paper added, create uneven conditions that can produce new structures and self-organized materials – nonlinear behavior that can be hard to describe mathematically.

    Modeling at 1 micron resolution, his group has achieved “the largest pore-scale reactive flow simulation ever attempted” as well as “the first-ever large-scale simulation of pore-scale reactive transport processes on real-pore-space geometry as obtained from experimental data,” says the 2012 annual report of the lab’s National Energy Research Scientific Computing Center (NERSC).

    The simulation required about 20 million processor hours using 49,152 of the 153,216 computing cores in Hopper, a Cray XE6 that at the time was NERSC’s flagship supercomputer.
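For a sense of what those numbers mean in wall-clock terms, simple division (assuming, unrealistically, a single continuous job on all 49,152 cores) gives roughly two and a half weeks of machine time:

core_hours = 20_000_000        # reported processor hours for the simulation
cores = 49_152                 # Hopper cores used
wall_hours = core_hours / cores
print(f"about {wall_hours:.0f} wall-clock hours, or roughly {wall_hours / 24:.0f} days")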

    cray hopper
    Cray Hopper at NERSC

    “As CO2 is pumped underground, it can react chemically with underground minerals and brine in various ways, sometimes resulting in mineral dissolution and precipitation, which can change the porous structure of the aquifer,” the NERSC report says. “But predicting these changes is difficult because these processes take place at the pore scale and cannot be calculated using macroscopic models.

    “The dissolution rates of many minerals have been found to be slower in the field than those measured in the laboratory. Understanding this discrepancy requires modeling the pore-scale interactions between reaction and transport processes, then scaling them up to reservoir dimensions. The new high-resolution model demonstrated that the mineral dissolution rate depends on the pore structure of the aquifer.”

Trebotich says “it was the hardest problem that we could do for the first run.” But the group redid the simulation about 2½ times faster in an early trial of Edison, a Cray XC30 that succeeded Hopper. Edison, Trebotich says, has greater memory bandwidth.

    cray edison
    Cray Edison at NERSC

    Rapid changes

    Generating 1-terabyte data sets for each microsecond time step, the Edison run demonstrated how quickly conditions can change inside each pore. It also provided a good workout for the combination of interrelated software packages the Trebotich team uses.

    The first, Chombo, takes its name from a Swahili word meaning “toolbox” or “container” and was developed by a different Applied Numerical Algorithms Group team. Chombo is a supercomputer-friendly platform that’s scalable: “You can run it on multiple processor cores, and scale it up to do high-resolution, large-scale simulations,” he says.

Trebotich modified Chombo to add flow and reactive transport solvers. The group also incorporated the geochemistry components of CrunchFlow, a package Steefel developed, to create Chombo-Crunch, the code used for their modeling work. The simulations produce resolutions “very close to imaging experiments,” the NERSC report said, combining simulation and experiment to achieve a key goal of the Department of Energy’s Energy Frontier Research Center for Nanoscale Control of Geologic CO2.
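The basic coupling idea here, advancing flow and transport on the grid and then letting a pointwise geochemistry step update each cell, can be illustrated with a deliberately tiny operator-splitting sketch. This is not Chombo or CrunchFlow code; the 1D grid, velocity, diffusivity, rate constant, and equilibrium concentration below are all invented for illustration:

import numpy as np

# Minimal operator-splitting sketch of 1D reactive transport: advect and diffuse
# a dissolved species, then let a pointwise "geochemistry" step relax each cell
# toward equilibrium with the mineral.

nx, dx, dt = 200, 1e-6, 1e-4        # 200 cells at 1-micron spacing, 0.1 ms steps
u, D = 1e-3, 1e-9                   # pore velocity (m/s) and diffusivity (m^2/s)
k, c_eq = 5.0, 1.0                  # dissolution rate (1/s), equilibrium concentration

c = np.zeros(nx)                    # dissolved concentration, fresh fluid everywhere

for step in range(5000):
    # Transport step: first-order upwind advection plus explicit diffusion.
    adv = -u * (c - np.roll(c, 1)) / dx
    dif = D * (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx**2
    c += dt * (adv + dif)
    c[0], c[-1] = 0.0, c[-2]        # crude inflow (fresh fluid) / outflow boundaries
    # Reaction step: each cell relaxes toward equilibrium with the mineral surface.
    c += dt * k * (c_eq - c)

print("outlet concentration:", c[-2])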

Now Trebotich’s team has three huge allocations on DOE supercomputers to make their simulations even more detailed. The Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program is providing 80 million processor hours on Mira, an IBM Blue Gene/Q at Argonne National Laboratory. Through the Advanced Scientific Computing Research Leadership Computing Challenge (ALCC), the group has another 50 million hours on NERSC computers and 50 million on Titan, a Cray XK7 at the Oak Ridge Leadership Computing Facility. The team also held an ALCC award last year for 80 million hours at Argonne and 25 million at NERSC.

    mira
    MIRA at Argonne

    titan
    TITAN at Oak Ridge

    With the computer time, the group wants to refine their image resolutions to half a micron (half of a millionth of a meter). “This is what’s known as the mesoscale: an intermediate scale that could make it possible to incorporate atomistic-scale processes involving mineral growth at precipitation sites into the pore scale flow and transport dynamics,” Trebotich says.
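A rough cost note, based only on standard grid arithmetic rather than anything stated in the article: halving the grid spacing of a uniform 3D mesh multiplies the cell count by eight, and explicit time stepping typically needs proportionally smaller steps as well, so the move from 1 micron to half a micron implies far more than double the work.

refine = 2                     # 1 micron -> 0.5 micron grid spacing
cell_factor = refine ** 3      # uniform 3D mesh: 8x more cells
step_factor = refine           # CFL-limited explicit stepping: ~2x more steps (assumption)
print("naive increase in work:", cell_factor * step_factor, "x")   # about 16x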

    Meanwhile, he thinks their micron-scale simulations already are good enough to provide “ground-truthing” in themselves for the lab experiments geoscientists do.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal


     
  • richardmitnick 4:33 pm on August 25, 2014 Permalink | Reply
    Tags: , , , , , Supercomputing   

    From Livermore Lab: “Calculating conditions at the birth of the universe” 


    Lawrence Livermore National Laboratory

    08/25/2014
    Anne M Stark, LLNL, (925) 422-9799, stark8@llnl.gov

    Using a calculation originally proposed seven years ago to be performed on a petaflop computer, Lawrence Livermore researchers computed conditions that simulate the birth of the universe.

    When the universe was less than one microsecond old and more than one trillion degrees, it transformed from a plasma of quarks and gluons into bound states of quarks – also known as protons and neutrons, the fundamental building blocks of ordinary matter that make up most of the visible universe.

The theory of quantum chromodynamics (QCD) governs the interactions of the strong nuclear force and predicts that this transition should happen when such conditions occur.

In a paper appearing in the Aug. 18 edition of Physical Review Letters, Lawrence Livermore scientists Chris Schroeder, Ron Soltz and Pavlos Vranas calculated the properties of the QCD phase transition using LLNL’s Vulcan, a five-petaflop machine. This work was done within the LLNL-led HotQCD Collaboration, involving Los Alamos National Laboratory, the Institute for Nuclear Theory, Columbia University, Central China Normal University, Brookhaven National Laboratory and Universität Bielefeld in Germany.

    vulcan
A five-petaflop IBM Blue Gene/Q supercomputer named Vulcan

This is the first time that this calculation has been performed in a way that preserves a certain fundamental symmetry of QCD, in which right- and left-handed quarks (a property scientists call chirality) can be interchanged without altering the equations. These important symmetries are easy to describe, but they are computationally very challenging to implement.
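For readers who want the symmetry spelled out, this is the textbook statement (not taken from the Physical Review Letters paper itself): split each quark field into left- and right-handed parts and rotate them independently,

\psi_{L,R} = \tfrac{1}{2}\,(1 \mp \gamma_5)\,\psi, \qquad \psi_L \to U_L\,\psi_L, \qquad \psi_R \to U_R\,\psi_R .

Massless QCD is invariant under these independent unitary flavor rotations U_L and U_R, while a quark mass term m(\bar\psi_L \psi_R + \bar\psi_R \psi_L) couples the two chiralities; keeping this symmetry exact in a lattice discretization is what makes the calculation so computationally demanding.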

    “But with the invention of petaflop computing, we were able to calculate the properties with a theory proposed years ago when petaflop-scale computers weren’t even around yet,” Soltz said.

    The research has implications for our understanding of the evolution of the universe during the first microsecond after the Big Bang, when the universe expanded and cooled to a temperature below 10 trillion degrees.

    Below this temperature, quarks and gluons are confined, existing only in hadronic bound states such as the familiar proton and neutron. Above this temperature, these bound states cease to exist and quarks and gluons instead form plasma, which is strongly coupled near the transition and coupled more and more weakly as the temperature increases.

“The result provides an important validation of our understanding of the strong interaction at high temperatures, and aids us in our interpretation of data collected at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory and the Large Hadron Collider at CERN,” Soltz said.

    Brookhaven RHIC
    RHIC at Brookhaven

    CERN LHC Grand Tunnel
    LHC at CERN

    Soltz and Pavlos Vranas, along with former colleague Thomas Luu, wrote an essay predicting that if there were powerful enough computers, the QCD phase transition could be calculated. The essay was published in Computing in Science & Engineering in 2007, “back when a petaflop really did seem like a lot of computing,” Soltz said. “With the invention of petaflop computers, the calculation took us several months to complete, but the 2007 estimate turned out to be pretty close.”

    The extremely computationally intensive calculation was made possible through a Grand Challenge allocation of time on the Vulcan Blue Gene/Q Supercomputer at Lawrence Livermore National Laboratory.

    See the full article here.

Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration
    DOE Seal
    NNSA

     
  • richardmitnick 10:00 pm on August 19, 2014 Permalink | Reply
    Tags: , , , , Supercomputing   

    From Livermore Lab: “New project is the ACME of addressing climate change” 


    Lawrence Livermore National Laboratory

    08/19/2014
    Anne M Stark, LLNL, (925) 422-9799, stark8@llnl.gov

    High performance computing (HPC) will be used to develop and apply the most complete climate and Earth system model to address the most challenging and demanding climate change issues.

    Eight national laboratories, including Lawrence Livermore, are combining forces with the National Center for Atmospheric Research, four academic institutions and one private-sector company in the new effort. Other participating national laboratories include Argonne, Brookhaven, Lawrence Berkeley, Los Alamos, Oak Ridge, Pacific Northwest and Sandia.

    The project, called Accelerated Climate Modeling for Energy, or ACME, is designed to accelerate the development and application of fully coupled, state-of-the-science Earth system models for scientific and energy applications. The plan is to exploit advanced software and new high performance computing machines as they become available.


    The initial focus will be on three climate change science drivers and corresponding questions to be answered during the project’s initial phase:

    Water Cycle: How do the hydrological cycle and water resources interact with the climate system on local to global scales? How will more realistic portrayals of features important to the water cycle (resolution, clouds, aerosols, snowpack, river routing, land use) affect river flow and associated freshwater supplies at the watershed scale?
    Biogeochemistry: How do biogeochemical cycles interact with global climate change? How do carbon, nitrogen and phosphorus cycles regulate climate system feedbacks, and how sensitive are these feedbacks to model structural uncertainty?
    Cryosphere Systems: How do rapid changes in cryospheric systems, or areas of the earth where water exists as ice or snow, interact with the climate system? Could a dynamical instability in the Antarctic Ice Sheet be triggered within the next 40 years?

    Over a planned 10-year span, the project aim is to conduct simulations and modeling on the most sophisticated HPC machines as they become available, i.e., 100-plus petaflop machines and eventually exascale supercomputers. The team initially will use U.S. Department of Energy (DOE) Office of Science Leadership Computing Facilities at Oak Ridge and Argonne national laboratories.

    “The grand challenge simulations are not yet possible with current model and computing capabilities,” said David Bader, LLNL atmospheric scientist and chair of the ACME council. “But we developed a set of achievable experiments that make major advances toward answering the grand challenge questions using a modeling system, which we can construct to run on leading computing architectures over the next three years.”

To address the water cycle, the project plan (link below) hypothesized that: 1) changes in river flow over the last 40 years have been dominated primarily by land management, water management and climate change associated with aerosol forcing; and 2) during the next 40 years, greenhouse gas (GHG) emissions in a business-as-usual scenario may drive changes to river flow.

    “A goal of ACME is to simulate the changes in the hydrological cycle, with a specific focus on precipitation and surface water in orographically complex regions such as the western United States and the headwaters of the Amazon,” the report states.

    To address biogeochemistry, ACME researchers will examine how more complete treatments of nutrient cycles affect carbon-climate system feedbacks, with a focus on tropical systems, and investigate the influence of alternative model structures for below-ground reaction networks on global-scale biogeochemistry-climate feedbacks.

    For cryosphere, the team will examine the near-term risks of initiating the dynamic instability and onset of the collapse of the Antarctic Ice Sheet due to rapid melting by warming waters adjacent to the ice sheet grounding lines.

    The experiment would be the first fully-coupled global simulation to include dynamic ice shelf-ocean interactions for addressing the potential instability associated with grounding line dynamics in marine ice sheets around Antarctica.

    Other LLNL researchers involved in the program leadership are atmospheric scientist Peter Caldwell (co-leader of the atmospheric model and coupled model task teams) and computer scientists Dean Williams (council member and workflow task team leader) and Renata McCoy (project engineer).

    Initial funding for the effort has been provided by DOE’s Office of Science.

    More information can be found in the Accelerated Climate Modeling For Energy: Project Strategy and Initial Implementation Plan.

    See the full article here.

Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration
    DOE Seal
    NNSA

     
  • richardmitnick 12:36 pm on July 21, 2014 Permalink | Reply
    Tags: , , , , , Supercomputing,   

From Oak Ridge Lab: “‘Engine of Explosion’ Discovered at OLCF Now Observed in Nearby Supernova Remnant”


    Oak Ridge National Laboratory

    May 6, 2014
    Katie Elyce Jones

Data gathered with a high-energy X-ray telescope support the SASI model—a decade later

Back in 2003, researchers using the Oak Ridge Leadership Computing Facility’s (OLCF’s) first supercomputer, Phoenix, started out with a bang. Astrophysicists studying core-collapse [Type II] supernovae—dying massive stars that violently explode after running out of fuel—asked themselves: what mechanism triggers the explosion and the fusion chain reaction that releases all the elements found in the universe, including those that make up the matter around us?

    “This is really one of the most important problems in science because supernovae give us all the elements in nature,” said Tony Mezzacappa of the University of Tennessee–Knoxville.

Leading up to the 2003 simulations on Phoenix, one-dimensional supernova models simulated a shock wave that pushes stellar material outward, expanding to a certain radius before, ultimately, succumbing to gravity. The simulations did not predict that stellar material would push beyond the shock wave radius; instead, infalling matter from the fringes of the expanding star tamped down the anticipated explosion. Yet humans have recorded supernova explosions throughout history.

    “There have been a lot of supernovae observations,” Mezzacappa said. “But these observations can’t really provide information on the engine of explosion because you need to observe what is emitted from deep within the supernova, such as gravitational waves or neutrinos. It’s hard to do this from Earth.”

    Then simulations on Phoenix offered a solution: the SASI, or standing accretion shock instability, a sloshing of stellar material that destabilizes the expanding shock and helps lead to an explosion.

    “Once we discovered the SASI, it became very much a part of core-collapse supernova theory,” Mezzacappa said. “People feel it is an important missing ingredient.”

    The SASI provided a logical answer supported by other validated physics models, but it was still theoretical because it had only been demonstrated computationally.

    Now, more than a decade later, researchers mapping radiation signatures from the Cassiopeia A supernova with NASA’s NuSTAR high-energy x-ray telescope array have published observational evidence that supports the SASI model.

    NASA NuSTAR
NASA/NuSTAR

Cas A
A false-color image of Cassiopeia A, combining observations from the Hubble and Spitzer telescopes and the Chandra X-ray Observatory (cropped).
    Courtesy NASA/JPL-Caltech

    “What they’re seeing are x-rays that come from the radioactive decay of Titanium-44 in Cas A,” Mezzacappa said.

    Because Cassiopeia A is only 11,000 light-years away within the Milky Way galaxy (relatively nearby in astronomical distances), NuSTAR is capable of detecting Ti-44 located deep in the supernova ejecta. Mapping the radiative signature of this titanium isotope provides information on the supernova’s engine of explosion.

    “The distribution of titanium is what suggests that the supernova ‘sloshes’ before it explodes, like the SASI predicts,” Mezzacappa said.

    This is a rare example of simulation predicting a physical phenomenon before it is observed experimentally.

    “Usually it’s the other way around. You observe something experimentally then try to model it,” said the OLCF’s Bronson Messer. “The SASI was discovered computationally and has now been confirmed observationally.”

    The authors of the Nature letter that discusses the NuSTAR results cite Mezzacappa’s 2003 paper introducing the SASI in The Astrophysical Journal, which was coauthored by John Blondin and Christine DeMarino, as a likely model to describe the Ti-44 distribution.

Despite observational support for the SASI, researchers are uncertain whether the SASI is entirely responsible for triggering a supernova explosion or if it is just part of the explanation. To further explore the model, Mezzacappa’s team, including the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) project’s principal investigator, Eric Lentz, is taking supernova simulations to the next level on the OLCF’s 27-petaflop Titan supercomputer located at Oak Ridge National Laboratory.

    ORNL Titan Supercomputer
    Titan at ORNL

    “The role of the SASI in generating explosion and whether or not the models are sufficiently complete to predict the course of explosion is the important question now,” Mezzacappa said. “The NuSTAR observation suggests it does aid in generating the explosion.”

    Although the terascale runs that predicted the SASI in 2003 were in three dimensions, they did not include much of the physics that can now be solved on Titan. Today, the team is using 85 million core hours and scaling to more than 60,000 cores to simulate a supernova in three dimensions with a fully physics-based model. The petascale Titan simulation, which will be completed later this year, could be the most revealing supernova explosion yet—inside our solar system anyway.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    See the full article here.


     
  • richardmitnick 2:58 pm on June 25, 2014 Permalink | Reply
    Tags: , Supercomputing,   

    From Fermilab: “Supercomputers help answer the big questions about the universe” 


Fermilab is an enduring source of strength for the US contribution to scientific research worldwide.

    Wednesday, June 25, 2014
    Jim Simone

    The proton is a complicated blob. It is composed of three particles, called quarks, which are surrounded by a roiling sea of gluons that “glue” the quarks together. In addition to interacting with its surrounding particles, each gluon can also turn itself temporarily into a quark-antiquark pair and then back into a gluon.

    proton
Proton: two up quarks, one down quark

    gluon
    Gluon, after Feynman

    This tremendously complicated subatomic dance affects measurements that are crucial to answering important questions about the universe, such as: What is the origin of mass in the universe? Why do the elementary particles we know come in three generations? Why is there so much more matter than antimatter in the universe?

    A large group of theoretical physicists at U.S. universities and DOE national laboratories, known as the USQCD collaboration, aims to help experimenters solve the mysteries of the universe by computing the effects of this tremendously complicated dance of quarks and gluons on experimental measurements. The collaboration members use powerful computers to solve the complex equations of the theory of quantum chromodynamics, or QCD, which govern the behavior of quarks and gluons.

    The USQCD computing needs are met through a combination of INCITE resources at the DOE Leadership Class Facilities at Argonne and Oak Ridge national laboratories; NSF facilities such as the NCSA Blue Waters; a small Blue Gene/Q supercomputer at Brookhaven National Laboratory; and dedicated computer clusters housed at Fermilab and Jefferson Lab. USQCD also exploits floating point accelerators such as Graphic Processing Units (GPUs) and Intel’s Xeon Phi architecture.

With funding from the DOE Office of Science SciDAC program, the USQCD collaboration coordinates and oversees the development of community software that benefits all lattice QCD groups, enabling scientists to make the most efficient use of the latest supercomputer architectures and GPU clusters. Efficiency gains are achieved through new computing algorithms and techniques, such as communication avoidance, data compression and the use of mixed precision to represent numbers.
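As one generic illustration of the mixed-precision idea (a sketch of classic iterative refinement, not USQCD or SciDAC code): do the expensive solve in single precision, then correct it with residuals computed in double precision.

import numpy as np

def mixed_precision_solve(A, b, refinements=3):
    # Solve A x = b cheaply in float32, then polish the answer in float64.
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(refinements):
        r = b - A @ x                                  # residual in double precision
        x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500)) + 500.0 * np.eye(500)   # well-conditioned test matrix
b = rng.standard_normal(500)
print("residual norm:", np.linalg.norm(b - A @ mixed_precision_solve(A, b)))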

    The nature of lattice QCD calculations is very conducive to cooperation among collaborations, even among groups that focus on different scientific applications of QCD effects. Why? The most time-consuming and expensive computing in lattice QCD—the generation of gauge configuration files—is the basis for all lattice QCD calculations. (Gauge configurations represent the sea of gluons and virtual quarks that represent the QCD vacuum.) They are most efficiently generated on the largest leadership-class supercomputers. The MILC collaboration, a subgroup of the larger USQCD collaboration, is well known for the calculation of state-of-the-art gauge configurations and freely shares them with researchers worldwide.
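To give a feel for why gauge configuration files are expensive to generate and share: a configuration stores one SU(3) matrix (a 3×3 complex matrix) on every link of a four-dimensional lattice. The lattice dimensions below are an arbitrary example, not one of the MILC ensembles:

Nx = Ny = Nz = 64              # spatial extent (hypothetical)
Nt = 128                       # temporal extent (hypothetical)
dims = 4                       # one link per space-time direction at each site

sites = Nx * Ny * Nz * Nt
links = sites * dims
bytes_per_link = 9 * 16        # 3x3 complex double-precision matrix

print(f"{sites:,} sites, {links:,} links")
print(f"one configuration is roughly {links * bytes_per_link / 1e9:.1f} GB in double precision")

# Stored as an array, the field would have shape (Nt, Nz, Ny, Nx, dims, 3, 3) with
# complex128 entries; the allocation is omitted here because it would need ~19 GB.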

Specific predictions require more specialized computations and rely on the gauge configurations as input. These calculations are usually performed on dedicated computer hardware at the labs, such as the clusters at Fermilab and Jefferson Lab and the small Blue Gene/Q at BNL, which are funded by the DOE Office of Science’s LQCD-ext Project for hardware infrastructure.

With the powerful human and computer resources of USQCD, particle physicists working on many different experiments—from measurements at the Large Hadron Collider to neutrino experiments at Fermilab—have a chance to get to the bottom of the universe’s most pressing questions.

    See the full article here.

    Fermilab Campus

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics.



     
  • richardmitnick 8:23 pm on June 23, 2014 Permalink | Reply
    Tags: , , , Supercomputing   

    From DOE Pulse: “Supercomputer exposes enzyme’s secrets” 


    June 23, 2014
    Heather Lammers, 303.275.4084,
    heather.lammers@nrel.gov

    Thanks to newer and faster supercomputers, today’s computer simulations are opening hidden vistas to researchers in all areas of science. These powerful machines are used for everything from understanding how proteins work to answering questions about how galaxies began. Sometimes the data they create manage to surprise the very researchers staring back at the computer screen—that’s what recently happened to a researcher at DOE’s National Renewable Energy Laboratory (NREL).

    “What I saw was completely unexpected,” NREL engineer Gregg Beckham said.

    two
    NREL Biochemist Michael Resch (left) and NREL Engineer Gregg Beckham discuss results of vials containing an enzymatic digestion assay of cellulose.
    Photo by Dennis Schroeder, NREL

What startled Beckham was a computer simulation of an enzyme (Cel7A) from the fungus Trichoderma reesei. The simulation showed that a part of the enzyme, the linker, may play a necessary role in breaking down biomass into the sugars used to make alternative transportation fuels.

    “A couple of years ago we decided to run a really long—well, really long being a microsecond—simulation of the entire enzyme on the surface of cellulose,” Beckham said. “We noticed the linker section of the enzyme started to bind to the cellulose—in fact, the entire linker binds to the surface of the cellulose.”

    The enzymes that the NREL researchers are examining have several different components that work together to break down biomass. The enzymes have a catalytic domain—which is the primary part of the enzyme that breaks down the material into the needed sugars. There is also a binding module, the sticky part that attaches the cellulose to the catalytic domain. The catalytic domain and the binding module are connected to each other by a linker.

    “For decades, many people have thought these linkers are quite boring,” Beckham said. “Indeed, we predicted that linkers alone act like wet noodles—they are really flexible, and unlike the catalytic domain or the binding module, they didn’t have a known, well-defined structure. But the computer simulation suggests that the linker has some function other than connecting the binding module to the catalytic domain; namely, it may have some cellulose binding function as well.”

    Cellulose is a long linear chain of glucose that makes up the main part of plant cell walls, but the bonds between the glucose molecules make it very tough to break apart. In fact, cellulose in fossilized leaves can remain intact for millions of years, but enzymes have evolved to break down this biomass into sugars by threading a single strand of cellulose up into the enzymes’ catalytic domain and cleaving the bonds that connect glucose molecules together. Scientists are interested in the enzymes in fungi like Trichoderma reesei because they are quite effective at breaking down biomass—and fungi can make a lot of protein, which is also important for biomass conversion.

    To make an alternative fuel like cellulosic ethanol or drop-in hydrocarbon fuels, biomass is pretreated with acid, hot water, or some other chemicals and heat to open up the plant cell wall. Next, enzymes are added to the biomass to break down the cellulose into glucose, which is then fermented and converted into fuel.

    While Beckham and his colleagues were excited by what the simulation showed, there was also some trepidation.

    “At first we didn’t believe it, and we thought that it must be wrong, so a colleague, Christina Payne [formerly at NREL, now an assistant professor in chemical and materials engineering at the University of Kentucky], ran another simulation on the second most abundant enzyme in Trichoderma reesei (Cel6A),” Beckham explained. “And we found exactly the same thing.

    “Many research teams have been engineering catalytic domains and binding modules, but this result perhaps suggests that we should also consider the functions of linkers. We now know they are important for binding, and we know binding is important for activity—but many unanswered questions remain that the team is working on now.”

The NREL research team experimentally verified the computational predictions by working with researchers at the University of Colorado Boulder (CU Boulder), the Swedish University of Agricultural Sciences, and Ghent University in Belgium. Using proteins made and characterized by the international project team, NREL’s Michael Resch measured the binding affinity of the binding module alone and compared it to that of the binding module with the linker attached, showing that the linker increased binding affinity to cellulose by an order of magnitude. These results were published in an article in the Proceedings of the National Academy of Sciences (PNAS). In addition to Beckham, Payne, and Resch, co-authors on the study include: Liqun Chen and Zhongping Tan (CU Boulder); Michael F. Crowley, Michael E. Himmel, and Larry E. Taylor II (NREL); Mats Sandgren and Jerry Ståhlberg (Swedish University of Agricultural Sciences); and Ingeborg Stals (University College Ghent).

    “In terms of fuels, if you make even a small improvement in these enzymes, you could then lower the enzyme loadings. On a commodities scale, there is potential for dramatic savings that will help make renewable fuels competitive with fuels derived from fossil resources,” Beckham said.

    According to Beckham, improving these enzymes is very challenging but incredibly important for cost-effective biofuels production, which the Energy Department has long recognized. “We are still unraveling a lot of the basic mechanisms about how they work. For instance, our recent paper suggests that this might be another facet of how these enzymes work and another target for improving them.”

    The research work at NREL is funded by the Energy Department’s Bioenergy Technologies Office, and the computer time was provided by both the Energy Department and the National Science Foundation (NSF). The original simulation was run on a supercomputer named Athena at the National Institute for Computational Sciences, part of the NSF Extreme Science and Engineering Discovery Environment (XSEDE). The Energy Department’s Red Mesa supercomputer at Sandia National Laboratories was used for the subsequent simulations.

    See the full article here.

DOE Pulse highlights work being done at the Department of Energy’s national laboratories. DOE’s laboratories house world-class facilities where more than 30,000 scientists and engineers perform cutting-edge research spanning DOE’s science, energy, national security and environmental quality missions. DOE Pulse is distributed twice each month.

    DOE Banner



     
  • richardmitnick 12:48 pm on June 18, 2014 Permalink | Reply
    Tags: , , , Supercomputing,   

    From Princeton: “Familiar yet strange: Water’s ‘split personality’ revealed by computer model” 

Princeton University

    June 18, 2014
    Catherine Zandonella, Office of the Dean for Research

    Seemingly ordinary, water has quite puzzling behavior. Why, for example, does ice float when most liquids crystallize into dense solids that sink?

    Using a computer model to explore water as it freezes, a team at Princeton University has found that water’s weird behaviors may arise from a sort of split personality: at very cold temperatures and above a certain pressure, water may spontaneously split into two liquid forms.

    The team’s findings were reported in the journal Nature.

    “Our results suggest that at low enough temperatures water can coexist as two different liquid phases of different densities,” said Pablo Debenedetti, the Class of 1950 Professor in Engineering and Applied Science and Princeton’s dean for research, and a professor of chemical and biological engineering.

    The two forms coexist a bit like oil and vinegar in salad dressing, except that the water separates from itself rather than from a different liquid. “Some of the molecules want to go into one phase and some of them want to go into the other phase,” said Jeremy Palmer, a postdoctoral researcher in the Debenedetti lab.

    The finding that water has this dual nature, if it can be replicated in experiments, could lead to better understanding of how water behaves at the cold temperatures found in high-altitude clouds where liquid water can exist below the freezing point in a “supercooled” state before forming hail or snow, Debenedetti said. Understanding how water behaves in clouds could improve the predictive ability of current weather and climate models, he said.

    chart
Pressure–temperature phase diagram, including an illustration of the liquid–liquid transition line proposed for several polyamorphous materials. This liquid–liquid phase transition would be a first-order, discontinuous transition between low- and high-density liquids (labeled 1 and 2). This is analogous to polymorphism of crystalline materials, where different stable crystalline states (solid 1, 2 in diagram) of the same substance can exist (e.g. diamond and graphite are two polymorphs of carbon). Like the ordinary liquid–gas transition, the liquid–liquid transition is expected to end in a critical point. At temperatures beyond these critical points there is a continuous range of fluid states, i.e. the distinction between liquids and gases is lost. If crystallization is avoided, the liquid–liquid transition can be extended into the metastable supercooled liquid regime.

    The new finding serves as evidence for the “liquid-liquid transition” hypothesis, first suggested in 1992 by Eugene Stanley and co-workers at Boston University and the subject of recent debate. The hypothesis states that the existence of two forms of water could explain many of water’s odd properties — not just floating ice but also water’s high capacity to absorb heat and the fact that water becomes more compressible as it gets colder.

    deb
    Princeton University researchers conducted computer simulations to explore what happens to water as it is cooled to temperatures below freezing and found that the supercooled liquid separated into two liquids with different densities. The finding agrees with a two-decade-old hypothesis to explain water’s peculiar behaviors, such as becoming more compressible and less dense as it is cooled. The X axis above indicates the range of crystallinity (Q6) from liquid water (less than 0.1) to ice (greater than 0.5) plotted against density (ρ) on the Y axis. The figure is a two-dimensional projection of water’s calculated “free energy surface,” a measure of the relative stability of different phases, with orange indicating high free energy and blue indicating low free energy. The two large circles in the orange region reveal a high-density liquid at 1.15 g/cm3 and low-density liquid at 0.90 g/cm3. The blue area represents cubic ice, which in this model forms at a density of about 0.88 g/cm3. (Image courtesy of Jeremy Palmer)

    At cold temperatures, the molecules in most liquids slow to a sedate pace, eventually settling into a dense and orderly solid that sinks if placed in liquid. Ice, however, floats in water due to the unusual behavior of its molecules, which as they get colder begin to push away from each other. The result is regions of lower density — that is, regions with fewer molecules crammed into a given volume — amid other regions of higher density. As the temperature falls further, the low-density regions win out, becoming so prevalent that they take over the mixture and freeze into a solid that is less dense than the original liquid.

    The work by the Princeton team suggests that these low-density and high-density regions are remnants of the two liquid phases that can coexist in a fragile, or “metastable” state, at very low temperatures and high pressures. “The existence of these two forms could provide a unifying theory for how water behaves at temperatures ranging from those we experience in everyday life all the way to the supercooled regime,” Palmer said.

    Since the proposal of the liquid-liquid transition hypothesis, researchers have argued over whether it really describes how water behaves. Experiments would settle the debate, but capturing the short-lived, two-liquid state at such cold temperatures and under pressure has proved challenging to accomplish in the lab.

    Instead, the Princeton researchers used supercomputers to simulate the behavior of water molecules — the two hydrogens and the oxygen that make up “H2O” — as the temperature dipped below the freezing point.

    The team used computer code to represent several hundred water molecules confined to a box, surrounded by an infinite number of similar boxes. As they lowered the temperature in this virtual world, the computer tracked how the molecules behaved.
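“Surrounded by an infinite number of similar boxes” refers to periodic boundary conditions, standard bookkeeping in molecular simulation. A minimal sketch of the usual minimum-image distance calculation (generic, not the Princeton group’s code; the coordinates and box size are made up):

import numpy as np

def minimum_image_distance(r1, r2, box_length):
    # Distance between two particles in a cubic box with periodic boundaries.
    d = r1 - r2
    d -= box_length * np.round(d / box_length)   # shift each component to the nearest image
    return np.linalg.norm(d)

L = 2.0                                          # box edge length, arbitrary units
a = np.array([0.05, 0.10, 1.95])
b = np.array([0.10, 0.05, 0.05])
print(minimum_image_distance(a, b, L))           # ~0.12: the nearest periodic image is used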

The team found that under certain conditions — about minus 45 degrees Celsius and about 2,400 times normal atmospheric pressure — the virtual water molecules separated into two liquids that differed in density.

    The pattern of molecules in each liquid also was different, Palmer said. Although most other liquids are a jumbled mix of molecules, water has a fair amount of order to it. The molecules link to their neighbors via hydrogen bonds, which form between the oxygen of one molecule and a hydrogen of another. These molecules can link — and later unlink — in a constantly changing network. On average, each H2O links to four other molecules in what is known as a tetrahedral arrangement.
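One widely used way to quantify this local tetrahedral order is the orientational order parameter associated with Errington and Debenedetti; note that the figure caption above uses a different measure (the crystallinity Q6), so take this as an illustrative formula rather than the specific diagnostic of the Nature paper:

q = 1 - \frac{3}{8} \sum_{j=1}^{3} \sum_{k=j+1}^{4} \left( \cos\psi_{jk} + \frac{1}{3} \right)^{2},

where \psi_{jk} is the angle formed at a central molecule by its nearest neighbors j and k; q equals 1 for a perfect tetrahedron (every \cos\psi_{jk} = -1/3) and averages to 0 for uncorrelated, ideal-gas-like arrangements.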

    The researchers found that the molecules in the low-density liquid also contained tetrahedral order, but that the high-density liquid was different. “In the high-density liquid, a fifth neighbor molecule was trying to squeeze into the pattern,” Palmer said.

    image
    Normal ice (left) contains water molecules linked into ring-like structures via hydrogen bonds (dashed blue lines) between the oxygen atoms (red beads) and hydrogen atoms (white beads) of neighboring molecules, with six water molecules per ring. Each water molecule in ice also has four neighbors that form a tetrahedron (right), with a center molecule linked via hydrogen bonds to four neighboring molecules. The green lines indicate the edges of the tetrahedron. Water molecules in liquid water form distorted tetrahedrons and ring structures that can contain more or less than six molecules per ring. (Image courtesy of Jeremy Palmer)

    The researchers also looked at another facet of the two liquids: the tendency of the water molecules to form rings via hydrogen bonds. Ice consists of six water molecules per ring. Calculations by Fausto Martelli, a postdoctoral research associate advised by Roberto Car, the Ralph W. *31 Dornte Professor in Chemistry, found that in this computer model the average number of molecules per ring decreased from about seven in the high-density liquid to just above six in the low-density liquid, but then climbed slightly before declining again to six molecules per ring as ice, suggesting that there is more to be discovered about how water molecules behave during supercooling.

    A better understanding of water’s behavior at supercooled temperatures could lead to improvements in modeling the effect of high-altitude clouds on climate, Debenedetti said. Because water droplets reflect and scatter the sunlight coming into the atmosphere, clouds play a role in whether the sun’s energy is reflected away from the planet or is able to enter the atmosphere and contribute to warming. Additionally, because water goes through a supercooled phase before forming hail or snow, such research may aid strategies for preventing ice from forming on airplane wings.

    “The research is a tour de force of computational physics and provides a splendid academic look at a very difficult problem and a scholarly controversy,” said C. Austen Angell, professor of chemistry and biochemistry at Arizona State University, who was not involved in the research. “Using a particular computer model, the Debenedetti group has provided strong support for one of the theories that can explain the outstanding properties of real water in the supercooled region.”

    In their computer simulations, the team used an updated version of a water model first developed in 1974 by Frank Stillinger, then at Bell Laboratories in Murray Hill, N.J., and now a senior chemist at Princeton, and Aneesur Rahman, then at the U.S. Argonne National Laboratory. The model is noted for its ability to capture many of water’s unusual behaviors, and it is the same model that was used to develop the liquid-liquid transition hypothesis.

    Collectively, the work took several million computer hours, which would take several human lifetimes using a typical desktop computer, Palmer said. In addition to the initial simulations, the team verified the results using six calculation methods. The computations were performed at Princeton’s High-Performance Computing Research Center’s Terascale Infrastructure for Groundbreaking Research in Science and Engineering (TIGRESS).

    The team included Yang Liu, who earned her doctorate at Princeton in 2012, and Athanassios Panagiotopoulos, the Susan Dod Brown Professor of Chemical and Biological Engineering.

    Support for the research was provided by the National Science Foundation (CHE 1213343) and the U.S. Department of Energy (DE-SC0002128 and DE-SC0008626).

    The article, Metastable liquid-liquid transition in a molecular model of water, by Jeremy C. Palmer, Fausto Martelli, Yang Liu, Roberto Car, Athanassios Z. Panagiotopoulos and Pablo G. Debenedetti, appeared in the journal Nature.

    See the full article here.

    About Princeton: Overview

    Princeton University is a vibrant community of scholarship and learning that stands in the nation’s service and in the service of all nations. Chartered in 1746, Princeton is the fourth-oldest college in the United States. Princeton is an independent, coeducational, nondenominational institution that provides undergraduate and graduate instruction in the humanities, social sciences, natural sciences and engineering.

    As a world-renowned research university, Princeton seeks to achieve the highest levels of distinction in the discovery and transmission of knowledge and understanding. At the same time, Princeton is distinctive among research universities in its commitment to undergraduate teaching.

    Today, more than 1,100 faculty members instruct approximately 5,200 undergraduate students and 2,600 graduate students. The University’s generous financial aid program ensures that talented students from all economic backgrounds can afford a Princeton education.

    Princeton Shield

    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 5:49 am on June 3, 2014 Permalink | Reply
    Tags: , , , Supercomputing   

    From The Kavli Institute at Stanford: “Solving big questions requires big computation” 

    KavliFoundation

    The Kavli Foundation

    Understanding the origins of our solar system, the future of our planet, or humanity itself requires complex calculations run on high-powered computers.

    A common thread among research efforts across Stanford’s many disciplines is the growing use of sophisticated algorithms, run by brute computing power, to solve big questions.

    In Earth sciences, computer models of climate change or carbon sequestration help drive policy decisions, and in medicine computation is helping unravel the complex relationship between our DNA and disease risk. Even in the social sciences, computation is being used to identify relationships between social networks and behaviors, work that could influence educational programs.

    dell sc

    “There’s really very little research that isn’t dependent on computing,” says Ann Arvin, vice provost and dean of research. Arvin helped support the recently opened Stanford Research Computing Center (SRCC) located at SLAC National Accelerator Laboratory, which expands the available research computing space at Stanford. The building’s green technology also reduces the energy used to cool the servers, lowering the environmental costs of carrying out research.

    “Everyone we’re hiring is computational, and not at a trivial level,” says Stanford Provost John Etchemendy, who provided an initial set of servers at the facility. “It is time that we have this facility to support those faculty.”

    Here are just a few examples of how Stanford faculty are putting computers to work to crack the mysteries of our origins, our planet and ourselves.

    Myths once explained our origins. Now we have algorithms.

    Our Origins

    Q: How did the universe form?

    For thousands of years, humans have looked to the night sky and created myths to explain the origins of the planets and stars. The real answer could soon come from the elegant computer simulations conducted by Tom Abel, an associate professor of physics at Stanford.

    Cosmologists face an ironic conundrum. By studying the current universe, we have gained a tremendous understanding of what occurred in the fractions of a second after the Big Bang, and how the first 400,000 years created the ingredients – gases, energy, etc. – that would eventually become the stars, planets and everything else. But we still don’t know what happened after those early years to create what we see in the night sky.

    “It’s the perfect problem for a physicist, because we know the initial conditions very well,” says Abel, who is also director of the Kavli Institute for Particle Astrophysics and Cosmology at SLAC. “If you know the laws of physics correctly, you should be able to exactly calculate what will happen next.”

    Easier said than done. Abel’s calculations must incorporate the laws of chemistry, atomic physics, gravity, how atoms and molecules radiate, gas and fluid dynamics and interactions, the forces associated with dark matter and so on. Those processes must then be simulated out over the course of hundreds of millions, and eventually billions, of years. Further complicating matters, a single galaxy holds one billion moving stars, and the simulation needs to consider their interactions in order to create an accurate prediction of how the universe came to be.

    “Any of the advances we make will come from writing smarter algorithms,” Abel says. “The key point of the new facility is it will allow for rapid turnaround, which will allow us to constantly develop and refine and validate new algorithms. And this will help us understand how the very first things were formed in the universe.” —Bjorn Carey //

    Q: How did we evolve?

    The human genome is essentially a gigantic data set. Deep within each person’s six billion data points are minute variations that tell the story of human evolution, and provide clues to how scientists can combat modern-day diseases.

    To better understand the causes and consequences of these genetic variations, Jonathan Pritchard, a professor of genetics and of biology, writes computer programs that can investigate those links. “Genetic variation affects how cells work, both in healthy variation and in response to disease,” Pritchard says. How that variation displays itself – in appearance or how cells work – and whether natural selection favors those changes within a population drives evolution.

    Consider, for example, variation in the gene that codes for lactase, an enzyme that allows mammals to digest milk. Most mammals turn off the lactase gene after they’ve been weaned from their mother’s milk. In populations that have historically revolved around dairy farming, however, Pritchard’s algorithms have helped to elucidate signals of strong selection since the advent of agriculture for keeping the lactase gene active throughout life, allowing people to digest milk as adults. There has been similarly strong selection on skin-pigmentation variants in non-Africans that allow better synthesis of vitamin D in regions where people are exposed to less sunlight.

    The algorithms and machine learning methods Pritchard uses have the potential to yield powerful medical insights. Studying variations in how genes are regulated within a population could reveal how and where particular proteins bind to DNA, or which genes are turned on in different cell types – information that could help design novel therapies. These inquiries can generate hundreds of thousands of data sets, and parsing them can require tens of thousands of hours of computer work.

    Pritchard is bracing for an even bigger explosion of data; as genome sequencing technologies become less expensive, he expects the number of individually sequenced genomes to jump by as much as a hundredfold in the next few years. “Storing and analyzing vast amounts of data is a fundamental challenge that all genomics groups are dealing with,” says Pritchard, who is a member of Stanford Bio-X.

    “Having access to SRCC will make our inquiries go easier and more quickly, and we can move on faster to making the next discovery.” —Bjorn Carey //

    7 billion people live on Earth. Computers might help us survive ourselves.

    Our Planet

    Q: How can we predict future climates?

    There is no lab large enough to conduct experiments on the global-scale interactions between air, water and land that control Earth’s climate, so Stanford’s Noah Diffenbaugh and his students use supercomputers.

    Computer simulations reveal that if human emissions of greenhouse gases continue at their current pace, global warming over the next century is likely to occur faster than any global-scale shift recorded in the past 65 million years. This will increase the likelihood and severity of droughts, heat waves, heavy downpours and other extreme weather events.

    Climate scientists must incorporate into their predictions a growing number of data streams – including direct measurements as well as remote-sensing observations from satellites, aircraft-based sensors, and ground-based arrays.

    “That takes a lot of computing power, especially as we try to figure out how to use newer unstructured forms of data, such as from mobile sensors,” says Diffenbaugh, an associate professor of environmental Earth system science and a senior fellow at the Stanford Woods Institute for the Environment.

    Diffenbaugh’s team plans to use the increased computing resources available at SRCC to simulate air circulation patterns at the kilometer-scale over multiple decades. This has rarely been attempted before, and could help scientists answer questions such as how the recurring El Niño ocean circulation pattern interacts with elevated atmospheric carbon dioxide levels to affect the occurrence of tornadoes in the United States.

    “We plan to use the new computing cluster to run very large high-resolution simulations of climate over regions like the U.S. and India,” Diffenbaugh says. One of the most important benefits of SRCC, however, is not one that can be measured in computing power or cycles.

    “Perhaps most importantly, the new center is bringing together scholars from across campus who are using similar methodologies to figure out new solutions to existing problems, and hopefully to tackle new problems that we haven’t imagined yet.” —Ker Than //

    Q: How can we predict if climate solutions work?

    The capture and trapping of carbon dioxide gas deep underground is one of the most viable options for mitigating the effects of global warming, but only if we can understand how that stored gas interacts with the surrounding structures.

    Hamdi Tchelepi, a professor of energy resources engineering, uses supercomputers to study interactions between injected CO2 gas and the complex rock-fluid system in the subsurface.

    “Carbon sequestration is not a simple reversal of the technology that allows us to extract oil and gas. The physics involved is more complicated, ranging from the micro-scale of sand grains to extremely large geological formations that may extend hundreds of kilometers, and the timescales are on the order of centuries, not decades,” says Tchelepi, who is also the co-director of the Stanford Center for Computational Earth and Environmental Sciences (CEES).

    For example, modeling how a large plume of CO2 injected into the ground migrates and settles within the subsurface, and whether it might escape from the injection site to affect the air quality of a faraway city, can require the solving of tens of millions of equations simultaneously. SRCC will help augment the high computing power already available to Stanford Earth scientists and students through CEES, and will serve as a testing ground for custom algorithms developed by CEES researchers to simulate complex physical processes.

    Tchelepi, who is also affiliated with the Precourt Institute for Energy, says people are often surprised to learn of the role that supercomputing plays in modern Earth sciences. But Earth scientists use more computer resources than almost anybody except the defense industry, and their computing needs can influence the designs of next-generation hardware.

    “Earth science is about understanding the complex and ever-changing dynamics of flowing air, water, oil, gas, CO2 and heat. That’s a lot of physics, requiring extensive computing resources to model.” —Ker Than //

    Q: How can we build more efficient energy networks?

    When folks crank their air conditioners during a heat wave, you can almost hear the electric grid moan. The sudden, larger-than-average demand for electricity can stress electric plants, and energy providers scramble to redistribute the load, or ask industrial users to temporarily shut down. To handle those sudden spikes in use more efficiently, Ram Rajagopal, an assistant professor of civil and environmental engineering, used supercomputers to analyze the energy usage patterns of 200,000 anonymous households and businesses in Northern California and, from those data, developed a model that could tune consumer demand and lead to a more flexible “smart grid.”

    Today, utility companies base forecasts on a 24-hour cycle that aggregates millions of households. Not surprisingly, power use peaks in the morning and evening, when people are at home. But when Rajagopal looked at 1.6 billion hourly data points, he found dramatic variations.

    Some households conformed to the norm and others didn’t. This forms the statistical underpinning for a new way to price and purchase power – by aggregating as few as a thousand customers into a unit with a predictable usage pattern. “If we want to thwart global warming we need to give this technology to communities,” says Rajagopal. Some consumers might want to pay whatever it costs to stay cool on hot days, others might conserve or defer demand to get price breaks. “I’m talking about neighborhood power that could be aligned to your beliefs,” says Rajagopal.
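    The underlying idea, grouping customers whose hourly load curves have similar and predictable shapes, can be sketched with ordinary clustering tools. The example below applies k-means to synthetic 24-hour profiles purely for illustration; it is not Rajagopal’s model, and every number in it is invented.

```python
# Cluster synthetic 24-hour household load profiles into "units" whose
# aggregate demand is predictable (illustration only, made-up data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_households, hours = 5000, 24
t = np.arange(hours)

# Each household: a small baseline plus morning and evening peaks of
# varying strength, plus noise (a stand-in for real smart-meter data).
morning = np.exp(-((t - 8) ** 2) / 4.0)
evening = np.exp(-((t - 19) ** 2) / 6.0)
weights = rng.uniform(0.0, 2.0, size=(n_households, 2))
profiles = 0.3 + weights[:, :1] * morning + weights[:, 1:] * evening
profiles += rng.normal(0.0, 0.05, size=profiles.shape)

# Group households into a handful of units with similar usage shapes.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(profiles)
for label in range(kmeans.n_clusters):
    members = profiles[kmeans.labels_ == label]
    peak_hour = int(members.mean(axis=0).argmax())
    print(f"unit {label}: {len(members)} households, peak at hour {peak_hour}")
```

    Aggregating at the level of roughly a thousand customers, as the article describes, trades individual unpredictability for a unit-level pattern that a utility can actually price.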

    Establishing a responsive smart grid and creative energy economies will become even more important as solar and wind energy – which face hourly supply limitations due to Mother Nature – become a larger slice of the energy pie. —Tom Abate //

    Know thyself. Let computation help.

    Ourselves

    Q: How does our DNA make us who we are?

    Our DNA is sometimes referred to as our body’s blueprint, but it’s really more of a sketch. Sure, it determines a lot of things, but so do the viruses and bacteria swarming our bodies, our encounters with environmental chemicals that lodge in our tissues and the chemical stew that ensues when our immune system responds to disease states.

    All of this taken together – our DNA, the chemicals, the antibodies coursing through our veins and so much more – determines our physical state at any point in time. And all that information makes for a lot of data if, like genetics professor Michael Snyder, you collected it 75 times over the course of four years.

    Snyder is a proponent of what he calls “personal omics profiling,” or the study of all that makes up our person, and he’s starting with himself. “What we’re collecting is a detailed molecular portrait of a person throughout time,” he says.

    So far, he’s turning out to be a pretty interesting test case. In one round of assessment he learned that he was becoming diabetic and was able to control the condition long before it would have been detected through a periodic medical exam.

    If personal omics profiling is going to go mainstream, serious computing will be required to tease out which of the myriad tests Snyder’s team currently runs give meaningful information and should be part of routine screening. Snyder’s sampling alone has already generated half a petabyte of data – roughly enough raw information to fill a dishwasher-size rack of servers.

    Right now, that data and the computer power required to understand it reside on campus, but new servers will be located at SRCC. “I think you are going to see a lot more projects like this,” says Snyder, who is also a Stanford Bio-X affiliate and a member of the Stanford Cancer Center.

    “Computing is becoming increasingly important in medicine.” —Amy Adams //

    Q: How do we learn to read?

    A love letter, with all of its associated emotions, conveys its message with the same set of squiggly letters as a newspaper, novel or instruction manual. How our brains learn to translate a series of lines and curves into language that carries meaning or imparts knowledge is something psychology Professor Brian Wandell has been trying to understand.

    Wandell hopes to tease out differences between the brain scans of kids learning to read normally and those who are struggling, and use that information to find the right support for kids who need help. “As we acquire information about the outcome of different reading interventions we can go back to our database to understand whether there is some particular profile in the child that works better with intervention 1, and a second profile that works better with intervention 2,” says Wandell, a Stanford Bio-X member who is also the Isaac and Madeline Stein Family Professor and professor, by courtesy, of electrical engineering.

    His team developed a way of scanning kids’ brains with magnetic resonance imaging, then knitting the million collected samples together with complex algorithms that reveal how the nerve fibers connect different parts of the brain. “If you try to do this on your laptop, it will take half a day or more for each child,” he says. Instead, he uses powerful computers to reveal specific brain changes as kids learn to read.

    Wandell is associate director of the Stanford Neurosciences Institute, where he is leading the effort to develop a computing strategy – one that involves making use of SRCC rather than including computing space in their planned new building. He says one advantage of having faculty share computing space and systems is to speed scientific progress.

    “Our hope for the new facility is that it gives us the chance to set the standards for a better environment for sharing computations and data, spreading knowledge rapidly through the community,” he says.

    Q: How do we work effectively together?

    There comes a time in every person’s life when it becomes easy to settle for the known relationship, for better or for worse, rather than seek out new ties with those who better inspire creativity and ensure success.

    Or so finds Daniel McFarland, professor of education and, by courtesy, of organizational behavior, who has studied how academic collaborations form and persist. McFarland and his own collaborators tracked signs of academic ties such as when Stanford faculty co-authored a paper, cited the same publications or got a grant together. Armed with 15 years of collaboration output on 3,000 faculty members, they developed a computer model of how networks form and strengthen over time.

    “Social networks are large, interdependent forms of data that quickly confront limits of computing power, and especially so when we study network evolution,” says McFarland.

    Their work has shown that once academic relationships have been established, they tend to continue out of habit, regardless of whether they are the most productive fit. He argues that successful academic programs or businesses should work to bring new members into collaborations and also spark new ties, to prevent more senior people from falling back on known but less effective relationships. At the same time, he comes down in favor of retreats and team-building exercises to strengthen existing good collaborations.

    McFarland’s work has implications for Stanford’s many interdisciplinary programs. He has found that collaborations across disciplines often fall apart due in part to the distant ties between researchers. “To form and sustain these ties, pairs of colleagues must interact frequently to share knowledge,” he writes. “This is perhaps why interdisciplinary centers may be useful organizational means of corralling faculty and promoting continued distant collaborations.” —Amy Adams //

    Q: What can computers tell us about how our body works?

    As you sip your morning cup of coffee, the caffeine makes its way to your cells, slots into a receptor site on the cells’ surface and triggers a series of reactions that jolt you awake. A similar process takes place when Zantac provides relief for stomach ulcers, or when chemical signals produced in the brain travel cell-to-cell through your nervous system to your heart, telling it to beat.

    In each of these instances, a drug or natural chemical is activating a cell’s G-protein coupled receptor (GPCR), the cellular target of roughly half of all known drugs, says Vijay Pande, a professor of chemistry and, by courtesy, of structural biology and of computer science at Stanford. This exchange is a complex one, though. In order for caffeine or any other molecule to influence a cell, it must fit snugly into the receptor site, which consists of 4,000 atoms and transforms between an active and inactive configuration. Current imaging technologies are unable to view that transformation, so Pande has been simulating it using his Folding@Home distributed computer network.

    So far, Pande’s group has demonstrated a few hundred microseconds of the receptor’s transformation. Although that’s an extraordinarily long chunk of time compared to similar techniques, Pande is looking forward to accessing the SRCC to investigate the basic biophysics of GPCR and other proteins. Greater computing power, he says, will allow his team to simulate larger molecules in greater detail, simulate folding sequences for longer periods of time and visualize multiple molecules as they interact. It might even lead to atom-level simulations of processes at the scale of an entire cell. All of this knowledge could be applied to computationally design novel drugs and therapies.

    “Having more computer power can dramatically change every aspect of what we can do in my lab,” says Pande, who is also a Stanford Bio-X affiliate. “Much like having more powerful rockets could radically change NASA, access to greater computing power will let us go way beyond where we can go routinely today.” —Bjorn Carey //

    See the full article here.

    The Kavli Foundation, based in Oxnard, California, is dedicated to the goals of advancing science for the benefit of humanity and promoting increased public understanding and support for scientists and their work.

    The Foundation’s mission is implemented through an international program of research institutes, professorships, and symposia in the fields of astrophysics, nanoscience, neuroscience, and theoretical physics as well as prizes in the fields of astrophysics, nanoscience, and neuroscience.


    ScienceSprings is powered by MAINGEAR computers

     