Tagged: NERSC

  • richardmitnick 1:37 pm on July 3, 2017 Permalink | Reply
    Tags: NERSC, Record-breaking 45-qubit Quantum Computing Simulation Run at NERSC on Cori

    From NERSC: “Record-breaking 45-qubit Quantum Computing Simulation Run at NERSC on Cori” 

    NERSC Logo
    NERSC

    NERSC Cray Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer

    NERSC Hopper Cray XE6 supercomputer

    June 1, 2017
    Kathy Kincade
    kkincade@lbl.gov
    +1 510 495 2124

    When two researchers from the Swiss Federal Institute of Technology (ETH Zurich) announced in April that they had successfully simulated a 45-qubit quantum circuit, the science community took notice: it was the largest ever simulation of a quantum computer, and another step closer to simulating “quantum supremacy”—the point at which quantum computers become more powerful than ordinary computers.

    A multi-qubit chip developed in the Quantum Nanoelectronics Laboratory at Lawrence Berkeley National Laboratory.

    The computations were performed at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory. Researchers Thomas Häner and Damien Steiger, both Ph.D. students at ETH, used 8,192 of 9,688 Intel Xeon Phi processors on NERSC’s newest supercomputer, Cori, to support this simulation, the largest in a series they ran at NERSC for the project.

    “Quantum computing” has been the subject of dedicated research for decades, and with good reason: quantum computers have the potential to break common cryptography techniques and simulate quantum systems in a fraction of the time it would take on current “classical” computers. They do this by leveraging the quantum states of particles to store information in qubits (quantum bits), units of quantum information akin to the bits of classical computing. Better yet, qubits have a secret power: they can perform more than one calculation at a time. One qubit can perform two calculations in a quantum superposition, two can perform four, three eight, and so forth, with a corresponding exponential increase in quantum parallelism. Yet harnessing this quantum parallelism is difficult, as observing the quantum state causes the system to collapse to just one answer.
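    The exponential growth is easy to see in a toy state-vector calculation. The short Python sketch below is purely illustrative (it has no relation to the ETH simulator or any production code): it builds a three-qubit register, which already needs 2^3 = 8 complex amplitudes, puts one qubit into superposition with a Hadamard gate, and then samples a single measurement outcome, the “collapse” described above.

    ```python
    import numpy as np

    # Illustrative sketch only: an n-qubit state vector holds 2**n complex
    # amplitudes, which is where both the "quantum parallelism" and the cost
    # of simulating it on a classical machine come from.

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    zero = np.array([1.0, 0.0])                    # single-qubit |0>

    state = zero
    for _ in range(2):                             # build a 3-qubit register |000>
        state = np.kron(state, zero)
    print(state.size)                              # 8 amplitudes for 3 qubits

    op = np.kron(H, np.eye(4))                     # H on the first qubit, identity on the rest
    state = op @ state                             # equal superposition of |000> and |100>

    probs = np.abs(state) ** 2                     # Born rule: |amplitude|^2
    outcome = np.random.choice(state.size, p=probs)
    print(format(int(outcome), "03b"))             # observing collapses to one basis state
    ```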

    So how close are we to realizing a true working prototype? It is generally thought that a quantum computer deploying 49 qubits will be able to match the computing power of today’s most powerful supercomputers. Toward this end, Häner and Steiger’s simulations will aid in benchmarking and calibrating near-term quantum computers by carrying out quantum supremacy experiments with these early devices and comparing them to their simulation results. In the meantime, we are seeing a surge in investments in quantum computing technology from the likes of Google, IBM and other leading tech companies—even Volkswagen—which could dramatically accelerate the development process.

    Simulation and Emulation of Quantum Computers

    Both emulation and simulation are important for calibrating, validating and benchmarking emerging quantum computing hardware and architectures. In a paper presented at SC16, Häner and Steiger wrote: “While large-scale quantum computers are not yet available, their performance can be inferred using quantum compilation frameworks and estimates of potential hardware specifications. However, without testing and debugging quantum programs on small scale problems, their correctness cannot be taken for granted. Simulators and emulators … are essential to address this need.”

    That paper discussed emulating quantum circuits—a common representation of quantum programs—while the 45-qubit paper focuses on simulating quantum circuits. Emulation is only possible for certain types of quantum subroutines, while the simulation of quantum circuits is a general method that also allows the inclusion of the effects of noise. Such simulations can be very challenging even on today’s fastest supercomputers, Häner and Steiger explained. For the 45-qubit simulation, for example, they used most of the available memory on each of the 8,192 nodes. “This increases the probability of node failure significantly, and we could not expect to run on the full system for more than an hour without failure,” they said. “We thus had to reduce time-to-solution at all scales (node-level as well as cluster-level) to achieve this simulation.”
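    A back-of-envelope calculation shows why the memory fills up so quickly. The numbers below are my own arithmetic, assuming one double-precision complex amplitude per basis state; they are consistent with the 0.5 petabytes and 8,192 nodes quoted in this article but are not taken from the paper itself.

    ```python
    # Rough memory estimate for a full 45-qubit state vector (assumption:
    # one complex128 amplitude, i.e. 16 bytes, per basis state).

    n_qubits = 45
    bytes_per_amplitude = 16
    total_bytes = (2 ** n_qubits) * bytes_per_amplitude

    print(total_bytes / 2**50, "PiB in total")           # 0.5 PiB, matching the article
    print(total_bytes / 8192 / 2**30, "GiB per node")    # ~64 GiB on each of 8,192 nodes
    ```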

    Optimizing the quantum circuit simulator was key. Häner and Steiger employed automatic code generation, optimized the compute kernels and applied a scheduling algorithm to the quantum supremacy circuits, thus reducing the required node-to-node communication. During the optimization process they worked with NERSC staff and used Berkeley Lab’s Roofline Model to identify potential areas where performance could be boosted.

    In addition to the 45-qubit simulation, which used 0.5 petabytes of memory on Cori and achieved a performance of 0.428 petaflops, they also simulated 30-, 36- and 42-qubit quantum circuits. When they ran 30- and 36-qubit circuits on NERSC’s Edison system for comparison, they found that the optimized simulations ran faster there as well.

    “Our optimizations improved the performance – the number of floating-point operations per time – by 10x for Edison and between 10x and 20x for Cori (depending on the circuit to simulate and the size per node),” Häner and Steiger said. “The time-to-solution decreased by over 12x when compared to the times of a similar simulation reported in a recent paper on quantum supremacy by Boixo and collaborators, which made the 45-qubit simulation possible.”

    Looking ahead, the duo is interested in performing more quantum circuit simulations at NERSC to determine the performance of near-term quantum computers solving quantum chemistry problems. They are also hoping to use solid-state drives to store larger wave functions and thus try to simulate even more qubits.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system is a Cray XE6 with a peak theoretical performance of 1.29 Petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user friendly.

    Grace Hopper

    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.

     
  • richardmitnick 10:26 am on March 23, 2017 Permalink | Reply
    Tags: NERSC, Towards Super-Efficient Ultra-Thin Silicon Solar Cells

    From LBNL via Ames Lab: “Towards Super-Efficient, Ultra-Thin Silicon Solar Cells” 

    Ames Laboratory

    LBNL


    NERSC

    March 16, 2017
    Kathy Kincade
    kkincade@lbl.gov
    +1 510 495 2124

    Ames Researchers Use NERSC Supercomputers to Help Optimize Nanophotonic Light Trapping

    Despite a surge in solar cell R&D in recent years involving emerging materials such as organics and perovskites, the solar cell industry continues to favor inorganic crystalline silicon photovoltaics. While thin-film solar cells offer several advantages—including lower manufacturing costs—long-term stability of crystalline silicon solar cells, which are typically thicker, tips the scale in their favor, according to Rana Biswas, a senior scientist at Ames Laboratory, who has been studying solar cell materials and architectures for two decades.

    “Crystalline silicon solar cells today account for more than 90 percent of all installations worldwide,” said Biswas, co-author of a new study that used supercomputers at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), a Department of Energy Office of Science User Facility, to evaluate a novel approach for creating more energy-efficient ultra-thin crystalline silicon solar cells. “The industry is very skeptical that any other material could be as stable as silicon.”


    LBL NERSC Cray XC30 Edison supercomputer


    NERSC CRAY Cori supercomputer

    Thin-film solar cells, typically fabricated from semiconductor materials such as amorphous silicon, are only a micron thick. While this makes them less expensive to manufacture than crystalline silicon solar cells, which are around 180 microns thick, it also makes them less efficient—12 to 14 percent energy conversion, versus nearly 25 percent for silicon solar cells (which translates into 15-21 percent for large area panels, depending on the size). This is because if the wavelength of incoming light is longer than the solar cell is thick, the light won’t be absorbed.

    Nanocone Arrays

    This challenge prompted Biswas and colleagues at Ames to look for ways to improve ultra-thin silicon cell architectures and efficiencies. In a paper published in Nanomaterials, they describe their efforts to develop a highly absorbing ultra-thin crystalline silicon solar cell architecture with enhanced light trapping capabilities.

    “We were able to design a solar cell with a very thin amount of silicon that could still provide high performance, almost as high performance as the thick silicon being used today,” Biswas said.

    Proposed crystalline silicon solar cell architecture developed by Ames Laboratory researchers Prathap Pathi, Akshit Peer and Rana Biswas.

    The key lies in the wavelength of light that is trapped and the nanocone arrays used to trap it. Their proposed solar architecture comprises thin flat spacer titanium dioxide layers on the front and rear surfaces of silicon, nanocone gratings on both sides with optimized pitch and height, and rear cones surrounded by a metallic reflector made of silver. They then set up a scattering matrix code to simulate light passing through the different layers and study how the light is reflected and transmitted at different wavelengths by each layer.
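    For readers who want a feel for this kind of layer-by-layer optics, here is a deliberately simplified sketch: a one-dimensional transfer-matrix calculation of reflection and transmission through a flat film stack at normal incidence. It is a generic stand-in, not the Ames scattering-matrix code, which also handles the patterned nanocone gratings and the silver back reflector; the layer indices and thicknesses below are illustrative, and absorption is ignored.

    ```python
    import numpy as np

    # Minimal 1-D transfer-matrix sketch (illustrative only): reflection and
    # transmission of a flat layer stack at normal incidence, with real
    # (non-absorbing) refractive indices for simplicity.

    def reflect_transmit(wavelength_nm, layers, n_in=1.0, n_out=1.0):
        """layers: list of (refractive_index, thickness_nm) from front to back."""
        M = np.eye(2, dtype=complex)
        for n, d in layers:
            delta = 2 * np.pi * n * d / wavelength_nm       # phase thickness of the layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        denom = n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1]
        r = (n_in * M[0, 0] + n_in * n_out * M[0, 1] - M[1, 0] - n_out * M[1, 1]) / denom
        t = 2 * n_in / denom
        return abs(r) ** 2, (n_out / n_in) * abs(t) ** 2    # R and T (R + T = 1 here)

    # Hypothetical stack: TiO2 spacer / thin silicon film / TiO2 spacer.
    stack = [(2.5, 60.0), (3.7, 1000.0), (2.5, 60.0)]
    for wl in (500, 700, 900, 1100):
        R, T = reflect_transmit(wl, stack)
        print(f"{wl} nm  R={R:.3f}  T={T:.3f}")
    ```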

    “This is a light-trapping approach that keeps the light, especially the red and long-wavelength infrared light, trapped within the crystalline silicon cell,” Biswas explained. “We did something similar to this with our amorphous silicon cells, but crystalline behaves a little differently.”

    For example, it is critical not to affect the crystalline silicon wafer—the interface of the wafer—in any way, he emphasized. “You want the interface to be completely flat to begin with, then work around that when building the solar cell,” he said. “If you try to pattern it in some way, it will introduce a lot of defects at the interface, which are not good for solar cells. So our approach ensures we don’t disturb that in any way.”

    Homegrown Code

    In addition to the cell’s unique architecture, the simulations the researchers ran on NERSC’s Edison system utilized “homegrown” code developed at Ames to model the light via the cell’s electric and magnetic fields—a “classical physics approach,” Biswas noted. This allowed them to test multiple wavelengths to determine which were optimal for light trapping. To optimize the absorption of light by the crystalline silicon based upon the wavelength, the team sent light waves of different wavelengths into a designed solar cell and then calculated the absorption of light in that solar cell’s architecture. The Ames researchers had previously studied the trapping of light in other thin-film solar cells made of organic and amorphous silicon.

    “One very nice thing about NERSC is that once you set up the problem for light, you can actually send each incoming light wavelength to a different processor (in the supercomputer),” Biswas said. “We were typically using 128 or 256 wavelengths and could send each of them to a separate processor.”
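    The pattern Biswas describes is “embarrassingly parallel”: each wavelength is an independent solve, so the sweep can be farmed out one wavelength per worker. The sketch below shows that idea with Python’s multiprocessing module; absorption_at is a hypothetical placeholder for the real per-wavelength electromagnetic calculation.

    ```python
    from multiprocessing import Pool

    def absorption_at(wavelength_nm):
        # Placeholder for the real per-wavelength solve (e.g. the layer
        # calculation sketched earlier, or the full grating simulation).
        return 1.0 / wavelength_nm

    if __name__ == "__main__":
        wavelengths = range(400, 1100, 5)       # 140 wavelengths, cf. the 128-256 quoted above
        with Pool(processes=8) as pool:         # one wavelength per worker process
            spectrum = pool.map(absorption_at, wavelengths)
        print(len(spectrum), "wavelengths computed independently")
    ```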

    Looking ahead, given that this research is focused on crystalline silicon solar cells, this new design could make its way into the commercial sector in the not-too-distant future—although manufacturing scalability could pose some initial challenges, Biswas noted.

    “It is possible to do this in a rather inexpensive way using soft lithography or nanoimprint lithography processes,” he said. “It is not that much work, but you need to set up a template or a master to do that. In terms of real-world applications, these panels are quite large, so that is a challenge to do something like this over such a large area. But we are working with some groups that have the ability to do roll to roll processing, which would be something they could get into more easily.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Ames Laboratory is a government-owned, contractor-operated research facility of the U.S. Department of Energy that is run by Iowa State University.

    For more than 60 years, the Ames Laboratory has sought solutions to energy-related problems through the exploration of chemical, engineering, materials, mathematical and physical sciences. Established in the 1940s with the successful development of the most efficient process to produce high-quality uranium metal for atomic energy, the Lab now pursues a broad range of scientific priorities.

    Ames Laboratory shares a close working relationship with Iowa State University’s Institute for Physical Research and Technology, or IPRT, a network of scientific research centers at Iowa State University, Ames, Iowa.

    DOE Banner

     
  • richardmitnick 3:21 pm on December 26, 2016 Permalink | Reply
    Tags: NERSC, Researchers Use World’s Smallest Diamonds to Make Wires Three Atoms Wide

    From SLAC: “Researchers Use World’s Smallest Diamonds to Make Wires Three Atoms Wide” 


    SLAC Lab

    December 26, 2016

    LEGO-style Building Method Has Potential for Making One-Dimensional Materials with Extraordinary Properties

    Fuzzy white clusters of nanowires on a lab bench, with a penny for scale. Assembled with the help of diamondoids, the microscopic nanowires can be seen with the naked eye because the strong mutual attraction between their diamondoid shells makes them clump together, in this case by the millions. At top right, an image made with a scanning electron microscope shows nanowire clusters magnified 10,000 times. (SEM image by Hao Yan/SIMES; photo by SLAC National Accelerator Laboratory)

    Scientists at Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory have discovered a way to use diamondoids – the smallest possible bits of diamond – to assemble atoms into the thinnest possible electrical wires, just three atoms wide.

    By grabbing various types of atoms and putting them together LEGO-style, the new technique could potentially be used to build tiny wires for a wide range of applications, including fabrics that generate electricity, optoelectronic devices that employ both electricity and light, and superconducting materials that conduct electricity without any loss. The scientists reported their results today in Nature Materials.

    “What we have shown here is that we can make tiny, conductive wires of the smallest possible size that essentially assemble themselves,” said Hao Yan, a Stanford postdoctoral researcher and lead author of the paper. “The process is a simple, one-pot synthesis. You dump the ingredients together and you can get results in half an hour. It’s almost as if the diamondoids know where they want to go.”

    This animation shows molecular building blocks joining the tip of a growing nanowire. Each block consists of a diamondoid – the smallest possible bit of diamond – attached to sulfur and copper atoms (yellow and brown spheres). Like LEGO blocks, they only fit together in certain ways that are determined by their size and shape. The copper and sulfur atoms form a conductive wire in the middle, and the diamondoids form an insulating outer shell. (SLAC National Accelerator Laboratory)

    The Smaller the Better

    An illustration shows a hexagonal cluster of seven nanowires assembled by diamondoids. Each wire has an electrically conductive core made of copper and sulfur atoms (brown and yellow spheres) surrounded by an insulating diamondoid shell. The natural attraction between diamondoids drives the assembly process. (H. Yan et al., Nature Materials)

    Although there are other ways to get materials to self-assemble, this is the first one shown to make a nanowire with a solid, crystalline core that has good electronic properties, said study co-author Nicholas Melosh, an associate professor at SLAC and Stanford and investigator with SIMES, the Stanford Institute for Materials and Energy Sciences at SLAC.

    The needle-like wires have a semiconducting core – a combination of copper and sulfur known as a chalcogenide – surrounded by the attached diamondoids, which form an insulating shell.

    Their minuscule size is important, Melosh said, because a material that exists in just one or two dimensions – as atomic-scale dots, wires or sheets – can have very different, extraordinary properties compared to the same material made in bulk. The new method allows researchers to assemble those materials with atom-by-atom precision and control.

    The diamondoids they used as assembly tools are tiny, interlocking cages of carbon and hydrogen. Found naturally in petroleum fluids, they are extracted and separated by size and geometry in a SLAC laboratory. Over the past decade, a SIMES research program led by Melosh and SLAC/Stanford Professor Zhi-Xun Shen has found a number of potential uses for the little diamonds, including improving electron microscope images and making tiny electronic gadgets.

    Stanford graduate student Fei Hua Li, left, and postdoctoral researcher Hao Yan in one of the SIMES labs where diamondoids – the tiniest bits of diamond – were used to assemble the thinnest possible nanowires. (SLAC National Accelerator Laboratory)

    Constructive Attraction

    Ball-and-stick models of diamondoid atomic structures in the SIMES lab at SLAC. SIMES researchers used the smallest possible diamondoid – adamantane, a tiny cage made of 10 carbon atoms – to assemble the smallest possible nanowires, with conductive cores just three atoms wide. (SLAC National Accelerator Laboratory)

    For this study, the research team took advantage of the fact that diamondoids are strongly attracted to each other, through what are known as van der Waals forces. (This attraction is what makes the microscopic diamondoids clump together into sugar-like crystals, which is the only reason you can see them with the naked eye.)

    They started with the smallest possible diamondoids – single cages that contain just 10 carbon atoms – and attached a sulfur atom to each. Floating in a solution, each sulfur atom bonded with a single copper ion. This created the basic nanowire building block.

    The building blocks then drifted toward each other, drawn by the van der Waals attraction between the diamondoids, and attached to the growing tip of the nanowire.

    “Much like LEGO blocks, they only fit together in certain ways that are determined by their size and shape,” said Stanford graduate student Fei Hua Li, who played a critical role in synthesizing the tiny wires and figuring out how they grew. “The copper and sulfur atoms of each building block wound up in the middle, forming the conductive core of the wire, and the bulkier diamondoids wound up on the outside, forming the insulating shell.”

    A Versatile Toolkit for Creating Novel Materials

    The team has already used diamondoids to make one-dimensional nanowires based on cadmium, zinc, iron and silver, including some that grew long enough to see without a microscope, and they have experimented with carrying out the reactions in different solvents and with other types of rigid, cage-like molecules, such as carboranes.

    The cadmium-based wires are similar to materials used in optoelectronics, such as light-emitting diodes (LEDs), and the zinc-based ones are like those used in solar applications and in piezoelectric energy generators, which convert motion into electricity.

    “You can imagine weaving those into fabrics to generate energy,” Melosh said. “This method gives us a versatile toolkit where we can tinker with a number of ingredients and experimental conditions to create new materials with finely tuned electronic properties and interesting physics.”

    Theorists led by SIMES Director Thomas Devereaux modeled and predicted the electronic properties of the nanowires, which were examined with X-rays at SLAC’s Stanford Synchrotron Radiation Lightsource, a DOE Office of Science User Facility, to determine their structure and other characteristics.

    The team also included researchers from the Stanford Department of Materials Science and Engineering, Lawrence Berkeley National Laboratory, the National Autonomous University of Mexico (UNAM) and Justus-Liebig University in Germany. Parts of the research were carried out at Berkeley Lab’s Advanced Light Source (ALS)

    LBNL ALS interior
    LBNL ALS

    and National Energy Research Scientific Computing Center (NERSC),

    NERSC CRAY Cori supercomputer
    NERSC

    both DOE Office of Science User Facilities. The work was funded by the DOE Office of Science and the German Research Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

     
  • richardmitnick 8:55 am on August 1, 2016 Permalink | Reply
    Tags: Big bang nucleosynthesis, NERSC

    From NERSC: “A peek inside the earliest moments of the universe” 

    NERSC Logo
    NERSC

    August 1, 2016
    Kathy Kincade
    kkincade@lbl.gov
    +1 510 495 2124

    The MuSun experiment at the Paul Scherrer Institute is measuring the rate for muon capture on the deuteron to better than 1.5% precision. This process is the simplest weak interaction on a nucleus that can be measured to a high degree of precision. Credit: Lawrence Berkeley National Laboratory

    The Big Bang. That spontaneous explosion some 14 billion years ago that created our universe and, in the process, all matter as we know it today.

    In the first few minutes following “the bang,” the universe quickly began expanding and cooling, allowing the formation of subatomic particles that joined forces to become protons and neutrons. These particles then began interacting with one another to create the first simple atoms. A little more time, a little more expansion, a lot more cooling—along with ever-present gravitational pull—and clouds of these elements began to morph into stars and galaxies.

    For William Detmold, an assistant professor of physics at MIT who uses lattice quantum chromodynamics (LQCD) to study subatomic particles, one of the most interesting aspects of the formation of the early universe is what happened in those first few minutes—a period known as “big bang nucleosynthesis.”

    “You start off with very high-energy particles that cool down as the universe expands, and eventually you are left with a soup of quarks and gluons, which are strongly interacting particles, and they form into protons and neutrons,” he said. “Once you have protons and neutrons, the next stage is for those protons and neutrons to come together and start making more complicated things—primarily deuterons, which interact with other neutrons and protons and start forming heavier elements, such as Helium-4, the alpha particle.”

    One of the most critical aspects of big bang nucleosynthesis is the radiative capture process, in which a proton captures a neutron and fuses to produce a deuteron and a photon. In a paper published in Physical Review Letters, Detmold and his co-authors—all members of the NPLQCD Collaboration, which studies the properties, structures and interactions of fundamental particles—describe how they used LQCD calculations to better understand this process and precisely measure the nuclear reaction rate that occurs when a neutron and proton form a deuteron. While physicists have been able to experimentally measure these phenomena in the laboratory, they haven’t been able to do the same, with certainty, using calculations alone—until now.

    “One of the things that is very interesting about the strong interaction that takes place in the radiative capture process is that you get very complicated structures forming, not just protons and neutrons,” Detmold said. “The strong interaction has this ability to have these very different structures coming out of it, and if these primordial reactions didn’t happen the way they happened, we wouldn’t have formed enough deuterium to form enough helium that then goes ahead and forms carbon. And if we don’t have carbon, we don’t have life.”

    Calculations Mirror Experiments

    For the Physical Review Letters paper, the team used the Chroma LQCD code developed at Jefferson Lab to run a series of calculations with quark masses 10-20 times their physical values. Using heavier values rather than the actual physical values reduced the cost of the calculations tremendously, Detmold noted. They then used their understanding of how the calculations should depend on mass to get to the physical value of the quark mass.

    “When we do an LQCD calculation, we have to tell the computer what the masses of the quarks we want to work with are, and if we use the values that the quark masses have in nature it is very computationally expensive,” he explained. “For simple things like calculating the mass of the proton, we just put in the physical values of the quark masses and go from there. But this reaction is much more complicated, so we can’t currently do the entire thing using the actual physical values of the quark masses.”
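    The extrapolation step Detmold describes can be pictured with a toy fit: compute an observable at several heavier-than-physical quark masses, fit a smooth model, and read off the value at the physical point. The sketch below uses invented numbers and a simple linear fit purely for illustration; it is not the NPLQCD analysis or its actual extrapolation formula.

    ```python
    import numpy as np

    # Toy mass extrapolation (invented data): fit results obtained at heavy
    # quark masses and evaluate the fit at the physical mass.

    m_quark    = np.array([10.0, 15.0, 20.0])   # in units of the physical quark mass
    observable = np.array([1.92, 2.31, 2.74])   # made-up lattice results

    coeffs = np.polyfit(m_quark, observable, deg=1)   # simple linear model in the mass
    physical = np.polyval(coeffs, 1.0)                # evaluate at the physical point
    print("extrapolated value at the physical quark mass:", round(float(physical), 3))
    ```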

    While this is the first LQCD calculation of an inelastic nuclear reaction, Detmold is particularly excited by the fact that being able to reproduce this process through calculations means researchers can now calculate other things that are similar but that haven’t been measured as precisely experimentally—such as the proton-proton fusion process that powers the sun—or measured at all.

    “The rate of the radiative capture reaction, which is really what we are calculating here, is very, very close to the experimentally measured one, which shows that we actually understand pretty well how to do this calculation, and we’ve now done it, and it is consistent with what is experimentally known,” Detmold said. “This opens up a whole range of possibilities for other nuclear interactions that we can try and calculate where we don’t know what the answer is because we haven’t, or can’t, measure them experimentally. Until this calculation, I think it is fair to say that most people were wary of thinking you could go from quark and gluon degrees of freedom to doing nuclear reactions. This research demonstrates that yes, we can.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system is a Cray XE6 with a peak theoretical performance of 1.29 Petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user friendly.

    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.

     
  • richardmitnick 7:32 am on February 23, 2016 Permalink | Reply
    Tags: NERSC

    From LBL: “Updated Workflows for New LHC” 

    Berkeley Logo

    Berkeley Lab

    February 22, 2016
    Linda Vu 510-495-2402
    lvu@lbl.gov

    After a massive upgrade, the Large Hadron Collider (LHC), the world’s most powerful particle collider, is now smashing particles at an unprecedented 13 tera-electron-volts (TeV)—nearly double the energy of its previous run from 2010-2012. In just one second, the LHC can now produce up to 1 billion collisions and generate up to 10 gigabytes of data in its quest to push the boundaries of known physics. And over the next decade, the LHC will be further upgraded to generate about 10 times more collisions and data.

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    To deal with the new data deluge, researchers working on one of the LHC’s largest experiments—ATLAS—are relying on updated workflow management tools developed primarily by a group of researchers at the Lawrence Berkeley National Laboratory (Berkeley Lab). Papers highlighting these tools were recently published in the Journal of Physics: Conference Series.

    CERN ATLAS Higgs Event
    CERN ATLAS New
    ATLAS

    “The issue with High Luminosity LHC is that we are producing ever-increasing amounts of data, faster than Moore’s Law and cannot actually see how we can do all of the computing that we need to do with the current software that we have,” says Paolo Calafiura, a scientist in Berkeley Lab’s Computational Research Division (CRD). “If we don’t either find new hardware to run our software or new technologies to make our software run faster in ways we can’t anticipate, the only choice that we have left is to be more selective in the collision events that we record. But, this decision will of course impact the science and nobody wants to do that.”

    To tackle this problem, Calafiura and his colleagues of the Berkeley Lab ATLAS Software group are developing new software tools called Yoda and AthenaMP to speed up the analysis of the data by leveraging the capabilities of next-generation Department of Energy (DOE) supercomputers like the National Energy Research Scientific Computing Center’s (NERSC’s) Cori system, as well as DOE’s current Leadership Computing Facilities, to analyze ATLAS data.

    NERSC Cray Cori supercomputer

    Yoda: Treating Single Supercomputers like the LHC Computing Grid

    Around the world, researchers rely on the LHC Computing Grid to process the petabytes of data collected by LHC detectors every year. The grid comprises 170 networked computing centers in 36 countries. CERN’s computing center, where the LHC is located, is ‘Tier 0’ of the grid. It processes the raw LHC data, and then divides it into chunks for the other Tiers. Twelve ‘Tier 1’ computing centers then accept the data directly from CERN’s computers, further process the information and then break it down into even more chunks for the hundreds of computing centers further down the grid. Once a computer finishes its analysis, it sends the findings to a centralized computer and accepts a new chunk of data.

    Like air traffic controllers, special software manages workflow on the computing grid for each of the LHC experiments. The software is responsible for breaking down the data, directing the data to its destination, telling systems on the grid when to execute an analysis and when to store information. To deal with the added deluge of data from the LHC’s upgraded ATLAS experiment, Vakhtang Tsulaia from the Berkeley Lab’s ATLAS Software group added another layer of software to the grid called Yoda Event Service system.

    The researchers note that the idea with Yoda is to replicate the LHC Computing Grid workflow on a supercomputer. So as soon as a job arrives at the supercomputer, Yoda will break the data chunk down into even smaller units, representing individual events or event ranges, and then assign those jobs to different compute nodes. Because only the portion of the job that will be processed is sent to the compute node, computing resources no longer need to stage the entire file before executing a job, so processing happens relatively quickly.

    To efficiently take advantage of available HPC resources, Yoda is also flexible enough to adapt to a variety of scheduling options—from back filling to large time allocations. After processing the individual events or event ranges, Yoda saves the output to the supercomputer’s shared file system so that these jobs can be terminated at anytime with minimal data losses. This means that Yoda jobs can now be submitted to the HPC batch queue in back filling mode. So if the supercomputer is not utilizing all of its cores for a certain amount of time, Yoda can automatically detect that and submit a properly sized job to the batch queue to utilize those resources.
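    In outline, an event service of this kind splits a job into small event ranges, hands each range to whatever slot is free, and persists the output of each range as soon as it finishes, so a terminated job loses at most one range of work. The sketch below is a schematic of that bookkeeping, not the actual Yoda code; the node names and sizes are made up.

    ```python
    from collections import deque

    # Schematic event-service bookkeeping (illustrative only, not Yoda).

    def split_into_ranges(n_events, range_size):
        return [(start, min(start + range_size, n_events))
                for start in range(0, n_events, range_size)]

    def dispatch(n_events=10_000, range_size=250, free_nodes=("n01", "n02", "n03")):
        work = deque(split_into_ranges(n_events, range_size))
        completed = []
        while work:
            for node in free_nodes:            # stand-in for backfill slots on free nodes
                if not work:
                    break
                lo, hi = work.popleft()
                # Process events [lo, hi) on `node`, then write the output for this
                # range immediately, so a later termination costs at most one range.
                completed.append((node, lo, hi))
        return completed

    print(len(dispatch()), "event ranges processed")
    ```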

    “Yoda acts like a daemon that is constantly submitting jobs to take advantage of available resources, this is what we call opportunistic computing,” says Calafiura.

    In early 2015 the team tested Yoda’s performance by running ATLAS jobs from the previous LHC run on NERSC’s Edison supercomputer and successfully scaled up to 50,000 computer processor cores.

    LBL NERSC Edison supercomputer
    NERSC Cray Edison supercomputer

    AthenaMP: Adapting ATLAS Workloads for Massively Parallel Systems

    In addition to Yoda, the Berkeley Lab ATLAS software group also developed the AthenaMP software that allows the ATLAS reconstruction, simulation and data analysis framework to run efficiently on massively parallel systems.

    “Memory has always been a scarce resource for ATLAS reconstruction jobs. In order to optimally exploit all available CPU-cores on a given compute node, we needed to have a mechanism that would allow the sharing of memory pages between processes or threads,” says Calafiura.

    AthenaMP addresses the memory problem by leveraging the Linux fork and copy-on-write mechanisms. So when a node receives a task to process, the job is initialized on one core and sub-processes are forked to other cores, which then process all of the events assigned to the initial task. This strategy allows for the sharing of memory pages between event processors running on the same compute node.
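    The fork-and-copy-on-write trick can be illustrated with a few lines of POSIX Python: do the expensive, memory-heavy initialization once, then fork the worker processes, whose reads of the shared data reuse the parent’s physical memory pages. This is only a minimal sketch of the mechanism, not AthenaMP itself.

    ```python
    import os

    # Minimal copy-on-write sketch (POSIX-only; illustrative, not AthenaMP).

    def initialize_geometry():
        # Stand-in for an expensive, memory-heavy initialization step.
        return list(range(1_000_000))

    def process_events(worker_id, events, geometry):
        # Reads of `geometry` do not copy pages, so physical memory is
        # shared across all worker processes.
        return sum(events) % len(geometry)

    def run(n_workers=4, n_events=1000):
        geometry = initialize_geometry()          # done once, before forking
        all_events = list(range(n_events))
        chunks = [all_events[i::n_workers] for i in range(n_workers)]

        pids = []
        for worker_id, chunk in enumerate(chunks):
            pid = os.fork()
            if pid == 0:                          # child: process its own slice
                process_events(worker_id, chunk, geometry)
                os._exit(0)
            pids.append(pid)

        for pid in pids:                          # parent waits for the workers
            os.waitpid(pid, 0)

    if __name__ == "__main__":
        run()
    ```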

    By running ATLAS reconstruction in one AthenaMP job with several worker processes, the team notes that they achieved a significantly reduced overall memory footprint when compared to running the same number of independent serial jobs. And, for certain configurations of the ATLAS production jobs they’ve managed to reduce the memory usage by a factor of two.

    “Our goal is to get onto more hardware and these tools help us do that. The massive scale of many high performance systems means that even a small fraction of computing power can yield large returns in processing throughput for high energy physics,” says Calafiura.

    This work was supported by DOE’s Office of Science.

    Read the papers:

    Fine grained event processing on HPCs with the ATLAS Yoda system
    http://iopscience.iop.org/article/10.1088/1742-6596/664/9/092025

    Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP): http://iopscience.iop.org/article/10.1088/1742-6596/664/7/072050

    About Computing Sciences at Berkeley Lab

    The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy’s research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

    ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab’s Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 2:51 pm on August 27, 2015 Permalink | Reply
    Tags: NERSC

    From NERSC: “NERSC, Cray Move Forward With Next-Generation Scientific Computing” 

    NERSC Logo
    NERSC

    April 22, 2015
    Jon Bashor, jbashor@lbl.gov, 510-486-5849

    The Cori Phase 1 system will be the first supercomputer installed in the new Computational Research and Theory Facility now in the final stages of construction at Lawrence Berkeley National Laboratory.

    The U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing (NERSC) Center and Cray Inc. announced today that they have finalized a new contract for a Cray XC40 supercomputer that will be the first NERSC system installed in the newly built Computational Research and Theory facility at Lawrence Berkeley National Laboratory.


    This supercomputer will be used as Phase 1 of NERSC’s next-generation system named “Cori” in honor of bio-chemist and Nobel Laureate Gerty Cori. Expected to be delivered this summer, the Cray XC40 supercomputer will feature the Intel Haswell processor. The second phase, the previously announced Cori system, will be delivered in mid-2016 and will feature the next-generation Intel Xeon Phi™ processor “Knights Landing,” a self-hosted, manycore processor with on-package high bandwidth memory that offers more than 3 teraflop/s of double-precision peak performance per single socket node.

    NERSC serves as the primary high performance computing facility for the Department of Energy’s Office of Science, supporting some 6,000 scientists annually on more than 700 projects. This latest contract represents the Office of Science’s ongoing commitment to supporting computing to address challenges such as developing new energy sources, improving energy efficiency, understanding climate change and analyzing massive data sets from observations and experimental facilities around the world.

    “This is an exciting year for NERSC and for NERSC users,” said Sudip Dosanjh, director of NERSC. “We are unveiling a brand new, state-of-the-art computing center and our next-generation supercomputer, designed to help our users begin the transition to exascale computing. Cori will allow our users to take their science to a level beyond what our current systems can do.”

    “NERSC and Cray share a common vision around the convergence of supercomputing and big data, and Cori will embody that overarching technical direction with a number of unique, new technologies,” said Peter Ungaro, president and CEO of Cray. “We are honored that the first supercomputer in NERSC’s new center will be our flagship Cray XC40 system, and we are also proud to be continuing and expanding our longstanding partnership with NERSC and the U.S. Department of Energy as we chart our course to exascale computing.”
    Support for Data-Intensive Science

    A key goal of the Cori Phase 1 system is to support the increasingly data-intensive computing needs of NERSC users. Toward this end, Phase 1 of Cori will feature more than 1,400 Intel Haswell compute nodes, each with 128 gigabytes of memory. The system will provide about the same sustained application performance as NERSC’s Hopper system, which will be retired later this year. The Cori interconnect will have a dragonfly topology based on the Aries interconnect, identical to NERSC’s Edison system.

    However, Cori Phase 1 will have twice as much memory per node as NERSC’s current Edison supercomputer (a Cray XC30 system) and will include a number of advanced features designed to accelerate data-intensive applications:

    Large number of login/interactive nodes to support applications with advanced workflows
    Immediate access queues for jobs requiring real-time data ingestion or analysis
    High-throughput and serial queues that can handle a large number of jobs for screening, uncertainty quantification, genomic data processing, image processing and similar parallel analysis
    Network connectivity that allows compute nodes to interact with external databases and workflow controllers
    The first half of an approximately 1.5 terabytes/sec NVRAM-based Burst Buffer for high-bandwidth, low-latency I/O
    A Cray Lustre-based file system with over 28 petabytes of capacity and 700 gigabytes/second I/O bandwidth

    In addition, NERSC is collaborating with Cray on two ongoing R&D efforts to maximize Cori’s data potential by enabling higher bandwidth transfers in and out of the compute node, high-transaction rate database access, and Linux container virtualization functionality on Cray compute nodes to allow custom software stack deployment.

    “The goal is to give users as familiar a system as possible, while also allowing them the flexibility to explore new workflows and paths to computation,” said Jay Srinivasan, the Computational Systems Group lead. “The Phase 1 system is designed to enable users to start running their workload on Cori immediately, while giving data-intensive workloads from other NERSC systems the ability to run on a Cray platform.”
    Burst Buffer Enhances I/O

    A key element of Cori Phase 1 is Cray’s new DataWarp technology, which accelerates application I/O and addresses the growing performance gap between compute resources and disk-based storage. This capability, often referred to as a “Burst Buffer,” is a layer of NVRAM designed to move data more quickly between processor and disk and allow users to make the most efficient use of the system. Cori Phase 1 will feature approximately 750 terabytes of capacity and approximately 750 gigabytes/second of I/O bandwidth. NERSC, Sandia and Los Alamos national laboratories and Cray are collaborating to define use cases and test early software that will provide the following capabilities:

    Improve application reliability (checkpoint-restart)
    Accelerate application I/O performance for small blocksize I/O and analysis files
    Enhance quality of service by providing dedicated I/O acceleration resources
    Provide fast temporary storage for out-of-core applications
    Serve as a staging area for jobs requiring large input files or persistent fast storage between coupled simulations
    Support post-processing analysis of large simulation data as well as in situ and in transit visualization and analysis using the Burst Buffer nodes

    Combining Extreme Scale Data Analysis and HPC on the Road to Exascale

    As previously announced, Phase 2 of Cori will be delivered in mid-2016 and will be combined with Phase 1 on the same high speed network, providing a unique resource. When fully deployed, Cori will contain more than 9,300 Knights Landing compute nodes and more than 1,900 Haswell nodes, along with the file system and a 2X increase in the applications I/O acceleration.
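    A rough multiplication, using the more than 3 teraflop/s per Knights Landing node quoted earlier in this article, suggests what that node count implies for peak performance. This is my own back-of-envelope lower bound, not an official figure.

    ```python
    # Back-of-envelope peak estimate (assumptions: >3 TF/s per KNL node, >9,300 nodes).
    knl_nodes = 9300
    peak_per_node_tflops = 3.0
    print(knl_nodes * peak_per_node_tflops / 1000, "petaflop/s from the KNL partition alone")
    ```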

    “In the scientific computing community, the line between large scale data analysis and simulation and modeling is really very blurred,” said Katie Antypas, head of NERSC’s Scientific Computing and Data Services Department. “The combined Cori system is the first system to be specifically designed to handle the full spectrum of computational needs of DOE researchers, as well as emerging needs in which data- and compute-intensive work are part of a single workflow. For example, a scientist will be able to run a simulation on the highly parallel Knights Landing nodes while simultaneously performing data analysis using the Burst Buffer on the Haswell nodes. This is a model that we expect to be important on exascale-era machines.”

    NERSC is funded by the Office of Advanced Scientific Computing Research in the DOE’s Office of Science.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system is a Cray XE6 with a peak theoretical performance of 1.29 Petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user friendly.

    (Historical photo of Grace Hopper courtesy of the Hagley Museum & Library, PC20100423_201. Design: Caitlin Youngquist/LBNL Photo: Roy Kaltschmidt/LBNL)

    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.

     
  • richardmitnick 4:31 am on February 18, 2015 Permalink | Reply
    Tags: NERSC

    From LBL: “Bigger steps: Berkeley Lab researchers develop algorithm to make simulation of ultrafast processes possible” 

    Berkeley Logo

    Berkeley Lab

    February 17, 2015
    Rachel Berkowitz

    When electronic states in materials are excited during dynamic processes, interesting phenomena such as electrical charge transfer can take place on quadrillionth-of-a-second, or femtosecond, timescales. Numerical simulations in real-time provide the best way to study these processes, but such simulations can be extremely expensive. For example, it can take a supercomputer several weeks to simulate a 10 femtosecond process. One reason for the high cost is that real-time simulations of ultrafast phenomena require “small time steps” to describe the movement of an electron, which takes place on the attosecond timescale – a thousand times faster than the femtosecond timescale.
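    The cost argument is just a ratio of timescales: the number of time steps is the simulated duration divided by the step size, as the small check below spells out (my arithmetic, using the step sizes quoted in this article).

    ```python
    # Number of integration steps = simulated duration / time step.
    femtosecond = 1.0e-15
    attosecond  = 1.0e-18

    duration = 10 * femtosecond                          # the 10 fs process mentioned above
    print(round(duration / attosecond), "steps at a ~1 attosecond step")            # 10,000
    print(round(duration / (0.5 * femtosecond)), "steps at a ~0.5 femtosecond step")  # 20
    ```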

    Model of ion (Cl) collision with atomically thin semiconductor (MoSe2). Collision region is shown in blue and zoomed in; red points show initial positions of Cl. The simulation calculates the energy loss of the ion based on the incident and emergent velocities of the Cl.

    To combat the high cost associated with the small-time steps, Lin-Wang Wang, senior staff scientist at the Lawrence Berkeley National Laboratory (Berkeley Lab), and visiting scholar Zhi Wang from the Chinese Academy of Sciences, have developed a new algorithm which increases the small time step from about one attosecond to about half a femtosecond. This allows them to simulate ultrafast phenomena for systems of around 100 atoms.

    “We demonstrated a collision of an ion [Cl] with a 2D material [MoSe2] for 100 femtoseconds. We used supercomputing systems for ten hours to simulate the problem – a great increase in speed,” says L.W. Wang. That represents a reduction from 100,000 time steps down to only 500. The results of the study were reported in a Physical Review Letters paper titled Efficient real-time time-dependent DFT method and its application to a collision of an ion with a 2D material.

    Conventional computational methods cannot be used to study systems in which electrons have been excited from the ground state, as is the case for ultrafast processes involving charge transfer. But using real-time simulations, an excited system can be modeled with time-dependent quantum mechanical equations that describe the movement of electrons.

    The traditional algorithms work by directly manipulating these equations. Wang’s new approach is to expand the equations into individual terms, based on which states are excited at a given time. The trick, which he has solved, is to figure out the time evolution of the individual terms. The advantage is that some terms in the expanded equations can be eliminated.

    Zhi Wang (left) and Berkeley Lab’s Lin-Wang Wang (right).

    “By eliminating higher energy terms, you significantly reduce the dimension of your problem, and you can also use a bigger time step,” explains Wang, describing the key to the algorithm’s success. Solving the equations with bigger time steps reduces the computational cost and increases the speed of the simulations.
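    The general idea, stripped of all the real-time TDDFT machinery, looks like the toy calculation below: expand the state in the eigenbasis of a model Hamiltonian, keep only the lowest-energy terms, and propagate just those coefficients. Dropping the highest-energy terms removes the fastest-oscillating phases, which are what limit the step size of an approximate integrator. This is an illustration of basis truncation in general, not Wang’s published algorithm.

    ```python
    import numpy as np

    # Toy basis-truncation sketch: keep only the low-energy eigenstates of a
    # model Hamiltonian and propagate in that reduced subspace.

    rng = np.random.default_rng(0)
    dim, keep = 200, 20

    A = rng.standard_normal((dim, dim))
    H = (A + A.T) / 2                              # model Hermitian "Hamiltonian"
    energies, states = np.linalg.eigh(H)           # columns of `states` are eigenvectors

    # Initial state built mostly from low-energy states, plus a little noise.
    psi0 = states[:, :5] @ np.ones(5) + 0.02 * rng.standard_normal(dim)
    psi0 /= np.linalg.norm(psi0)

    coeffs = states.T @ psi0                       # expansion coefficients
    c = coeffs[:keep].astype(complex)              # truncate to the lowest-energy terms

    dt, n_steps = 0.5, 100
    for _ in range(n_steps):
        c *= np.exp(-1j * energies[:keep] * dt)    # exact phase evolution in the subspace

    psi_t = states[:, :keep] @ c                   # back to the original representation
    print("fraction of the state kept by the truncation:",
          round(float(np.vdot(c, c).real), 3))
    ```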

    Comparing the new algorithm with the old, slower algorithm yields similar results, e.g., the predicted energies and velocities of an atom passing through a layer of material are the same for both models. This new algorithm opens the door for efficient real-time simulations of ultrafast processes and electron dynamics, such as excitation in photovoltaic materials and ultrafast demagnetization following an optical excitation.

    The work was supported by the Department of Energy’s Office of Science and used the resources of the National Energy Research Scientific Computing center (NERSC).

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 5:39 pm on November 12, 2014 Permalink | Reply
    Tags: NERSC

    From LBL: “Latest Supercomputers Enable High-Resolution Climate Models, Truer Simulation of Extreme Weather” 

    Berkeley Logo

    Berkeley Lab

    November 12, 2014
    Julie Chao (510) 486-6491

    Not long ago, it would have taken several years to run a high-resolution simulation on a global climate model. But using some of the most powerful supercomputers now available, Lawrence Berkeley National Laboratory (Berkeley Lab) climate scientist Michael Wehner was able to complete a run in just three months.

    What he found was that not only were the simulations much closer to actual observations, but the high-resolution models were far better at reproducing intense storms, such as hurricanes and cyclones. The study, The effect of horizontal resolution on simulation quality in the Community Atmospheric Model, CAM5.1, has been published online in the Journal of Advances in Modeling Earth Systems.

    “I’ve been calling this a golden age for high-resolution climate modeling because these supercomputers are enabling us to do gee-whiz science in a way we haven’t been able to do before,” said Wehner, who was also a lead author for the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). “These kinds of calculations have gone from basically intractable to heroic to now doable.”

    Michael Wehner, Berkeley Lab climate scientist

    Using version 5.1 of the Community Atmospheric Model, developed by the Department of Energy (DOE) and the National Science Foundation (NSF) for use by the scientific community, Wehner and his co-authors conducted an analysis for the period 1979 to 2005 at three spatial resolutions: 25 km, 100 km, and 200 km. They then compared those results to each other and to observations.
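    The cost of finer grids adds up quickly, as a rough count of grid cells shows. The figures below are my own arithmetic on the three grid spacings mentioned above, ignoring the additional expense of the shorter time steps a finer grid requires.

    ```python
    # Approximate number of grid cells needed to tile the globe at each resolution.
    earth_surface_km2 = 5.1e8
    for cell_km in (200, 100, 25):
        cells = earth_surface_km2 / cell_km**2
        print(f"{cell_km:>3} km grid: ~{cells:,.0f} cells")
    ```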

    One simulation generated 100 terabytes of data, or 100,000 gigabytes. The computing was performed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility. “I’ve literally waited my entire career to be able to do these simulations,” Wehner said.

    The higher resolution was particularly helpful in mountainous areas, since the models average the altitude over each grid cell (roughly 25 km on a side in the high-resolution run versus 200 km in the low-resolution run). With a more accurate representation of mountainous terrain, the higher-resolution model is better able to simulate snow and rain in those regions.
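
    As a toy illustration of that averaging effect (not the CAM5.1 code itself, and using an invented elevation profile), the snippet below block-averages a synthetic mountain transect onto 25 km and 200 km grid cells and reports how much of the peak height each grid retains.

```python
# A toy illustration of why coarse grids smooth out mountains: average a
# synthetic 1D elevation profile over 25 km and 200 km grid cells and compare
# the tallest peak each grid "sees". The terrain is invented for the example.
import numpy as np

km = np.arange(0, 1600)                                   # 1 km sampling along a transect
terrain = 1500 + 2500 * np.exp(-(((km - 800) % 400) - 200) ** 2 / (2 * 30 ** 2))  # ridges every 400 km

def block_average(z, cell_km):
    """Average the elevation profile over grid cells of the given width (km)."""
    n = len(z) // cell_km
    return z[: n * cell_km].reshape(n, cell_km).mean(axis=1)

print(f"true peak elevation: {terrain.max():.0f} m")
for cell in (25, 200):                                    # high- vs low-resolution cells
    peak = block_average(terrain, cell).max()
    print(f"{cell:>3} km cells: tallest grid-cell elevation ~ {peak:.0f} m")
# The 25 km grid retains far more of the real peak height, so orographic snow
# and rain are captured much better, as described above.
```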

    “High resolution gives us the ability to look at intense weather, like hurricanes,” said Kevin Reed, a researcher at the National Center for Atmospheric Research (NCAR) and a co-author on the paper. “It also gives us the ability to look at things locally at a lot higher fidelity. Simulations are much more realistic at any given place, especially if that place has a lot of topography.”

    The high-resolution model produced stronger storms and more of them, which was closer to the actual observations for most seasons. “In the low-resolution models, hurricanes were far too infrequent,” Wehner said.

    The IPCC chapter on long-term climate change projections that Wehner was a lead author on concluded that a warming world will cause some areas to be drier and others to see more rainfall, snow, and storms. Extremely heavy precipitation was projected to become even more extreme in a warmer world. “I have no doubt that is true,” Wehner said. “However, knowing it will increase is one thing, but having a confident statement about how much and where as a function of location requires the models do a better job of replicating observations than they have.”

    Wehner says the high-resolution models will help scientists to better understand how climate change will affect extreme storms. His next project is to run the model for a future-case scenario. Further down the line, Wehner says scientists will be running climate models with 1 km resolution. To do that, they will have to have a better understanding of how clouds behave.

    “A cloud system-resolved model can reduce one of the greatest uncertainties in climate models, by improving the way we treat clouds,” Wehner said. “That will be a paradigm shift in climate modeling. We’re at a shift now, but that is the next one coming.”

    The paper’s other co-authors include Fuyu Li, Prabhat, and William Collins of Berkeley Lab; and Julio Bacmeister, Cheng-Ta Chen, Christopher Paciorek, Peter Gleckler, Kenneth Sperber, Andrew Gettelman, and Christiane Jablonowski from other institutions. The research was supported by the Office of Biological and Environmental Research in the Department of Energy’s Office of Science.

    See the full article here.


     
  • richardmitnick 3:41 pm on September 27, 2014 Permalink | Reply
    Tags: , , NERSC,   

    From LBL: “Pore models track reactions in underground carbon capture” 

    Berkeley Logo

    Berkeley Lab

    September 25, 2014

    Using tailor-made software running on top-tier supercomputers, a Lawrence Berkeley National Laboratory team is creating microscopic pore-scale simulations that complement or push beyond laboratory findings.

    Computed pH on calcite grains at 1 micron resolution. The iridescent grains mimic crushed material geoscientists extract from saline aquifers deep underground to study with microscopes. Researchers want to model what happens to the crystals’ geochemistry when the greenhouse gas carbon dioxide is injected underground for sequestration. Image courtesy of David Trebotich, Lawrence Berkeley National Laboratory.

    The models of microscopic underground pores could help scientists evaluate ways to store carbon dioxide produced by power plants, keeping it from contributing to global climate change.

    The models could be a first, says David Trebotich, the project’s principal investigator. “I’m not aware of any other group that can do this, not at the scale at which we are doing it, both in size and computational resources, as well as the geochemistry.” His evidence is a colorful portrayal of jumbled calcite crystals derived solely from mathematical equations.

    The iridescent menagerie is intended to act just like the real thing: minerals geoscientists extract from saline aquifers deep underground. The goal is to learn what will happen when fluids pass through the material should power plants inject carbon dioxide underground.

    Lab experiments can only measure what enters and exits the model system. Now modelers would like to identify more of what happens within the tiny pores that exist in underground materials, as chemicals are dissolved in some places but precipitate in others, potentially resulting in preferential flow paths or even clogs.

    Geoscientists give Trebotich’s group of modelers microscopic computerized tomography (CT, similar to the scans done in hospitals) images of their field samples. That lets both camps probe an anomaly: reactions in the tiny pores happen much more slowly in real aquifers than they do in laboratories.

    Going deep

    Deep saline aquifers are underground formations of salty water found in sedimentary basins all over the planet. Scientists think they’re the best deep geological feature to store carbon dioxide from power plants.

    But experts need to know whether the greenhouse gas will stay bottled up as more and more of it is injected, spreading a fluid plume and building up pressure. “If it’s not going to stay there, (geoscientists) will want to know where it is going to go and how long that is going to take,” says Trebotich, who is a computational scientist in Berkeley Lab’s Applied Numerical Algorithms Group.

    He hopes their simulation results ultimately will translate to field scale, where “you’re going to be able to model a CO2 plume over a hundred years’ time and kilometers in distance.” But for now his group’s focus is at the microscale, with attention toward the even smaller nanoscale.

    At such tiny dimensions, flow, chemical transport, mineral dissolution and mineral precipitation occur within the pores where individual grains and fluids commingle, says a 2013 paper Trebotich coauthored with geoscientists Carl Steefel (also of Berkeley Lab) and Sergi Molins in the journal Reviews in Mineralogy and Geochemistry.

    These dynamics, the paper added, create uneven conditions that can produce new structures and self-organized materials – nonlinear behavior that can be hard to describe mathematically.

    Modeling at 1 micron resolution, his group has achieved “the largest pore-scale reactive flow simulation ever attempted” as well as “the first-ever large-scale simulation of pore-scale reactive transport processes on real-pore-space geometry as obtained from experimental data,” says the 2012 annual report of the lab’s National Energy Research Scientific Computing Center (NERSC).

    The simulation required about 20 million processor hours using 49,152 of the 153,216 computing cores in Hopper, a Cray XE6 that at the time was NERSC’s flagship supercomputer.

    Cray Hopper at NERSC

    “As CO2 is pumped underground, it can react chemically with underground minerals and brine in various ways, sometimes resulting in mineral dissolution and precipitation, which can change the porous structure of the aquifer,” the NERSC report says. “But predicting these changes is difficult because these processes take place at the pore scale and cannot be calculated using macroscopic models.

    “The dissolution rates of many minerals have been found to be slower in the field than those measured in the laboratory. Understanding this discrepancy requires modeling the pore-scale interactions between reaction and transport processes, then scaling them up to reservoir dimensions. The new high-resolution model demonstrated that the mineral dissolution rate depends on the pore structure of the aquifer.”

    Trebotich says “it was the hardest problem that we could do for the first run.” But the group redid the simulation about 2½ times faster in an early trial of Edison, a Cray XC30 that succeeded Hopper. Edison, Trebotich says, has higher memory bandwidth.

    Cray Edison at NERSC

    Rapid changes

    Generating 1-terabyte data sets for each microsecond time step, the Edison run demonstrated how quickly conditions can change inside each pore. It also provided a good workout for the combination of interrelated software packages the Trebotich team uses.

    The first, Chombo, takes its name from a Swahili word meaning “toolbox” or “container” and was developed by a different Applied Numerical Algorithms Group team. Chombo is a supercomputer-friendly platform that’s scalable: “You can run it on multiple processor cores, and scale it up to do high-resolution, large-scale simulations,” he says.

    Trebotich modified Chombo to add flow and reactive transport solvers. The group also incorporated the geochemistry components of CrunchFlow, a package Steefel developed, to create Chombo-Crunch, the code used for their modeling work. The simulations produce resolutions “very close to imaging experiments,” the NERSC report said, combining simulation and experiment to achieve a key goal of the Department of Energy’s Energy Frontier Research Center for Nanoscale Control of Geologic CO2.
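
    One common way to couple such solvers is operator splitting: at each time step the flow and transport solver moves dissolved species through the pore space, then the geochemistry package reacts them with the mineral surfaces. The sketch below is a schematic one-dimensional illustration of that splitting, not the actual Chombo-Crunch interfaces; the grid, rates, and boundary values are invented for the example.

```python
# A schematic sketch of operator splitting between transport and geochemistry
# (NOT the actual Chombo-Crunch code or its API). Each step advects/diffuses
# the solute, then lets an assumed first-order dissolution reaction relax it
# toward an equilibrium concentration. All parameters are illustrative.
import numpy as np

nx, dx, dt = 200, 1e-6, 1e-4          # 1-micron cells, illustrative time step
u, D = 1e-3, 1e-9                     # pore velocity (m/s), diffusivity (m^2/s)
k_rate = 1e-2                         # assumed dissolution rate constant (1/s)
c = np.zeros(nx)                      # solute concentration along a single pore
c[0] = 1.0                            # fixed inlet concentration

def transport_step(c):
    """Explicit upwind advection + central-difference diffusion (crude boundaries)."""
    adv = -u * (c - np.roll(c, 1)) / dx
    dif = D * (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx ** 2
    c_new = c + dt * (adv + dif)
    c_new[0] = 1.0                    # hold the inlet fixed
    return c_new

def reaction_step(c, c_eq=2.0):
    """Relax toward an assumed equilibrium concentration at the grain surface."""
    return c + dt * k_rate * (c_eq - c)

for _ in range(1000):                 # operator splitting: transport, then react
    c = reaction_step(transport_step(c))

print(f"mean concentration after 1000 steps: {c.mean():.3f}")
```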

    Now Trebotich’s team has three huge allocations on DOE supercomputers to make their simulations even more detailed. The Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program is providing 80 million processor hours on Mira, an IBM Blue Gene/Q at Argonne National Laboratory. Through the Advanced Scientific Computing Research Leadership Computing Challenge (ALCC), the group has another 50 million hours on NERSC computers and 50 million on Titan, a Cray XK7 at the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory. The team also held an ALCC award last year for 80 million hours at Argonne and 25 million at NERSC.

    Mira at Argonne

    Titan at Oak Ridge

    With the computer time, the group wants to refine their image resolutions to half a micron (half of a millionth of a meter). “This is what’s known as the mesoscale: an intermediate scale that could make it possible to incorporate atomistic-scale processes involving mineral growth at precipitation sites into the pore scale flow and transport dynamics,” Trebotich says.

    Meanwhile, he thinks their micron-scale simulations already are good enough to provide “ground-truthing” in themselves for the lab experiments geoscientists do.

    See the full article here.


     
  • richardmitnick 7:43 am on July 12, 2014 Permalink | Reply
    Tags: , , , , NERSC   

    From NERSC: “Hot Plasma Partial to Bootstrap Current” 

    NERSC Logo
    NERSC

    July 9, 2014
    Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov

    Supercomputers at NERSC are helping plasma physicists “bootstrap” a potentially more affordable and sustainable fusion reaction. If successful, fusion reactors could provide almost limitless clean energy.

    In a fusion reaction, energy is released when two hydrogen isotopes are fused together to form a heavier nucleus, helium. To achieve reaction rates high enough to make fusion a useful energy source, the hydrogen contained inside the reactor core must be heated to extremely high temperatures—more than 100 million degrees Celsius—which transforms it into hot plasma. Another key requirement is magnetic confinement: strong magnetic fields keep the plasma from touching (and being cooled by) the vessel walls while compressing it enough to fuse the isotopes.
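
    The article does not name the specific isotopes, but the fuel most fusion programs target is deuterium and tritium. As a quick back-of-the-envelope check, the energy released per reaction follows from the mass defect of standard atomic masses:

```python
# A back-of-the-envelope check of the energy released by the deuterium-tritium
# reaction (D + T -> He-4 + n), computed from the mass defect. The isotope
# choice is an illustrative assumption; the article only says "hydrogen isotopes".
AMU_MEV = 931.494            # energy equivalent of one atomic mass unit (MeV)

m_deuterium = 2.014102       # atomic masses in unified atomic mass units
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
q_value = mass_defect * AMU_MEV
print(f"energy released per D-T fusion: {q_value:.1f} MeV")   # ~17.6 MeV
```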

    A calculation of the self-generated plasma current in the W7-X reactor, performed using the SFINCS code on Edison. The colors represent the amount of electric current along the magnetic field, and the black lines show magnetic field lines. Image: Matt Landreman

    So there’s a lot going on inside the plasma as it heats up, not all of it good. Driven by electric and magnetic forces, charged particles swirl around and collide into one another, and the central temperature and density are constantly evolving. In addition, plasma instabilities disrupt the reactor’s ability to produce sustainable energy by increasing the rate of heat loss.

    Fortunately, research has shown that other, more beneficial forces are also at play within the plasma. For example, if the pressure of the plasma varies across the radius of the vessel, a self-generated current will spontaneously arise within the plasma—a phenomenon known as the “bootstrap” current.

    Now an international team of researchers has used NERSC supercomputers to further study the bootstrap current, which could help reduce or eliminate the need for an external current driver and pave the way to a more cost-effective fusion reactor. Matt Landreman, research associate at the University of Maryland’s Institute for Research in Electronics and Applied Physics, collaborated with two research groups to develop and run new codes at NERSC that more accurately calculate this self-generated current. Their findings appear in Plasma Physics and Controlled Fusion and Physics of Plasmas.

    “The codes in these two papers are looking at the average plasma flow and average rate at which particles escape from the confinement, and it turns out that plasma in a curved magnetic field will generate some average electric current on its own,” Landreman said. “Even if you aren’t trying to drive a current, if you take the hydrogen and heat it up and confine it in a curved magnetic field, it creates this current that turns out to be very important. If we ever want to make a tokamak fusion plant down the road, for economic reasons the plasma will have to supply a lot of its own current.”
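
    For a rough sense of scale (a textbook-style estimate, not the kinetic calculation PERFECT or SFINCS actually performs), the bootstrap current density is often approximated as proportional to the radial pressure gradient divided by the poloidal field, with a square root of the local inverse aspect ratio in front. The sketch below evaluates that scaling for an invented pressure profile; every parameter is an assumption for illustration only.

```python
# A rough, textbook-style estimate of the bootstrap current, NOT the kinetic
# calculation done by PERFECT or SFINCS. Up to an order-unity coefficient,
# j_bs ~ -(sqrt(eps) / B_theta) * dp/dr, i.e. it is driven by the radial
# pressure gradient. All profile parameters below are illustrative assumptions.
import numpy as np

a = 0.5                       # minor radius (m), assumed
R0 = 1.7                      # major radius (m), assumed
B_theta = 0.5                 # poloidal magnetic field (T), assumed constant
p0 = 1.0e5                    # central plasma pressure (Pa), assumed

r = np.linspace(0.01, a, 200)
p = p0 * (1 - (r / a) ** 2) ** 2          # a simple peaked pressure profile
dp_dr = np.gradient(p, r)
eps = r / R0                              # local inverse aspect ratio

j_bs = -np.sqrt(eps) / B_theta * dp_dr    # bootstrap current density (A/m^2)

r_peak = r[np.argmax(j_bs)]
print(f"peak bootstrap current density ~ {j_bs.max():.2e} A/m^2 at r ~ {r_peak:.2f} m")
# The steeper the pressure gradient (as in the edge pedestal discussed below),
# the larger this self-generated current.
```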

    One of the unique things about plasmas is that there is often a complicated interaction between where particles are in space and their velocity, Landreman added.

    “To understand some of their interesting and complex behaviors, we have to solve an equation that takes into account both the position and the velocity of the particle,” he said. “That is the core of what these computations are designed to do.”

    Evolving Plasma Behavior

    Interior of the Alcator C-Mod tokamak at the Massachusetts Institute of Technology’s Plasma Science and Fusion Center. Image: Mike Garrett

    The Plasma Physics and Controlled Fusion paper focuses on plasma behavior in tokamak reactors using PERFECT, a code Landreman wrote. Tokamak reactors, first introduced in the 1950s, are today considered by many to be the best candidate for producing controlled thermonuclear fusion power. A tokamak features a torus (doughnut-shaped) vessel in which a combination of external magnets and a current driven in the plasma creates a stable confinement system.

    In particular, PERFECT was designed to examine the plasma edge, a region of the tokamak where “lots of interesting things happen,” Landreman said. Before PERFECT, other codes were used to predict the flows and bootstrap current in the central plasma and solve equations that assume the gradients of density and temperature are gradual.

    “The problem with the plasma edge is that the gradients are very strong, so these previous codes are not necessarily valid in the edge, where we must solve a more complicated equation,” he said. “PERFECT was built to solve such an equation.”

    For example, in most of the inner part of the tokamak there is a fairly gradual gradient of the density and temperature. “But at the edge there is a fairly big jump in density and temperature—what people call the edge pedestal. What is different about PERFECT is that we are trying to account for some of this very strong radial variation,” Landreman explained.

    These findings are important because researchers are concerned that the bootstrap current may affect edge stability. PERFECT is also used to calculate plasma flow, which may likewise affect edge stability.

    “My co-authors had previously done some analytic calculations to predict how the plasma flow and heat flux would change in the pedestal region compared to places where radial gradients aren’t as strong,” Landreman said. “We used PERFECT to test these calculations with a brute force numerical calculation at NERSC and found that they agreed really well. The analytic calculations provide insight into how the plasma flow and heat flux will be affected by these strong radial gradients.”

    From Tokamak to Stellarator

    In the Physics of Plasmas study, the researchers used a second code, SFINCS, to focus on related calculations in a different kind of confinement concept: a stellarator. In a stellarator the magnetic field is not axisymmetric, meaning that it looks different as you circle around the donut hole. As Landreman put it, “A tokamak is to a stellarator as a standard donut is to a cruller.”

    HSX stellarator

    First introduced in the 1950s, stellarators have played a central role in the German and Japanese fusion programs and were popular in the U.S. until the 1970s when many fusion scientists began favoring the tokamak design. In recent years several new stellarators have appeared, including the Wendelstein 7-X (W7-X) in Germany, the Helically Symmetric Experiment in the U.S. and the Large Helical Device in Japan. Two of Landreman’s coauthors on the Physics of Plasmas paper are physicists from the Max Planck Institute for Plasma Physics, where W7-X is being constructed.

    “In the W7-X design, the amount of plasma current has a strong effect on where the heat is exhausted to the wall,” Landreman explained. “So at Max Planck they are very concerned about exactly how much self-generated current there will be when they turn on their machine. Based on a prediction for this current, a set of components called the ‘divertor’ was located inside the vacuum vessel to accept the large heat exhaust. But if the plasma makes more current than expected, the heat will come out in a different location, and you don’t want to be surprised.”

    Their concerns stemmed from the fact that the previous code was developed when computers were too slow to solve the “real” 4D equation, he added.

    “The previous code made an approximation that you could basically ignore all the dynamics in one of the dimensions (particle speed), thereby reducing 4D to 3D,” Landreman said. “Now that computers are faster, we can test how good this approximation was. And what we found was that basically the old code was pretty darn accurate and that the predictions made for this bootstrap current are about right.”
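
    A back-of-the-envelope count shows why dropping the speed dimension was attractive when computers were slower; the grid sizes below are illustrative assumptions, not SFINCS’s actual resolution.

```python
# An illustrative count of why the older approach collapsed 4D to 3D: resolving
# the speed coordinate multiplies the number of unknowns. Grid sizes are
# assumptions for illustration, not SFINCS's actual resolution.
n_theta, n_zeta, n_pitch, n_speed = 64, 128, 64, 32   # angles, pitch angle, speed

unknowns_3d = n_theta * n_zeta * n_pitch              # speed dimension averaged away
unknowns_4d = unknowns_3d * n_speed                   # speed resolved ("real" 4D equation)

bytes_per_value = 8                                   # double precision
for label, n_unknowns in (("3D", unknowns_3d), ("4D", unknowns_4d)):
    print(f"{label}: {n_unknowns:,} unknowns "
          f"(~{n_unknowns * bytes_per_value / 1e6:.0f} MB per stored field)")
# The 4D system is n_speed times larger, and the linear solves it requires grow
# even faster, which is why resolving speed only became routine as computers
# sped up.
```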

    The calculations for both studies were run on Hopper and Edison using some additional NERSC resources, Landreman noted.

    “I really like running on NERSC systems because if you have a problem, you ask a consultant and they get back to you quickly,” Landreman said. “Also knowing that all the software is up to date and it works. I’ve been using NX lately to speed up the graphics. It’s great because you can plot results quickly without having to download any data files to your local computer.”

    See the full article here.

    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system, a Cray XE6 with a peak theoretical performance of 1.29 Petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user friendly.

    (Historical photo of Grace Hopper courtesy of the Hagley Museum & Library, PC20100423_201. Design: Caitlin Youngquist/LBNL Photo: Roy Kaltschmidt/LBNL)

    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.



     