Tagged: NERSC

  • richardmitnick 3:41 pm on September 27, 2014
    Tags: CO2 studies, NERSC

    From LBL: “Pore models track reactions in underground carbon capture” 

    Berkeley Lab

    September 25, 2014

    Using tailor-made software running on top-tier supercomputers, a Lawrence Berkeley National Laboratory team is creating microscopic pore-scale simulations that complement or push beyond laboratory findings.

    Computed pH on calcite grains at 1 micron resolution. The iridescent grains mimic crushed material geoscientists extract from saline aquifers deep underground to study with microscopes. Researchers want to model what happens to the crystals’ geochemistry when the greenhouse gas carbon dioxide is injected underground for sequestration. Image courtesy of David Trebotich, Lawrence Berkeley National Laboratory.

    The models of microscopic underground pores could help scientists evaluate ways to store carbon dioxide produced by power plants, keeping it from contributing to global climate change.

    The models could be a first, says David Trebotich, the project’s principal investigator. “I’m not aware of any other group that can do this, not at the scale at which we are doing it, both in size and computational resources, as well as the geochemistry.” His evidence is a colorful portrayal of jumbled calcite crystals derived solely from mathematical equations.

    The iridescent menagerie is intended to act just like the real thing: minerals geoscientists extract from saline aquifers deep underground. The goal is to learn what will happen when fluids pass through the material should power plants inject carbon dioxide underground.

    Lab experiments can only measure what enters and exits the model system. Modelers now want to identify more of what happens within the tiny pores of underground materials, where chemicals dissolve in some places but precipitate in others, potentially creating preferential flow paths or even clogs.

    Geoscientists give Trebotich’s group of modelers microscopic computerized tomography (CT, similar to the scans done in hospitals) images of their field samples. That lets both camps probe an anomaly: reactions in the tiny pores happen much more slowly in real aquifers than they do in laboratories.

    Going deep

    Deep saline aquifers are underground formations of salty water found in sedimentary basins all over the planet. Scientists think they’re the best deep geological feature to store carbon dioxide from power plants.

    But experts need to know whether the greenhouse gas will stay bottled up as more and more of it is injected, spreading a fluid plume and building up pressure. “If it’s not going to stay there (geoscientists) will want to know where it is going to go and how long that is going to take,” says Trebotich, who is a computational scientist in Berkeley Lab’s Applied Numerical Algorithms Group.

    He hopes their simulation results ultimately will translate to field scale, where “you’re going to be able to model a CO2 plume over a hundred years’ time and kilometers in distance.” But for now his group’s focus is at the microscale, with attention toward the even smaller nanoscale.

    At such tiny dimensions, flow, chemical transport, mineral dissolution and mineral precipitation occur within the pores where individual grains and fluids commingle, says a 2013 paper Trebotich coauthored with geoscientists Carl Steefel (also of Berkeley Lab) and Sergi Molins in the journal Reviews in Mineralogy and Geochemistry.

    These dynamics, the paper added, create uneven conditions that can produce new structures and self-organized materials – nonlinear behavior that can be hard to describe mathematically.

    Modeling at 1 micron resolution, his group has achieved “the largest pore-scale reactive flow simulation ever attempted” as well as “the first-ever large-scale simulation of pore-scale reactive transport processes on real-pore-space geometry as obtained from experimental data,” says the 2012 annual report of the lab’s National Energy Research Scientific Computing Center (NERSC).

    The simulation required about 20 million processor hours using 49,152 of the 153,216 computing cores in Hopper, a Cray XE6 that at the time was NERSC’s flagship supercomputer.
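    For a rough sense of scale, a back-of-the-envelope estimate (assuming those 49,152 cores were busy for the entire run, which the article does not state):

    ```latex
    \frac{2\times 10^{7}\ \text{core-hours}}{49{,}152\ \text{cores}}
      \approx 4.1\times 10^{2}\ \text{hours}
      \approx 17\ \text{days of wall-clock time}
    ```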

    Hopper, a Cray XE6 supercomputer, at NERSC

    “As CO2 is pumped underground, it can react chemically with underground minerals and brine in various ways, sometimes resulting in mineral dissolution and precipitation, which can change the porous structure of the aquifer,” the NERSC report says. “But predicting these changes is difficult because these processes take place at the pore scale and cannot be calculated using macroscopic models.

    “The dissolution rates of many minerals have been found to be slower in the field than those measured in the laboratory. Understanding this discrepancy requires modeling the pore-scale interactions between reaction and transport processes, then scaling them up to reservoir dimensions. The new high-resolution model demonstrated that the mineral dissolution rate depends on the pore structure of the aquifer.”

    Trebotich says “it was the hardest problem that we could do for the first run.” But the group reran the simulation about 2½ times faster in an early trial of Edison, a Cray XC30 that succeeded Hopper. Edison, Trebotich says, has greater memory bandwidth.

    Edison, a Cray XC30 supercomputer, at NERSC

    Rapid changes

    Generating 1-terabyte data sets for each microsecond time step, the Edison run demonstrated how quickly conditions can change inside each pore. It also provided a good workout for the combination of interrelated software packages the Trebotich team uses.

    The first, Chombo, takes its name from a Swahili word meaning “toolbox” or “container” and was developed by a different Applied Numerical Algorithms Group team. Chombo is a supercomputer-friendly platform that’s scalable: “You can run it on multiple processor cores, and scale it up to do high-resolution, large-scale simulations,” he says.

    Trebotich modified Chombo to add flow and reactive transport solvers. The group also incorporated the geochemistry components of CrunchFlow, a package Steefel developed, to create Chombo-Crunch, the code used for their modeling work. The simulations produce resolutions “very close to imaging experiments,” the NERSC report said, combining simulation and experiment to achieve a key goal of the Department of Energy’s Energy Frontier Research Center for Nanoscale Control of Geologic CO2.
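    In schematic form, a pore-scale model of this kind couples incompressible flow through the imaged pore space to advection-diffusion-reaction equations for the dissolved species. This is a generic statement of the standard formulation, not necessarily the exact equations or geochemical rate laws implemented in Chombo-Crunch:

    ```latex
    \nabla\cdot\mathbf{u} = 0, \qquad
    \rho\!\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
      = -\nabla p + \mu\,\nabla^{2}\mathbf{u},
    \qquad
    \frac{\partial c_i}{\partial t} + \nabla\cdot(\mathbf{u}\,c_i)
      = \nabla\cdot(D_i\,\nabla c_i) + R_i
    ```

    Here u is the fluid velocity in the pores, p the pressure, c_i the concentration of species i, D_i its diffusivity, and R_i its geochemical source term; at the pore scale, calcite dissolution and precipitation act at the resolved grain surfaces, which is what makes the simulated geometry directly comparable to the CT-imaged samples.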

    Now Trebotich’s team has three huge allocations on DOE supercomputers to make their simulations even more detailed. The Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program is providing 80 million processor hours on Mira, an IBM Blue Gene/Q at Argonne National Laboratory. Through the Advanced Scientific Computing Research Leadership Computing Challenge (ALCC), the group has another 50 million hours on NERSC computers and 50 million on Titan, a Cray XK7 at the Oak Ridge Leadership Computing Facility. The team also held an ALCC award last year for 80 million hours at Argonne and 25 million at NERSC.

    Mira, an IBM Blue Gene/Q supercomputer, at Argonne National Laboratory

    Titan, a Cray XK7 supercomputer, at Oak Ridge National Laboratory

    With the computer time, the group wants to refine their image resolutions to half a micron (half of a millionth of a meter). “This is what’s known as the mesoscale: an intermediate scale that could make it possible to incorporate atomistic-scale processes involving mineral growth at precipitation sites into the pore scale flow and transport dynamics,” Trebotich says.
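    Halving the grid spacing is expensive in three dimensions. Roughly, and ignoring solver details (this assumes an advection-limited time step that shrinks with the grid spacing):

    ```latex
    N_{\text{cells}} \propto \Delta x^{-3} \;\Rightarrow\; 2^{3} = 8\times\ \text{more cells},
    \qquad
    \Delta t \propto \Delta x \;\Rightarrow\; \sim\!16\times\ \text{more work per simulated second}
    ```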

    Meanwhile, he thinks their micron-scale simulations already are good enough to provide “ground-truthing” in themselves for the lab experiments geoscientists do.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

  • richardmitnick 7:43 am on July 12, 2014
    Tags: NERSC

    From NERSC: “Hot Plasma Partial to Bootstrap Current” 

    NERSC

    July 9, 2014
    Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov

    Supercomputers at NERSC are helping plasma physicists “bootstrap” a potentially more affordable and sustainable fusion reaction. If successful, fusion reactors could provide almost limitless clean energy.

    In a fusion reaction, energy is released when two hydrogen isotopes are fused together to form a heavier nucleus, helium. To achieve reaction rates high enough to make fusion a useful energy source, the hydrogen contained inside the reactor core must be heated to extremely high temperatures, more than 100 million degrees Celsius, which transforms it into hot plasma. Another key requirement is magnetic confinement: strong magnetic fields keep the plasma from touching the vessel walls (and cooling) and compress it enough for the isotopes to fuse.
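    The reaction most commonly pursued for power production, deuterium-tritium fusion, releases about 17.6 MeV per event, most of it carried away by the neutron:

    ```latex
    {}^{2}_{1}\mathrm{H} + {}^{3}_{1}\mathrm{H} \;\longrightarrow\;
    {}^{4}_{2}\mathrm{He}\ (3.5\ \mathrm{MeV}) + n\ (14.1\ \mathrm{MeV})
    ```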

    A calculation of the self-generated plasma current in the W7-X reactor, performed using the SFINCS code on Edison. The colors represent the amount of electric current along the magnetic field, and the black lines show magnetic field lines. Image: Matt Landreman

    So there’s a lot going on inside the plasma as it heats up, not all of it good. Driven by electric and magnetic forces, charged particles swirl around and collide into one another, and the central temperature and density are constantly evolving. In addition, plasma instabilities disrupt the reactor’s ability to produce sustainable energy by increasing the rate of heat loss.

    Fortunately, research has shown that other, more beneficial forces are also at play within the plasma. For example, if the pressure of the plasma varies across the radius of the vessel, a self-generated current will spontaneously arise within the plasma—a phenomenon known as the “bootstrap” current.
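    In the simplest textbook estimate for a large-aspect-ratio tokamak, the bootstrap current density scales with the pressure gradient (quoted here only to show the trend; the codes discussed below solve the full kinetic problem rather than this approximation):

    ```latex
    j_{\mathrm{bs}} \;\sim\; -\frac{\sqrt{\epsilon}}{B_{\theta}}\,\frac{dp}{dr},
    \qquad \epsilon = r/R,
    ```

    where B_θ is the poloidal magnetic field and dp/dr the radial pressure gradient, so a steep pressure profile drives a large self-generated current.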

    Now an international team of researchers has used NERSC supercomputers to further study the bootstrap current, which could help reduce or eliminate the need for an external current driver and pave the way to a more cost-effective fusion reactor. Matt Landreman, research associate at the University of Maryland’s Institute for Research in Electronics and Applied Physics, collaborated with two research groups to develop and run new codes at NERSC that more accurately calculate this self-generated current. Their findings appear in Plasma Physics and Controlled Fusion and Physics of Plasmas.

    “The codes in these two papers are looking at the average plasma flow and average rate at which particles escape from the confinement, and it turns out that plasma in a curved magnetic field will generate some average electric current on its own,” Landreman said. “Even if you aren’t trying to drive a current, if you take the hydrogen and heat it up and confine it in a curved magnetic field, it creates this current that turns out to be very important. If we ever want to make a tokamak fusion plant down the road, for economic reasons the plasma will have to supply a lot of its own current.”

    One of the unique things about plasmas is that there is often a complicated interaction between where particles are in space and their velocity, Landreman added.

    “To understand some of their interesting and complex behaviors, we have to solve an equation that takes into account both the position and the velocity of the particle,” he said. “That is the core of what these computations are designed to do.”
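    Schematically, that means evolving a distribution function f(x, v, t) over both position and velocity, with a collision operator C(f) coupling particles of different velocities. The generic kinetic equation reads (the codes described here solve reduced, drift-kinetic forms of it):

    ```latex
    \frac{\partial f}{\partial t}
    + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
    + \frac{q}{m}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf{v}} f
    = C(f)
    ```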

    Evolving Plasma Behavior

    Interior of the Alcator C-Mod tokamak at the Massachusetts Institute of Technology’s Plasma Science and Fusion Center. Image: Mike Garrett

    The Plasma Physics and Controlled Fusion paper focuses on plasma behavior in tokamak reactors using PERFECT, a code Landreman wrote. Tokamak reactors, first introduced in the 1950s, are today considered by many to be the best candidate for producing controlled thermonuclear fusion power. A tokamak features a toroidal (doughnut-shaped) vessel and relies on a combination of external magnets and a current driven in the plasma to create a stable confinement system.

    In particular, PERFECT was designed to examine the plasma edge, a region of the tokamak where “lots of interesting things happen,” Landreman said. Before PERFECT, other codes were used to predict the flows and bootstrap current in the central plasma and solve equations that assume the gradients of density and temperature are gradual.

    “The problem with the plasma edge is that the gradients are very strong, so these previous codes are not necessarily valid in the edge, where we must solve a more complicated equation,” he said. “PERFECT was built to solve such an equation.”

    For example, in most of the inner part of the tokamak there is a fairly gradual gradient of the density and temperature. “But at the edge there is a fairly big jump in density and temperature—what people call the edge pedestal. What is different about PERFECT is that we are trying to account for some of this very strong radial variation,” Landreman explained.

    These findings are important because researchers are concerned that the bootstrap current may affect edge stability. PERFECT is also used to calculate the plasma flow, which may likewise affect edge stability.

    “My co-authors had previously done some analytic calculations to predict how the plasma flow and heat flux would change in the pedestal region compared to places where radial gradients aren’t as strong,” Landreman said. “We used PERFECT to test these calculations with a brute force numerical calculation at NERSC and found that they agreed really well. The analytic calculations provide insight into how the plasma flow and heat flux will be affected by these strong radial gradients.”

    From Tokamak to Stellarator

    In the Physics of Plasmas study, the researchers used a second code, SFINCS, to focus on related calculations in a different kind of confinement concept: a stellarator. In a stellarator the magnetic field is not axisymmetric, meaning that it looks different as you circle around the donut hole. As Landreman put it, “A tokamak is to a stellarator as a standard donut is to a cruller.”
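    One way to state the distinction: writing the field strength in terms of a radial coordinate r, a poloidal angle θ (the short way around the torus) and a toroidal angle φ (the long way around), an idealized tokamak field is independent of φ while a stellarator field is not:

    ```latex
    \text{tokamak: } B = B(r,\theta), \qquad \text{stellarator: } B = B(r,\theta,\varphi)
    ```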

    The HSX (Helically Symmetric Experiment) stellarator

    First introduced in the 1950s, stellarators have played a central role in the German and Japanese fusion programs and were popular in the U.S. until the 1970s when many fusion scientists began favoring the tokamak design. In recent years several new stellarators have appeared, including the Wendelstein 7-X (W7-X) in Germany, the Helically Symmetric Experiment in the U.S. and the Large Helical Device in Japan. Two of Landreman’s coauthors on the Physics of Plasmas paper are physicists from the Max Planck Institute for Plasma Physics, where W7-X is being constructed.

    “In the W7-X design, the amount of plasma current has a strong effect on where the heat is exhausted to the wall,” Landreman explained. “So at Max Planck they are very concerned about exactly how much self-generated current there will be when they turn on their machine. Based on a prediction for this current, a set of components called the ‘divertor’ was located inside the vacuum vessel to accept the large heat exhaust. But if the plasma makes more current than expected, the heat will come out in a different location, and you don’t want to be surprised.”

    Their concerns stemmed from the fact that the previous code was developed when computers were too slow to solve the “real” 4D equation, he added.

    “The previous code made an approximation that you could basically ignore all the dynamics in one of the dimensions (particle speed), thereby reducing 4D to 3D,” Landreman said. “Now that computers are faster, we can test how good this approximation was. And what we found was that basically the old code was pretty darn accurate and that the predictions made for this bootstrap current are about right.”

    The calculations for both studies were run on Hopper and Edison using some additional NERSC resources, Landreman noted.

    “I really like running on NERSC systems because if you have a problem, you ask a consultant and they get back to you quickly,” Landreman said. “Also knowing that all the software is up to date and it works. I’ve been using NX lately to speed up the graphics. It’s great because you can plot results quickly without having to download any data files to your local computer.”

    See the full article here.

    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system is a Cray XE6 with a peak theoretical performance of 1.29 petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user friendly.

    (Historical photo of Grace Hopper courtesy of the Hagley Museum & Library, PC20100423_201. Design: Caitlin Youngquist/LBNL Photo: Roy Kaltschmidt/LBNL)

    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as materials science, physics, chemistry and astronomy, well equipped to help researchers apply computational resources to specialized science problems.


  • richardmitnick 1:57 pm on May 28, 2014
    Tags: NERSC

    From Berkeley Lab: “A Path Toward More Powerful Tabletop Accelerators” 

    Berkeley Lab

    May 28, 2014
    Kate Greene, kgreene@lbl.gov

    Making a tabletop particle accelerator just got easier. A new study shows that certain requirements for the lasers used in an emerging type of small-area particle accelerator can be significantly relaxed. Researchers hope the finding could bring about a new era of accelerators that would need just a few meters to bring particles to great speeds, rather than the many kilometers required of traditional accelerators. The research, from scientists at the U.S. Department of Energy’s (DOE) Lawrence Berkeley National Laboratory (Berkeley Lab), is presented as the cover story in the May special issue of Physics of Plasmas.

    Traditional accelerators, like the Large Hadron Collider where the Higgs boson was recently discovered, rely on high-power radio-frequency waves to energize electrons. The new type of accelerator, known as a laser-plasma accelerator, uses pulses of laser light that blast through a soup of charged particles known as a plasma; the resulting plasma motion, which resembles waves in water, accelerates electrons riding atop the waves to high speeds.
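    A standard estimate shows why the approach is attractive: the maximum accelerating field a cold plasma wave can support (the nonrelativistic wave-breaking limit) grows with the square root of the electron density,

    ```latex
    E_{0} = \frac{m_{e}\,c\,\omega_{p}}{e} \;\approx\; 96\ \mathrm{V/m}\times\sqrt{n_{e}\,[\mathrm{cm^{-3}}]},
    ```

    so a plasma with n_e = 10^18 cm^-3 can in principle sustain fields near 100 GV/m, orders of magnitude beyond the tens of megavolts per meter typical of conventional radio-frequency structures.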

    3D map of the longitudinal wakefield generated by the incoherent combination of 208 low-energy laser beamlets. In the region behind the driver, the wakefield is regular. Credit: Carlo Benedetti, Berkeley Lab

    The problem, however, is creating a laser pulse that’s powerful enough to compete with the big accelerators. In particular, lasers need to have the capability to fire a high-energy pulse thousands of times a second. Today’s lasers can only manage one pulse per second at the needed energy levels.

    “If you want to make a device that’s of use for particle physics, of use for medical applications, of use for light source applications, you need repetition rate,” explains Wim Leemans, physicist at Berkeley Lab. In January of 2013, the DOE held a workshop on laser technology for accelerators. At the time, says Leemans, the big question was how to get from the current technology to the scaled up version.

    Conventional wisdom holds that many smaller lasers, combined in a particular way, could essentially create one ultra-powerful pulse. In theory, this sounds fine, but the practical requirements to build such a system have seemed daunting. For instance, it was believed that the light from the smaller lasers would need to be precisely matched in color, phase, and other properties in order to produce the electron-accelerating motion within the plasma. “We thought this was really challenging,” says Leemans. “We thought, you need this nice laser pulse, and everything needs to be done properly to control the laser pulse.”

    But the new Berkeley Lab study has found this isn’t the case. Paper co-authors Carlo Benedetti, Carl Schroeder, Eric Esarey and Leemans wanted to see what an erratic laser pulse would actually do inside a plasma. Guided by theory and using computer simulations to test various scenarios, the researchers looked at how beams of various colors and phases—basically a hodgepodge of laser light—affected the plasma. They soon discovered that, no matter the beam, the plasma didn’t care.

    “The plasma is a medium that responds to a laser, but it doesn’t respond immediately,” says Benedetti, a physicist at Berkeley Lab. The light is just operating on a faster time scale and a smaller length scale, he explains. All of the various interference patterns and various electromagnetic fields average out in the slow-responding plasma medium. In other words, once laser light gets inside the plasma, many of the problems disappear.

    “As an experimentalist for all these years we’re trying to make these perfect laser pulses, and maybe we didn’t need to worry so much,” says Leemans. “I think this will have a big impact on the laser community and laser builders because all of a sudden, they’ll think of approaches where beforehand all of us said, ‘No, no, no. You can’t do that.’ This new result says, well maybe you don’t have to be all that careful.”

    Leemans says the ball is back in the experimentalists’ and laser builders’ court to prove that the idea can work. In 2006, he and his team demonstrated a three-centimeter-long plasma accelerator. Where a traditional accelerator can take kilometers to drive electrons to 50 giga-electron volts (GeV), Leemans and team showed that a mini laser-plasma accelerator could get electrons to 1 GeV in just three centimeters with a laser pulse of about 40 terawatts. To reach higher electron energies, a larger, more powerful laser was installed in 2012 at the Berkeley Lab Laser Accelerator (BELLA) facility; its petawatt (1 quadrillion watts) pulse lasts 40 femtoseconds and is now being used in experiments that aim at generating a 10 GeV beam.
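    Taking those quoted numbers at face value, the implied figures are striking:

    ```latex
    \frac{1\ \mathrm{GeV}}{3\ \mathrm{cm}} \approx 33\ \mathrm{GV/m}\ \text{of accelerating gradient},
    \qquad
    10^{15}\ \mathrm{W}\times 40\times 10^{-15}\ \mathrm{s} = 40\ \mathrm{J\ per\ BELLA\ pulse}
    ```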

    Still, the goal of a high-repetition-rate, 10-GeV laser-plasma accelerator that fires a thousand pulses or more per second is at least five to ten years away, says Leemans. But a new project called k-BELLA (k is for kilohertz) is in the works that will use the principles of combined, messy laser light sources to produce fast, more powerful laser pulses. “Once we synthesize a pulse at higher repetition rates,” says Leemans, “we will be on our way towards a kilohertz GeV laser plasma accelerator.”

    This work was supported by the DOE Office of Science and used the facilities of the National Energy Research Scientific Computing Center (NERSC) located at Berkeley Lab.


    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

  • richardmitnick 1:04 pm on April 30, 2014
    Tags: NERSC

    From NERSC: “NERSC, Cray, Intel to Collaborate on Next-Generation Supercomputer”

    NERSC

    April 29, 2014
    Contact: Jon Bashor, jbashor@lbl.gov, 510-486-5849

    The U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing Center (NERSC) and Cray Inc. announced today that they have signed a contract for a next-generation supercomputer to enable scientific discovery at the DOE’s Office of Science (DOE SC).

    Lawrence Berkeley National Laboratory (Berkeley Lab), which manages NERSC, collaborated with Los Alamos National Laboratory and Sandia National Laboratories to develop the technical requirements for the system.

    The new, next-generation Cray XC supercomputer will use Intel’s next-generation Xeon Phi processor, code-named “Knights Landing,” a self-hosted, manycore processor with on-package high-bandwidth memory that delivers more than 3 teraFLOPS of double-precision peak performance per single-socket node. Scheduled for delivery in mid-2016, the new system will deliver 10 times the sustained computing capability of NERSC’s Hopper system, a Cray XE6 supercomputer.

    NERSC serves as the DOE SC’s primary high performance computing (HPC) facility, supporting more than 5,000 scientists annually on over 700 projects. The contract, valued at more than $70 million, represents the DOE SC’s ongoing commitment to enabling extreme-scale science to address challenges such as developing new energy sources, improving energy efficiency, understanding climate change, developing new materials and analyzing massive data sets from experimental facilities around the world.

    “This agreement is a significant step in advancing supercomputing design toward the kinds of computing systems we expect to see in the next decade as we advance to exascale,” said Steve Binkley, Associate Director of the Office of Advanced Scientific Computing Research. “U.S. leadership in HPC, both in the technology and in the scientific research that can be accomplished with such powerful systems, is essential to maintaining economic and intellectual leadership. This project was strengthened by a great partnership with DOE’s National Nuclear Security Administration.”

    To highlight its commitment to advancing research, NERSC names its supercomputers after noted scientists. The new system will be named “Cori” in honor of biochemist and Nobel laureate Gerty Cori, the first American woman to receive a Nobel Prize in science.

    Technical Highlights

    The Cori supercomputer will have over 9,300 Knights Landing compute nodes and provide over 400 gigabytes per second of I/O bandwidth and 28 petabytes of disk space. The contract also includes an option for a “Burst Buffer,” a layer of NVRAM that would move data more quickly between processors and disk, allowing users to make the most efficient use of the system while saving energy. The Cray XC system features the Aries high-performance interconnect linking the processors, which also increases efficiency. Cori will be installed directly into the new Computational Research and Theory facility currently being constructed on the main Berkeley Lab campus.
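    Two rough figures of merit follow from those numbers (back-of-the-envelope only; the delivered configuration and sustained performance will differ):

    ```latex
    9{,}300\ \text{nodes}\times 3\ \text{teraFLOPS/node} \approx 28\ \text{petaFLOPS peak},
    \qquad
    \frac{28\times 10^{15}\ \text{bytes}}{400\times 10^{9}\ \text{bytes/s}} \approx 7\times 10^{4}\ \text{s} \approx 19\ \text{hours to write the file system once}
    ```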

    “NERSC is one of the premier high performance computing centers in the world, and we are proud that the close partnership we have built with NERSC over the years will continue with the delivery of Cori – the next-generation of our flagship Cray XC supercomputer,” said Peter Ungaro, president and CEO of Cray. “Accelerating scientific discovery lies at the foundation of the NERSC mission, and it’s also a key element of our own supercomputing roadmap and vision. Our focus is creating new, advanced supercomputing technologies that ultimately put more powerful tools in the hands of scientists and researchers. It is a focus we share with NERSC and its user community, and we are pleased our partnership is moving forward down this shared path.”

    The Knights Landing processor used in Cori will have over 60 cores, each with multiple hardware threads, and improved single-thread performance over the current-generation Xeon Phi coprocessor. The Knights Landing processor is “self-hosted,” meaning that it is not an accelerator and does not depend on a host processor. With this model, users will be able to retain the MPI/OpenMP programming model they have been using on NERSC’s previous-generation Hopper and Edison systems. The Knights Landing processor also features on-package high-bandwidth memory that can be used either as a cache or as memory explicitly managed by the user.
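    As a concrete illustration of that programming model, here is a minimal hybrid MPI + OpenMP sketch in C (a generic example, not NERSC or Cray code): MPI ranks exchange data between nodes over the interconnect, while OpenMP threads fan out across each node’s cores and hardware threads.

    ```c
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Request FUNNELED support: only the main thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* On-node parallelism: OpenMP threads share the node's memory
           (including, on Knights Landing, the on-package high-bandwidth
           memory when it is configured as addressable memory). */
        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < 1000000; i++)
            local += 1.0 / (double)(i + 1);

        /* Off-node parallelism: combine partial results across MPI ranks. */
        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d ranks x %d threads/rank, sum = %f\n",
                   nranks, omp_get_max_threads(), global);

        MPI_Finalize();
        return 0;
    }
    ```

    Codes already written this way for Hopper and Edison should carry over in structure; the optimization work lies mainly in exposing enough threads and vectorization per node to use the manycore chip well.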

    “NERSC’s selection of Intel’s next-generation Intel® Xeon Phi™ product family – codenamed Knights Landing – as the compute engine for their next generation Cray system marks a significant milestone for the broad Office of Science community as well as for Intel Corporation,” said Raj Hazra, Vice President and General Manager of High Performance Computing at Intel. “Knights Landing is the first true manycore CPU that breaks through the memory wall while leveraging existing codes through existing programming models. This combination of performance and programmability in the Intel Xeon Phi product family enables breakthrough performance on a wide set of applications. The Knights Landing processor, memory and programming model advantages make it the first significant step to resolving the challenges of exascale.”

    Application Readiness

    To help users transition to the Knights Landing manycore processor, NERSC has created a robust Application Readiness program that will provide user training, access to early development systems and application kernel deep dives with Cray and Intel specialists.

    “We are excited to partner with Cray and Intel to ensure that Cori meets the computational needs of DOE’s science community,” said NERSC Director Sudip Dosanjh. “Cori will provide a significant increase in capability for our users and will provide a platform for transitioning our very broad user community to energy-efficient, manycore architectures. It will also let users analyze large quantities of data being transferred to NERSC from DOE’s experimental facilities.”

    As part of the Application Readiness effort, NERSC plans to create teams composed of NERSC principal investigators along with NERSC staff and newly hired postdoctoral researchers. Together they will ensure that applications and software running on Cori are ready to produce important research results for the Office of Science. NERSC also plans to work closely with Cray, Intel, DOE laboratories and other members of the HPC community who are facing the same transition to manycore architectures.

    “We are committed to helping our users, who represent the broad scientific workload of the DOE SC community, make the transition to manycore architectures so they can maintain their research momentum,” said Katie Antypas, NERSC’s Services Department Head. “We recognize some applications may need significant optimization to achieve high performance on the Knights Landing processor. Our goal is to enable performance that is portable across systems and will be sustained in future supercomputing architectures.”

    See the full article here.

  • richardmitnick 8:20 pm on March 14, 2013
    Tags: NERSC

    From Berkeley Lab: “Building the Massive Simulation Sets Essential to Planck Results” 


    Berkeley Lab

    Using NERSC supercomputers, Berkeley Lab scientists generate thousands of simulations to analyze the flood of data from the Planck mission

    March 14, 2013
    Paul Preuss

    “To make the most precise measurement yet of the cosmic microwave background (CMB) – the remnant radiation from the big bang – the European Space Agency’s (ESA’s) Planck satellite mission has been collecting trillions of observations of the sky since the summer of 2009. On March 21, 2013, ESA and NASA, a major partner in Planck, will release preliminary cosmology results based on Planck’s first 15 months of data. The results have required the intense creative efforts of a large international collaboration, with significant participation by the U.S. Planck Team based at NASA’s Jet Propulsion Laboratory (JPL).

    From left, Reijo Keskitalo, Aaron Collier, Julian Borrill, and Ted Kisner of the Computational Cosmology Center with some of the many thousands of simulations for Planck Full Focal Plane 6. (Photo by Roy Kaltschmidt)

    ‘NERSC supports the entire international Planck effort,’ says Julian Borrill of the Computational Research Division (CRD), who cofounded the Computational Cosmology Center (C3) in 2007 to bring together scientists from CRD and the Lab’s Physics Division. ‘Planck was given an unprecedented multi-year allocation of computational resources in a 2007 agreement between DOE and NASA, which has so far amounted to tens of millions of hours of massively parallel processing, plus the necessary data-storage and data-transfer resources.’

    JPL’s Charles Lawrence, Planck Project Scientist and leader of the U.S. team, says that ‘without the exemplary interagency cooperation between NASA and DOE, Planck would not be doing the science it’s doing today.’”

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California
