Tagged: NERSC

  • richardmitnick 5:33 pm on November 22, 2017 Permalink | Reply
    Tags: NERSC, Neutrino astronomy

    From LBNL: “How the Earth Stops High-Energy Neutrinos in Their Tracks” 

    Berkeley Logo

    Berkeley Lab

    November 22, 2017
    Glenn Roberts Jr.
    geroberts@lbl.gov
    (510) 520-0843

    Efforts of Berkeley Lab scientists are key in new analysis of data from Antarctic experiment.

    Illustration of how a muon interacts in the IceCube detector array. (Credit: IceCube Collaboration)


    IceCube has measured for the first time the probability that neutrinos are absorbed by Earth as a function of their energy and the amount of matter they pass through. This measurement of the neutrino cross section using Earth absorption has confirmed predictions from the Standard Model at energies up to 980 TeV. A detailed understanding of how high-energy neutrinos interact with Earth’s matter will make it possible to use these particles to investigate the composition of Earth’s core and mantle. (Credit: IceCube Collaboration)


    U Wisconsin ICECUBE neutrino detector at the South Pole

    Neutrinos are abundant subatomic particles that are famous for passing through anything and everything, only very rarely interacting with matter. About 100 trillion neutrinos pass through your body every second.

    Now, scientists have demonstrated that the Earth stops energetic neutrinos—they do not go through everything. These high-energy neutrino interactions were seen by the IceCube detector, an array of 5,160 basketball-sized optical sensors deeply encased within a cubic kilometer of very clear Antarctic ice near the South Pole.

    IceCube’s sensors do not directly observe neutrinos, but instead measure flashes of blue light, known as Cherenkov radiation, emitted by muons and other fast-moving charged particles, which are created when neutrinos interact with the ice, and by the charged particles produced when the muons interact as they move through the ice. By measuring the light patterns from these interactions in or near the detector array, IceCube can estimate the neutrinos’ directions and energies.

    The study, published in the Nov. 22 issue of the journal Nature, was led by researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley.

    Spencer Klein, who leads Berkeley Lab’s IceCube research team, commented, “This analysis is important because it shows that IceCube can make real contributions to particle and nuclear physics, at energies above the reach of current accelerators.”

    Sandra Miarecki, who performed much of the data analysis while working toward her PhD as an IceCube researcher at Berkeley Lab and UC Berkeley, said, “It’s a multidisciplinary idea.” The analysis required input from geologists who have created models of the Earth’s interior from seismic studies. Physicists have used these models to help predict how neutrinos are absorbed in the Earth.

    “You create ‘pretend’ muons that simulate the response of the sensors,” Miarecki said. “You have to simulate their behavior, there has to be an ice model to simulate the ice’s behavior, you also have to have cosmic ray simulations, and you have to simulate the Earth using equations. Then you have to predict, probability-wise, how often a particular muon would come through the Earth.”

    The study’s results are based on one year of data from about 10,800 neutrino-related interactions, stemming from a natural supply of very energetic neutrinos from space that go through a thick and dense absorber: the Earth. The energy of the neutrinos was critical to the study, as higher energy neutrinos are more likely to interact with matter and be absorbed by the Earth.

    Scientists found that there were fewer energetic neutrinos making it all the way through the Earth to the IceCube detector than from less obstructed paths, such as those coming in at near-horizontal trajectories. The probability of neutrinos being absorbed by the Earth was consistent with expectations from the Standard Model of particle physics, which scientists use to explain the fundamental forces and particles in the universe. This probability—that neutrinos of a given energy will interact with matter—is what physicists refer to as a “cross section.”

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    “Understanding how neutrinos interact is key to the operation of IceCube,” explained Francis Halzen, principal investigator for the IceCube Neutrino Observatory and a University of Wisconsin–Madison professor of physics. “Precision measurements at the HERA accelerator in Hamburg, Germany, allow us to compute the neutrino cross section with great accuracy within the Standard Model—which would apply to IceCube neutrinos of much higher energies if the Standard Model is valid at these energies.”

    “We were of course hoping for some new physics to appear, but we unfortunately find that the Standard Model, as usual, withstands the test,” added Halzen.

    James Whitmore, program director in the National Science Foundation’s physics division, said, “IceCube was built to both explore the frontiers of physics and, in doing so, possibly challenge existing perceptions of the nature of the universe. This new finding and others yet to come are in that spirit of scientific discovery.”

    In this study, researchers measured the flux of muon neutrinos as a function of their energy and their incoming direction. Neutrinos with higher energies and with incoming directions closer to the North Pole are more likely to interact with matter on their way through Earth. (Credit: IceCube Collaboration)

    This study provides the first cross-section measurements for a neutrino energy range that is up to 1,000 times higher than previous measurements at particle accelerators. Most of the neutrinos selected for this study were more than a million times more energetic than the neutrinos produced by more familiar sources, like the sun or nuclear power plants. Researchers took care to ensure that the measurements were not distorted by detector problems or other uncertainties.

    “Neutrinos have quite a well-earned reputation of surprising us with their behavior,” said Darren Grant, spokesperson for the IceCube Collaboration and a professor of physics at the University of Alberta in Canada. “It is incredibly exciting to see this first measurement and the potential it holds for future precision tests.”

    In addition to providing the first measurement of the Earth’s absorption of neutrinos, the analysis shows that IceCube’s scientific reach is extending beyond its core focus on particle physics discoveries and the emerging field of neutrino astronomy into the fields of planetary science and nuclear physics. This analysis will also interest geophysicists who would like to use neutrinos to image the Earth’s interior, although this will require more data than was used in the current study.

    The neutrinos used in this analysis were mostly produced when hydrogen or heavier nuclei from high-energy cosmic rays, created outside the solar system, interacted with nitrogen or oxygen nuclei in the Earth’s atmosphere. Such interactions create a cascade of particles, including several types of subatomic particles that decay, producing neutrinos. These particles rain down on the Earth’s surface from all directions.

    The analysis also included a small number of astrophysical neutrinos, which are produced outside of the Earth’s atmosphere, from cosmic accelerators unidentified to date, perhaps associated with supermassive black holes.

    The neutrino-interaction events that were selected for the study have energies of at least one trillion electron volts, or a teraelectronvolt (TeV), roughly the kinetic energy of a flying mosquito. At this energy, the Earth’s absorption of neutrinos is relatively small, and the lowest energy neutrinos in the study largely served as an absorption-free baseline. The analysis was sensitive to absorption in the energy range from 6.3 TeV to 980 TeV, limited at the high-energy end by a shortage of sufficiently energetic neutrinos.

    At these energies, each individual proton or neutron in a nucleus acts independently, so the absorption depends on the number of protons or neutrons that each neutrino encounters. The Earth’s core is particularly dense, so absorption is largest there. By comparison, the most energetic neutrinos that have been studied at human-built particle accelerators were at energies below 0.4 TeV. Researchers have used these accelerators to aim beams containing an enormous number of these lower energy neutrinos at massive detectors, but only a very tiny fraction yield interactions.
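
    For readers who want to see how this works quantitatively: the chance that a neutrino survives its passage is roughly exp(-σ(E) × N_A × X), where X is the column depth of matter (in grams per square centimeter) along its path. Below is a minimal Python sketch of that relationship. It assumes a crude two-shell Earth density profile and a placeholder power-law cross section, so it is an illustration only, not the collaboration’s analysis, which uses seismology-based Earth models and full Standard Model cross-section calculations.

    import math

    N_A = 6.022e23          # nucleons per gram of ordinary matter (Avogadro's number)
    R_EARTH = 6371.0e5      # Earth's radius in centimeters

    def density(r_cm):
        """Crude two-shell density model (g/cm^3): a dense core inside a lighter mantle.
        The real analysis uses seismology-based profiles such as PREM."""
        return 11.0 if r_cm < 3480.0e5 else 4.5

    def column_depth(cos_zenith, steps=2000):
        """Grams of matter per cm^2 along the chord an up-going neutrino travels to reach
        a detector at the surface. cos_zenith < 0 means the neutrino comes up through Earth."""
        chord = -2.0 * R_EARTH * cos_zenith
        if chord <= 0.0:
            return 0.0                      # down-going: no Earth crossing in this toy model
        dl = chord / steps
        depth = 0.0
        for i in range(steps):
            l = (i + 0.5) * dl              # distance back along the track from the detector
            r = math.sqrt(R_EARTH**2 + l**2 + 2.0 * R_EARTH * l * cos_zenith)
            depth += density(r) * dl
        return depth

    def cross_section_cm2(energy_tev):
        """Placeholder power-law cross section, tuned only to be roughly the right size;
        the real values come from full Standard Model calculations."""
        return 2.0e-35 * energy_tev**0.46

    def survival_probability(energy_tev, cos_zenith):
        """P(survive) = exp(-sigma(E) * N_A * column depth)."""
        return math.exp(-cross_section_cm2(energy_tev) * N_A * column_depth(cos_zenith))

    for energy in (1.0, 40.0, 980.0):       # TeV
        probs = [survival_probability(energy, cz) for cz in (-0.1, -0.5, -1.0)]
        print(f"{energy:6.0f} TeV:", " ".join(f"{p:.2f}" for p in probs))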

    IceCube researchers used data collected from May 2010 to May 2011, from a partial array of 79 “strings,” each containing 60 sensors embedded more than a mile deep in the ice.

    Gary Binder, a UC Berkeley graduate student affiliated with Berkeley Lab’s Nuclear Science Division, developed the software that was used to fit IceCube’s data to a model describing how neutrinos propagate through the Earth.

    From this, the software determined the cross section that best fit the data. University of Wisconsin–Madison student Chris Weaver developed the code for selecting the detection events that Miarecki used.
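
    As an illustration of the kind of fit described here (not Binder’s actual code, and with invented toy numbers): scale the Standard Model cross section by a single factor R, attenuate the expected event counts in each zenith bin accordingly, and pick the R that maximizes a Poisson likelihood. The real analysis folds in detector response, flux models and systematic uncertainties.

    import math

    def expected_counts(R, unattenuated, optical_depth):
        """Expected events per bin if the true cross section is R times the Standard Model
        value: attenuation goes as exp(-R * tau), where tau is the Standard Model optical
        depth (cross section times nucleon column density) for that bin."""
        return [n * math.exp(-R * tau) for n, tau in zip(unattenuated, optical_depth)]

    def neg_log_likelihood(R, observed, unattenuated, optical_depth):
        """Poisson negative log-likelihood, with constant terms dropped."""
        expected = expected_counts(R, unattenuated, optical_depth)
        return sum(mu - k * math.log(mu) for k, mu in zip(observed, expected))

    def fit_cross_section_multiplier(observed, unattenuated, optical_depth):
        """Crude one-dimensional grid search for the best-fit multiplier R."""
        grid = [0.01 * i for i in range(1, 301)]        # scan R from 0.01 to 3.00
        return min(grid, key=lambda R: neg_log_likelihood(R, observed, unattenuated, optical_depth))

    # Hypothetical toy inputs: three zenith bins, from near-horizontal to straight up.
    unattenuated = [500.0, 400.0, 300.0]    # events expected with no absorption at all
    optical_depth = [0.05, 0.80, 2.00]      # Standard Model optical depth per bin
    observed = [480, 180, 40]               # pretend data, consistent with R near 1
    best_R = fit_cross_section_multiplier(observed, unattenuated, optical_depth)
    print(f"best-fit cross-section multiplier R = {best_R:.2f}")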

    Simulations to support the analysis have been conducted using supercomputers at the University of Wisconsin–Madison and at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC).

    NERSC Cray Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer


    The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

    NERSC PDSF


    PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

    Physicists now hope to repeat the study using an expanded, multiyear analysis of data from the full 86-string IceCube array, which was completed in December 2010, and to look at higher ranges of neutrino energies for any hints of new physics beyond the Standard Model.

    IceCube Gen-2 DeepCore


    IceCube Gen-2 DeepCore PINGU

    IceCube has already detected multiple ultra-high-energy neutrinos in the petaelectronvolt (PeV) range, with energies 1,000 times higher than those detected in the TeV range.

    Klein said, “Once we can reduce the uncertainties and can look at slightly higher energies, we can look at things like nuclear effects in the Earth, and collective electromagnetic effects.”

    Binder added, “We can also study how much energy a neutrino transfers to a nucleus when it interacts, giving us another probe of nuclear structure and physics beyond the Standard Model.”

    A longer term goal is to build a larger detector, which would enable scientists to study neutrinos of even higher energies. The proposed IceCube-Gen2 would be 10 times larger than IceCube. Its larger size would enable the detector to collect more data from neutrinos at very high energies.

    Some scientists are looking to build an even larger detector, 100 cubic kilometers or more, using a new approach that searches for pulses of radio waves produced when very high energy neutrinos interact in the ice. Measurements of neutrino absorption by a radio-based detector could be used to search for new phenomena that go well beyond the physics accounted for in the Standard Model, and could scrutinize the structure of atomic nuclei in greater detail than other experiments can.

    Miarecki said, “This is pretty exciting – I couldn’t have thought of a more interesting project.”

    Berkeley Lab’s National Energy Research Scientific Computing Center is a DOE Office of Science User Facility.

    The work was supported by the U.S. National Science Foundation-Office of Polar Programs, U.S. National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, Grid Laboratory of Wisconsin (GLOW) grid infrastructure at the University of Wisconsin–Madison, Open Science Grid (OSG) grid infrastructure, National Energy Research Scientific Computing Center, Louisiana Optical Network Initiative (LONI) grid computing resources, U.S. Department of Energy Office of Nuclear Physics, and United States Air Force Academy; Natural Sciences and Engineering Research Council of Canada, WestGrid and Compute/Calcul Canada; Swedish Research Council, Swedish Polar Research Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Helmholtz Alliance for Astroparticle Physics (HAP), Initiative and Networking Fund of the Helmholtz Association, Germany; Fund for Scientific Research (FNRS-FWO), FWO Odysseus programme, Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); Marsden Fund, New Zealand; Australian Research Council; Japan Society for Promotion of Science (JSPS); the Swiss National Science Foundation (SNSF), Switzerland; National Research Foundation of Korea (NRF); Villum Fonden, Danish National Research Foundation (DNRF), Denmark.

    The IceCube Neutrino Observatory was built under a National Science Foundation (NSF) Major Research Equipment and Facilities Construction grant, with assistance from partner funding agencies around the world. The NSF Office of Polar Programs and NSF Physics Division support the project with a Maintenance and Operations (M&O) grant. The University of Wisconsin–Madison is the lead institution for the IceCube Collaboration, coordinating data-taking and M&O activities.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 1:37 pm on July 3, 2017 Permalink | Reply
    Tags: NERSC, Record-breaking 45-qubit Quantum Computing Simulation Run at NERSC on Cori

    From NERSC: “Record-breaking 45-qubit Quantum Computing Simulation Run at NERSC on Cori” 

    NERSC Logo
    NERSC

    NERSC Cray Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer

    NERSC Hopper Cray XE6 supercomputer

    June 1, 2017
    Kathy Kincade
    kkincade@lbl.gov
    +1 510 495 2124

    When two researchers from the Swiss Federal Institute of Technology (ETH Zurich) announced in April that they had successfully simulated a 45-qubit quantum circuit, the science community took notice: it was the largest ever simulation of a quantum computer, and another step closer to simulating “quantum supremacy”—the point at which quantum computers become more powerful than ordinary computers.

    A multi-qubit chip developed in the Quantum Nanoelectronics Laboratory at Lawrence Berkeley National Laboratory.

    The computations were performed at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory. Researchers Thomas Häner and Damien Steiger, both Ph.D. students at ETH, used 8,192 of 9,688 Intel Xeon Phi processors on NERSC’s newest supercomputer, Cori, to support this simulation, the largest in a series they ran at NERSC for the project.

    “Quantum computing” has been the subject of dedicated research for decades, and with good reason: quantum computers have the potential to break common cryptography techniques and simulate quantum systems in a fraction of the time it would take on current “classical” computers. They do this by leveraging the quantum states of particles to store information in qubits (quantum bits), units of quantum information akin to regular bits in classical computing. Better yet, qubits have a secret power: they can perform more than one calculation at a time. One qubit can perform two calculations in a quantum superposition, two can perform four, three can perform eight, and so forth, with a corresponding exponential increase in quantum parallelism. Yet harnessing this quantum parallelism is difficult, as observing the quantum state causes the system to collapse to just one answer.
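
    The exponential cost of simulating this on a classical machine is easy to see in code. The sketch below, a toy state-vector simulator and not the researchers’ high-performance code, stores one complex amplitude per basis state, so an n-qubit register needs 2^n amplitudes, and even a single-qubit gate must touch every one of them.

    import numpy as np

    def apply_single_qubit_gate(state, gate, target):
        """Apply a 2x2 gate to one qubit of a state vector of length 2**n.
        Amplitudes are processed in pairs whose indices differ only in the target bit."""
        stride = 1 << target
        for base in range(0, len(state), stride << 1):
            for offset in range(stride):
                i0 = base + offset           # index with target bit = 0
                i1 = i0 + stride             # index with target bit = 1
                a0, a1 = state[i0], state[i1]
                state[i0] = gate[0, 0] * a0 + gate[0, 1] * a1
                state[i1] = gate[1, 0] * a0 + gate[1, 1] * a1

    HADAMARD = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    n = 16                                   # 16 qubits -> 65,536 amplitudes (~1 MB)
    state = np.zeros(1 << n, dtype=complex)  # memory doubles with every extra qubit
    state[0] = 1.0                           # start in |00...0>
    for qubit in range(n):
        apply_single_qubit_gate(state, HADAMARD, qubit)

    # A Hadamard on every qubit gives an equal superposition over all 2**n basis states.
    print(abs(state[0])**2, "=", 1 / (1 << n))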

    So how close are we to realizing a true working prototype? It is generally thought that a quantum computer deploying 49 qubits will be able to match the computing power of today’s most powerful supercomputers. Toward this end, Häner and Steiger’s simulations will aid in benchmarking and calibrating near-term quantum computers by allowing researchers to carry out quantum supremacy experiments on these early devices and compare the results to the simulations. In the meantime, we are seeing a surge in investments in quantum computing technology from the likes of Google, IBM and other leading tech companies—even Volkswagen—which could dramatically accelerate the development process.

    Simulation and Emulation of Quantum Computers

    Both emulation and simulation are important for calibrating, validating and benchmarking emerging quantum computing hardware and architectures. In a paper presented at SC16, Häner and Steiger wrote: “While large-scale quantum computers are not yet available, their performance can be inferred using quantum compilation frameworks and estimates of potential hardware specifications. However, without testing and debugging quantum programs on small scale problems, their correctness cannot be taken for granted. Simulators and emulators … are essential to address this need.”

    That paper discussed emulating quantum circuits—a common representation of quantum programs—while the 45-qubit paper focuses on simulating quantum circuits. Emulation is only possible for certain types of quantum subroutines, while the simulation of quantum circuits is a general method that also allows the inclusion of the effects of noise. Such simulations can be very challenging even on today’s fastest supercomputers, Häner and Steiger explained. For the 45-qubit simulation, for example, they used most of the available memory on each of the 8,192 nodes. “This increases the probability of node failure significantly, and we could not expect to run on the full system for more than an hour without failure,” they said. “We thus had to reduce time-to-solution at all scales (node-level as well as cluster-level) to achieve this simulation.”
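
    The memory figures quoted above follow directly from that scaling. A quick back-of-the-envelope check, assuming 16 bytes per double-precision complex amplitude and, purely for illustration, spreading each state vector over the 8,192 nodes used for the largest run:

    BYTES_PER_AMPLITUDE = 16   # one double-precision complex number
    NODES = 8192               # nodes quoted for the 45-qubit run; smaller circuits need fewer

    for qubits in (30, 36, 42, 45):
        total_bytes = (1 << qubits) * BYTES_PER_AMPLITUDE
        print(f"{qubits} qubits: {total_bytes / 2**40:8.3f} TiB of state vector, "
              f"{total_bytes / NODES / 2**30:7.3f} GiB per node if spread over {NODES} nodes")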

    Optimizing the quantum circuit simulator was key. Häner and Steiger employed automatic code generation, optimized the compute kernels and applied a scheduling algorithm to the quantum supremacy circuits, thus reducing the required node-to-node communication. During the optimization process they worked with NERSC staff and used Berkeley Lab’s Roofline Model to identify potential areas where performance could be boosted.
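
    The Roofline Model referred to here bounds a kernel’s attainable performance by whichever is lower: the machine’s peak compute rate, or its memory bandwidth multiplied by the kernel’s arithmetic intensity (flops per byte moved). A minimal sketch with made-up hardware numbers, not Cori’s actual specifications:

    def roofline_bound(arithmetic_intensity, peak_flops, peak_bandwidth):
        """Attainable flop/s = min(peak compute, memory bandwidth * flops per byte moved)."""
        return min(peak_flops, peak_bandwidth * arithmetic_intensity)

    # Hypothetical node: 2 Tflop/s peak compute and 400 GB/s memory bandwidth.
    PEAK_FLOPS = 2.0e12
    PEAK_BANDWIDTH = 4.0e11

    for ai in (0.1, 1.0, 5.0, 10.0):         # arithmetic intensity in flops per byte
        bound = roofline_bound(ai, PEAK_FLOPS, PEAK_BANDWIDTH)
        print(f"AI = {ai:4.1f} flop/byte -> performance bound {bound / 1e12:.2f} Tflop/s")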

    In addition to the 45-qubit simulation, which used 0.5 petabytes of memory on Cori and achieved a performance of 0.428 petaflops, they also simulated 30-, 36- and 42-qubit quantum circuits. When they compared these results with simulations of 30- and 36-qubit circuits run on NERSC’s Edison system, they found that their optimizations sped up the runs on Edison as well.

    “Our optimizations improved the performance – the number of floating-point operations per time – by 10x for Edison and between 10x and 20x for Cori (depending on the circuit to simulate and the size per node),” Häner and Steiger said. “The time-to-solution decreased by over 12x when compared to the times of a similar simulation reported in a recent paper on quantum supremacy by Boixo and collaborators, which made the 45-qubit simulation possible.”

    Looking ahead, the duo is interested in performing more quantum circuit simulations at NERSC to determine the performance of near-term quantum computers solving quantum chemistry problems. They are also hoping to use solid-state drives to store larger wave functions and thus try to simulate even more qubits.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system, a Cray XE6 with a peak theoretical performance of 1.29 Petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user-friendly.

    Grace Hopper

    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.

     
  • richardmitnick 10:26 am on March 23, 2017 Permalink | Reply
    Tags: NERSC, Towards Super-Efficient Ultra-Thin Silicon Solar Cells

    From LBNL via Ames Lab: “Towards Super-Efficient, Ultra-Thin Silicon Solar Cells” 

    Ames Laboratory

    LBNL


    NERSC

    March 16, 2017
    Kathy Kincade
    kkincade@lbl.gov
    +1 510 495 2124

    Ames Researchers Use NERSC Supercomputers to Help Optimize Nanophotonic Light Trapping

    Despite a surge in solar cell R&D in recent years involving emerging materials such as organics and perovskites, the solar cell industry continues to favor inorganic crystalline silicon photovoltaics. While thin-film solar cells offer several advantages—including lower manufacturing costs—long-term stability of crystalline silicon solar cells, which are typically thicker, tips the scale in their favor, according to Rana Biswas, a senior scientist at Ames Laboratory, who has been studying solar cell materials and architectures for two decades.

    “Crystalline silicon solar cells today account for more than 90 percent of all installations worldwide,” said Biswas, co-author of a new study that used supercomputers at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), a Department of Energy Office of Science User Facility, to evaluate a novel approach for creating more energy-efficient ultra-thin crystalline silicon solar cells. “The industry is very skeptical that any other material could be as stable as silicon.”


    LBL NERSC Cray XC30 Edison supercomputer


    NERSC CRAY Cori supercomputer

    Thin-film solar cells typically fabricated from semiconductor materials such as amorphous silicon are only a micron thick. While this makes them less expensive to manufacture than crystalline silicon solar cells, which are around 180 microns thick, it also makes them less efficient—12 to 14 percent energy conversion, versus nearly 25 percent for silicon solar cells (which translates into 15-21 percent for large area panels, depending on the size). This is because if the wavelength of incoming light is longer than the solar cell is thick, the light won’t be absorbed.

    Nanocone Arrays

    This challenge prompted Biswas and colleagues at Ames to look for ways to improve ultra-thin silicon cell architectures and efficiencies. In a paper published in Nanomaterials, they describe their efforts to develop a highly absorbing ultra-thin crystalline silicon solar cell architecture with enhanced light trapping capabilities.

    “We were able to design a solar cell with a very thin amount of silicon that could still provide high performance, almost as high performance as the thick silicon being used today,” Biswas said.

    Proposed crystalline silicon solar cell architecture developed by Ames Laboratory researchers Prathap Pathi, Akshit Peer and Rana Biswas.

    The key lies in the wavelength of light that is trapped and the nanocone arrays used to trap it. Their proposed solar architecture comprises thin, flat titanium dioxide spacer layers on the front and rear surfaces of the silicon, nanocone gratings on both sides with optimized pitch and height, and rear cones surrounded by a metallic reflector made of silver. They then set up a scattering matrix code to simulate light passing through the different layers and study how the light is reflected and transmitted at different wavelengths by each layer.
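
    For flat, unpatterned films, the same layer-by-layer bookkeeping of reflection and transmission can be written compactly with the standard characteristic-matrix (transfer-matrix) method. The sketch below, with an invented helper name and placeholder refractive indices and thicknesses, handles only planar layers at normal incidence; the group’s actual scattering-matrix code is far more general, since it must treat the three-dimensional nanocone gratings.

    import numpy as np

    def planar_stack_response(n_list, d_list, wavelength_nm):
        """Characteristic-matrix method for a flat multilayer at normal incidence.
        n_list: complex refractive indices [ambient, layer 1, ..., substrate]
        d_list: thicknesses in nm of the interior layers only.
        Returns (reflectance, transmittance); absorption is whatever is left over."""
        n0, ns = n_list[0], n_list[-1]
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_list[1:-1], d_list):
            delta = 2.0 * np.pi * n * d / wavelength_nm           # complex phase thickness
            layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ layer
        denom = n0 * M[0, 0] + n0 * ns * M[0, 1] + M[1, 0] + ns * M[1, 1]
        r = (n0 * M[0, 0] + n0 * ns * M[0, 1] - M[1, 0] - ns * M[1, 1]) / denom
        t = 2.0 * n0 / denom
        return abs(r) ** 2, abs(t) ** 2 * ns.real / n0.real

    # Placeholder stack: air / 60 nm TiO2 spacer / 2000 nm silicon / air (indices are rough).
    indices = [1.0 + 0j, 2.5 + 0j, 3.9 + 0.02j, 1.0 + 0j]
    thicknesses = [60.0, 2000.0]
    for wavelength in (500, 700, 900, 1100):                      # nm
        R, T = planar_stack_response(indices, thicknesses, wavelength)
        print(f"{wavelength} nm: R = {R:.2f}, T = {T:.2f}, absorbed = {1.0 - R - T:.2f}")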

    “This is a light-trapping approach that keeps the light, especially the red and long-wavelength infrared light, trapped within the crystalline silicon cell,” Biswas explained. “We did something similar to this with our amorphous silicon cells, but crystalline behaves a little differently.”

    For example, it is critical not to affect the crystalline silicon wafer—the interface of the wafer—in any way, he emphasized. “You want the interface to be completely flat to begin with, then work around that when building the solar cell,” he said. “If you try to pattern it in some way, it will introduce a lot of defects at the interface, which are not good for solar cells. So our approach ensures we don’t disturb that in any way.”

    Homegrown Code

    In addition to the cell’s unique architecture, the simulations the researchers ran on NERSC’s Edison system utilized “homegrown” code developed at Ames to model the light via the cell’s electric and magnetic fields—a “classical physics approach,” Biswas noted. This allowed them to test multiple wavelengths to determine which was optimal for light trapping. To optimize the absorption of light by the crystalline silicon based upon the wavelength, the team sent light waves of different wavelengths into a designed solar cell and then calculated the absorption of light in that solar cell’s architecture. The Ames researchers had previously studied light trapping in other thin-film solar cells made of organic and amorphous silicon.

    “One very nice thing about NERSC is that once you set up the problem for light, you can actually send each incoming light wavelength to a different processor (in the supercomputer),” Biswas said. “We were typically using 128 or 256 wavelengths and could send each of them to a separate processor.”
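
    Because each wavelength is an independent electromagnetic solve, the parallel strategy Biswas describes maps naturally onto one task per wavelength. A schematic sketch using Python’s multiprocessing in place of the batch setup actually used at NERSC; solve_one_wavelength is a made-up stand-in for the real solver, with an invented absorption curve.

    from multiprocessing import Pool

    def solve_one_wavelength(wavelength_nm):
        """Stand-in for the real electromagnetic solve at one wavelength; the toy formula
        just mimics absorption falling off at long wavelengths."""
        absorbed_fraction = 1.0 / (1.0 + (wavelength_nm / 800.0) ** 4)
        return wavelength_nm, absorbed_fraction

    if __name__ == "__main__":
        wavelengths = [300 + 5 * i for i in range(256)]     # 256 wavelengths, 300-1575 nm
        with Pool() as pool:                                # one independent task per wavelength
            results = pool.map(solve_one_wavelength, wavelengths)
        average = sum(absorbed for _, absorbed in results) / len(results)
        print(f"mean absorbed fraction over {len(results)} wavelengths: {average:.3f}")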

    Looking ahead, given that this research is focused on crystalline silicon solar cells, this new design could make its way into the commercial sector in the not-too-distant future—although manufacturing scalability could pose some initial challenges, Biswas noted.

    “It is possible to do this in a rather inexpensive way using soft lithography or nanoimprint lithography processes,” he said. “It is not that much work, but you need to set up a template or a master to do that. In terms of real-world applications, these panels are quite large, so that is a challenge to do something like this over such a large area. But we are working with some groups that have the ability to do roll to roll processing, which would be something they could get into more easily.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Ames Laboratory is a government-owned, contractor-operated research facility of the U.S. Department of Energy that is run by Iowa State University.

    For more than 60 years, the Ames Laboratory has sought solutions to energy-related problems through the exploration of chemical, engineering, materials, mathematical and physical sciences. Established in the 1940s with the successful development of the most efficient process to produce high-quality uranium metal for atomic energy, the Lab now pursues a broad range of scientific priorities.

    Ames Laboratory shares a close working relationship with Iowa State University’s Institute for Physical Research and Technology, or IPRT, a network of scientific research centers at Iowa State University, Ames, Iowa.

    DOE Banner

     
  • richardmitnick 3:21 pm on December 26, 2016 Permalink | Reply
    Tags: NERSC, Researchers Use World’s Smallest Diamonds to Make Wires Three Atoms Wide

    From SLAC: “Researchers Use World’s Smallest Diamonds to Make Wires Three Atoms Wide” 


    SLAC Lab

    December 26, 2016

    LEGO-style Building Method Has Potential for Making One-Dimensional Materials with Extraordinary Properties

    Fuzzy white clusters of nanowires on a lab bench, with a penny for scale. Assembled with the help of diamondoids, the microscopic nanowires can be seen with the naked eye because the strong mutual attraction between their diamondoid shells makes them clump together, in this case by the millions. At top right, an image made with a scanning electron microscope shows nanowire clusters magnified 10,000 times. (SEM image by Hao Yan/SIMES; photo by SLAC National Accelerator Laboratory)

    Scientists at Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory have discovered a way to use diamondoids – the smallest possible bits of diamond – to assemble atoms into the thinnest possible electrical wires, just three atoms wide.

    By grabbing various types of atoms and putting them together LEGO-style, the new technique could potentially be used to build tiny wires for a wide range of applications, including fabrics that generate electricity, optoelectronic devices that employ both electricity and light, and superconducting materials that conduct electricity without any loss. The scientists reported their results today in Nature Materials.

    “What we have shown here is that we can make tiny, conductive wires of the smallest possible size that essentially assemble themselves,” said Hao Yan, a Stanford postdoctoral researcher and lead author of the paper. “The process is a simple, one-pot synthesis. You dump the ingredients together and you can get results in half an hour. It’s almost as if the diamondoids know where they want to go.”

    This animation shows molecular building blocks joining the tip of a growing nanowire. Each block consists of a diamondoid – the smallest possible bit of diamond – attached to sulfur and copper atoms (yellow and brown spheres). Like LEGO blocks, they only fit together in certain ways that are determined by their size and shape. The copper and sulfur atoms form a conductive wire in the middle, and the diamondoids form an insulating outer shell. (SLAC National Accelerator Laboratory)

    The Smaller the Better

    An illustration shows a hexagonal cluster of seven nanowires assembled by diamondoids. Each wire has an electrically conductive core made of copper and sulfur atoms (brown and yellow spheres) surrounded by an insulating diamondoid shell. The natural attraction between diamondoids drives the assembly process. (H. Yan et al., Nature Materials)

    Although there are other ways to get materials to self-assemble, this is the first one shown to make a nanowire with a solid, crystalline core that has good electronic properties, said study co-author Nicholas Melosh, an associate professor at SLAC and Stanford and investigator with SIMES, the Stanford Institute for Materials and Energy Sciences at SLAC.

    The needle-like wires have a semiconducting core – a combination of copper and sulfur known as a chalcogenide – surrounded by the attached diamondoids, which form an insulating shell.

    Their minuscule size is important, Melosh said, because a material that exists in just one or two dimensions – as atomic-scale dots, wires or sheets – can have very different, extraordinary properties compared to the same material made in bulk. The new method allows researchers to assemble those materials with atom-by-atom precision and control.

    The diamondoids they used as assembly tools are tiny, interlocking cages of carbon and hydrogen. Found naturally in petroleum fluids, they are extracted and separated by size and geometry in a SLAC laboratory. Over the past decade, a SIMES research program led by Melosh and SLAC/Stanford Professor Zhi-Xun Shen has found a number of potential uses for the little diamonds, including improving electron microscope images and making tiny electronic gadgets.

    Stanford graduate student Fei Hua Li, left, and postdoctoral researcher Hao Yan in one of the SIMES labs where diamondoids – the tiniest bits of diamond – were used to assemble the thinnest possible nanowires. (SLAC National Accelerator Laboratory)

    Constructive Attraction

    Ball-and-stick models of diamondoid atomic structures in the SIMES lab at SLAC. SIMES researchers used the smallest possible diamondoid – adamantane, a tiny cage made of 10 carbon atoms – to assemble the smallest possible nanowires, with conductive cores just three atoms wide. (SLAC National Accelerator Laboratory)

    For this study, the research team took advantage of the fact that diamondoids are strongly attracted to each other, through what are known as van der Waals forces. (This attraction is what makes the microscopic diamondoids clump together into sugar-like crystals, which is the only reason you can see them with the naked eye.)

    They started with the smallest possible diamondoids – single cages that contain just 10 carbon atoms – and attached a sulfur atom to each. Floating in a solution, each sulfur atom bonded with a single copper ion. This created the basic nanowire building block.

    The building blocks then drifted toward each other, drawn by the van der Waals attraction between the diamondoids, and attached to the growing tip of the nanowire.

    “Much like LEGO blocks, they only fit together in certain ways that are determined by their size and shape,” said Stanford graduate student Fei Hua Li, who played a critical role in synthesizing the tiny wires and figuring out how they grew. “The copper and sulfur atoms of each building block wound up in the middle, forming the conductive core of the wire, and the bulkier diamondoids wound up on the outside, forming the insulating shell.”

    A Versatile Toolkit for Creating Novel Materials

    The team has already used diamondoids to make one-dimensional nanowires based on cadmium, zinc, iron and silver, including some that grew long enough to see without a microscope, and they have experimented with carrying out the reactions in different solvents and with other types of rigid, cage-like molecules, such as carboranes.

    The cadmium-based wires are similar to materials used in optoelectronics, such as light-emitting diodes (LEDs), and the zinc-based ones are like those used in solar applications and in piezoelectric energy generators, which convert motion into electricity.

    “You can imagine weaving those into fabrics to generate energy,” Melosh said. “This method gives us a versatile toolkit where we can tinker with a number of ingredients and experimental conditions to create new materials with finely tuned electronic properties and interesting physics.”

    Theorists led by SIMES Director Thomas Devereaux modeled and predicted the electronic properties of the nanowires, which were examined with X-rays at SLAC’s Stanford Synchrotron Radiation Lightsource, a DOE Office of Science User Facility, to determine their structure and other characteristics.

    The team also included researchers from the Stanford Department of Materials Science and Engineering, Lawrence Berkeley National Laboratory, the National Autonomous University of Mexico (UNAM) and Justus-Liebig University in Germany. Parts of the research were carried out at Berkeley Lab’s Advanced Light Source (ALS)

    LBNL ALS interior
    LBNL ALS

    and National Energy Research Scientific Computing Center (NERSC),

    NERSC CRAY Cori supercomputer
    NERSC

    both DOE Office of Science User Facilities. The work was funded by the DOE Office of Science and the German Research Foundation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

     
  • richardmitnick 8:55 am on August 1, 2016 Permalink | Reply
    Tags: Big bang nucleosynthesis, NERSC

    From NERSC: “A peek inside the earliest moments of the universe” 

    NERSC Logo
    NERSC

    August 1, 2016
    Kathy Kincade
    kkincade@lbl.gov
    +1 510 495 2124

    The MuSun experiment at the Paul Scherrer Institute is measuring the rate for muon capture on the deuteron to better than 1.5% precision. This process is the simplest weak interaction on a nucleus that can be measured to a high degree of precision. Credit: Lawrence Berkeley National Laboratory

    The Big Bang. That spontaneous explosion some 14 billion years ago that created our universe and, in the process, all matter as we know it today.

    In the first few minutes following “the bang,” the universe quickly began expanding and cooling, allowing the formation of subatomic particles that joined forces to become protons and neutrons. These particles then began interacting with one another to create the first simple atoms. A little more time, a little more expansion, a lot more cooling—along with ever-present gravitational pull—and clouds of these elements began to morph into stars and galaxies.

    For William Detmold, an assistant professor of physics at MIT who uses lattice quantum chromodynamics (LQCD) to study subatomic particles, one of the most interesting aspects of the formation of the early universe is what happened in those first few minutes—a period known as “big bang nucleosynthesis.”

    “You start off with very high-energy particles that cool down as the universe expands, and eventually you are left with a soup of quarks and gluons, which are strongly interacting particles, and they form into protons and neutrons,” he said. “Once you have protons and neutrons, the next stage is for those protons and neutrons to come together and start making more complicated things—primarily deuterons, which interact with other neutrons and protons and start forming heavier elements, such as Helium-4, the alpha particle.”

    One of the most critical aspects of big bang nucleosynthesis is the radiative capture process, in which a proton captures a neutron and fuses to produce a deuteron and a photon. In a paper published in Physical Review Letters, Detmold and his co-authors—all members of the NPLQCD Collaboration, which studies the properties, structures and interactions of fundamental particles—describe how they used LQCD calculations to better understand this process and precisely measure the nuclear reaction rate that occurs when a neutron and proton form a deuteron. While physicists have been able to experimentally measure these phenomena in the laboratory, they haven’t been able to do the same, with certainty, using calculations alone—until now.

    “One of the things that is very interesting about the strong interaction that takes place in the radiative capture process is that you get very complicated structures forming, not just protons and neutrons,” Detmold said. “The strong interaction has this ability to have these very different structures coming out of it, and if these primordial reactions didn’t happen the way they happened, we wouldn’t have formed enough deuterium to form enough helium that then goes ahead and forms carbon. And if we don’t have carbon, we don’t have life.”

    Calculations Mirror Experiments

    For the Physical Review Letters paper, the team used the Chroma LQCD code developed at Jefferson Lab to run a series of calculations with quark masses that were 10-20 times their physical values. Using heavier values rather than the actual physical values reduced the cost of the calculations tremendously, Detmold noted. They then used their understanding of how the calculations should depend on mass to extrapolate to the physical value of the quark mass.
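
    A toy illustration of that extrapolation step (not the NPLQCD analysis, which uses functional forms motivated by effective field theory): compute an observable at several unphysically heavy quark masses, fit a smooth dependence, and evaluate the fit at the physical point. All numbers below are invented.

    import numpy as np

    # Invented lattice results: an observable computed at three unphysically heavy pion masses.
    pion_mass_mev = np.array([450.0, 600.0, 800.0])
    observable    = np.array([2.71, 2.93, 3.30])

    # Chiral expansions are organized in powers of m_pi^2, so fit a line in m_pi^2.
    coefficients = np.polyfit(pion_mass_mev**2, observable, deg=1)

    physical_pion_mass = 139.6   # MeV
    print("extrapolated value at the physical point:",
          round(float(np.polyval(coefficients, physical_pion_mass**2)), 3))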

    “When we do an LQCD calculation, we have to tell the computer what the masses of the quarks we want to work with are, and if we use the values that the quark masses have in nature it is very computationally expensive,” he explained. “For simple things like calculating the mass of the proton, we just put in the physical values of the quark masses and go from there. But this reaction is much more complicated, so we can’t currently do the entire thing using the actual physical values of the quark masses.”

    While this is the first LQCD calculation of an inelastic nuclear reaction, Detmold is particularly excited by the fact that being able to reproduce this process through calculations means researchers can now calculate other things that are similar but that haven’t been measured as precisely experimentally—such as the proton-proton fusion process that powers the sun—or measured at all.

    “The rate of the radiative capture reaction, which is really what we are calculating here, is very, very close to the experimentally measured one, which shows that we actually understand pretty well how to do this calculation, and we’ve now done it, and it is consistent with what is experimentally known,” Detmold said. “This opens up a whole range of possibilities for other nuclear interactions that we can try and calculate where we don’t know what the answer is because we haven’t, or can’t, measure them experimentally. Until this calculation, I think it is fair to say that most people were wary of thinking you could go from quark and gluon degrees of freedom to doing nuclear reactions. This research demonstrates that yes, we can.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system, a Cray XE6 with a peak theoretical performance of 1.29 Petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user-friendly.

    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.

     
  • richardmitnick 7:32 am on February 23, 2016 Permalink | Reply
    Tags: NERSC

    From LBL: “Updated Workflows for New LHC” 

    Berkeley Logo

    Berkeley Lab

    February 22, 2016
    Linda Vu 510-495-2402
    lvu@lbl.gov

    After a massive upgrade, the Large Hadron Collider (LHC), the world’s most powerful particle collider, is now smashing particles at an unprecedented 13 tera-electron-volts (TeV)—nearly double the energy of its previous run from 2010-2012. In just one second, the LHC can now produce up to 1 billion collisions and generate up to 10 gigabytes of data in its quest to push the boundaries of known physics. And over the next decade, the LHC will be further upgraded to generate about 10 times more collisions and data.

    CERN LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    To deal with the new data deluge, researchers working on one of the LHC’s largest experiments—ATLAS—are relying on updated workflow management tools developed primarily by a group of researchers at the Lawrence Berkeley National Laboratory (Berkeley Lab). Papers highlighting these tools were recently published in the Journal of Physics: Conference Series.

    CERN ATLAS Higgs Event
    CERN ATLAS New
    ATLAS

    “The issue with High Luminosity LHC is that we are producing ever-increasing amounts of data, faster than Moore’s Law and cannot actually see how we can do all of the computing that we need to do with the current software that we have,” says Paolo Calafiura, a scientist in Berkeley Lab’s Computational Research Division (CRD). “If we don’t either find new hardware to run our software or new technologies to make our software run faster in ways we can’t anticipate, the only choice that we have left is to be more selective in the collision events that we record. But, this decision will of course impact the science and nobody wants to do that.”

    To tackle this problem, Calafiura and his colleagues in the Berkeley Lab ATLAS Software group are developing new software tools called Yoda and AthenaMP that speed up the analysis of ATLAS data by leveraging the capabilities of next-generation Department of Energy (DOE) supercomputers like the National Energy Research Scientific Computing Center’s (NERSC’s) Cori system, as well as DOE’s current Leadership Computing Facilities.

    NERSC Cray Cori supercomputer

    Yoda: Treating Single Supercomputers like the LHC Computing Grid

    Around the world, researchers rely on the LHC Computing Grid to process the petabytes of data collected by LHC detectors every year. The grid comprises 170 networked computing centers in 36 countries. CERN’s computing center, where the LHC is located, is ‘Tier 0’ of the grid. It processes the raw LHC data, and then divides it into chunks for the other Tiers. Twelve ‘Tier 1’ computing centers then accept the data directly from CERN’s computers, further process the information and then break it down into even more chunks for the hundreds of computing centers further down the grid. Once a computer finishes its analysis, it sends the findings to a centralized computer and accepts a new chunk of data.

    Like air traffic controllers, special software manages workflow on the computing grid for each of the LHC experiments. The software is responsible for breaking down the data, directing the data to its destination, telling systems on the grid when to execute an analysis and when to store information. To deal with the added deluge of data from the LHC’s upgraded ATLAS experiment, Vakhtang Tsulaia from Berkeley Lab’s ATLAS Software group added another layer of software to the grid, called the Yoda Event Service system.

    The researchers note that the idea with Yoda is to replicate the LHC Computing Grid workflow on a supercomputer. So as soon as a job arrives at the supercomputer, Yoda will break down the data chunk into even smaller units, representing individual events or event ranges, and then assign those jobs to different compute nodes. Because only the portion of the job that will be processed is sent to the compute node, computing resources no longer need to stage the entire file before executing a job, so processing happens relatively quickly.
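
    A schematic sketch of that event-service pattern, with invented names (make_event_ranges, process_event_range) and Python’s multiprocessing standing in for the MPI machinery Yoda actually uses: a master splits the events of an input file into small ranges and hands a new range to whichever worker frees up first, so no node stages a whole file or sits idle.

    from multiprocessing import Pool

    EVENTS_IN_FILE = 10000
    RANGE_SIZE = 250                        # events per work unit handed to a node

    def make_event_ranges(n_events, range_size):
        """Split a job's events into small, independently processable ranges."""
        return [(start, min(start + range_size, n_events))
                for start in range(0, n_events, range_size)]

    def process_event_range(event_range):
        """Stand-in for reconstructing or simulating one range of events on a compute node."""
        first, last = event_range
        # ... real work would run the ATLAS software over events [first, last) ...
        return last - first

    if __name__ == "__main__":
        ranges = make_event_ranges(EVENTS_IN_FILE, RANGE_SIZE)
        processed = 0
        with Pool(processes=8) as pool:     # eight stand-in "compute nodes"
            # imap_unordered hands a fresh range to whichever worker finishes first, so the
            # work completes incrementally and little is lost if the batch job ends early.
            for n_done in pool.imap_unordered(process_event_range, ranges):
                processed += n_done
        print(f"processed {processed} of {EVENTS_IN_FILE} events in {len(ranges)} ranges")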

    To efficiently take advantage of available HPC resources, Yoda is also flexible enough to adapt to a variety of scheduling options—from backfilling to large time allocations. After processing the individual events or event ranges, Yoda saves the output to the supercomputer’s shared file system so that these jobs can be terminated at any time with minimal data losses. This means that Yoda jobs can now be submitted to the HPC batch queue in backfill mode. So if the supercomputer is not utilizing all of its cores for a certain amount of time, Yoda can automatically detect that and submit a properly sized job to the batch queue to utilize those resources.

    “Yoda acts like a daemon that is constantly submitting jobs to take advantage of available resources, this is what we call opportunistic computing,” says Calafiura.

    In early 2015 the team tested Yoda’s performance by running ATLAS jobs from the previous LHC run on NERSC’s Edison supercomputer and successfully scaled up to 50,000 computer processor cores.

    LBL NERSC Edison supercomputer
    NERSC Cray Edison supercomputer

    AthenaMP: Adapting ATLAS Workloads for Massively Parallel Systems

    In addition to Yoda, the Berkeley Lab ATLAS software group also developed the AthenaMP software that allows the ATLAS reconstruction, simulation and data analysis framework to run efficiently on massively parallel systems.

    “Memory has always been a scarce resource for ATLAS reconstruction jobs. In order to optimally exploit all available CPU-cores on a given compute node, we needed to have a mechanism that would allow the sharing of memory pages between processes or threads,” says Calafiura.

    AthenaMP addresses the memory problem by leveraging the Linux fork and copy-on-write mechanisms. So when a node receives a task to process, the job is initialized on one core and sub-processes are forked to other cores, which then process all of the events assigned to the initial task. This strategy allows for the sharing of memory pages between event processors running on the same compute node.
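
    The fork-and-share pattern can be sketched in a few lines (Unix only; the real framework initializes the full ATLAS software environment before forking): a large read-only structure is built once in the parent, and forked workers read it without duplicating the underlying memory pages until a write occurs.

    import os
    import multiprocessing as mp

    # Build a large, read-only table once in the parent process. After a fork, worker
    # processes share these memory pages until something writes to them. (In CPython,
    # reference counting eventually dirties some pages, so the sharing is imperfect;
    # the point here is the fork-then-share pattern itself.)
    BIG_TABLE = list(range(10_000_000))

    def worker(event_ids):
        """Each worker reads the shared table without making its own full copy of it."""
        checksum = sum(BIG_TABLE[i % len(BIG_TABLE)] for i in event_ids)
        return os.getpid(), checksum

    if __name__ == "__main__":
        mp.set_start_method("fork")         # rely on Unix fork and copy-on-write semantics
        chunks = [range(k, k + 1000) for k in range(0, 8000, 1000)]
        with mp.Pool(processes=4) as pool:
            for pid, checksum in pool.map(worker, chunks):
                print(f"worker {pid}: checksum {checksum}")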

    By running ATLAS reconstruction in one AthenaMP job with several worker processes, the team notes that they achieved a significantly reduced overall memory footprint when compared to running the same number of independent serial jobs. And, for certain configurations of the ATLAS production jobs they’ve managed to reduce the memory usage by a factor of two.

    “Our goal is to get onto more hardware and these tools help us do that. The massive scale of many high performance systems means that even a small fraction of computing power can yield large returns in processing throughput for high energy physics,” says Calafiura.

    This work was supported by DOE’s Office of Science.

    Read the papers:

    Fine grained event processing on HPCs with the ATLAS Yoda system
    http://iopscience.iop.org/article/10.1088/1742-6596/664/9/092025

    Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP): http://iopscience.iop.org/article/10.1088/1742-6596/664/7/072050

    About Computing Sciences at Berkeley Lab

    The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy’s research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe.

    ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities, including those at Berkeley Lab’s Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 2:51 pm on August 27, 2015 Permalink | Reply
    Tags: NERSC

    From NERSC: “NERSC, Cray Move Forward With Next-Generation Scientific Computing” 

    NERSC Logo
    NERSC

    April 22, 2015
    Jon Bashor, jbashor@lbl.gov, 510-486-5849

    The Cori Phase 1 system will be the first supercomputer installed in the new Computational Research and Theory Facility now in the final stages of construction at Lawrence Berkeley National Laboratory.

    The U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing (NERSC) Center and Cray Inc. announced today that they have finalized a new contract for a Cray XC40 supercomputer that will be the first NERSC system installed in the newly built Computational Research and Theory facility at Lawrence Berkeley National Laboratory.

    This supercomputer will be used as Phase 1 of NERSC’s next-generation system named “Cori” in honor of bio-chemist and Nobel Laureate Gerty Cori. Expected to be delivered this summer, the Cray XC40 supercomputer will feature the Intel Haswell processor. The second phase, the previously announced Cori system, will be delivered in mid-2016 and will feature the next-generation Intel Xeon Phi™ processor “Knights Landing,” a self-hosted, manycore processor with on-package high bandwidth memory that offers more than 3 teraflop/s of double-precision peak performance per single socket node.

    NERSC serves as the primary high performance computing facility for the Department of Energy’s Office of Science, supporting some 6,000 scientists annually on more than 700 projects. This latest contract represents the Office of Science’s ongoing commitment to supporting computing to address challenges such as developing new energy sources, improving energy efficiency, understanding climate change and analyzing massive data sets from observations and experimental facilities around the world.

    “This is an exciting year for NERSC and for NERSC users,” said Sudip Dosanjh, director of NERSC. “We are unveiling a brand new, state-of-the-art computing center and our next-generation supercomputer, designed to help our users begin the transition to exascale computing. Cori will allow our users to take their science to a level beyond what our current systems can do.”

    “NERSC and Cray share a common vision around the convergence of supercomputing and big data, and Cori will embody that overarching technical direction with a number of unique, new technologies,” said Peter Ungaro, president and CEO of Cray. “We are honored that the first supercomputer in NERSC’s new center will be our flagship Cray XC40 system, and we are also proud to be continuing and expanding our longstanding partnership with NERSC and the U.S. Department of Energy as we chart our course to exascale computing.”
    Support for Data-Intensive Science

    A key goal of the Cori Phase 1 system is to support the increasingly data-intensive computing needs of NERSC users. Toward this end, Phase 1 of Cori will feature more than 1,400 Intel Haswell compute nodes, each with 128 gigabytes of memory per node. The system will provide about the same sustained application performance as NERSC’s Hopper system, which will be retired later this year. The Cori interconnect will have a dragonfly topology based on the Aries interconnect, identical to NERSC’s Edison system.

    However, Cori Phase 1 will have twice as much memory per node as NERSC’s current Edison supercomputer (a Cray XC30 system) and will include a number of advanced features designed to accelerate data-intensive applications; a rough back-of-envelope look at what these figures add up to follows the list:

    Large number of login/interactive nodes to support applications with advanced workflows
    Immediate access queues for jobs requiring real-time data ingestion or analysis
    High-throughput and serial queues to handle large numbers of jobs for screening, uncertainty quantification, genomic data processing, image processing and similar parallel analysis
    Network connectivity that allows compute nodes to interact with external databases and workflow controllers
    The first half of an approximately 1.5 terabytes/sec NVRAM-based Burst Buffer for high bandwidth low-latency I/O
    A Cray Lustre-based file system with over 28 petabytes of capacity and 700 gigabytes/second I/O bandwidth
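
    For a rough sense of what these figures add up to, the sketch below combines the node count, per-node memory, and I/O bandwidth numbers quoted above into a few back-of-envelope estimates. It is illustrative arithmetic only, not a NERSC specification.

```python
# Back-of-envelope numbers for Cori Phase 1, using only figures quoted in this article.
# Illustrative arithmetic; real usable capacities and bandwidths differ in practice.

nodes = 1400                  # approximate Haswell compute node count
mem_per_node_gb = 128         # gigabytes of memory per node

total_mem_gb = nodes * mem_per_node_gb
print(f"Aggregate compute-node memory: ~{total_mem_gb / 1024:.0f} TB")

burst_buffer_gbps = 750       # Phase 1 half of the ~1.5 TB/s Burst Buffer
lustre_gbps = 700             # Lustre scratch file system bandwidth

# Time to write out every node's full memory (a naive whole-system checkpoint):
print(f"Full-memory dump via Burst Buffer: ~{total_mem_gb / burst_buffer_gbps / 60:.0f} min")
print(f"Full-memory dump via Lustre:       ~{total_mem_gb / lustre_gbps / 60:.0f} min")
```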

    In addition, NERSC is collaborating with Cray on two ongoing R&D efforts to maximize Cori’s data potential: enabling higher-bandwidth transfers in and out of the compute nodes and high-transaction-rate database access, and providing Linux container virtualization on Cray compute nodes to allow custom software stack deployment.

    “The goal is to give users as familiar a system as possible, while also allowing them the flexibility to explore new workflows and paths to computation,” said Jay Srinivasan, the Computational Systems Group lead. “The Phase 1 system is designed to enable users to start running their workload on Cori immediately, while giving data-intensive workloads from other NERSC systems the ability to run on a Cray platform.”
    Burst Buffer Enhances I/O

    A key element of Cori Phase 1 is Cray’s new DataWarp technology, which accelerates application I/O and addresses the growing performance gap between compute resources and disk-based storage. This capability, often referred to as a “Burst Buffer,” is a layer of NVRAM designed to move data more quickly between processor and disk and allow users to make the most efficient use of the system. Cori Phase 1 will feature approximately 750 terabytes of capacity and approximately 750 gigabytes/second of I/O bandwidth. NERSC, Sandia and Los Alamos national laboratories and Cray are collaborating to define use cases and test early software that will provide the following capabilities:

    Improve application reliability (checkpoint-restart; a minimal checkpoint sketch follows this list)
    Accelerate application I/O performance for small blocksize I/O and analysis files
    Enhance quality of service by providing dedicated I/O acceleration resources
    Provide fast temporary storage for out-of-core applications
    Serve as a staging area for jobs requiring large input files or persistent fast storage between coupled simulations
    Support post-processing analysis of large simulation data as well as in situ and in transit visualization and analysis using the Burst Buffer nodes
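
    As a concrete illustration of the checkpoint-restart item above, here is a minimal sketch that periodically writes application state to a fast per-job scratch path and resumes from it after a failure. The DW_JOB_STRIPED environment variable is assumed to be the per-job Burst Buffer mount point (as Cray DataWarp typically exposes it); that name, the SCRATCH fallback, and the script as a whole are illustrative assumptions rather than documented NERSC usage.

```python
# Minimal checkpoint-restart sketch writing to a Burst Buffer-style scratch path.
# Assumption: the scheduler exports a per-job Burst Buffer mount point in
# DW_JOB_STRIPED (as Cray DataWarp commonly does); SCRATCH is an assumed fallback.
import os
import pickle

ckpt_dir = os.environ.get("DW_JOB_STRIPED") or os.environ.get("SCRATCH", "/tmp")
ckpt_file = os.path.join(ckpt_dir, "state.ckpt")

def save_checkpoint(state):
    tmp = ckpt_file + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, ckpt_file)        # atomic rename: a crash never leaves a torn file

def load_checkpoint():
    if os.path.exists(ckpt_file):
        with open(ckpt_file, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "value": 0.0}  # no checkpoint yet: start fresh

state = load_checkpoint()
for step in range(state["step"], 1000):
    state["value"] += 0.001           # stand-in for one step of real computation
    state["step"] = step + 1
    if state["step"] % 100 == 0:      # checkpoint every 100 steps
        save_checkpoint(state)
save_checkpoint(state)                # final checkpoint at the end of the run
```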

    Combining Extreme Scale Data Analysis and HPC on the Road to Exascale

    As previously announced, Phase 2 of Cori will be delivered in mid-2016 and will be combined with Phase 1 on the same high-speed network, providing a unique resource. When fully deployed, Cori will contain more than 9,300 Knights Landing compute nodes and more than 1,900 Haswell nodes, along with the file system and a doubling of the application I/O acceleration (the second half of the Burst Buffer).
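
    Combining the node count above with the per-node figure quoted earlier for Knights Landing gives a rough ceiling for the Phase 2 partition; this is back-of-envelope arithmetic, not an official performance number.

```python
# Rough peak estimate for Cori's Knights Landing partition, from figures in this article.
# Back-of-envelope only; not an official performance specification.
knl_nodes = 9300
tflops_per_knl_node = 3.0     # "more than 3 teraflop/s" double precision per node

knl_peak_pflops = knl_nodes * tflops_per_knl_node / 1000
print(f"Knights Landing partition peak: > {knl_peak_pflops:.0f} PF double precision")
# ~28 PF, before counting the ~1,900 Haswell nodes sharing the same network.
```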

    “In the scientific computing community, the line between large scale data analysis and simulation and modeling is really very blurred,” said Katie Antypas, head of NERSC’s Scientific Computing and Data Services Department. “The combined Cori system is the first system to be specifically designed to handle the full spectrum of computational needs of DOE researchers, as well as emerging needs in which data- and compute-intensive work are part of a single workflow. For example, a scientist will be able to run a simulation on the highly parallel Knights Landing nodes while simultaneously performing data analysis using the Burst Buffer on the Haswell nodes. This is a model that we expect to be important on exascale-era machines.”
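
    The workflow Antypas describes, simulation on one partition feeding analysis on another through the Burst Buffer, boils down to a producer/consumer pattern over shared fast storage. The sketch below shows that pattern in a single Python script for clarity; on the real machine the two roles would run as separate jobs or job steps on different node types, and the DW_JOB_STRIPED path is an assumed stand-in for the Burst Buffer mount.

```python
# Producer/consumer sketch of coupled simulation + analysis over shared fast storage.
# Generic illustration only; DW_JOB_STRIPED is an assumed Burst Buffer mount point.
import json
import os
import pathlib

shared = pathlib.Path(os.environ.get("DW_JOB_STRIPED", "/tmp/bb"))
shared.mkdir(parents=True, exist_ok=True)

def simulate(step):
    """Stand-in 'simulation' (would run on the Knights Landing nodes)."""
    out = shared / f"step_{step:04d}.json"
    tmp = out.with_suffix(".tmp")
    tmp.write_text(json.dumps({"step": step, "field_mean": 1.0 / (step + 1)}))
    tmp.rename(out)                          # publish the step's output atomically

def analyze():
    """Stand-in 'analysis' (would run concurrently on the Haswell nodes)."""
    means = [json.loads(p.read_text())["field_mean"]
             for p in sorted(shared.glob("step_*.json"))]
    return sum(means) / len(means)

for step in range(10):
    simulate(step)
    print(f"step {step}: running mean = {analyze():.3f}")
```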

    NERSC is funded by the Office of Advanced Scientific Computing Research in the DOE’s Office of Science.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system is a Cray XE6 with a peak theoretical performance of 1.29 petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made to be more user friendly.

    gh
    (Historical photo of Grace Hopper courtesy of the Hagley Museum & Library, PC20100423_201. Design: Caitlin Youngquist/LBNL Photo: Roy Kaltschmidt/LBNL)

    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.

     
  • richardmitnick 4:31 am on February 18, 2015 Permalink | Reply
    Tags: , , NERSC,   

    From LBL: “Bigger steps: Berkeley Lab researchers develop algorithm to make simulation of ultrafast processes possible” 

    Berkeley Logo

    Berkeley Lab

    February 17, 2015
    Rachel Berkowitz

    When electronic states in materials are excited during dynamic processes, interesting phenomena such as electrical charge transfer can take place on quadrillionth-of-a-second, or femtosecond, timescales. Numerical simulations in real-time provide the best way to study these processes, but such simulations can be extremely expensive. For example, it can take a supercomputer several weeks to simulate a 10 femtosecond process. One reason for the high cost is that real-time simulations of ultrafast phenomena require “small time steps” to describe the movement of an electron, which takes place on the attosecond timescale – a thousand times faster than the femtosecond timescale.

    1
    Model of ion (Cl) collision with atomically thin semiconductor (MoSe2). Collision region is shown in blue and zoomed in; red points show initial positions of Cl. The simulation calculates the energy loss of the ion based on the incident and emergent velocities of the Cl.

    To combat the high cost associated with the small time steps, Lin-Wang Wang, senior staff scientist at the Lawrence Berkeley National Laboratory (Berkeley Lab), and visiting scholar Zhi Wang from the Chinese Academy of Sciences have developed a new algorithm that increases the time step from about one attosecond to about half a femtosecond. This allows them to simulate ultrafast phenomena for systems of around 100 atoms.

    “We demonstrated a collision of an ion [Cl] with a 2D material [MoSe2] for 100 femtoseconds. We used supercomputing systems for ten hours to simulate the problem – a great increase in speed,” says L.W. Wang. That represents a reduction from 100,000 time steps down to only 500. The results of the study were reported in a Physical Review Letters paper titled Efficient real-time time-dependent DFT method and its application to a collision of an ion with a 2D material.
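
    The quoted figures can be sanity-checked with a little arithmetic: 100 femtoseconds at roughly one attosecond per step is 100,000 steps, while 500 steps over the same interval corresponds to a step of about 0.2 femtoseconds, consistent with the "about half a femtosecond" order of magnitude.

```python
# Consistency check on the step counts quoted above (illustrative arithmetic only).
total_time_fs = 100.0                # simulated collision: 100 femtoseconds
old_step_fs = 0.001                  # ~1 attosecond per step, conventional approach

old_steps = total_time_fs / old_step_fs
new_steps = 500                      # figure quoted for the new algorithm

print(f"Conventional method: {old_steps:,.0f} steps")                       # 100,000
print(f"New algorithm: {new_steps} steps of ~{total_time_fs / new_steps:.1f} fs each")
print(f"Step-count reduction: ~{old_steps / new_steps:.0f}x")               # ~200x
```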

    Conventional computational methods cannot be used to study systems in which electrons have been excited from the ground state, as is the case for ultrafast processes involving charge transfer. But using real-time simulations, an excited system can be modeled with time-dependent quantum mechanical equations that describe the movement of electrons.

    The traditional algorithms work by directly manipulating these equations. Wang’s new approach is to expand the equations into individual terms based on which states are excited at a given time. The trick, which he has worked out, is determining the time evolution of each individual term. The advantage is that some terms in the expanded equations can then be eliminated.

    2
    Zhi Wang (left) and Berkeley Lab’s Lin-Wang Wang (right).

    “By eliminating higher energy terms, you significantly reduce the dimension of your problem, and you can also use a bigger time step,” explains Wang, describing the key to the algorithm’s success. Solving the equations with bigger time steps reduces the computational cost and increases the speed of the simulations.
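
    Why dropping high-energy terms allows a bigger time step can be seen in a toy model: the fastest phase oscillation in the expansion is set by the largest retained energy, so removing the high-energy states both shrinks the problem and slows its fastest motion. The sketch below is a generic illustration of that idea with a small random Hamiltonian; it is not a reproduction of the algorithm in the Physical Review Letters paper.

```python
# Toy illustration of truncating a dynamics problem to its low-energy states.
# Generic sketch only; this is not the algorithm published by Wang and Wang.
import numpy as np

rng = np.random.default_rng(0)

# "Hamiltonian" with 4 low-energy and 4 high-energy levels (arbitrary units).
energies = np.concatenate([rng.uniform(0.0, 1.0, 4), rng.uniform(50.0, 60.0, 4)])
V = np.linalg.qr(rng.normal(size=(8, 8)))[0]   # random orthonormal eigenvectors
H = V @ np.diag(energies) @ V.T                # full Hamiltonian (not needed below)

# Initial state built from two low-energy eigenstates only.
psi0 = (V[:, 0] + V[:, 1]) / np.sqrt(2)

def evolve(psi, t, n_keep):
    """Exact evolution restricted to the n_keep lowest-energy eigenstates."""
    keep = np.argsort(energies)[:n_keep]
    c = V[:, keep].T @ psi                     # expansion coefficients
    c = c * np.exp(-1j * energies[keep] * t)   # each term only picks up a phase
    return V[:, keep] @ c

t = 5.0
full = evolve(psi0, t, 8)                      # all states kept
trunc = evolve(psi0, t, 4)                     # high-energy states eliminated
print("overlap |<full|truncated>| =", abs(np.vdot(full, trunc)))   # ~1.0 here

# The time step needed to resolve the retained phases scales like 1/E_max,
# so dropping the high-energy states allows a much bigger step:
print("full basis needs dt of order", 1 / energies.max())
print("truncated basis needs dt of order", 1 / np.sort(energies)[3])
```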

    The new algorithm yields results similar to those of the old, slower one; for example, the predicted energies and velocities of an atom passing through a layer of material are the same for both models. The approach opens the door for efficient real-time simulations of ultrafast processes and electron dynamics, such as excitation in photovoltaic materials and ultrafast demagnetization following an optical excitation.

    The work was supported by the Department of Energy’s Office of Science and used the resources of the National Energy Research Scientific Computing center (NERSC).

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 5:39 pm on November 12, 2014 Permalink | Reply
    Tags: , NERSC, ,   

    From LBL: “Latest Supercomputers Enable High-Resolution Climate Models, Truer Simulation of Extreme Weather” 

    Berkeley Logo

    Berkeley Lab

    November 12, 2014
    Julie Chao (510) 486-6491

    Not long ago, it would have taken several years to run a high-resolution simulation on a global climate model. But using some of the most powerful supercomputers now available, Lawrence Berkeley National Laboratory (Berkeley Lab) climate scientist Michael Wehner was able to complete a run in just three months.

    What he found was that not only were the simulations much closer to actual observations, but the high-resolution models were far better at reproducing intense storms, such as hurricanes and cyclones. The study, The effect of horizontal resolution on simulation quality in the Community Atmospheric Model, CAM5.1, has been published online in the Journal of Advances in Modeling Earth Systems.

    “I’ve been calling this a golden age for high-resolution climate modeling because these supercomputers are enabling us to do gee-whiz science in a way we haven’t been able to do before,” said Wehner, who was also a lead author for the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). “These kinds of calculations have gone from basically intractable to heroic to now doable.”

    mw
    Michael Wehner, Berkeley Lab climate scientist

    Using version 5.1 of the Community Atmospheric Model, developed by the Department of Energy (DOE) and the National Science Foundation (NSF) for use by the scientific community, Wehner and his co-authors conducted an analysis for the period 1979 to 2005 at three spatial resolutions: 25 km, 100 km, and 200 km. They then compared those results to each other and to observations.
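
    A rough scaling argument shows why the 25 km runs are so much more demanding than the 200 km runs: the number of grid columns covering the globe grows with the inverse square of the grid spacing, and the stable time step typically shrinks roughly in proportion to the spacing. The sketch below is generic arithmetic under those assumptions, not the actual CAM5.1 cost model.

```python
# Rough scaling of global model cost with horizontal resolution.
# Generic arithmetic under simple assumptions; not the actual CAM5.1 cost model.
EARTH_SURFACE_KM2 = 510e6

for dx_km in (200, 100, 25):
    columns = EARTH_SURFACE_KM2 / dx_km**2     # grid columns ~ area / spacing^2
    rel_cost = (200 / dx_km) ** 3              # x2 per dimension plus a shorter time step
    print(f"{dx_km:>3} km grid: ~{columns:>9,.0f} columns, ~{rel_cost:>3.0f}x the 200 km cost")
```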

    One simulation generated 100 terabytes of data, or 100,000 gigabytes. The computing was performed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility. “I’ve literally waited my entire career to be able to do these simulations,” Wehner said.


    The higher resolution was particularly helpful in mountainous areas, since the models average the altitude over each grid cell (25 square km for high resolution, 200 square km for low resolution). With a more accurate representation of mountainous terrain, the higher resolution model is better able to simulate snow and rain in those regions.
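
    The smoothing effect of grid-cell averaging is easy to see with synthetic data: block-averaging a high-resolution elevation field lowers the peaks and fills the valleys, which is why coarse models struggle with snow and rain over mountains. The example below uses made-up elevations purely for illustration.

```python
# Block-averaging synthetic topography: coarse grid cells smooth out mountains.
# Made-up elevations, purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
fine = np.abs(rng.normal(1500, 1200, size=(64, 64)))   # fake elevations (m), fine grid

def coarsen(elev, factor):
    """Average non-overlapping factor x factor blocks of grid cells."""
    n = elev.shape[0] // factor * factor
    blocks = elev[:n, :n].reshape(n // factor, factor, n // factor, factor)
    return blocks.mean(axis=(1, 3))

coarse = coarsen(fine, 8)      # 8x coarser in each direction, e.g. 25 km -> 200 km cells
print(f"fine grid:   highest cell {fine.max():5.0f} m")
print(f"coarse grid: highest cell {coarse.max():5.0f} m")   # peaks are averaged away
```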

    “High resolution gives us the ability to look at intense weather, like hurricanes,” said Kevin Reed, a researcher at the National Center for Atmospheric Research (NCAR) and a co-author on the paper. “It also gives us the ability to look at things locally at a lot higher fidelity. Simulations are much more realistic at any given place, especially if that place has a lot of topography.”

    The high-resolution model produced stronger storms and more of them, which was closer to the actual observations for most seasons. “In the low-resolution models, hurricanes were far too infrequent,” Wehner said.

    The IPCC chapter on long-term climate change projections that Wehner was a lead author on concluded that a warming world will cause some areas to be drier and others to see more rainfall, snow, and storms. Extremely heavy precipitation was projected to become even more extreme in a warmer world. “I have no doubt that is true,” Wehner said. “However, knowing it will increase is one thing, but having a confident statement about how much and where as a function of location requires the models do a better job of replicating observations than they have.”

    Wehner says the high-resolution models will help scientists to better understand how climate change will affect extreme storms. His next project is to run the model for a future-case scenario. Further down the line, Wehner says scientists will be running climate models with 1 km resolution. To do that, they will have to have a better understanding of how clouds behave.

    “A cloud system-resolved model can reduce one of the greatest uncertainties in climate models, by improving the way we treat clouds,” Wehner said. “That will be a paradigm shift in climate modeling. We’re at a shift now, but that is the next one coming.”

    The paper’s other co-authors include Fuyu Li, Prabhat, and William Collins of Berkeley Lab; and Julio Bacmeister, Cheng-Ta Chen, Christopher Paciorek, Peter Gleckler, Kenneth Sperber, Andrew Gettelman, and Christiane Jablonowski from other institutions. The research was supported by the Biological and Environmental Division of the Department of Energy’s Office of Science.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 3:41 pm on September 27, 2014 Permalink | Reply
    Tags: , , NERSC,   

    From LBL: “Pore models track reactions in underground carbon capture” 

    Berkeley Logo

    Berkeley Lab

    September 25, 2014

    Using tailor-made software running on top-tier supercomputers, a Lawrence Berkeley National Laboratory team is creating microscopic pore-scale simulations that complement or push beyond laboratory findings.

    image
    Computed pH on calcite grains at 1 micron resolution. The iridescent grains mimic crushed material geoscientists extract from saline aquifers deep underground to study with microscopes. Researchers want to model what happens to the crystals’ geochemistry when the greenhouse gas carbon dioxide is injected underground for sequestration. Image courtesy of David Trebotich, Lawrence Berkeley National Laboratory.

    The models of microscopic underground pores could help scientists evaluate ways to store carbon dioxide produced by power plants, keeping it from contributing to global climate change.

    The models could be a first, says David Trebotich, the project’s principal investigator. “I’m not aware of any other group that can do this, not at the scale at which we are doing it, both in size and computational resources, as well as the geochemistry.” His evidence is a colorful portrayal of jumbled calcite crystals derived solely from mathematical equations.

    The iridescent menagerie is intended to act just like the real thing: minerals geoscientists extract from saline aquifers deep underground. The goal is to learn what will happen when fluids pass through the material should power plants inject carbon dioxide underground.

    Lab experiments can only measure what enters and exits the model system. Now modelers would like to identify more of what happens within the tiny pores that exist in underground materials, as chemicals are dissolved in some places but precipitate in others, potentially resulting in preferential flow paths or even clogs.

    Geoscientists give Trebotich’s group of modelers microscopic computerized tomography (CT, similar to the scans done in hospitals) images of their field samples. That lets both camps probe an anomaly: reactions in the tiny pores happen much more slowly in real aquifers than they do in laboratories.

    Going deep

    Deep saline aquifers are underground formations of salty water found in sedimentary basins all over the planet. Scientists think they’re the best deep geological feature to store carbon dioxide from power plants.

    But experts need to know whether the greenhouse gas will stay bottled up as more and more of it is injected, spreading a fluid plume and building up pressure. “If it’s not going to stay there, (geoscientists) will want to know where it is going to go and how long that is going to take,” says Trebotich, who is a computational scientist in Berkeley Lab’s Applied Numerical Algorithms Group.

    He hopes their simulation results ultimately will translate to field scale, where “you’re going to be able to model a CO2 plume over a hundred years’ time and kilometers in distance.” But for now his group’s focus is at the microscale, with attention toward the even smaller nanoscale.

    At such tiny dimensions, flow, chemical transport, mineral dissolution and mineral precipitation occur within the pores where individual grains and fluids commingle, says a 2013 paper Trebotich coauthored with geoscientists Carl Steefel (also of Berkeley Lab) and Sergi Molins in the journal Reviews in Mineralogy and Geochemistry.

    These dynamics, the paper added, create uneven conditions that can produce new structures and self-organized materials – nonlinear behavior that can be hard to describe mathematically.

    Modeling at 1 micron resolution, his group has achieved “the largest pore-scale reactive flow simulation ever attempted” as well as “the first-ever large-scale simulation of pore-scale reactive transport processes on real-pore-space geometry as obtained from experimental data,” says the 2012 annual report of the lab’s National Energy Research Scientific Computing Center (NERSC).

    The simulation required about 20 million processor hours using 49,152 of the 153,216 computing cores in Hopper, a Cray XE6 that at the time was NERSC’s flagship supercomputer.

    cray hopper
    Cray Hopper at NERSC
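
    The processor-hour figures above translate into a concrete wall-clock picture; the arithmetic below uses only the numbers quoted in this article.

```python
# Wall-clock arithmetic for the Hopper run described above (figures from this article).
cpu_hours = 20e6              # ~20 million processor hours
cores_used = 49152            # cores used on Hopper
hopper_cores = 153216         # total Hopper cores

wall_hours = cpu_hours / cores_used
print(f"~{wall_hours:.0f} hours of wall-clock time, about {wall_hours / 24:.0f} days")
print(f"fraction of Hopper occupied: {cores_used / hopper_cores:.0%}")
```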

    “As CO2 is pumped underground, it can react chemically with underground minerals and brine in various ways, sometimes resulting in mineral dissolution and precipitation, which can change the porous structure of the aquifer,” the NERSC report says. “But predicting these changes is difficult because these processes take place at the pore scale and cannot be calculated using macroscopic models.

    “The dissolution rates of many minerals have been found to be slower in the field than those measured in the laboratory. Understanding this discrepancy requires modeling the pore-scale interactions between reaction and transport processes, then scaling them up to reservoir dimensions. The new high-resolution model demonstrated that the mineral dissolution rate depends on the pore structure of the aquifer.”
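
    One common way to frame the lab-versus-field discrepancy is as a transport limitation: in a well-stirred laboratory experiment the mineral surface always sees fresh, far-from-equilibrium fluid, while in a slowly flushed pore the fluid next to the grain approaches equilibrium and dissolution stalls. The toy one-dimensional model below illustrates that effect with a single advection-reaction balance; it is a deliberately simple sketch, not the Chombo-Crunch model described here.

```python
# Toy 1D pore: fluid flows at speed u past a reactive wall; solute is produced
# toward its equilibrium concentration c_eq with first-order rate k.
# Steady state of  u * dc/dx = k * (c_eq - c)  with  c(0) = 0  is
#   c(x) = c_eq * (1 - exp(-k * x / u)).
# Deliberately simple illustration (not Chombo-Crunch): the dissolution rate
# averaged over the pore falls well below the intrinsic lab rate k * c_eq
# once the fluid lingers long enough to approach equilibrium.
import math

k = 1.0        # intrinsic surface reaction rate (1/s), arbitrary units
c_eq = 1.0     # equilibrium solute concentration, arbitrary units
L = 1.0        # pore length, arbitrary units

for u in (10.0, 1.0, 0.1, 0.01):                  # fast flow -> slow flow
    c_out = c_eq * (1 - math.exp(-k * L / u))     # concentration leaving the pore
    avg_rate = u * c_out / L                      # net production per unit pore length
    print(f"u = {u:5.2f}: effective rate / intrinsic rate = {avg_rate / (k * c_eq):.3f}")
```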

    Trebotich says “it was the hardest problem that we could do for the first run.” But the group redid the simulation about 2½ times faster in an early trial of Edison, a Cray XC30 that succeeded Hopper. Edison, Trebotich says, has greater memory bandwidth.

    cray edison
    Cray Edison at NERSC

    Rapid changes

    Generating 1-terabyte data sets for each microsecond time step, the Edison run demonstrated how quickly conditions can change inside each pore. It also provided a good workout for the combination of interrelated software packages the Trebotich team uses.

    The first, Chombo, takes its name from a Swahili word meaning “toolbox” or “container” and was developed by a different Applied Numerical Algorithms Group team. Chombo is a supercomputer-friendly platform that’s scalable: “You can run it on multiple processor cores, and scale it up to do high-resolution, large-scale simulations,” he says.

    Trebotich modified Chombo to add flow and reactive transport solvers. The group also incorporated the geochemistry components of CrunchFlow, a package Steefel developed, to create Chombo-Crunch, the code used for their modeling work. The simulations produce resolutions “very close to imaging experiments,” the NERSC report said, combining simulation and experiment to achieve a key goal of the Department of Energy’s Energy Frontier Research Center for Nanoscale Control of Geologic CO2.

    Now Trebotich’s team has three huge allocations on DOE supercomputers to make their simulations even more detailed. The Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program is providing 80 million processor hours on Mira, an IBM Blue Gene/Q at Argonne National Laboratory. Through the Advanced Scientific Computing Research Leadership Computing Challenge (ALCC), the group has another 50 million hours on NERSC computers and 50 million on Titan, a Cray XK7 at the Oak Ridge Leadership Computing Facility. The team also held an ALCC award last year for 80 million hours at Argonne and 25 million at NERSC.

    mira
    MIRA at Argonne

    titan
    TITAN at Oak Ridge

    With the computer time, the group wants to refine their image resolutions to half a micron (half of a millionth of a meter). “This is what’s known as the mesoscale: an intermediate scale that could make it possible to incorporate atomistic-scale processes involving mineral growth at precipitation sites into the pore scale flow and transport dynamics,” Trebotich says.

    Meanwhile, he thinks their micron-scale simulations already are good enough to provide “ground-truthing” in themselves for the lab experiments geoscientists do.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     