From insideHPC: “Moving Mountains of Data at NERSC”

From insideHPC

NERSC’s Wayne Hurlbert (left) and Damian Hazen (right) are overseeing the transfer of 43 years’ worth of NERSC data to new tape libraries at Shyh Wang Hall. Image: Peter DaSilva

NERSC

NERSC Cray Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science


LBL NERSC Cray XC30 Edison supercomputer


The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

NERSC PDSF


PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

Researchers at NERSC face the daunting task of moving 43 years’ worth of archival data across the network to new tape libraries, a whopping 120 petabytes!

When NERSC relocated from its former home in Oakland to LBNL in 2015, not everything came with it. The last remaining task in the heroic effort of moving the supercomputing facility and all of its resources is transferring the 43 years of archival data stored on thousands of high-performance tapes in Oakland. Those 120 petabytes of experimental and simulation data have to be electronically transferred to new tape libraries at Berkeley Lab, a process that will take up to two years—even with an ESnet 400 Gigabit “superchannel” now in place between the two sites.

Tape for archival storage makes sense for NERSC and is a key component of NERSC’s storage hierarchy. Tape provides long-term, high capacity stable storage and only consumes power when reading or writing, making it a cost-effective and environmentally friendly storage solution. And the storage capacity of a tape cartridge exceeds that of more commonly known hard disk drives.

“Increasing hard disk drive capacity has become more and more difficult as manufacturers need to pack more data into a smaller and smaller area, whereas today’s tape drives are leveraging technology developed for disks a decade ago,” said Damian Hazen, NERSC’s storage systems group lead. Hazen points out that tape storage technology has receded from the public eye in part because of the high-capacity online data storage services provided by the likes of Google, Microsoft, and Amazon, and that disk does work well for those with moderate storage needs. But, unbeknownst to most users, these large storage providers also include tape in their storage strategy.

“Moderate” does not describe NERSC’s storage requirements. With data from simulations run on NERSC’s Cori supercomputer, and experimental and observational data coming from facilities all over the U.S. and abroad, NERSC users send approximately 1.5 petabytes of data each month to the archive.

“Demands on the NERSC archive grow every year,” Hazen said. “With the delivery of NERSC’s next supercomputer Perlmutter in 2020, and tighter integration of computational facilities like NERSC with local and remote experimental facilities like ALS and LCLS-II, this trend will continue.”

[Perlmutter supercomputer honors Nobel laureate Saul Perlmutter for providing evidence that the expansion of the universe is accelerating.]


LBNL/ALS

SLAC LCLS-II

Environmental Challenges

To keep up with the data challenges, the NERSC storage group continuously refreshes the technology used in the archive. But in relocating the archive to the environmentally efficient Shyh Wang Hall, there was an additional challenge. The environmental characteristics of the new energy-efficient building meant that NERSC needed to deal with more substantial changes in temperature and humidity in the computer room. This wasn’t good news for tape, which requires a tightly controlled operating environment, and meant that the libraries in Oakland could not just be picked up and moved to Berkeley Lab. New technology emerged at just the right time in the form of an environmentally isolated tape library, which uses built-in environmental controls to maintain an ideal internal environment for the tapes. NERSC deployed two full-sized, environmentally self-contained libraries last fall. Manufactured by IBM, the new NERSC libraries are currently the largest of this technology in the world.

NERSC’s new environmentally self-contained tape libraries use a specialized robot to retrieve archival data tapes. Image: Peter DaSilva

“The new libraries solved two problems: the environmental problem, which allowed us to put the tape library right on the computer room floor, and increasing capacity to keep up with growth as we move forward,” said Wayne Hurlbert, a staff engineer in NERSC’s storage systems group. Tape cartridge capacity doubles roughly every two to three years, with 20 terabyte cartridges available as of December 2018.

The newer tape cartridges have three times the capacity of the old, and the archive libraries can store a petabyte per square foot. With the new system in place, data from the tape drives in Oakland is now streaming over to the tape archive libraries in the Shyh Wang Hall computer room via the 400 Gigabit link that ESnet built three years ago to connect the two data centers together. It was successfully used to transfer file systems between the two sites without any disruption to users. As with the file system move, the archive data transfer will be largely transparent to users.

Even with all of this in place, it will still take about two years to move 43 years’ worth of NERSC data. Several factors contribute to this lengthy copy operation, including the extreme amount of data to be moved and the need to balance user access to the archive.
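A quick back-of-the-envelope calculation (a rough sketch, not NERSC’s actual transfer plan) shows why the network link itself is not the bottleneck: at full line rate, the 400 Gigabit ESnet channel could in principle move 120 petabytes in about a month, so the two-year schedule is driven by the factors above rather than by raw bandwidth.

```python
# Rough lower bound: time to move 120 PB over a 400 Gb/s link at full line rate.
# Illustrative only; real throughput is limited by tape drive speeds, protocol
# overhead, and the need to keep the archive available to users during the copy.
data_bits = 120e15 * 8        # 120 petabytes (decimal) in bits
link_bps = 400e9              # 400 gigabits per second
seconds = data_bits / link_bps
print(f"{seconds / 86400:.1f} days at full line rate")   # ~27.8 days
```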

“We’re very cautious about this,” Hurlbert said. “We need to preserve this data; it’s not infrequent for researchers to need to go back to their data sets, often to use modern techniques to reanalyze. The archive allows us to safeguard irreplaceable data generated over decades; the data represents millions of dollars of investment in computational hardware and immense time, effort, and scientific results from researchers around the world.”

See the full article here .


Please help promote STEM in your local schools.

Stem Education Coalition

Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

insideHPC
2825 NW Upshur
Suite G
Portland, OR 97239

Phone: (503) 877-5048

#insidehpc, #nersc, #supercomputing

From Lawrence Berkeley National Lab: “DOE to Build Next-Generation Supercomputer at Lawrence Berkeley National Laboratory”

Berkeley Logo

From Lawrence Berkeley National Lab

October 30, 2018
Dan Krotz
dakrotz@lbl.gov
(510) 486-4019

New Pre-Exascale System Will Be Named ‘Perlmutter’ in Honor of Lab’s Nobel Prize-Winning Astrophysicist.

Saul Perlmutter

The U.S. Department of Energy announced today that the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has signed a $146 million contract with Cray for the facility’s next-generation supercomputer, a pre-exascale machine slated to be delivered in 2020. Named “Perlmutter” in honor of Nobel Prize-winning astrophysicist Saul Perlmutter, it is the first NERSC system specifically designed to meet the needs of large-scale simulations as well as data analysis from experimental and observational facilities.

The new supercomputer represents DOE Office of Science’s commitment to extreme-scale science, developing new energy sources, improving energy efficiency, and discovering new materials. It will be a heterogeneous system comprising both CPU-only and GPU-accelerated cabinets that will more than triple the computational power currently available at NERSC.

“Continued leadership in high performance computing is vital to America’s competitiveness, prosperity, and national security,” said U.S. Secretary of Energy Rick Perry in making the announcement.

The $146 million contract includes multiple years of service and support. In addition, the new system has a number of innovative capabilities that will facilitate analyzing massive data sets from scientific experimental facilities, a growing challenge for scientists across multiple disciplines.

“This agreement maintains U.S. leadership in HPC, both in the technology and in the scientific research that can be accomplished with such powerful systems, which is essential to maintaining economic and intellectual leadership,” said Barbara Helland, Associate Director of the Office of Advanced Scientific Computing Research at DOE Office of Science. “This agreement is a step forward in preparing the Office of Science user community for the kinds of computing systems we expect to see in the exascale era.”

“I’m delighted to hear that the next supercomputer will be especially capable of handling large and complex data analysis. So it’s a great honor to learn that this system will be called Perlmutter,” Dr. Perlmutter said. “Though I also realize I feel some trepidation since we all know what it’s like to be frustrated with our computers, and I hope no one will hold it against me after a day wrestling with a tough data set and a long computer queue! I have at least been assured that no one will have to type my ten-character last name to log in.”

“We are very excited about the Perlmutter system,” said NERSC Director Sudip Dosanjh. “It will provide a significant increase in capability for our users and a platform to continue transitioning our very broad workload to energy efficient architectures. The system is optimized for science, and we will collaborate with Cray, NVIDIA and AMD to ensure that Perlmutter meets the computational and data needs of our users. We are also launching a major power and cooling upgrade in Berkeley Lab’s Shyh Wang Hall, home to NERSC, to prepare the facility for Perlmutter.”

The heterogeneous system will include a number of innovations designed to meet the diverse computational and data analysis needs of NERSC’s user base and speed their scientific productivity: a new Cray system interconnect, code-named Slingshot, designed for data-centric computing; NVIDIA GPUs with new Tensor Core technology; direct liquid cooling; and an all-flash scratch filesystem that will move data at a rate of more than 4 terabytes per second.

“As a premier computing facility for the DOE, NERSC is driving scientific innovations that will help change the world, and we’re honored to partner with them in their efforts,” said Pete Ungaro, president and CEO of Cray. “We have collaborated very closely with the teams at Berkeley Lab to develop and implement our next-generation supercomputing and storage technology, which will enable a new era of access and flexibility in modeling, simulation, AI and analytics. We’re looking forward to seeing the many scientific discoveries that result from the work done on Perlmutter.”

See the full article here .


Please help promote STEM in your local schools.

Stem Education Coalition

A U.S. Department of Energy National Laboratory Operated by the University of California

Bringing Science Solutions to the World

In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

University of California Seal

DOE Seal

#doe-to-build-next-generation-supercomputer-at-lawrence-berkeley-national-laboratory, #lbnl, #nersc, #saul-perlmutter

From LBNL: “How the Earth Stops High-Energy Neutrinos in Their Tracks”

Berkeley Logo

Berkeley Lab

November 22, 2017
Glenn Roberts Jr.
geroberts@lbl.gov
(510) 520-0843

Efforts of Berkeley Lab scientists are key in new analysis of data from Antarctic experiment.

Illustration of how a muon interacts in the IceCube detector array. (Credit: IceCube Collaboration)


IceCube has measured for the first time the probability that neutrinos are absorbed by Earth as a function of their energy and the amount of matter that they go through. This measurement of the neutrino cross section using Earth absorption has confirmed predictions from the Standard Model at energies up to 980 TeV. A detailed understanding of how high-energy neutrinos interact with Earth’s matter will allow using these particles to investigate the composition of Earth’s core and mantle. (Credit: IceCube Collaboration)


U Wisconsin ICECUBE neutrino detector at the South Pole

Neutrinos are abundant subatomic particles that are famous for passing through anything and everything, only very rarely interacting with matter. About 100 trillion neutrinos pass through your body every second.

Now, scientists have demonstrated that the Earth stops energetic neutrinos—they do not go through everything. These high-energy neutrino interactions were seen by the IceCube detector, an array of 5,160 basketball-sized optical sensors deeply encased within a cubic kilometer of very clear Antarctic ice near the South Pole.

IceCube’s sensors do not directly observe neutrinos, but instead measure flashes of blue light, known as Cherenkov radiation, emitted by muons and other fast-moving charged particles, which are created when neutrinos interact with the ice, and by the charged particles produced when the muons interact as they move through the ice. By measuring the light patterns from these interactions in or near the detector array, IceCube can estimate the neutrinos’ directions and energies.

The study, published in the Nov. 22 issue of the journal Nature, was led by researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley.

Spencer Klein, who leads Berkeley Lab’s IceCube research team, commented, “This analysis is important because it shows that IceCube can make real contributions to particle and nuclear physics, at energies above the reach of current accelerators.”

Sandra Miarecki, who performed much of the data analysis while working toward her PhD as an IceCube researcher at Berkeley Lab and UC Berkeley, said, “It’s a multidisciplinary idea.” The analysis required input from geologists who have created models of the Earth’s interior from seismic studies. Physicists have used these models to help predict how neutrinos are absorbed in the Earth.

“You create ‘pretend’ muons that simulate the response of the sensors,” Miarecki said. “You have to simulate their behavior, there has to be an ice model to simulate the ice’s behavior, you also have to have cosmic ray simulations, and you have to simulate the Earth using equations. Then you have to predict, probability-wise, how often a particular muon would come through the Earth.”

The study’s results are based on one year of data from about 10,800 neutrino-related interactions, stemming from a natural supply of very energetic neutrinos from space that go through a thick and dense absorber: the Earth. The energy of the neutrinos was critical to the study, as higher energy neutrinos are more likely to interact with matter and be absorbed by the Earth.

Scientists found that there were fewer energetic neutrinos making it all the way through the Earth to the IceCube detector than arriving along less obstructed paths, such as near-horizontal trajectories. The probability of neutrinos being absorbed by the Earth was consistent with expectations from the Standard Model of particle physics, which scientists use to explain the fundamental forces and particles in the universe. This probability—that neutrinos of a given energy will interact with matter—is what physicists refer to as a “cross section.”

The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

“Understanding how neutrinos interact is key to the operation of IceCube,” explained Francis Halzen, principal investigator for the IceCube Neutrino Observatory and a University of Wisconsin–Madison professor of physics. “Precision measurements at the HERA accelerator in Hamburg, Germany, allow us to compute the neutrino cross section with great accuracy within the Standard Model—which would apply to IceCube neutrinos of much higher energies if the Standard Model is valid at these energies.”


“We were of course hoping for some new physics to appear, but we unfortunately find that the Standard Model, as usual, withstands the test,” added Halzen.

James Whitmore, program director in the National Science Foundation’s physics division, said, “IceCube was built to both explore the frontiers of physics and, in doing so, possibly challenge existing perceptions of the nature of universe. This new finding and others yet to come are in that spirit of scientific discovery.”

In this study, researchers measured the flux of muon neutrinos as a function of their energy and their incoming direction. Neutrinos with higher energies and with incoming directions closer to the North Pole are more likely to interact with matter on their way through Earth. (Credit: IceCube Collaboration)

This study provides the first cross-section measurements for a neutrino energy range that is up to 1,000 times higher than previous measurements at particle accelerators. Most of the neutrinos selected for this study were more than a million times more energetic than the neutrinos produced by more familiar sources, like the sun or nuclear power plants. Researchers took care to ensure that the measurements were not distorted by detector problems or other uncertainties.

“Neutrinos have quite a well-earned reputation of surprising us with their behavior,” said Darren Grant, spokesperson for the IceCube Collaboration and a professor of physics at the University of Alberta in Canada. “It is incredibly exciting to see this first measurement and the potential it holds for future precision tests.”

In addition to providing the first measurement of the Earth’s absorption of neutrinos, the analysis shows that IceCube’s scientific reach is extending beyond its core focus on particle physics discoveries and the emerging field of neutrino astronomy into the fields of planetary science and nuclear physics. This analysis will also interest geophysicists who would like to use neutrinos to image the Earth’s interior, although this will require more data than was used in the current study.

The neutrinos used in this analysis were mostly produced when hydrogen or heavier nuclei from high-energy cosmic rays, created outside the solar system, interacted with nitrogen or oxygen nuclei in the Earth’s atmosphere. This creates a cascade of particles, including several types of subatomic particles that decay, producing neutrinos. These particles rain down on the Earth’s surface from all directions.

The analysis also included a small number of astrophysical neutrinos, which are produced outside of the Earth’s atmosphere, from cosmic accelerators unidentified to date, perhaps associated with supermassive black holes.

The neutrino-interaction events that were selected for the study have energies of at least one trillion electron volts, or a teraelectronvolt (TeV), roughly the kinetic energy of a flying mosquito. At this energy, the Earth’s absorption of neutrinos is relatively small, and the lowest energy neutrinos in the study largely served as an absorption-free baseline. The analysis was sensitive to absorption in the energy range from 6.3 TeV to 980 TeV, limited at the high-energy end by a shortage of sufficiently energetic neutrinos.
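The mosquito comparison checks out with a couple of rough, assumed numbers (a milligram-scale mosquito cruising at a fraction of a meter per second; these figures are illustrative and not from the article):

```python
# 1 TeV expressed in joules, compared with the kinetic energy of a flying mosquito.
EV_TO_J = 1.602e-19
tev_in_joules = 1e12 * EV_TO_J                  # ~1.6e-7 J

mosquito_mass_kg = 2.5e-6                       # assumed ~2.5 mg
mosquito_speed_m_s = 0.4                        # assumed cruising speed
kinetic_energy = 0.5 * mosquito_mass_kg * mosquito_speed_m_s ** 2   # ~2e-7 J

print(f"1 TeV       = {tev_in_joules:.1e} J")
print(f"mosquito KE = {kinetic_energy:.1e} J")
```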

At these energies, each individual proton or neutron in a nucleus acts independently, so the absorption depends on the number of protons or neutrons that each neutrino encounters. The Earth’s core is particularly dense, so absorption is largest there. By comparison, the most energetic neutrinos that have been studied at human-built particle accelerators were at energies below 0.4 TeV. Researchers have used these accelerators to aim beams containing an enormous number of these lower energy neutrinos at massive detectors, but only a very tiny fraction yield interactions.
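That nucleon-counting picture can be turned into a one-line estimate: a neutrino crossing a column depth X (in grams per square centimeter) sees roughly N_A·X nucleons per square centimeter, so its survival probability is exp(-σ·N_A·X). The sketch below uses a crude two-layer Earth with round-number densities (an illustration only, not the seismically derived Earth model used in the analysis) to show why cross sections at these energies begin to make straight-through paths opaque.

```python
import math

AVOGADRO = 6.022e23  # nucleons per gram of ordinary matter, approximately

def survival_probability(sigma_cm2, column_depth_g_cm2):
    """Probability that a neutrino crosses column depth X (g/cm^2) without interacting."""
    return math.exp(-sigma_cm2 * AVOGADRO * column_depth_g_cm2)

# Crude two-layer Earth for a straight pole-to-pole path:
# path length (cm) times an assumed average density (g/cm^3). Round numbers only.
core_column = 2 * 3480e5 * 11.0    # core radius ~3480 km, density ~11 g/cm^3
mantle_column = 2 * 2890e5 * 4.5   # mantle thickness ~2890 km, density ~4.5 g/cm^3
X = core_column + mantle_column    # ~1e10 g/cm^2

print(f"column depth through the Earth ~ {X:.1e} g/cm^2")
# Survival drops to ~1/e once sigma ~ 1/(N_A * X):
print(f"'opacity' cross section ~ {1 / (AVOGADRO * X):.1e} cm^2")
# Example: a hypothetical cross section of 1e-34 cm^2 gives roughly 50% survival.
print(f"survival at sigma = 1e-34 cm^2: {survival_probability(1e-34, X):.2f}")
```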

IceCube researchers used data collected from May 2010 to May 2011, from a partial array of 79 “strings,” each containing 60 sensors embedded more than a mile deep in the ice.

Gary Binder, a UC Berkeley graduate student affiliated with Berkeley Lab’s Nuclear Science Division, developed the software that was used to fit IceCube’s data to a model describing how neutrinos propagate through the Earth.

From this, the software determined the cross section that best fit the data. University of Wisconsin–Madison student Chris Weaver developed the code for selecting the detection events that Miarecki used.

Simulations to support the analysis have been conducted using supercomputers at the University of Wisconsin–Madison and at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC).

NERSC Cray Cori II supercomputer

LBL NERSC Cray XC30 Edison supercomputer


The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

NERSC PDSF


PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

Physicists now hope to repeat the study using an expanded, multiyear analysis of data from the full 86-string IceCube array, which was completed in December 2010, and to look at higher ranges of neutrino energies for any hints of new physics beyond the Standard Model.

IceCube Gen-2 DeepCore


IceCube Gen-2 DeepCore PINGU

IceCube has already detected multiple ultra-high-energy neutrinos, in the range of petaelectronvolts (PeV), which have a 1,000-times-higher energy than those detected in the TeV range.

Klein said, “Once we can reduce the uncertainties and can look at slightly higher energies, we can look at things like nuclear effects in the Earth, and collective electromagnetic effects.”

Binder added, “We can also study how much energy a neutrino transfers to a nucleus when it interacts, giving us another probe of nuclear structure and physics beyond the Standard Model.”

A longer term goal is to build a larger detector, which would enable scientists to study neutrinos of even higher energies. The proposed IceCube-Gen2 would be 10 times larger than IceCube. Its larger size would enable the detector to collect more data from neutrinos at very high energies.

Some scientists are looking to build an even larger detector, 100 cubic kilometers or more, using a new approach that searches for pulses of radio waves produced when very high energy neutrinos interact in the ice. Measurements of neutrino absorption by a radio-based detector could be used to search for new phenomena that go well beyond the physics accounted for in the Standard Model and could scrutinize the structure of atomic nuclei in greater detail than those of other experiments.

Miarecki said, “This is pretty exciting – I couldn’t have thought of a more interesting project.”

Berkeley Lab’s National Energy Research Scientific Computing Center is a DOE Office of Science User Facility.

The work was supported by the U.S. National Science Foundation-Office of Polar Programs, U.S. National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, Grid Laboratory of Wisconsin (GLOW) grid infrastructure at the University of Wisconsin–Madison, Open Science Grid (OSG) grid infrastructure, National Energy Research Scientific Computing Center, Louisiana Optical Network Initiative (LONI) grid computing resources, U.S. Department of Energy Office of Nuclear Physics, and United States Air Force Academy; Natural Sciences and Engineering Research Council of Canada, WestGrid and Compute/Calcul Canada; Swedish Research Council, Swedish Polar Research Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Helmholtz Alliance for Astroparticle Physics (HAP), Initiative and Networking Fund of the Helmholtz Association, Germany; Fund for Scientific Research (FNRS-FWO), FWO Odysseus programme, Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); Marsden Fund, New Zealand; Australian Research Council; Japan Society for Promotion of Science (JSPS); the Swiss National Science Foundation (SNSF), Switzerland; National Research Foundation of Korea (NRF); Villum Fonden, Danish National Research Foundation (DNRF), Denmark.

The IceCube Neutrino Observatory was built under a National Science Foundation (NSF) Major Research Equipment and Facilities Construction grant, with assistance from partner funding agencies around the world. The NSF Office of Polar Programs and NSF Physics Division support the project with a Maintenance and Operations (M&O) grant. The University of Wisconsin–Madison is the lead institution for the IceCube Collaboration, coordinating data-taking and M&O activities.

See the full article here .

Please help promote STEM in your local schools.

STEM Icon

Stem Education Coalition

A U.S. Department of Energy National Laboratory Operated by the University of California

University of California Seal

DOE Seal

#astronomy, #astrophysics, #basic-research, #cosmology, #lbnl, #nersc, #neutrino-astronomy, #neutrinos, #particle-physics, #u-wisconsin-icecube-and-icecube-gen-2

From NERSC: “Record-breaking 45-qubit Quantum Computing Simulation Run at NERSC on Cori”

NERSC Logo
NERSC

NERSC Cray Cori II supercomputer

LBL NERSC Cray XC30 Edison supercomputer

NERSC Hopper Cray XE6 supercomputer

June 1, 2017
Kathy Kincade
kkincade@lbl.gov
+1 510 495 2124

When two researchers from the Swiss Federal Institute of Technology (ETH Zurich) announced in April that they had successfully simulated a 45-qubit quantum circuit, the science community took notice: it was the largest ever simulation of a quantum computer, and another step closer to simulating “quantum supremacy”—the point at which quantum computers become more powerful than ordinary computers.

A multi-qubit chip developed in the Quantum Nanoelectronics Laboratory at Lawrence Berkeley National Laboratory.

The computations were performed at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory. Researchers Thomas Häner and Damien Steiger, both Ph.D. students at ETH, used 8,192 of 9,688 Intel Xeon Phi processors on NERSC’s newest supercomputer, Cori, to support this simulation, the largest in a series they ran at NERSC for the project.

“Quantum computing” has been the subject of dedicated research for decades, and with good reason: quantum computers have the potential to break common cryptography techniques and simulate quantum systems in a fraction of the time it would take on current “classical” computers. They do this by leveraging the quantum states of particles to store information in qubits (quantum bits), units of quantum information akin to the bits of classical computing. Better yet, qubits have a secret power: they can perform more than one calculation at a time. One qubit can perform two calculations in a quantum superposition, two can perform four, three can perform eight, and so forth, with a corresponding exponential increase in quantum parallelism. Yet harnessing this quantum parallelism is difficult, as observing the quantum state causes the system to collapse to just one answer.
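That exponential scaling is also what makes classical simulation of quantum circuits so demanding: a direct state-vector simulation has to store 2^n complex amplitudes. The quick sketch below (an illustration of the arithmetic, not the ETH researchers’ code) shows why 45 qubits pushes against the limits of Cori’s memory.

```python
# Memory needed to store an n-qubit state vector as double-precision complex numbers.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 36, 42, 45):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**30:,.0f} GiB")

# 45 qubits -> 2**49 bytes = 0.5 PiB, consistent with the 0.5 petabytes quoted
# below; spread across 8,192 nodes, that is 64 GiB of state vector per node.
```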

So how close are we to realizing a true working prototype? It is generally thought that a quantum computer deploying 49 qubits will be able to match the computing power of today’s most powerful supercomputers. Toward this end, Häner and Steiger’s simulations will aid in benchmarking and calibrating near-term quantum computers: quantum supremacy experiments can be carried out on these early devices and the results compared against the simulations. In the meantime, we are seeing a surge in investments in quantum computing technology from the likes of Google, IBM and other leading tech companies—even Volkswagen—which could dramatically accelerate the development process.

Simulation and Emulation of Quantum Computers

Both emulation and simulation are important for calibrating, validating and benchmarking emerging quantum computing hardware and architectures. In a paper presented at SC16, Häner and Steiger wrote: “While large-scale quantum computers are not yet available, their performance can be inferred using quantum compilation frameworks and estimates of potential hardware specifications. However, without testing and debugging quantum programs on small scale problems, their correctness cannot be taken for granted. Simulators and emulators … are essential to address this need.”

That paper discussed emulating quantum circuits—a common representation of quantum programs—while the 45-qubit paper focuses on simulating quantum circuits. Emulation is only possible for certain types of quantum subroutines, while the simulation of quantum circuits is a general method that also allows the inclusion of the effects of noise. Such simulations can be very challenging even on today’s fastest supercomputers, Häner and Steiger explained. For the 45-qubit simulation, for example, they used most of the available memory on each of the 8,192 nodes. “This increases the probability of node failure significantly, and we could not expect to run on the full system for more than an hour without failure,” they said. “We thus had to reduce time-to-solution at all scales (node-level as well as cluster-level) to achieve this simulation.”

Optimizing the quantum circuit simulator was key. Häner and Steiger employed automatic code generation, optimized the compute kernels and applied a scheduling algorithm to the quantum supremacy circuits, thus reducing the required node-to-node communication. During the optimization process they worked with NERSC staff and used Berkeley Lab’s Roofline Model to identify potential areas where performance could be boosted.
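The inner loop those optimizations target is the repeated application of small gate matrices across a huge state vector. The sketch below is a minimal, unoptimized NumPy version of that kernel (an illustration, not the authors’ optimized or generated code); when the target qubit’s paired amplitudes live on different nodes, this same update is what forces the node-to-node communication the scheduling algorithm tries to minimize.

```python
import numpy as np

def apply_single_qubit_gate(psi, gate, target):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit state vector `psi`
    of length 2**n. Qubit 0 corresponds to the most significant axis here."""
    n = int(np.log2(psi.size))
    psi = psi.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))  # contract gate with the target axis
    psi = np.moveaxis(psi, 0, target)                    # restore the original axis order
    return psi.reshape(-1)

# Example: a Hadamard gate on qubit 0 of a 3-qubit |000> state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = np.zeros(2 ** 3, dtype=complex)
psi[0] = 1.0
psi = apply_single_qubit_gate(psi, H, 0)
print(psi)  # equal amplitudes on |000> and |100>
```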

In addition to the 45-qubit simulation, which used 0.5 petabytes of memory on Cori and achieved a performance of 0.428 petaflops, they also simulated 30-, 36- and 42-qubit quantum circuits. When they compared the results with simulations of 30- and 36-qubit circuits run on NERSC’s Edison system, they found that the Edison simulations also ran faster.

“Our optimizations improved the performance – the number of floating-point operations per time – by 10x for Edison and between 10x and 20x for Cori (depending on the circuit to simulate and the size per node),” Häner and Steiger said. “The time-to-solution decreased by over 12x when compared to the times of a similar simulation reported in a recent paper on quantum supremacy by Boixo and collaborators, which made the 45-qubit simulation possible.”

Looking ahead, the duo is interested in performing more quantum circuit simulations at NERSC to determine the performance of near-term quantum computers solving quantum chemistry problems. They are also hoping to use solid-state drives to store larger wave functions and thus try to simulate even more qubits.

See the full article here.

Please help promote STEM in your local schools.

STEM Icon

Stem Education Coalition

The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

The NERSC Hopper system, a Cray XE6 with a peak theoretical performance of 1.29 Petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user friendly.

Grace Hopper

NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.

#nersc, #nersc-cori-ii-supercomputer, #quantum-computing, #record-breaking-45-qubit-quantum-computing-simulation-run-at-nersc-on-cori

From LBNL via Ames Lab: “Towards Super-Efficient, Ultra-Thin Silicon Solar Cells”

AmesLabII
Ames Laboratory

LBNL


NERSC

March 16, 2017
Kathy Kincade
kkincade@lbl.gov
+1 510 495 2124

Ames Researchers Use NERSC Supercomputers to Help Optimize Nanophotonic Light Trapping

Despite a surge in solar cell R&D in recent years involving emerging materials such as organics and perovskites, the solar cell industry continues to favor inorganic crystalline silicon photovoltaics. While thin-film solar cells offer several advantages—including lower manufacturing costs—the long-term stability of crystalline silicon solar cells, which are typically thicker, tips the scale in their favor, according to Rana Biswas, a senior scientist at Ames Laboratory, who has been studying solar cell materials and architectures for two decades.

“Crystalline silicon solar cells today account for more than 90 percent of all installations worldwide,” said Biswas, co-author of a new study that used supercomputers at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), a Department of Energy Office of Science User Facility, to evaluate a novel approach for creating more energy-efficient ultra-thin crystalline silicon solar cells. “The industry is very skeptical that any other material could be as stable as silicon.”


LBL NERSC Cray XC30 Edison supercomputer


NERSC CRAY Cori supercomputer

Thin-film solar cells, typically fabricated from semiconductor materials such as amorphous silicon, are only a micron thick. While this makes them less expensive to manufacture than crystalline silicon solar cells, which are around 180 microns thick, it also makes them less efficient—12 to 14 percent energy conversion, versus nearly 25 percent for silicon solar cells (which translates into 15-21 percent for large area panels, depending on the size). This is because if the wavelength of incoming light is longer than the solar cell is thick, the light won’t be absorbed.

Nanocone Arrays

This challenge prompted Biswas and colleagues at Ames to look for ways to improve ultra-thin silicon cell architectures and efficiencies. In a paper published in Nanomaterials, they describe their efforts to develop a highly absorbing ultra-thin crystalline silicon solar cell architecture with enhanced light trapping capabilities.

“We were able to design a solar cell with a very thin amount of silicon that could still provide high performance, almost as high performance as the thick silicon being used today,” Biswas said.

Proposed crystalline silicon solar cell architecture developed by Ames Laboratory researchers Prathap Pathi, Akshit Peer and Rana Biswas.

The key lies in the wavelength of light that is trapped and the nanocone arrays used to trap it. Their proposed solar architecture comprises thin, flat spacer titanium dioxide layers on the front and rear surfaces of the silicon; nanocone gratings on both sides with optimized pitch and height; and rear cones surrounded by a metallic reflector made of silver. They then set up a scattering matrix code to simulate light passing through the different layers and study how the light is reflected and transmitted at different wavelengths by each layer.

“This is a light-trapping approach that keeps the light, especially the red and long-wavelength infrared light, trapped within the crystalline silicon cell,” Biswas explained. “We did something similar to this with our amorphous silicon cells, but crystalline behaves a little differently.”

For example, it is critical not to affect the crystalline silicon wafer—the interface of the wafer—in any way, he emphasized. “You want the interface to be completely flat to begin with, then work around that when building the solar cell,” he said. “If you try to pattern it in some way, it will introduce a lot of defects at the interface, which are not good for solar cells. So our approach ensures we don’t disturb that in any way.”

Homegrown Code

In addition to the cell’s unique architecture, the simulations the researchers ran on NERSC’s Edison system utilized “homegrown” code developed at Ames to model the light via the cell’s electric and magnetic fields—a “classical physics approach,” Biswas noted. This allowed them to test multiple wavelengths to determine which was optimal for light trapping. To optimize the absorption of light by the crystalline silicon based upon the wavelength, the team sent light waves of different wavelengths into a designed solar cell and then calculated the absorption of light in that solar cell’s architecture. The Ames researchers had previously studied the trapping of light in other thin-film solar cells made of organic and amorphous silicon.

“One very nice thing about NERSC is that once you set up the problem for light, you can actually send each incoming light wavelength to a different processor (in the supercomputer),” Biswas said. “We were typically using 128 or 256 wavelengths and could send each of them to a separate processor.”
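A flat-multilayer transfer-matrix calculation gives a feel for that workflow, even though the Ames scattering-matrix code additionally handles the patterned nanocone gratings. The sketch below (with illustrative, constant refractive indices and layer thicknesses, not the published cell design) computes reflectance and transmittance one wavelength at a time; since each wavelength is independent, the loop is exactly the kind of work that can be farmed out to separate processors.

```python
import numpy as np

def stack_RT(layer_indices, layer_thicknesses, n_in, n_sub, wavelength):
    """Reflectance and transmittance of a flat multilayer at normal incidence,
    using the standard characteristic-matrix method (real indices assumed)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(layer_indices, layer_thicknesses):
        delta = 2 * np.pi * n * d / wavelength
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub], dtype=complex)
    R = abs((n_in * B - C) / (n_in * B + C)) ** 2
    T = 4 * n_in * n_sub / abs(n_in * B + C) ** 2
    return R, T

# Illustrative stack: a 60 nm TiO2-like spacer (n ~ 2.4) on a silicon-like
# substrate (n ~ 3.7), in air. Constant, non-absorbing indices: method demo only.
wavelengths = np.linspace(400, 1100, 128)   # nm; each wavelength is an independent task
for lam in wavelengths[::32]:
    R, T = stack_RT([2.4], [60.0], n_in=1.0, n_sub=3.7, wavelength=lam)
    print(f"{lam:6.1f} nm   R = {R:.3f}   T = {T:.3f}")
```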

Looking ahead, given that this research is focused on crystalline silicon solar cells, this new design could make its way into the commercial sector in the not-too-distant future—although manufacturing scalability could pose some initial challenges, Biswas noted.

“It is possible to do this in a rather inexpensive way using soft lithography or nanoimprint lithography processes,” he said. “It is not that much work, but you need to set up a template or a master to do that. In terms of real-world applications, these panels are quite large, so that is a challenge to do something like this over such a large area. But we are working with some groups that have the ability to do roll to roll processing, which would be something they could get into more easily.”

See the full article here .

Please help promote STEM in your local schools.
STEM Icon
Stem Education Coalition

Ames Laboratory is a government-owned, contractor-operated research facility of the U.S. Department of Energy that is run by Iowa State University.

For more than 60 years, the Ames Laboratory has sought solutions to energy-related problems through the exploration of chemical, engineering, materials, mathematical and physical sciences. Established in the 1940s with the successful development of the most efficient process to produce high-quality uranium metal for atomic energy, the Lab now pursues a broad range of scientific priorities.

Ames Laboratory shares a close working relationship with Iowa State University’s Institute for Physical Research and Technology, or IPRT, a network of scientific research centers at Iowa State University, Ames, Iowa.

DOE Banner

#ames-lab, #lbnl, #nersc, #towards-super-efficient-ultra-thin-silicon-solar-cells

From SLAC: “Researchers Use World’s Smallest Diamonds to Make Wires Three Atoms Wide”


SLAC Lab

December 26, 2016

LEGO-style Building Method Has Potential for Making One-Dimensional Materials with Extraordinary Properties

Fuzzy white clusters of nanowires on a lab bench, with a penny for scale. Assembled with the help of diamondoids, the microscopic nanowires can be seen with the naked eye because the strong mutual attraction between their diamondoid shells makes them clump together, in this case by the millions. At top right, an image made with a scanning electron microscope shows nanowire clusters magnified 10,000 times. (SEM image by Hao Yan/SIMES; photo by SLAC National Accelerator Laboratory)

Scientists at Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory have discovered a way to use diamondoids – the smallest possible bits of diamond – to assemble atoms into the thinnest possible electrical wires, just three atoms wide.

By grabbing various types of atoms and putting them together LEGO-style, the new technique could potentially be used to build tiny wires for a wide range of applications, including fabrics that generate electricity, optoelectronic devices that employ both electricity and light, and superconducting materials that conduct electricity without any loss. The scientists reported their results today in Nature Materials.

“What we have shown here is that we can make tiny, conductive wires of the smallest possible size that essentially assemble themselves,” said Hao Yan, a Stanford postdoctoral researcher and lead author of the paper. “The process is a simple, one-pot synthesis. You dump the ingredients together and you can get results in half an hour. It’s almost as if the diamondoids know where they want to go.”

This animation shows molecular building blocks joining the tip of a growing nanowire. Each block consists of a diamondoid – the smallest possible bit of diamond – attached to sulfur and copper atoms (yellow and brown spheres). Like LEGO blocks, they only fit together in certain ways that are determined by their size and shape. The copper and sulfur atoms form a conductive wire in the middle, and the diamondoids form an insulating outer shell. (SLAC National Accelerator Laboratory)

The Smaller the Better


Illustration of a cluster of nanowires assembled by diamondoids
An illustration shows a hexagonal cluster of seven nanowires assembled by diamondoids. Each wire has an electrically conductive core made of copper and sulfur atoms (brown and yellow spheres) surrounded by an insulating diamondoid shell. The natural attraction between diamondoids drives the assembly process. (H. Yan et al., Nature Materials)

Although there are other ways to get materials to self-assemble, this is the first one shown to make a nanowire with a solid, crystalline core that has good electronic properties, said study co-author Nicholas Melosh, an associate professor at SLAC and Stanford and investigator with SIMES, the Stanford Institute for Materials and Energy Sciences at SLAC.

The needle-like wires have a semiconducting core – a combination of copper and sulfur known as a chalcogenide – surrounded by the attached diamondoids, which form an insulating shell.

Their minuscule size is important, Melosh said, because a material that exists in just one or two dimensions – as atomic-scale dots, wires or sheets – can have very different, extraordinary properties compared to the same material made in bulk. The new method allows researchers to assemble those materials with atom-by-atom precision and control.

The diamondoids they used as assembly tools are tiny, interlocking cages of carbon and hydrogen. Found naturally in petroleum fluids, they are extracted and separated by size and geometry in a SLAC laboratory. Over the past decade, a SIMES research program led by Melosh and SLAC/Stanford Professor Zhi-Xun Shen has found a number of potential uses for the little diamonds, including improving electron microscope images and making tiny electronic gadgets.

Stanford graduate student Fei Hua Li, left, and postdoctoral researcher Hao Yan in one of the SIMES labs where diamondoids – the tiniest bits of diamond – were used to assemble the thinnest possible nanowires. (SLAC National Accelerator Laboratory)

Constructive Attraction

Ball-and-stick models of diamondoid atomic structures in the SIMES lab at SLAC. SIMES researchers used the smallest possible diamondoid – adamantane, a tiny cage made of 10 carbon atoms – to assemble the smallest possible nanowires, with conductive cores just three atoms wide. (SLAC National Accelerator Laboratory)

For this study, the research team took advantage of the fact that diamondoids are strongly attracted to each other, through what are known as van der Waals forces. (This attraction is what makes the microscopic diamondoids clump together into sugar-like crystals, which is the only reason you can see them with the naked eye.)

They started with the smallest possible diamondoids – single cages that contain just 10 carbon atoms – and attached a sulfur atom to each. Floating in a solution, each sulfur atom bonded with a single copper ion. This created the basic nanowire building block.

The building blocks then drifted toward each other, drawn by the van der Waals attraction between the diamondoids, and attached to the growing tip of the nanowire.

“Much like LEGO blocks, they only fit together in certain ways that are determined by their size and shape,” said Stanford graduate student Fei Hua Li, who played a critical role in synthesizing the tiny wires and figuring out how they grew. “The copper and sulfur atoms of each building block wound up in the middle, forming the conductive core of the wire, and the bulkier diamondoids wound up on the outside, forming the insulating shell.”

A Versatile Toolkit for Creating Novel Materials

The team has already used diamondoids to make one-dimensional nanowires based on cadmium, zinc, iron and silver, including some that grew long enough to see without a microscope, and they have experimented with carrying out the reactions in different solvents and with other types of rigid, cage-like molecules, such as carboranes.

The cadmium-based wires are similar to materials used in optoelectronics, such as light-emitting diodes (LEDs), and the zinc-based ones are like those used in solar applications and in piezoelectric energy generators, which convert motion into electricity.

“You can imagine weaving those into fabrics to generate energy,” Melosh said. “This method gives us a versatile toolkit where we can tinker with a number of ingredients and experimental conditions to create new materials with finely tuned electronic properties and interesting physics.”

Theorists led by SIMES Director Thomas Devereaux modeled and predicted the electronic properties of the nanowires, which were examined with X-rays at SLAC’s Stanford Synchrotron Radiation Lightsource, a DOE Office of Science User Facility, to determine their structure and other characteristics.

The team also included researchers from the Stanford Department of Materials Science and Engineering, Lawrence Berkeley National Laboratory, the National Autonomous University of Mexico (UNAM) and Justus-Liebig University in Germany. Parts of the research were carried out at Berkeley Lab’s Advanced Light Source (ALS)

LBNL ALS interior
LBNL ALS

and National Energy Research Scientific Computing Center (NERSC),

NERSC CRAY Cori supercomputer
NERSC

both DOE Office of Science User Facilities. The work was funded by the DOE Office of Science and the German Research Foundation.

See the full article here .

Please help promote STEM in your local schools.

STEM Icon

Stem Education Coalition

SLAC Campus
SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

#applied-research-technology, #berkeley-als, #nanotechnology, #nersc, #researchers-use-worlds-smallest-diamonds-to-make-wires-three-atoms-wide, #slac

From NERSC: “A peek inside the earliest moments of the universe”

NERSC Logo
NERSC

August 1, 2016
Kathy Kincade
kkincade@lbl.gov
+1 510 495 2124

The MuSun experiment at the Paul Scherrer Institute is measuring the rate for muon capture on the deuteron to better than 1.5% precision. This process is the simplest weak interaction on a nucleus that can be measured to a high degree of precision. Credit: Lawrence Berkeley National Laboratory

The Big Bang. That spontaneous explosion some 14 billion years ago that created our universe and, in the process, all matter as we know it today.

In the first few minutes following “the bang,” the universe quickly began expanding and cooling, allowing the formation of subatomic particles that joined forces to become protons and neutrons. These particles then began interacting with one another to create the first simple atoms. A little more time, a little more expansion, a lot more cooling—along with ever-present gravitational pull—and clouds of these elements began to morph into stars and galaxies.

For William Detmold, an assistant professor of physics at MIT who uses lattice quantum chromodynamics (LQCD) to study subatomic particles, one of the most interesting aspects of the formation of the early universe is what happened in those first few minutes—a period known as “big bang nucleosynthesis.”

“You start off with very high-energy particles that cool down as the universe expands, and eventually you are left with a soup of quarks and gluons, which are strongly interacting particles, and they form into protons and neutrons,” he said. “Once you have protons and neutrons, the next stage is for those protons and neutrons to come together and start making more complicated things—primarily deuterons, which interact with other neutrons and protons and start forming heavier elements, such as Helium-4, the alpha particle.”

One of the most critical aspects of big bang nucleosynthesis is the radiative capture process, in which a proton captures a neutron and fuses to produce a deuteron and a photon. In a paper published in Physical Review Letters, Detmold and his co-authors—all members of the NPLQCD Collaboration, which studies the properties, structures and interactions of fundamental particles—describe how they used LQCD calculations to better understand this process and precisely determine the rate of the nuclear reaction in which a neutron and a proton form a deuteron. While physicists have been able to measure these phenomena experimentally in the laboratory, they haven’t been able to do the same, with certainty, using calculations alone—until now.

“One of the things that is very interesting about the strong interaction that takes place in the radiative capture process is that you get very complicated structures forming, not just protons and neutrons,” Detmold said. “The strong interaction has this ability to have these very different structures coming out of it, and if these primordial reactions didn’t happen the way they happened, we wouldn’t have formed enough deuterium to form enough helium that then goes ahead and forms carbon. And if we don’t have carbon, we don’t have life.”

Calculations Mirror Experiments

For the Physical Review Letters paper, the team used the Chroma LQCD code developed at Jefferson Lab to run a series of calculations with quark masses that were 10-20 times the physical value of those masses. Using heavier values rather than the actual physical values reduced the cost of the calculations tremendously, Detmold noted. They then used their understanding of how the calculations should depend on mass to get to the physical value of the quark mass.

“When we do an LQCD calculation, we have to tell the computer what the masses of the quarks we want to work with are, and if we use the values that the quark masses have in nature it is very computationally expensive,” he explained. “For simple things like calculating the mass of the proton, we just put in the physical values of the quark masses and go from there. But this reaction is much more complicated, so we can’t currently do the entire thing using the actual physical values of the quark masses.”
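The mass extrapolation Detmold describes can be pictured with a toy fit: compute an observable at several heavier-than-physical pion (quark) masses, fit a smooth dependence, and evaluate the fit at the physical point. The sketch below is a generic polynomial fit in the squared pion mass; the functional form, the helper name and any numbers plugged in are illustrative placeholders, not the NPLQCD collaboration’s analysis.

```python
import numpy as np

M_PI_PHYS = 139.6  # MeV, physical charged-pion mass

def extrapolate_to_physical(pion_masses_mev, observable_values, degree=1):
    """Fit the observable as a polynomial in m_pi**2 and evaluate the fit at the
    physical pion mass. A toy stand-in for a controlled chiral extrapolation."""
    coeffs = np.polyfit(np.asarray(pion_masses_mev) ** 2,
                        np.asarray(observable_values), degree)
    return np.polyval(coeffs, M_PI_PHYS ** 2)

# Hypothetical usage, with the observable computed at two heavier pion masses
# (the values themselves would come from the lattice calculation):
# result_phys = extrapolate_to_physical([450.0, 800.0], [obs_450, obs_800])
```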

This is the first LQCD calculation of an inelastic nuclear reaction, and Detmold is particularly excited that being able to reproduce this process through calculations means researchers can now calculate similar quantities that haven’t been measured as precisely experimentally—such as the proton-proton fusion process that powers the sun—or measured at all.

“The rate of the radiative capture reaction, which is really what we are calculating here, is very, very close to the experimentally measured one, which shows that we actually understand pretty well how to do this calculation, and we’ve now done it, and it is consistent with what is experimentally known,” Detmold said. “This opens up a whole range of possibilities for other nuclear interactions that we can try and calculate where we don’t know what the answer is because we haven’t, or can’t, measure them experimentally. Until this calculation, I think it is fair to say that most people were wary of thinking you could go from quark and gluon degrees of freedom to doing nuclear reactions. This research demonstrates that yes, we can.”

See the full article here.

Please help promote STEM in your local schools.

STEM Icon

Stem Education Coalition

The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

The NERSC Hopper system, a Cray XE6 with a peak theoretical performance of 1.29 Petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user friendly.

NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.

#basic-research, #big-bang-nucleosynthesis, #nersc, #physics