Tagged: LLNL

  • richardmitnick 1:09 pm on February 8, 2018
    Tags: Hayward fault earthquake simulations increase fidelity of ground motions, LLNL

    From LLNL: “Hayward fault earthquake simulations increase fidelity of ground motions” 


    Lawrence Livermore National Laboratory

    Feb. 8, 2018
    Anne M Stark
    stark8@llnl.gov
    925-422-9799


    What will happen during an earthquake?

    In the next 30 years, there is a one-in-three chance that the Hayward fault will rupture with a magnitude 6.7 or higher earthquake, according to the United States Geological Survey (USGS). Such an earthquake will cause widespread damage to structures, transportation and utilities, as well as economic and social disruption in the East Bay.

    Lawrence Livermore (LLNL) and Lawrence Berkeley (LBNL) national laboratory scientists have used some of the world’s most powerful supercomputers to model ground shaking for a magnitude (M) 7.0 earthquake on the Hayward fault and show more realistic motions than ever before. The research appears in Geophysical Research Letters.

    Past simulations resolved ground motions from low frequencies up to 0.5-1 Hertz (vibrations per second). The new simulations are resolved up to 4-5 Hertz (Hz), representing a four to eight times increase in the resolved frequencies. Motions with these frequencies can be used to evaluate how buildings respond to shaking.

    The simulations rely on the LLNL-developed SW4 seismic simulation program and the current best representation of the three-dimensional (3D) earth (geology and surface topography from the USGS) to compute seismic wave ground shaking throughout the San Francisco Bay Area. The results are, on average, consistent with models based on actual recorded earthquake motions from around the world.

    “This study shows that powerful supercomputing can be used to calculate earthquake shaking on a large, regional scale with more realism than we’ve ever been able to produce before,” said Artie Rodgers, LLNL seismologist and lead author of the paper.

    The Hayward fault is a major strike-slip fault on the eastern side of the Bay Area. This fault is capable of M 7 earthquakes and presents significant ground motion hazard to the heavily populated East Bay, including the cities of Oakland, Berkeley, Hayward and Fremont. The last major rupture occurred in 1868 with an M 6.8-7.0 event. Instrumental observations of this earthquake were not available at the time. However, historical reports from the few thousand people who lived in the East Bay at the time indicate major damage to structures.

    The recent study reports ground motions simulated for a so-called scenario earthquake, one of many possibilities.

    “We’re not expecting to forecast the specifics of shaking from a future M 7 Hayward fault earthquake, but this study demonstrates that fully deterministic 3D simulations with frequencies up to 4 Hz are now possible. We get good agreement with ground motion models derived from actual recordings and we can investigate the impact of source, path and site effects on ground motions,” Rodgers said.

    As these simulations become easier with improvements in SW4 and computing power, the team will sample a range of possible ruptures and investigate how motions vary. The team also is working on improvements to SW4 that will enable simulations to 8-10 Hz for even more realistic motions.

    For residents of the East Bay, the simulations specifically show stronger ground motions on the eastern side of the fault (Orinda, Moraga) compared to the western side (Berkeley, Oakland). This results from differences in geologic materials: the deep, weaker sedimentary rocks that form the East Bay Hills amplify shaking on that side of the fault. Evaluation and improvement of the 3D earth model is the subject of current research, for example using the Jan. 4, 2018 M 4.4 Berkeley earthquake that was widely felt around the northern Hayward fault.

    Ground motion simulations of large earthquakes are gaining acceptance as computational methods improve, computing resources become more powerful and representations of 3D earth structure and earthquake sources become more realistic.

    Rodgers adds: “It’s essential to demonstrate that high-performance computing simulations can generate realistic results, and our team will work with engineers to evaluate the computed motions so they can be used to understand the resulting distribution of risk to infrastructure and ultimately to design safer energy systems, buildings and other infrastructure.”

    Other Livermore authors include seismologist Arben Pitarka, mathematicians Anders Petersson and Bjorn Sjogreen, along with project leader and structural engineer David McCallen of the University of California Office of the President and LBNL.

    This work is part of the DOE’s Exascale Computing Project (ECP). The ECP is focused on accelerating the delivery of a capable exascale computing ecosystem that delivers 50 times more computational science and data analytic application power than is possible with DOE HPC systems such as Titan (ORNL) and Sequoia (LLNL), with the goal of launching a U.S. exascale ecosystem by 2021.

    ORNL Cray XK7 Titan Supercomputer

    LLNL Sequoia IBM Blue Gene Q petascale supercomputer

    The ECP is a collaborative effort of two Department of Energy organizations — the DOE Office of Science and the National Nuclear Security Administration.

    Simulations were performed using a Computing Grand Challenge allocation on the Quartz supercomputer at LLNL and with an Exascale Computing Project allocation on Cori Phase-2 at the National Energy Research Scientific Computing Center (NERSC) at LBNL.

    See the full article here.

    ShakeAlert: An Earthquake Early Warning System for the West Coast of the United States


    The U. S. Geological Survey (USGS), along with a coalition of state and university partners, is developing and testing an earthquake early warning (EEW) system called ShakeAlert for the west coast of the United States. Long-term funding must be secured before the system can begin sending general public notifications; however, some limited pilot projects are active and more are being developed. The USGS has set the goal of beginning limited public notifications in 2018.

    Watch a video describing how ShakeAlert works in English or Spanish.

    The primary project partners include:

    United States Geological Survey
    California Governor’s Office of Emergency Services (CalOES)
    California Geological Survey
    California Institute of Technology
    University of California Berkeley
    University of Washington
    University of Oregon
    Gordon and Betty Moore Foundation

    The Earthquake Threat

    Earthquakes pose a national challenge because more than 143 million Americans live in areas of significant seismic risk across 39 states. Most of our Nation’s earthquake risk is concentrated on the West Coast of the United States. The Federal Emergency Management Agency (FEMA) has estimated the average annualized loss from earthquakes, nationwide, to be $5.3 billion, with 77 percent of that figure ($4.1 billion) coming from California, Washington, and Oregon, and 66 percent ($3.5 billion) from California alone. In the next 30 years, California has a 99.7 percent chance of a magnitude 6.7 or larger earthquake and the Pacific Northwest has a 10 percent chance of a magnitude 8 to 9 megathrust earthquake on the Cascadia subduction zone.

    Part of the Solution

    Today, the technology exists to detect earthquakes so quickly that an alert can reach some areas before strong shaking arrives. The purpose of the ShakeAlert system is to identify and characterize an earthquake a few seconds after it begins, calculate the likely intensity of ground shaking that will result, and deliver warnings to people and infrastructure in harm’s way. This can be done by detecting the first energy to radiate from an earthquake, the P-wave energy, which rarely causes damage. Using P-wave information, we first estimate the location and the magnitude of the earthquake. Then, the anticipated ground shaking across the region to be affected is estimated and a warning is provided to local populations. The method can provide warning before the S-wave arrives, bringing the strong shaking that usually causes most of the damage.
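
    As a rough illustration of the arithmetic behind these warnings, the sketch below estimates how much earlier the P-wave arrives than the S-wave at a given distance from the epicenter. The wave speeds (roughly 6.1 km/s for P and 3.5 km/s for S in the shallow crust) and the fixed processing delay are illustrative textbook assumptions, not parameters of the ShakeAlert system.

    ```python
    # Rough early-warning estimate: the gap between P-wave arrival (detection)
    # and S-wave arrival (strong shaking). Wave speeds and the processing
    # delay are assumed, typical values -- not ShakeAlert parameters.
    P_WAVE_SPEED_KM_S = 6.1
    S_WAVE_SPEED_KM_S = 3.5

    def warning_time_seconds(epicentral_distance_km, processing_delay_s=5.0):
        """Seconds of warning before the S-wave arrives, after subtracting an
        assumed delay for detection, characterization and alert delivery."""
        t_p = epicentral_distance_km / P_WAVE_SPEED_KM_S
        t_s = epicentral_distance_km / S_WAVE_SPEED_KM_S
        return max(0.0, (t_s - t_p) - processing_delay_s)

    for d in (20, 50, 100, 200):
        print(f"{d:4d} km: ~{warning_time_seconds(d):.0f} s of warning")
    ```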

    Studies of earthquake early warning methods in California have shown that the warning time would range from a few seconds to a few tens of seconds. ShakeAlert can give enough time to slow trains and taxiing planes, to prevent cars from entering bridges and tunnels, to move away from dangerous machines or chemicals in work environments and to take cover under a desk, or to automatically shut down and isolate industrial systems. Taking such actions before shaking starts can reduce damage and casualties during an earthquake. It can also prevent cascading failures in the aftermath of an event. For example, isolating utilities before shaking starts can reduce the number of fire initiations.

    System Goal

    The USGS will issue public warnings of potentially damaging earthquakes and provide warning parameter data to government agencies and private users on a region-by-region basis, as soon as the ShakeAlert system, its products, and its parametric data meet minimum quality and reliability standards in those geographic regions. The USGS has set the goal of beginning limited public notifications in 2018. Product availability will expand geographically via ANSS regional seismic networks, such that ShakeAlert products and warnings become available for all regions with dense seismic instrumentation.

    Current Status

    The West Coast ShakeAlert system is being developed by expanding and upgrading the infrastructure of regional seismic networks that are part of the Advanced National Seismic System (ANSS): the California Integrated Seismic Network (CISN), made up of the Southern California Seismic Network (SCSN) and the Northern California Seismic System (NCSS), and the Pacific Northwest Seismic Network (PNSN). This enables the USGS and ANSS to leverage their substantial investment in sensor networks, data telemetry systems, data processing centers, and software for earthquake monitoring activities residing in these network centers. The ShakeAlert system has been sending live alerts to “beta” users in California since January of 2012 and in the Pacific Northwest since February of 2015.

    In February of 2016 the USGS, along with its partners, rolled out the next-generation ShakeAlert early warning test system in California, joined by Oregon and Washington in April 2017. This West Coast-wide “production prototype” has been designed for redundant, reliable operations. The system includes geographically distributed servers, and allows for automatic fail-over if connection is lost.

    This next-generation system will not yet support public warnings but does allow selected early adopters to develop and deploy pilot implementations that take protective actions triggered by the ShakeAlert notifications in areas with sufficient sensor coverage.

    Authorities

    The USGS will develop and operate the ShakeAlert system, and issue public notifications under collaborative authorities with FEMA, as part of the National Earthquake Hazard Reduction Program, as enacted by the Earthquake Hazards Reduction Act of 1977, 42 U.S.C. §§ 7704 SEC. 2.

    For More Information

    Robert de Groot, ShakeAlert National Coordinator for Communication, Education, and Outreach
    rdegroot@usgs.gov
    626-583-7225

    Learn more about EEW Research

    ShakeAlert Fact Sheet

    ShakeAlert Implementation Plan

    YOU CAN HELP CATCH EARTHQUAKES AS THEY HAPPEN RIGHT NOW


    Quake-Catcher Network

    The Quake-Catcher Network is a collaborative initiative for developing the world’s largest, low-cost strong-motion seismic network by utilizing sensors in and attached to internet-connected computers. With your help, the Quake-Catcher Network can provide a better understanding of earthquakes and give early warning to schools, emergency response systems, and others. The Quake-Catcher Network also provides educational software designed to help teach about earthquakes and earthquake hazards.

    After almost eight years at Stanford, and a year at Caltech, the QCN project is moving to the University of Southern California Dept. of Earth Sciences. QCN will be sponsored by the Incorporated Research Institutions for Seismology (IRIS) and the Southern California Earthquake Center (SCEC).

    The Quake-Catcher Network is a distributed computing network that links volunteer-hosted computers into a real-time motion sensing network. QCN is one of many scientific computing projects that run on the world-renowned distributed computing platform Berkeley Open Infrastructure for Network Computing (BOINC).


    The volunteer computers monitor vibrational sensors called MEMS accelerometers and digitally transmit “triggers” to QCN’s servers whenever strong new motions are observed. QCN’s servers sift through these signals and determine which ones represent earthquakes and which represent cultural noise (like doors slamming or trucks driving by).
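
    The “triggers” described above can be made concrete with a classic short-term-average/long-term-average (STA/LTA) detector, a standard seismological technique sketched below. This is an illustrative assumption about how such triggering might work, not QCN’s actual detection code.

    ```python
    import numpy as np

    def sta_lta_triggers(accel, sample_rate_hz, sta_win_s=1.0, lta_win_s=30.0,
                         threshold=4.0):
        """Return sample indices where the short-term average of squared
        acceleration exceeds `threshold` times the long-term average.
        A generic STA/LTA detector, not QCN's production algorithm."""
        energy = np.asarray(accel, dtype=float) ** 2
        sta_n = max(1, int(sta_win_s * sample_rate_hz))
        lta_n = max(1, int(lta_win_s * sample_rate_hz))
        sta = np.convolve(energy, np.ones(sta_n) / sta_n, mode="same")
        lta = np.convolve(energy, np.ones(lta_n) / lta_n, mode="same") + 1e-12
        return np.flatnonzero(sta / lta > threshold)
    ```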

    There are two categories of sensors used by QCN: 1) internal mobile device sensors, and 2) external USB sensors.

    Mobile Devices: MEMS sensors are often included in laptops, games, cell phones, and other electronic devices for hardware protection, navigation, and game control. When these devices are still and connected to QCN, QCN software monitors the internal accelerometer for strong new shaking. Unfortunately, these devices are rarely secured to the floor, so they may bounce around when a large earthquake occurs. While this is less than ideal for characterizing the regional ground shaking, many such sensors can still provide useful information about earthquake locations and magnitudes.

    USB Sensors: MEMS sensors can be mounted to the floor and connected to a desktop computer via a USB cable. These sensors have several advantages over mobile device sensors. 1) Because they are mounted to the floor, they measure shaking more reliably than mobile devices. 2) These sensors typically have lower noise and better resolution of 3D motion. 3) Desktops are often left on and do not move. 4) The USB sensor is physically removed from the game, phone, or laptop, so human interaction with the device doesn’t reduce the sensors’ performance. 5) USB sensors can be aligned to North, so we know what direction the horizontal “X” and “Y” axes correspond to.

    If you are a science teacher at a K-12 school, please apply for a free USB sensor and accompanying QCN software. QCN has been able to purchase sensors to donate to schools in need. If you are interested in donating to the program or requesting a sensor, click here.

    BOINC is a leader in the fields of distributed computing, grid computing and citizen cyberscience. BOINC is more properly the Berkeley Open Infrastructure for Network Computing, developed at UC Berkeley.

    Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. The Quake-Catcher Network (QCN) links existing networked laptops and desktops in hopes of forming the world’s largest strong-motion seismic network.

    QCN Quake-Catcher Network map

    Please help promote STEM in your local schools.


    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 7:06 am on February 7, 2018
    Tags: First experimental evidence for superionic ice, LLNL, U Rochester’s Laboratory for Laser Energetics, Uranus and Neptune might contain vast amount of superionic water ice

    From LLNL: “First experimental evidence for superionic ice” 


    Lawrence Livermore National Laboratory

    Feb. 5, 2018
    Breanna Bishop
    bishop33@llnl.gov
    925-423-9802

    Time-integrated image of a laser-driven shock compression experiment to recreate planetary interior conditions and study the properties of superionic water. Image by M. Millot/E. Kowaluk/J.Wickboldt/LLNL/LLE/NIF

    U Rochester’s Laboratory for Laser Energetics

    LLNL/NIF

    Among the many discoveries on matter at high pressure that garnered Percy Bridgman the Nobel Prize in 1946 was his identification of five different crystalline forms of water ice, ushering in more than 100 years of research into how ice behaves under extreme conditions.

    One of the most intriguing properties of water is that it may become superionic when heated to several thousand degrees at high pressure, similar to the conditions inside giant planets like Uranus and Neptune. This exotic state of water is characterized by liquid-like hydrogen ions moving within a solid lattice of oxygen.

    Since this was first predicted in 1988, many research groups in the field have confirmed and refined the prediction with numerical simulations, while others have used static compression techniques to explore the phase diagram of water at high pressure. While indirect signatures were observed, no research group has been able to identify experimental evidence for superionic water ice — until now.

    In a paper published today in Nature Physics, a research team from Lawrence Livermore National Laboratory (LLNL), the University of California, Berkeley and the University of Rochester provides experimental evidence for superionic conduction in water ice at planetary interior conditions, verifying the 30-year-old prediction.

    Using shock compression, the team identified thermodynamic signatures showing that ice melts near 5000 Kelvin (K) at 200 gigapascals (GPa — 2 million times Earth’s atmosphere) — 4000 K higher than the melting point at 0.5 megabar (Mbar) and almost the surface temperature of the sun.
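
    For readers who want to check the pressure comparison, 200 GPa is indeed roughly two million standard atmospheres:

    ```python
    # 1 standard atmosphere = 101,325 pascals
    pressure_pa = 200e9                 # 200 GPa shock pressure
    print(pressure_pa / 101_325)        # ~1.97 million atmospheres
    ```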

    “Our experiments have verified the two main predictions for superionic ice: very high protonic/ionic conductivity within the solid and high melting point,” said lead author Marius Millot, a physicist at LLNL. “Our work provides experimental evidence for superionic ice and shows that these predictions were not due to artifacts in the simulations, but actually captured the extraordinary behavior of water at those conditions. This provides an important validation of state-of-the-art quantum simulations using density-functional-theory-based molecular dynamics (DFT-MD).”

    “Driven by the increase in computing resources available, I feel we have reached a turning point,” added Sebastien Hamel, LLNL physicist and co-author of the paper. “We are now at a stage where a large enough number of these simulations can be run to map out large parts of the phase diagram of materials under extreme conditions in sufficient detail to effectively support experimental efforts.”

    Visualization of molecular dynamics simulations showing the fast diffusion of hydrogen ions (pink trajectories) within the solid lattice of oxygen in superionic ice. Image by S. Hamel/M. Millot/J.Wickboldt/LLNL/NIF

    Using diamond anvil cells (DAC), the team applied 2.5 GPa of pressure (25 thousand atmospheres) to pre-compress water into the room-temperature ice VII, a cubic crystalline form that is different from “ice-cube” hexagonal ice, in addition to being 60 percent denser than water at ambient pressure and temperature. They then shifted to the University of Rochester’s Laboratory for Laser Energetics (LLE) to perform laser-driven shock compression of the pre-compressed cells. They focused up to six intense beams of LLE’s Omega-60 laser, delivering a 1 nanosecond pulse of UV light onto one of the diamonds. This launched strong shock waves of several hundred GPa into the sample, to compress and heat the water ice at the same time.

    “Because we pre-compressed the water, there is less shock-heating than if we shock-compressed ambient liquid water, allowing us to access much colder states at high pressure than in previous shock compression studies, so that we could reach the predicted stability domain of superionic ice,” Millot said.

    The team used interferometric ultrafast velocimetry and pyrometry to characterize the optical properties of the shocked compressed water and determine its thermodynamic properties during the brief 10-20 nanosecond duration of the experiment, before pressure release waves decompressed the sample and vaporized the diamonds and the water.

    “These are very challenging experiments, so it was really exciting to see that we could learn so much from the data — especially since we spent about two years making the measurements and two more years developing the methods to analyze the data,” Millot said.

    This work also has important implications for planetary science because Uranus and Neptune might contain vast amounts of superionic water ice. Planetary scientists believe these giant planets are made primarily of a carbon, hydrogen, oxygen and nitrogen (C-H-O-N) mixture that corresponds to 65 percent water by mass, mixed with ammonia and methane.

    Many scientists envision these planets with fully fluid convecting interiors. Now, the experimental discovery of superionic ice should give more strength to a new picture for these objects with a relatively thin layer of fluid and a large “mantle” of superionic ice. In fact, such a structure was proposed a decade ago — based on dynamo simulation — to explain the unusual magnetic fields of these planets. This is particularly relevant as NASA is considering launching a probe to Uranus and/or Neptune, in the footsteps of the successful Cassini and Juno missions to Saturn and Jupiter.

    “Magnetic fields provide crucial information about the interiors and evolution of planets, so it is gratifying that our experiments can test — and in fact, support — the thin-dynamo idea that had been proposed for explaining the truly strange magnetic fields of Uranus and Neptune,” said Raymond Jeanloz, co-author on the paper and professor in Earth & Planetary Physics and Astronomy at the University of California, Berkeley. “It’s also mind-boggling that frozen water ice is present at thousands of degrees inside these planets, but that’s what the experiments show.”

    “The next step will be to determine the structure of the oxygen lattice,” said Federica Coppari, LLNL physicist and co-author of the paper. “X-ray diffraction is now routinely performed in laser-shock experiments at Omega, and it will allow us to determine experimentally the crystalline structure of superionic water. This would be very exciting because theoretical simulations struggle to predict the actual structure of superionic water ice.”

    Looking ahead, the team plans to push to higher pre-compression and extend the technique to other materials, such as helium, that would be more representative of planets like Saturn and Jupiter.

    Co-authors include Hamel, Peter Celliers, Coppari, Dayne Fratanduono, Damian Swift and Jon Eggert from LLNL; Jeanloz from UC Berkeley; and Ryan Rygg and Gilbert Collins, previously at LLNL and now at the University of Rochester. The experiments also were supported by target fabrication efforts by LLNL’s Stephanie Uhlich, Antonio Correa Barrios, Carol Davis, Jim Emig, Eric Folsom, Renee Posadas Soriano, Walter Unites and Timothy Uphaus.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 4:27 pm on January 3, 2018
    Tags: LLNL, TPL-two-photon lithography

    From LLNL: “Lab unlocks secrets of nanoscale 3D printing” 


    Lawrence Livermore National Laboratory

    Jan. 3, 2018
    Jeremy Thomas
    thomas244@llnl.gov
    925-422-5539

    Through the two-photon lithography (TPL) 3D printing process, researchers can print woodpile lattices with submicron features a fraction of the width of a human hair. Image by Jacob Long and Adam Connell/LLNL.

    Lawrence Livermore National Laboratory (LLNL) researchers have discovered novel ways to extend the capabilities of two-photon lithography (TPL), a high-resolution 3D printing technique capable of producing nanoscale features smaller than one-hundredth the width of a human hair.

    The findings, recently published on the cover of the journal ACS Applied Materials & Interfaces, also unleash the potential for X-ray computed tomography (CT) to analyze stress or defects noninvasively in embedded 3D-printed medical devices or implants.

    Two-photon lithography typically requires a thin glass slide, a lens and an immersion oil to help the laser light focus to a fine point where curing and printing occur. It differs from other 3D-printing methods in resolution because it can produce features smaller than the laser light spot, a scale no other printing process can match. The technique bypasses the usual diffraction limit of other methods because the photoresist material that cures and hardens to create structures — previously a trade secret — simultaneously absorbs two photons instead of one.

    LLNL researchers printed octet truss structures with submicron features on top of a solid base with a diameter similar to human hair. Photo by James Oakdale/LLNL.

    In the paper, LLNL researchers describe cracking the code on resist materials optimized for two-photon lithography and forming 3D microstructures with features less than 150 nanometers. Previous techniques built structures from the ground up, limiting the height of objects because the distance between the glass slide and lens is usually 200 microns or less. By turning the process on its head — putting the resist material directly on the lens and focusing the laser through the resist — researchers can now print objects multiple millimeters in height. Furthermore, researchers were able to tune and increase the amount of X-rays the photopolymer resists could absorb, improving attenuation by more than 10 times over the photoresists commonly used for the technique.

    “In this paper, we have unlocked the secrets to making custom materials on two-photon lithography systems without losing resolution,” said LLNL researcher James Oakdale, a co-author on the paper.

    Because the laser light refracts as it passes through the photoresist material, the linchpin to solving the puzzle, the researchers said, was “index matching” – discovering how to match the refractive index of the resist material to the immersion medium of the lens so the laser could pass through unimpeded. Index matching opens the possibility of printing larger parts, they said, with features as small as 100 nanometers.
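
    A minimal sketch of why index matching matters: in the paraxial approximation, light focused from an immersion medium of index n1 into a resist of index n2 comes to a focus at a depth scaled by roughly n2/n1, and the mismatch also introduces spherical aberration that blurs the focal spot. The code and numbers below are illustrative assumptions, not values from the paper.

    ```python
    def actual_focal_depth_um(nominal_depth_um, n_immersion, n_resist):
        """Paraxial estimate of where the focus lands when light passes from
        the immersion medium into the resist. Matched indices leave the
        focus at the nominal depth; a mismatch shifts (and aberrates) it."""
        return nominal_depth_um * (n_resist / n_immersion)

    # Illustrative numbers only (typical oil ~1.52; the resist index is assumed):
    print(actual_focal_depth_um(100.0, n_immersion=1.52, n_resist=1.52))  # 100.0
    print(actual_focal_depth_um(100.0, n_immersion=1.52, n_resist=1.56))  # ~102.6
    ```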

    “Most researchers who want to use two-photon lithography for printing functional 3D structures want parts taller than 100 microns,” said Sourabh Saha, the paper’s lead author. “With these index-matched resists, you can print structures as tall as you want. The only limitation is the speed. It’s a tradeoff, but now that we know how to do this, we can diagnose and improve the process.”

    Through the two-photon lithography (TPL) 3D printing process, researchers can print woodpile lattices with submicron features a fraction of the width of a human hair. Photo by James Oakdale/LLNL.

    By tuning the material’s X-ray absorption, researchers can now use X-ray-computed tomography as a diagnostic tool to image the inside of parts without cutting them open or to investigate 3D-printed objects embedded inside the body, such as stents, joint replacements or bone scaffolds. These techniques also could be used to produce and probe the internal structure of targets for the National Ignition Facility, as well as optical and mechanical metamaterials and 3D-printed electrochemical batteries.

    The only limiting factor is the time it takes to build, so researchers will next look to parallelize and speed up the process. They intend to move into even smaller features and add more functionality in the future, using the technique to build real, mission-critical parts.

    “It’s a very small piece of the puzzle that we solved, but we are much more confident in our abilities to start playing in this field now,” Saha said. “We’re on a path where we know we have a potential solution for different types of applications. Our push for smaller and smaller features in larger and larger structures is bringing us closer to the forefront of scientific research that the rest of the world is doing. And on the application side, we’re developing new practical ways of printing things.”

    The work was funded through the Laboratory Directed Research and Development (LDRD) program. Other LLNL researchers who contributed to the project include Jefferson Cuadra, Chuck Divin, Jianchao Ye, Jean-Baptiste Forien, Leonardus Bayu Aji, Juergen Biener and Will Smith.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 3:46 pm on December 13, 2017
    Tags: ELI Beamlines Research Center in Dolní Břežany Czech Republic, L3-HAPLS advanced petawatt laser system, LLNL

    From LLNL: “Lawrence Livermore-developed petawatt laser system installed at ELI Beamlines” 


    Lawrence Livermore National Laboratory

    Dec. 13, 2017
    Breanna Bishop
    bishop33@llnl.gov
    925-423-9802

    L3-HAPLS — the world’s most advanced and highest average power, diode-pumped petawatt laser system — was designed, developed and constructed in only three years by Lawrence Livermore National Laboratory’s NIF and Photon Science Directorate and delivered to ELI Beamlines in June 2017. Photo by Jason Laurea/LLNL

    ELI Beamlines facility, Prague

    The L3-HAPLS advanced petawatt laser system was installed last week at the ELI Beamlines Research Center in Dolní Břežany, Czech Republic. L3-HAPLS — the world’s most advanced and highest average power, diode-pumped petawatt laser system — was designed, developed and constructed in only three years by Lawrence Livermore National Laboratory’s (LLNL) NIF and Photon Science (NIF&PS) Directorate and delivered to ELI Beamlines in June 2017.

    Since the end of September, an integrated team of scientific and technical staff from LLNL and ELI Beamlines has worked intensively on the installation of the laser hardware. All laser support systems, such as vacuum and cooling, were connected to the building, signaling readiness to turn the laser back on in its new home.

    L3-HAPLS consists of a main petawatt beamline, energized by diode-pumped “pump” lasers. The system was constructed, assembled and ramped to an intermediary performance milestone that demonstrated its capability and marked the milestone for shipping and integrating with the facility. Staff from ELI was extensively trained on the laser’s assembly and operation while at Livermore to ensure success in transferring the HAPLS technology to the ELI user facility.

    In early 2018, the L3-HAPLS system’s high energy pump laser will be gradually brought up to previous performance, followed by ramping the petawatt beamline first in energy and then in average power.

    The L3-HAPLS laser system, installed at the ELI Beamlines Research Center in Dolní Břežany, Czech Republic.

    “Over the course of more than four decades, LLNL has built an international reputation for developing some of the world’s most powerful and complex lasers,” LLNL Director Bill Goldstein said. “The successful delivery and installation of L3-HAPLS represents a new generation of high-energy, high-peak-power laser power systems. This collaboration, and others like it, provide LLNL with the opportunity to carry on its tradition of redefining the boundaries of science and technology.”

    LLNL’s decades of cutting-edge laser research and development led to the key advancements that distinguish L3-HAPLS from other petawatt lasers: the ability to reach petawatt power levels while maintaining an unprecedented pulse rate; development of the world’s highest peak power diode arrays, driven by a Livermore-developed pulsed power system; a pump laser generating up to 200 joules at a 10 Hz repetition rate; a gas-cooled short-pulse titanium-doped sapphire amplifier; a sophisticated control system with a high level of automation including auto-alignment capability, fast laser startup, performance tracking and machine safety; a dual chirped-pulse-amplification high-contrast short-pulse front end; and a gigashot laser pump source for pumping the short-pulse preamplifiers.

    Despite its complexity, L3-HAPLS was designed for a user facility. The focus is on the application or experiment, and the laser must run reliably, robustly and with minimal user intervention at record performance. This ability already has been demonstrated during the test runs at LLNL.

    “L3-HAPLS is a quantum jump in technology. Not only did it allow LLNL to test and advance new laser concepts important for our mission as a national lab, it also is the first high peak power laser that can deliver petawatt pulses at average power – more than 1 megajoule/hour – entering the industrial application space,” said Constantin Haefner, LLNL’s program director for Advanced Photon Technologies (APT) in NIF&PS. “Innovations driven by L3-HAPLS, such as the semiconductor laser diode pumps or mid-scale Green DPSSL technology, already have reached the market — an important reminder that investment in laser technology spurs advancement in areas well beyond science.”
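
    The “more than 1 megajoule/hour” figure follows directly from the repetition rate and the pulse energy. Assuming petawatt pulses of roughly 30 joules at the quoted 10 Hz (the pulse energy is an assumption for illustration; the article quotes only the rate and the aggregate figure):

    ```python
    pulse_energy_j = 30.0                      # assumed pulse energy (illustrative)
    rep_rate_hz = 10.0                         # repetition rate quoted above
    avg_power_w = pulse_energy_j * rep_rate_hz         # 300 W average power
    energy_per_hour_mj = avg_power_w * 3600 / 1e6      # ~1.08 MJ per hour
    print(avg_power_w, energy_per_hour_mj)
    ```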

    When commissioning at ELI Beamlines is complete next year, L3-HAPLS will have a wide range of uses, supporting both basic and applied research. By focusing petawatt peak power pulses at high intensity on a target, L3-HAPLS will generate secondary sources such as electromagnetic radiation or accelerate charged particles, enabling unparalleled access to a variety of research areas, including time-resolved proton and X-ray radiography, laboratory astrophysics and other basic science and medical applications for cancer treatments, in addition to industrial applications such as nondestructive evaluation of materials and laser fusion.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 10:56 am on December 10, 2017
    Tags: LLNL, Volumetric 3D printing builds on need for speed

    From LLNL: “Volumetric 3D printing builds on need for speed” 


    Lawrence Livermore National Laboratory

    Dec. 8, 2017
    Jeremy Thomas
    thomas244@llnl.gov
    925-422-5539

    By using laser-generated, hologram-like 3D images flashed into photosensitive resin, researchers at Lawrence Livermore National Laboratory, along with academic collaborators, have discovered they can build complex 3D parts in a fraction of the time of traditional layer-by-layer printing. With this process, researchers have printed beams, planes, struts at arbitrary angles, lattices and complex and uniquely curved objects in a matter of seconds.

    While additive manufacturing (AM), commonly known as 3D printing, is enabling engineers and scientists to build parts in configurations and designs never before possible, the impact of the technology has been limited by layer-based printing methods, which can take hours or days to build three-dimensional parts, depending on their complexity.

    However, by using laser-generated, hologram-like 3D images flashed into photosensitive resin, researchers at Lawrence Livermore National Laboratory (LLNL), along with collaborators at UC Berkeley, the University of Rochester and the Massachusetts Institute of Technology (MIT), have discovered they can build complex 3D parts in a fraction of the time of traditional layer-by-layer printing. The novel approach is called “volumetric” 3D printing, and is described in the journal Science Advances.

    “The fact that you can do fully 3D parts all in one step really does overcome an important problem in additive manufacturing,” said LLNL researcher Maxim Shusteff, the paper’s lead author. “We’re trying to print a 3D shape all at the same time. The real aim of this paper was to ask, ‘Can we make arbitrary 3D shapes all at once, instead of putting the parts together gradually layer by layer?’ It turns out we can.”

    The way it works, Shusteff explained, is by overlapping three laser beams that define an object’s geometry from three different directions, creating a 3D image suspended in the vat of resin. The laser light, which is at a higher intensity where the beams intersect, is kept on for about 10 seconds, enough time to cure the part. The excess resin is drained out of the vat, and, seemingly like magic, researchers are left with a fully formed 3D part.
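
    A toy numerical illustration of that superposition idea (not the authors’ code): extrude three binary patterns along the x, y and z axes of a voxel grid, sum the resulting dose fields, and keep only the voxels where the combined dose from all three beams crosses a curing threshold.

    ```python
    import numpy as np

    def volumetric_exposure(mask_yz, mask_xz, mask_xy, threshold=2.5):
        """Toy model of three-beam volumetric curing on an n^3 voxel grid.
        Each n-by-n mask is extruded along one axis; voxels "cure" where the
        summed dose exceeds the threshold (i.e., where all beams overlap)."""
        n = mask_yz.shape[0]
        dose = np.zeros((n, n, n))
        dose += mask_yz[np.newaxis, :, :]   # beam propagating along x
        dose += mask_xz[:, np.newaxis, :]   # beam propagating along y
        dose += mask_xy[:, :, np.newaxis]   # beam propagating along z
        return dose >= threshold

    # Three identical disks cure a roughly spherical core where they intersect.
    n = 64
    yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    disk = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2).astype(float)
    print(volumetric_exposure(disk, disk, disk).sum(), "voxels cured")
    ```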

    The approach, the scientists concluded, results in parts built many times faster than other polymer-based methods, and most, if not all, commercial AM methods used today. Due to its low cost, flexibility, speed and geometric versatility, the researchers expect the framework to open a major new direction of research in rapid 3D printing.

    Volumetric 3D printing creates parts by overlapping three laser beams that define an object’s geometry from three different directions, creating a hologram-like 3D image suspended in the vat of resin. The laser light, which is at a higher intensity where the beams intersect, is kept on for about 10 seconds, enough time to cure the object. No image credit.

    “It’s a demonstration of what the next generation of additive manufacturing may be,” said LLNL engineer Chris Spadaccini, who heads Livermore Lab’s 3D printing effort. “Most 3D printing and additive manufacturing technologies consist of either a one-dimensional or two-dimensional unit operation. This moves fabrication to a fully 3D operation, which has not been done before. The potential impact on throughput could be enormous and if you can do it well, you can still have a lot of complexity.”

    With this process, Shusteff and his team printed beams, planes, struts at arbitrary angles, lattices and complex and uniquely curved objects. While conventional 3D printing has difficulty with spanning structures that might sag without support, Shusteff said, volumetric printing has no such constraints; many curved surfaces can be produced without layering artifacts.

    “This might be the only way to do AM that doesn’t require layering,” Shusteff said. “If you can get away from layering, you have a chance to get rid of ridges and directional properties. Because all features within the parts are formed at the same time, they don’t have surface issues.

    “I’m hoping what this will do is inspire other researchers to find other ways to do this with other materials,” he added. “It would be a paradigm shift.”

    Shusteff believes volumetric printing could be made even faster with a higher power light source. Extra-soft materials such as hydrogels, which would otherwise be damaged or destroyed by fluid motion, could be wholly fabricated, he said. Volumetric 3D printing also is the only additive manufacturing technique that works better in zero gravity, he said, expanding the possibility of space-based production.

    The LLNL logo in 3D printed technology.

    The technique does have limitations, researchers said. Because each beam propagates through space without changing, there are restrictions on part resolution and on the kinds of geometries that can be formed. Extremely complex structures would require lots of intersecting laser beams and would limit the process, they explained.

    Spadaccini added that additional polymer chemistry and engineering also would be needed to improve the resin properties and fine tune them to make better structures.

    “If you leave the light on too long it will start to cure everywhere, so there’s a timing game,” Spadaccini said. “A lot of the science and engineering is figuring out how long you can keep it on and at what intensity, and how that couples with the chemistry.”

    The work received Laboratory Directed Research and Development (LDRD) program funding. Additional LLNL researchers who contributed to the project were Todd Weisgraber and Robert Panas, Lawrence Graduate Scholar and University of Rochester Ph.D. student Allison Browar, UC Berkeley graduate students Brett Kelly and Johannes Henriksson, along with Nicholas Fang at MIT.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 5:10 pm on November 13, 2017
    Tags: Elusive Atomic Deformations, LLNL, Matter in Extreme Condition (MEC) experimental station at SLAC’s LCLS, SLAC X-ray Laser Reveals How Extreme Shocks Deform a Metal’s Atomic Structure, The Tremendous Shock of a Tiny Recoil, When hit by a powerful shock wave materials can change their shape – a property known as plasticity – yet keep their lattice-like atomic structure

    From SLAC: “SLAC X-ray Laser Reveals How Extreme Shocks Deform a Metal’s Atomic Structure” 


    SLAC Lab

    November 13, 2017
    Glennda Chui

    This image depicts an experimental setup at SLAC’s Linac Coherent Light Source, where a tantalum sample is shocked by a laser and probed by an X-ray beam. The resulting diffraction patterns, collected by an array of detectors, show the material undergoes a particular type of plastic deformation called twinning. The background illustration shows a lattice structure that has created twins. (Ryan Chen/LLNL)

    SLAC/LCLS

    When hit by a powerful shock wave, materials can change their shape – a property known as plasticity – yet keep their lattice-like atomic structure. Now scientists have used the X-ray laser at the Department of Energy’s SLAC National Accelerator Laboratory to see, for the first time, how a material’s atomic structure deforms when shocked by pressures nearly as extreme as the ones at the center of the Earth.

    The researchers said this new way of watching plastic deformation as it happens can help study a wide range of phenomena, such as meteor impacts, the effects of bullets and other penetrating projectiles and high-performance ceramics used in armor, as well as how to protect spacecraft from high-speed dust impacts and even how dust clouds form between the stars.

    The experiments took place at the Matter in Extreme Condition (MEC) experimental station at SLAC’s Linac Coherent Light Source (LCLS). They were led by Chris Wehrenberg, a physicist at the DOE’s Lawrence Livermore National Laboratory, and described in a recent paper in Nature.

    “People have been creating these really high-pressure states for decades, but what they didn’t know until MEC came online is exactly how these high pressures change materials – what drives the change and how the material deforms,” said SLAC staff scientist Bob Nagler, a co-author of the report.

    “LCLS is so powerful, with so many X-rays in such a short time, that it can interrogate how the material is changing while it is changing. The material changes in just one-tenth of a billionth of a second, and LCLS can deliver enough X-rays to capture information about those changes in a much shorter time than that.”

    Elusive Atomic Deformations

    The material they studied here was a thin foil made of tantalum, a blue-gray metallic element whose atoms are arranged in cubes. The team used a polycrystalline form of tantalum that is naturally textured so the orientation of these cubes varies little from place to place, making it easier to see certain types of disruptions from the shock.

    When this type of crystalline material is squeezed by a powerful shock, it can deform in two distinct ways: twinning, where small regions develop lattice structures that are the mirror images of the ones in surrounding areas, and slip deformation, where a section of the lattice shifts and the displacement spreads, like a propagating crack.

    But while these two mechanisms are fundamentally important in plasticity, it’s hard to observe them as a shock is happening. Previous research had studied shocked materials after the fact, as the material recovered, which introduced complications and led to conflicting interpretations.

    The Tremendous Shock of a Tiny Recoil

    In this experiment, the scientists shocked a piece of tantalum foil with a pulse from an optical laser. This vaporizes a small piece of the foil into a hot plasma that flies away from the surface. The recoil from this tiny plume creates tremendous pressures in the remaining foil – up to 300 gigapascals, which is three million times the atmospheric pressure around us and comparable to the 350-gigapascal pressure at the center of the Earth, Nagler said.

    While this was happening, researchers probed the state of the metal with X-ray laser pulses. The pulses are extremely short – only 50 femtoseconds, or millionths of a billionth of a second, long – and like a camera with a very fast shutter speed they can record the metal’s response in great detail.

    The X-rays bounce off the metal’s atoms and into a detector, where they create a “diffraction pattern” – a series of bright, concentric rings – that scientists analyze to determine the atomic structure of the sample. X-ray diffraction has been used for decades to discover the structures of materials, biomolecules and other samples and to observe how those structures change, but it’s only recently been used to study plasticity in shock-compressed materials, Wehrenberg said.
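
    The ring positions encode lattice spacings through Bragg’s law, n·λ = 2d·sin θ. A minimal sketch of that conversion is below; the wavelength and ring angle are illustrative values, not the experiment’s actual beam parameters.

    ```python
    import math

    def lattice_spacing_angstrom(wavelength_angstrom, two_theta_deg, order=1):
        """Bragg's law, n*lambda = 2*d*sin(theta): convert a diffraction-ring
        angle (2-theta) into a lattice-plane spacing d."""
        theta = math.radians(two_theta_deg / 2.0)
        return order * wavelength_angstrom / (2.0 * math.sin(theta))

    # Illustrative only: an assumed 1.3 Å X-ray wavelength and a 46-degree ring
    # correspond to a d-spacing of about 1.66 Å.
    print(round(lattice_spacing_angstrom(1.3, 46.0), 2))
    ```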

    And this time the researchers took the technique one step further: They analyzed not just the diffraction patterns, but also how the scattering signals were distributed inside individual diffraction rings and how their distribution changed over time. This deeper level of analysis revealed changes in the tantalum’s lattice orientation, or texture, taking place in about one-tenth of a billionth of a second. It also showed whether the lattice was undergoing twinning or slip over a wide range of shock pressures – right up to the point where the metal melts. The team discovered that as the pressure increased, the dominant type of deformation changed from twinning to slip deformation.

    Wehrenberg said the results of this study are directly applicable to Lawrence Livermore’s efforts to model both plasticity and tantalum at the molecular level.

    These experiments, he said, “are providing data that the models can be directly compared to for benchmarking or validation. In the future, we plan to coordinate these experimental efforts with related experiments on LLNL’s National Ignition Facility that study plasticity at even higher pressures.”

    In addition to LLNL and SLAC, researchers from the University of Oxford, the DOE’s Los Alamos National Laboratory and the University of York contributed to this study. Funding for the work at SLAC came from the DOE Office of Science. LCLS is a DOE Office of Science User Facility.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    SLAC Campus
    SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the DOE’s Office of Science.

     
  • richardmitnick 1:52 pm on October 23, 2017
    Tags: Biosorption experiments conducted with leachates from metal-mine tailings and rare earth deposits, LLNL, LLNL researchers turn to bioengineered bacteria to increase U.S. supply of rare earth metals, Microbial mediated surface adsorption (biosorption) represents a potentially cost-effective and ecofriendly approach for metal recovery, The rare earths comprise 17 elements in the periodic table -- scandium yttrium and the 15 lanthanides (lanthanum cerium praseodymium neodymium promethium samarium europium gadolinium terbium dysprosiu

    From LLNL: “LLNL researchers turn to bioengineered bacteria to increase U.S. supply of rare earth metals” 


    Lawrence Livermore National Laboratory

    Oct. 23, 2017
    Anne M Stark
    stark8@llnl.gov
    925-422-9799

    Europium, a rare earth element with the same relative hardness as lead, is used to create fluorescent lightbulbs. With no proven substitutes, europium is considered critical to the clean energy economy. Photo courtesy of Ames Laboratory.

    To help increase the U.S. supply of rare earth metals, a Lawrence Livermore National Laboratory (LLNL) team has created a new way to recover rare earths using bioengineered bacteria.

    Rare earth elements (REEs) are essential for American competitiveness in the clean energy industry because they are used in many devices important to a high-tech economy and national security, including computer components, high-power magnets, wind turbines, mobile phones, solar panels, superconductors, hybrid/electric vehicle batteries, LCD screens, night vision goggles, tunable microwave resonators — and, at the Laboratory, the National Ignition Facility’s neodymium-glass laser amplifiers.

    More than 90 percent of the global REE supply comes from China.

    “To alleviate supply vulnerability and diversify the global REE supply chain, we’ve developed a new extraction methodology using engineered bacteria that allowed us to tap into low-grade feedstocks,” said Yongqin Jiao, lead author of a paper appearing in the journal Environmental Science & Technology.

    “Non-traditional REE resources, such as mine tailings, geothermal brines and coal byproducts, are abundant and offer a potential means to diversify the REE supply chain. However, no current technology exists that is capable of economic extraction of rare earths from them, which creates a big challenge and an opportunity.”

    A Department of Energy (DOE) report said in 2011 that supply problems associated with five rare earth elements (dysprosium, terbium, europium, neodymium and yttrium) may affect clean energy development in coming years.

    https://www.llnl.gov/sites/default/files/media/2017/10/criticalstrategies.pdf

    Many recent studies have looked at the use of biomass for adsorption of REEs. However, REE adsorption by bioengineered systems has been scarcely documented, and rarely tested with complex natural feedstocks.

    But in the new research, the LLNL team recovered rare earth elements from low-grade feedstock (raw material supplied to a machine or processing plant) using engineered bacteria.

    Through biosorption experiments conducted with leachates from metal-mine tailings and rare earth deposits, the team showed that functionalization of the cell surface yielded several notable advantages over the non-engineered control. The team saw significant improvements in adsorption efficiency and selectivity for REEs versus other non-rare earth metals.

    Microbial mediated surface adsorption (biosorption) represents a potentially cost-effective and ecofriendly approach for metal recovery. Microorganisms exhibit high metal adsorption capacities because of their high surface area per unit weight and the abundance of cell surface functional groups with metal coordination functionality. Additionally, the reversibility and fast kinetics of adsorption enables an efficient metal extraction process, while the ability of cells to withstand multiple adsorption-desorption cycles avoids the need for frequent cell-regeneration, decreasing operational costs. Biosorption also is expected to have a minimal environmental impact relative to traditional extraction techniques.
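
    As a simple way to quantify the adsorption capacity discussed above, biosorption data are commonly fit to a Langmuir isotherm, q = q_max·K·C / (1 + K·C). The sketch below is a generic illustration of that model with made-up parameters, not the analysis from the paper.

    ```python
    def langmuir_uptake(c_eq_mg_per_l, q_max_mg_per_g=50.0, k_l_per_mg=0.2):
        """Langmuir isotherm: metal uptake q (mg metal per g biomass) versus
        equilibrium solution concentration C. q_max and K are fit parameters;
        the defaults here are illustrative, not measured values."""
        return (q_max_mg_per_g * k_l_per_mg * c_eq_mg_per_l
                / (1.0 + k_l_per_mg * c_eq_mg_per_l))

    for c in (0.1, 1.0, 10.0, 100.0):
        print(c, round(langmuir_uptake(c), 1))
    ```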

    “Our results demonstrate the technical and economic feasibility of coupling bioengineering with biosorption for REE extraction from low-grade feedstocks,” Jiao said.

    The rare earths comprise 17 elements in the periodic table — scandium, yttrium and the 15 lanthanides (lanthanum, cerium, praseodymium, neodymium, promethium, samarium, europium, gadolinium, terbium, dysprosium, holmium, erbium, thulium, ytterbium and lutetium).

    The work is part of the Critical Materials Institute, a DOE Innovation Hub led by the DOE’s Ames Laboratory and supported by DOE’s Office of Energy Efficiency and Renewable Energy’s Advanced Manufacturing Office. CMI seeks ways to eliminate and reduce reliance on rare-earth metals and other materials critical to the success of clean energy technologies.

    Other Livermore researchers include Dan Park, Lawrence Scholar Aaron Brewer and colleagues from the University of Washington, Idaho National Laboratory and the University of California, Berkeley.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 12:47 pm on September 22, 2017
    Tags: Additive manufacturing (3D printing), Computational Design Optimization strategic initiative, Computer-aided design (CAD) and computer-aided engineering (CAE) have not caught up to advanced manufacturing technologies and the sheer number of design possibilities they afford, DARPA funds TRAnsformative DESign (TRADES) at LLNL Autodesk the UC Berkeley International Computer Science Institute (ICSI) and the University of Texas at Austin, LLNL, LLNL Center for Design and Optimization, Powerful multi-physics computer simulations, The linchpin for LLNL's research is the Livermore Design Optimization (LiDO) code

    From LLNL: “LLNL gears up for next generation of computer-aided design and engineering” 


    Lawrence Livermore National Laboratory

    Jeremy Thomas
    thomas244@llnl.gov
    925-422-5539

    Lawrence Livermore has instituted the Center for Design and Optimization, tasked with utilizing advanced manufacturing techniques, high-performance computing and cutting-edge simulation codes to optimize design. No image credit.

    The emergence of additive manufacturing (3D printing) and powerful multi-physics computer simulations have enabled design to reach unprecedented levels of complexity, expanding the realm of what can be engineered and created. However, conventional design tools such as computer-aided design (CAD) and computer-aided engineering (CAE) have not caught up to advanced manufacturing technologies and the sheer number of design possibilities they afford.

    To address next-generation technological capabilities and their potential impact, Lawrence Livermore National Laboratory instituted the Center for Design and Optimization last October, tasked with using advanced manufacturing techniques, high-performance computing and cutting-edge simulation codes to optimize design. With Laboratory Directed Research and Development (LDRD) strategic initiative (SI) funding, the center began a new Computational Design Optimization strategic initiative, including collaborators at the University of Illinois at Urbana-Champaign, Lund University, the University of Wisconsin-Madison, and the Technical University of Denmark.

    “I want to solve real, relevant problems,” said center director Dan Tortorelli, who recently came to the Lab from a mechanical sciences and engineering professorship at the University of Illinois at Urbana-Champaign. “We want to take the great simulation capabilities that we have with our supercomputers, along with optimization and our AM (additive manufacturing) capabilities, and combine them into a single systematic design environment.”

    Although optimization has been around since the 1980s, Tortorelli said, it has been largely limited to linear elastic design problems and is often performed through trial and error, looping a simulation over and over with different parameters until reaching a functional design. The process is not only time-consuming and tedious, it’s also expensive. Through the SI, researchers want to have computers do the repetitive work, freeing up engineers to focus their attention on more creative matters.

    “It may surprise some people but engineers are still doing things by trial and error — even if they are doing simulations, it is still trial and error,” said Computational Design Optimization project co-lead Dan White. “We are trying to give the computer the objectives and constraints, give it a list of materials to use, then it does the calculations and designs the part for us. This is a new way of thinking about engineering and design.”

    Ultimately, researchers want to have a fully integrated optimization program that incorporates multiple scales, complex physics and transient and nonlinear phenomena. By no longer adhering to outdated design paradigms, researchers said, the possibilities are essentially endless.

    “Fundamentally changing the way design is done is really what we want to do,” said Rob Sharpe, deputy associate director for research and development within Engineering. “We want to be able to design things that are so complicated that human intuition isn’t going to work…Almost all design work has been done for things that are static. But a lot of the things that we want to create, we want it to be dynamic and, in some cases, push it until it fails, and we want them to be designed to fail in a specific way. In addition, if we just used all the advanced AM techniques to make the same designs, we’d be missing the point. That’s really why we started the center.”

    Optimal design through advanced codes

    Current 3D printers have the resolution to create parts with millions, even billions, of design parameters, in theory placing a different material at every point in the structure. To optimize designs for those printers, engineers will need the same level of resolution in their design software, White said, a feat only possible through high-performance computing.
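
    Some rough arithmetic (not from the article) shows why: a billion double-precision design variables alone occupy about 8 gigabytes, and a gradient-based optimizer has to store and update several arrays of that size on every iteration, on top of the physics simulation itself.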

    The linchpin for LLNL’s research is the Livermore Design Optimization (LiDO) code. Based on LLNL’s open source Modular Finite Element Method library (MFEM), it uses gradient-based algorithms that incorporate uncertainty, multiple scales, multiple physics and multiple functionalities into design. White, who is writing the software, said the code’s basic capability is already functioning and has been used to create designs for cantilevers and lattice structures.

    “It’s already designing things better than a human could do,” White said. “The idea is that the computer is doing this work on the weekend when we are not here. It is fully automated; the computer sorts through all the scenarios and comes up with the best designs.”
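
    The general recipe behind such gradient-based design tools can be sketched in a few dozen lines. The toy below is not LiDO or MFEM; it is a minimal, self-contained illustration (the linear response model, sizes and constants are all assumptions made for this sketch) of the “inverse” workflow the team describes: state a target response, let the computer compute gradients of a misfit objective and iterate on the design variables, with each variable clamped to a physically meaningful range.

    // Toy gradient-based inverse design: choose cell "densities" x in [0,1] so that a
    // made-up linear response model A*x matches a target response t.
    // Objective: J(x) = ||A x - t||^2 + alpha*||x||^2, minimized by projected gradient descent.
    #include <cstdio>
    #include <vector>
    #include <random>
    #include <algorithm>

    int main() {
        const int nDesign = 50, nResponse = 20;
        const double alpha = 1e-3, step = 0.5;
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> uni(0.0, 1.0);

        // Made-up response model: each response is a weighted sum of the cell densities.
        std::vector<double> A(nResponse * nDesign);
        for (double &a : A) a = uni(rng) / nDesign;        // keeps the problem well scaled

        std::vector<double> target(nResponse);             // the behavior we want the design to produce
        for (double &t : target) t = 0.3 + 0.1 * uni(rng);

        std::vector<double> x(nDesign, 0.5);               // initial guess: half-dense everywhere
        std::vector<double> r(nResponse), grad(nDesign);

        for (int iter = 0; iter < 5000; ++iter) {
            // Residual r = A*x - target
            for (int j = 0; j < nResponse; ++j) {
                r[j] = -target[j];
                for (int i = 0; i < nDesign; ++i) r[j] += A[j * nDesign + i] * x[i];
            }
            // Gradient = 2*A^T*r + 2*alpha*x, then a descent step projected back onto [0,1]
            for (int i = 0; i < nDesign; ++i) {
                grad[i] = 2.0 * alpha * x[i];
                for (int j = 0; j < nResponse; ++j) grad[i] += 2.0 * A[j * nDesign + i] * r[j];
                x[i] = std::clamp(x[i] - step * grad[i], 0.0, 1.0);
            }
        }

        double misfit = 0.0;                               // report how close the final design gets
        for (int j = 0; j < nResponse; ++j) {
            double rj = -target[j];
            for (int i = 0; i < nDesign; ++i) rj += A[j * nDesign + i] * x[i];
            misfit += rj * rj;
        }
        std::printf("final misfit ||Ax - t||^2 = %.3e\n", misfit);
        return 0;
    }

    A production tool like LiDO swaps the toy linear model for full multi-physics finite element simulations and analytically computed gradients, but the outer loop (evaluate, differentiate, update, repeat) is the same basic idea.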

    So far, researchers have designed different micro-architected lattice structures with new properties, including those with negative coefficient of thermal expansion (i.e., lattices that contract when heated or expand when cooled) or negative Poisson’s ratio (i.e., lattices that expand laterally when stretched or contract when compressed, which is opposite of most materials — for example, a rubber band). Other research has been done to design electrodes, multifunctional metamaterials, carbon-fiber-reinforced composites and path planning for direct ink writing (a 3D printing process) onto curvilinear surfaces. The advances someday could be applied to optimize armor plating for military vehicles, blast and impact-resistant structures, optics and electromagnetic devices, researchers said. Interestingly, some of the designs generated have resembled “organic” shapes found in nature, Sharpe said.

    “A lot of what nature has done is to do evolutionary optimization over time,” Sharpe said. “Human efforts inspired by these are often called biomimetic designs — and some do almost have an organic look…We’re moving to where the computer fully explores the possibilities, does the work and shows the possibilities. We can now begin to explore complex, nonlinear, multi-function, dynamic designs for the first time. The computer is going to be able to do larger problems than humans could possibly do — we’re talking about a billion design variables. It’s going to change the way people approach design problems and the way engineers interact with all these tools.”
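
    As a quick aside, the two “negative” lattice properties mentioned above are easy to read off from simulated or measured strains; the snippet below only illustrates the sign conventions, and every number in it is hypothetical.

    // Sign conventions for the two unusual lattice properties described above,
    // computed from made-up strain and length measurements.
    #include <cstdio>

    int main() {
        // Uniaxial stretch test: apply +1% axial strain, record the transverse strain.
        double axialStrain = 0.01, transverseStrain = 0.004;      // hypothetical values
        double poisson = -transverseStrain / axialStrain;         // -0.4 here: the lattice widens when stretched

        // Heating test: record the length change over a temperature rise.
        double length0 = 10.0, lengthHot = 9.98, deltaT = 100.0;  // mm, mm, kelvin (hypothetical)
        double cte = (lengthHot - length0) / (length0 * deltaT);  // -2e-5 per kelvin here: the lattice shrinks when heated

        std::printf("Poisson's ratio = %.2f, thermal expansion coefficient = %.1e per K\n", poisson, cte);
        return 0;
    }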

    There’s a strong link between AM and design optimization, according to LLNL engineer Eric Duoss, because of the ability to control both material composition and structure at multiple length scales with 3D printing.

    “With new printing approaches, we can deterministically place material and structure into previously unobtainable form factors; that means we really need to rethink design not only to keep pace with manufacturing advances, but to fully realize the potential of AM,” said Duoss, who is working on a project to optimize printing 3D lattices onto 3D surfaces. “Achieving the goals of this strategic initiative would be revolutionary for design and have far-reaching impact beyond just additive manufacturing, but it’s a really hard problem. There’s a real need for the proposed capability right now, and we’re going to scramble to get there as fast as we can.”

    In January, the Defense Advanced Research Projects Agency (DARPA) awarded a multimillion-dollar grant to LLNL, Autodesk, the UC Berkeley International Computer Science Institute (ICSI) and the University of Texas at Austin to institute a project called TRAnsformative DESign (TRADES), aimed at advancing the tools needed to generate and better manage the enormous complexity of design afforded by new technologies.

    Under the four-year DARPA project, LLNL will be using its high-performance computing libraries to develop algorithms capable of optimizing large, complex systems and working with Autodesk to create a more robust and user-friendly graphical interface. The algorithms, when combined with emerging advanced manufacturing capabilities, will be used to design revolutionary systems for the Department of Defense.

    The collaboration partially stemmed from previous research performed with Autodesk seeking to improve the performance of helmets, which resulted in designs for rigid shell materials formed from graded-density lattice structures with optimized macroscale shape and microscale material distribution.

    “As part of that project, we encountered challenges that helped inspire our current efforts in design,” said Duoss, a principal investigator on the project. “Almost by definition, helmets are required to function under highly nonlinear and dynamic conditions. Turns out those conditions are hard to simulate, let alone design for, and it is certainly a new area for design optimization. With the Autodesk collaboration, we collectively identified these gaps in capability and knew the time was right to assemble a team to attack these difficult problems.”

    LLNL looks to be world leader

    About a dozen LLNL employees are part of the Computational Design Optimization strategic initiative. In the coming years, they will work to improve the LiDO program to perform generalized shape optimization, ensure accurate simulations, increase optimization speed through Reduced Order Models (ROMs) and accommodate uncertainty.

    Specific projects include multi-physics lattices with optimal mechanical, heat-transfer and electromagnetic responses, to create antennas with sufficient strength and gain. Researchers also want to push fabrication resolution in their designs down to the submicron level, potentially making optical lenses simpler and cheaper to design and manufacture, and not necessarily parabolic.

    Other goals are to investigate machine learning and to consider a variety of response metrics. A benchmark problem will be the optimization of a large 3D-printed part consisting of a spatially varying lattice, to illustrate LiDO’s ability to incorporate ROMs. The team also will investigate optimization under uncertainty using models derived from Lab data, as well as the use of domain symmetry to simplify the analysis and design process.

    At the end of the three-year SI, the researchers hope to have secured funding from various Lab programs and to have gone from almost no capability in design optimization to being one of the world’s leaders in the field.

    “We want to do this inverse problem where we know the outcome that we want and what our design domain should be,” Tortorelli said. “We want to be able to define the parameters, press a button and have it come out the way we want. We are not taking the designers or engineers out of the loop. We are not saying, ‘do away with them.’ We are saying, ‘do away with the drudgery.’ We are simplifying their lives so they can be more creative.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

  • richardmitnick 5:54 am on July 1, 2017 Permalink | Reply
    Tags: , , LLNL, ,   

    From LLNL: “National labs, industry partners prepare for new era of computing through Centers of Excellence” 


    Lawrence Livermore National Laboratory

    June 30, 2017
    Jeremy Thomas
    thomas244@llnl.gov
    925-422-5539

    IBM employees and Lab code and application developers held a “Hackathon” event in June to work on coding challenges for a predecessor system to the Sierra supercomputer. Through the ongoing Centers of Excellence (CoE) program, employees from IBM and NVIDIA have been on-site to help LLNL developers transition applications to the Sierra system, which will have a completely different architecture than the Lab has had before. Photo by Jeremy Thomas/LLNL

    The Department of Energy’s drive toward the next generation of supercomputers, “exascale” machines capable of more than a quintillion (10¹⁸) calculations per second, isn’t simply to boast about having the fastest processing machines on the planet. At Lawrence Livermore National Laboratory (LLNL) and other DOE national labs, these systems will play a vital role in the National Nuclear Security Administration’s (NNSA) core mission of ensuring the safety and performance of the nation’s nuclear stockpile in the absence of underground testing.

    The driving force behind faster, more robust computing power is the need for simulations and codes that offer higher resolution, are increasingly predictive and incorporate more complex physics. It’s an evolution that is changing the way the national labs’ application and code developers approach design. To aid in the transition and prepare researchers for pre-exascale and exascale systems, LLNL has brought experts from IBM and NVIDIA together with Lab computer scientists in a Center of Excellence (CoE), a co-design strategy born out of the need for vendors and government to work together to optimize emerging supercomputing systems.

    “There are disruptive machines coming down the pike that are changing things out from under us,” said Rob Neely, an LLNL computer scientist and Weapon Simulation & Computing Program coordinator for Computing Environments. “We need a lot of time to prepare; these applications need insight, and who better to help us with that than the companies who will build the machines? The idea is that when a machine gets here, we’re not caught flat-footed. We want to hit the ground running right away.”

    While LLNL’s exascale system isn’t scheduled for delivery until 2023, Sierra, the Laboratory’s pre-exascale system, is on track to begin installation this fall and will begin running science applications at full machine scale by early next spring.

    LLNL IBM Sierra supercomputer

    Built by IBM and NVIDIA, Sierra will have about six times more computing power than LLNL’s current behemoth, Sequoia.
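
    For context (figures not from the article): Sequoia’s peak performance is roughly 20 petaflops, and Sierra was being designed to deliver on the order of 120 to 150 petaflops, which is where the “about six times” comparison comes from.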

    Sequoia at LLNL

    The Sierra system is unlike anything the Lab has had before: it combines two kinds of hardware — IBM CPUs and NVIDIA GPUs — each with its own memory, and it requires a more complex programming model than LLNL scientists have targeted in the past. In the meantime, Lab scientists are receiving guidance from experts from the two companies and are working on a small predecessor system that already runs some of the components and technological features that Sierra will have.
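
    To make the “two kinds of hardware, two memories” point concrete, here is a minimal sketch of what moving a simple loop onto a GPU can look like. This is not Lab code, and OpenMP target offload is only one of several programming models used on such systems; the explicit map clauses are there precisely because host and device memories are separate, and real application kernels are far larger and messier, which is where the restructuring effort goes.

    // Offload a simple loop to the GPU with OpenMP "target" directives.
    // map(to:...) copies inputs to device memory; map(from:...) copies the result back.
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);
        double *pa = a.data(), *pb = b.data(), *pc = c.data();

        #pragma omp target teams distribute parallel for \
            map(to: pa[0:n], pb[0:n]) map(from: pc[0:n])
        for (int i = 0; i < n; ++i)
            pc[i] = 3.0 * pa[i] + pb[i];   // simple axpy-style kernel

        std::printf("c[0] = %.1f (expected 5.0)\n", pc[0]);
        return 0;
    }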

    LLNL’s Center of Excellence, which began in 2014, involves about a half dozen IBM and NVIDIA personnel on-site, plus a number of remote collaborators who work with Lab developers. The team is on hand to answer questions from Lab computer scientists, educate LLNL personnel in best practices for coding hybrid systems, develop optimization strategies, debug, and advise on the global code restructuring that is often needed to obtain performance. The CoE is a symbiotic relationship — LLNL scientists get a feel for how Sierra will operate, and IBM and NVIDIA gain better insight into what the Lab’s needs are and what the machines they build are capable of.

    “We see how the systems we design and develop are being used and how effective they can be,” said IBM research staff member Leopold Grinberg, who works on the LLNL site. “You really need to get into the mind of the developer to understand how they use the tools. To sit next to the developers’ seats and let them drive, to observe them, gives us a good idea of what we are doing right and what needs to be improved. Our experts have an intimate knowledge of how the system works, and having them side-by-side with Lab employees is very useful.”

    Sierra, Grinberg explained, will use a completely different system architecture than anything used before at LLNL. It’s not only faster than any machine the Lab has had; it also has different tools built into the compilers and programming models. In some cases, the changes developers need to make are substantial, requiring the restructuring of hundreds or thousands of lines of code. Through the CoE, Grinberg said he’s learning more about how the system will be used for production scientific applications.

    “It’s a constant process of learning for everybody,” Grinberg said. “It’s fun, it’s challenging. We gather the knowledge and it’s also our job to distribute it. There’s always some knowledge to be shared. We need to bring the experience we have with heterogeneous systems and emerging programming models to the Lab, and help people generate updated codes or find out what can be kept as is to optimize the system we build. It’s been very fruitful for both parties.”

    The CoE strategy also is being implemented at Oak Ridge National Laboratory, which is bringing in a heterogeneous system of its own called Summit. Other CoE programs are in place at Los Alamos and Lawrence Berkeley national laboratories. Each CoE has a similar goal of preparing computational scientists with the tools they will need to use pre-exascale and exascale systems. Because Livermore is new to using GPUs for the bulk of its computing power, the transition to Sierra places a heavy emphasis on figuring out which sections of a multi-physics application are the most performance-critical and on the code restructuring that must take place to use the system most effectively.

    “Livermore and Oak Ridge scientists are really pushing the boundaries of the scale of these GPU-based systems,” said Max Katz, a solutions architect at NVIDIA who spends four days a week at LLNL as a technical adviser. “Part of our motivation is to understand machine learning and how to make it possible to merge high-performance computing with the applications demanded by industry. The CoE is essential because it’s difficult for any one party to predict how these CPU/GPU systems will behave together. Each one of us brings in expertise and by sharing information, it makes us all more well-rounded. It’s a great opportunity.”

    In fact, the opportunity was so compelling that in 2016 the CoE was augmented with a three-year institutional component (dubbed the Institutional Center of Excellence, or iCOE) to ensure that other mission-critical efforts at the Laboratory also could participate. This has added nine application development efforts, including one in data science, and expanded the number of IBM and NVIDIA personnel. By working cooperatively, the teams can explore many more types of applications and can develop performance solutions that are shared among all the CoE code teams.

    “At the end of the iCOE project, the real value will be not only that some important institutional applications run well, but that every directorate at LLNL will have trained staff with expertise in using Sierra, and we’ll have documented lessons learned to help train others,” said Bert Still, leader for Application Strategy (Livermore Computing).

    Steve Rennich, a senior HPC developer-technology engineer with NVIDIA, visits the Lab once a week to help LLNL scientists port mission-critical applications optimized for CPUs over to NVIDIA GPUs, which have an order of magnitude greater computing power than CPUs. Besides writing bug-free code, Rennich said, the goal is to improve performance enough to meet the Lab’s considerable computing requirements.

    “The challenge is they’re fairly complex codes so to do it correctly takes a fair amount of attention to detail,” Rennich said. “It’s about making sure the new system can handle as large a model as the Lab needs. These are colossal machines, so when you create applications at this scale, it’s like building a race car. To take advantage of this increase in performance, you need all the pieces to fit and work together.”

    Current plans are to continue the existing Center of Excellence at LLNL at least into 2019, when Sierra is fully operational. Until then, having experts working shoulder-to-shoulder with Lab developers to write code will be a huge benefit to all parties, said LLNL’s Neely, who wants the collaboration to publish its discoveries and share them with the broader computing community.

    “We’re focused on the issue at hand, and moving things toward getting ready for these machines is hugely beneficial,” Neely said. “These are very large applications developed over decades, so ultimately it’s the code teams that need to be ready to take this over. We’ve got to make this work because we need to ensure the safety and performance of the U.S. stockpile in the absence of nuclear testing. We’ve got the right teams and people to pull this off.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

  • richardmitnick 9:27 am on June 18, 2017 Permalink | Reply
    Tags: , ECP, LLNL,   

    From ECP via LLNL: “On the Path to the Nation’s First Exascale Supercomputers: PathForward” 


    Lawrence Livermore National Laboratory


    ECP

    06/15/17

    Department of Energy Awards Six Research Contracts Totaling $258 Million to Accelerate U.S. Supercomputing Technology.

    Today U.S. Secretary of Energy Rick Perry announced that six leading U.S. technology companies will receive funding from the Department of Energy’s Exascale Computing Project (ECP) as part of its new PathForward program, accelerating the research necessary to deploy the nation’s first exascale supercomputers.

    The awardees will receive funding for research and development to maximize the energy efficiency and overall performance of future large-scale supercomputers, which are critical for U.S. leadership in areas such as national security, manufacturing, industrial competitiveness, and energy and earth sciences. The $258 million in funding will be allocated over a three-year contract period, with companies providing additional funding amounting to at least 40 percent of their total project cost, bringing the total investment to at least $430 million.
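
    For readers checking the arithmetic: if the vendors’ share is at least 40 percent of each project’s total cost, DOE’s $258 million covers at most 60 percent, and $258 million divided by 0.60 is $430 million, matching the combined figure quoted above.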

    “Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation,” said Secretary Perry.

    “These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing—exascale-capable systems.”

    “The PathForward program is critical to the ECP’s co-design process, which brings together expertise from diverse sources to address the four key challenges: parallelism, memory and storage, reliability and energy consumption,” ECP Director Paul Messina said. “The work funded by PathForward will include development of innovative memory architectures, higher-speed interconnects, improved reliability systems, and approaches for increasing computing power without prohibitive increases in energy demand. It is essential that private industry play a role in this work going forward: advances in computer hardware and architecture will contribute to meeting all four challenges.”

    The following U.S. technology companies are the award recipients:

    Advanced Micro Devices (AMD)
    Cray Inc. (CRAY)
    Hewlett Packard Enterprise (HPE)
    International Business Machines (IBM)
    Intel Corp. (Intel)
    NVIDIA Corp. (NVIDIA)

    The Department’s funding for this program is supporting R&D in three areas—hardware technology, software technology, and application development—with the intention of delivering at least one exascale-capable system by 2021.

    Exascale systems will be at least 50 times faster than the nation’s most powerful computers today, and global competition for this technological dominance is fierce. While the U.S. has five of the 10 fastest computers in the world, its most powerful — the Titan system at Oak Ridge National Laboratory — ranks third behind two systems in China. However, the U.S. retains global leadership in the actual application of high performance computing to national security, industry, and science.
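
    For a sense of scale (figures not from the article): one exaflop is 10¹⁸ operations per second, while Titan’s measured Linpack performance is roughly 17.6 petaflops, or about 1.8 × 10¹⁶ operations per second, so an exascale machine would indeed be more than 50 times faster.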

    Additional information and attributed quotes from the vendors receiving the PathForward funding can be found here [See below].

    [It is this writer’s opinion that none of this funding should be necessary. These are all for-profit companies, which have nothing to do but gain from their own investments in this work.]

    Advanced Micro Devices (AMD)

    Sunnyvale, California

    “AMD is excited to extend its long-term computing partnership with the U.S. Government in its PathForward program for exascale computing. We are thrilled to see AMD’s unique blend of high-performance computing and graphics technologies drive the industry forward and enable breakthroughs like exascale computing. This technology collaboration will drive outstanding performance and power-efficiency on applications ranging from scientific computing to machine learning and data analytics. As part of PathForward, AMD will explore processors, memory architectures, and high-speed interconnects to improve the performance, power-efficiency, and programmability of exascale systems. This effort emphasizes an open, standards-based approach to heterogeneous computing as well as co-design with the Exascale Computing Project (ECP) teams to foster innovation and achieve the Department of Energy’s goals for capable exascale systems.”

    — Dr. Lisa Su, president and CEO

    Cray Inc. (CRAY)

    Seattle, Washington

    “At Cray, our focus is on innovation and advancing supercomputing technologies that allow customers to solve their most demanding scientific, engineering, and data-intensive problems. We are honored to play an important role in the Department of Energy’s Exascale Computing Project, as we collaboratively explore new advances in system and node technology and architectures. By pursuing improvements in sustained performance, power efficiency, scalability, and reliability, the ECP’s PathForward program will help make significant advancements towards exascale computing.”

    — Peter Ungaro, president and CEO

    Hewlett Packard Enterprise (HPE)

    Palo Alto, California

    “The U.S. Department of Energy has selected HPE to rethink the fundamental architecture of supercomputers to make exascale computing a reality. This is strong validation of our vision, strategy and execution capabilities as a systems company with deep expertise in Memory-Driven Computing, VLSI, photonics, non-volatile memory, software and systems design. Once operational, these systems will help our customers to accelerate research and development in science and technology.”

    — Mike Vildibill, vice president, Advanced Technology Programs

    Intel Corp. (INTEL)

    Santa Clara, California

    “Intel is investing to offer a balanced portfolio of products for high performance computing that are essential to not only achieving Exascale class computing, but also to drive breakthrough capability across the entire ecosystem. This research with the US Department of Energy focused on advanced computing and I/O technologies will accelerate the deployment of leading HPC solutions that contribute to scientific discovery for economic and societal benefits for the United States and people around the world. These gains will impact many application domains and be realized in traditional high performance simulations as well as data analytics and the rapidly growing field of artificial intelligence.”

    — Al Gara, Intel Fellow, Data Center Group Chief Architect, Exascale Systems

    International Business Machines (IBM)

    Armonk, New York

    “IBM has a roadmap for future Data Centric Systems to deliver enterprise-strength cloud services and on-premise mission-critical application performance for our customers. We are excited to once again work with the DOE and we believe the PathForward program will help accelerate our capabilities to deliver cognitive, flexible, cost-effective and energy efficient exascale-class systems for a wide variety of important workloads.”

    — Michael Rosenfield, vice president of Data Centric Solutions, IBM Research

    NVIDIA Corp. (NVIDIA)

    Santa Clara, California

    “NVIDIA has been researching and developing faster, more efficient GPUs for high performance computing (HPC) for more than a decade. This is our sixth DOE research and development contract, which will help accelerate our efforts to develop highly efficient throughput computing technologies to ensure U.S. leadership in HPC. Our R&D will focus on critical areas including energy-efficient GPU architectures and resilience. We’re particularly proud of the work we’ve been doing to help the DOE achieve exascale performance at a fraction of the power of traditional compute architectures.”

    — Dr. Bill Dally, chief scientist and senior vice president of research

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration