Tagged: Lawrence Livermore National Laboratory

  • richardmitnick 2:35 pm on September 27, 2014
    Tags: Lawrence Livermore National Laboratory

    From LLNL: “Giant Steps For Adaptive Optics” 


    Lawrence Livermore National Laboratory

    Science & Technology Review
    September 2014
    Arnie Heller

    LAST November, high in the Chilean Andes, an international team of scientists and engineers, including Lawrence Livermore researchers, celebrated jubilantly in the early morning hours. The cause for their celebration was the appearance of a faint but unmistakable image of a planet 63 light-years from Earth circling a nearby star called Beta Pictoris. The clear image was viewable from a ground-based telescope thanks to one of the most advanced adaptive optics systems in existence, a key element of the newly fielded Gemini Planet Imager (GPI).

    Gemini Planet Imager

    GPI (pronounced gee-pie) is deployed on the 8.1-meter-diameter Gemini South telescope, situated near the summit of Cerro Pachón at an altitude of 2,715 meters. The size of a small car, GPI is mounted behind the primary mirror of the giant telescope. Although the imager is still in its shakedown phase, it is producing the fastest and clearest images of extrasolar planets (exoplanets) ever recorded. GPI is perhaps the most impressive scientific example of Lawrence Livermore’s decades-long preeminence in adaptive optics. This technology uses an observing instrument’s optical components to remove distortions that are induced by the light passing through a turbulent medium, such as Earth’s atmosphere, or by mechanical vibration.

    Gemini South telescope

    More than two decades ago, Livermore scientists were among the first to show how adaptive optics can be used in astronomy to eliminate the effects of atmospheric turbulence, which cause the twinkle we see in stars when viewing them from Earth. Those effects also create blurring in images recorded by ground-based telescopes. Laboratory researchers have since applied adaptive optics to other fields, including lasers and medicine. For example, adaptive optics helped produce extremely high-resolution images of the retina with an instrument that won an R&D 100 Award in 2010. (See S&TR, October/November 2010, A Look inside the Living Eye.) Livermore teams are now working on an adaptive optics system to transport x-ray beams in a new generation of high-energy research facilities. In addition, outreach efforts by the Laboratory are strengthening educational opportunities in this field at U.S. colleges and universities.

    Designed for Exoplanet Imaging

    GPI is the first astronomical instrument designed and optimized for direct exoplanet imaging and analysis. Imaging planets directly is exceedingly difficult because most planets are at least 1 to 10 million times fainter than the parent stars they orbit. One way to improve image quality is to send telescopes into orbit, which boosts research costs enormously. A much less expensive approach is to equip a ground-based telescope with adaptive optics to compensate in real time for the distortions of light caused by Earth’s atmosphere.

    Livermore computational engineer David Palmer, GPI project manager and leader of its adaptive optics development effort, notes that GPI comprises several interconnected systems and components. In addition to adaptive optics, the imager includes an interferometer, coronagraph, spectrometer, four computers, and an optomechanical structure to which everything is attached. All are packaged into an enclosure 2 cubic meters in volume and flanked on either side by “pods” that hold the accompanying electronics.

    GPI hangs on the back end of Gemini South, a design that sharply constrains the imager’s volume, weight, and power requirements. While in use, it constantly faces the high winds and hostile environment at high altitude. As the telescope slews to track a star, the instrument flexes, making alignment more complicated. Nevertheless, says Palmer, GPI has maintained its alignment “phenomenally well” and performed superbly. “The precision requirements worked up by the GPI design team are almost staggering,” he says, “especially those for the adaptive optics system.”

    Laboratory electrical engineer Lisa Poyneer adds, “GPI features several new approaches that enable us to correct the atmosphere with precision never before achieved.” Poyneer developed the algorithms (mathematical procedures) that control two deformable mirrors and led adaptive optics system testing in the laboratory and at the telescope.

    GPI is an international project with former Livermore astrophysicist Bruce Macintosh (now a professor at Stanford University) serving as principal investigator. The Gemini South telescope is an international partnership as well, involving the U.S., Canada, Australia, Argentina, Brazil, and Chile. Macintosh says the first discussions concerning a ground-based instrument dedicated to the search for exoplanets began in 2001. “A lot of exoplanets were being discovered at that time, but the discoveries didn’t tell us much about the planets themselves,” he says. “There was a clear scientific need to incorporate adaptive optics, and the technology was progressing quickly.”

    After more than eight years in development, GPI components were tested and integrated at the Laboratory for Adaptive Optics at the University of California (UC) at Santa Cruz in 2012 and 2013. The imager was shipped to Chile in August 2013, with first light conducted in November.

    Scientists will use GPI over the next three years to discover and characterize dozens or more exoplanets circling stars located up to 230 light-years from Earth. In addition to resolving exoplanets from their parent stars, GPI uses a spectrometer to probe the composition of each exoplanet’s atmosphere. The instrument also studies disks around young stars with a technique called polarization differential imaging.

    Lawrence Livermore engineer Lisa Poyneer (left) and Stanford University astrophysicist Bruce Macintosh (previously at Livermore) stand in front of the Gemini Planet Imager (GPI), which is installed on the Gemini South telescope in Chile. Two electronic pods (blue) on either side of the main enclosure hold GPI’s electronics. (Photograph by Jeff Chilcote, University of California at Los Angeles [UCLA].)

    Age of Exoplanet Discovery

    The discovery of exoplanets was a historic breakthrough in modern astronomy. More than 1,000 exoplanets have been identified, mainly through indirect techniques that infer a planet’s mass and orbit. Astronomers have been surprised by the diversity of planetary systems that differ from our solar system. GPI is expected to strengthen scientific understanding of how planetary systems form and evolve, how planet orbits change, and what comprises their atmospheres.

    GPI masks the light emitted by a parent star to reveal the faint light of young (up to 1-billion-year-old) giant planets in orbits a few times greater than Earth’s path around the Sun. These young gas giants (the size of Jupiter and larger) are detected through their thermal radiation (about 1.0 to 2.4 micrometers wavelength in the near-infrared region).

    GPI is not sensitive enough to see Earth-sized planets, which are 10,000 times fainter than giant planets. (See S&TR, July/August 2012, A Spectra-Tacular Sight.) However, it complements astronomical instruments that infer a planet’s mass and orbit by measuring the small gravitational tugs exerted on a parent star or, as with NASA’s Kepler Space Telescope, by blocking very small amounts of light emitted by the parent star as the planet passes in front of that star. “These indirect methods tell us a planet is there and a bit about its orbit and mass, but not much else,” says Macintosh. “Kepler can detect tiny planets similar to the size of Earth. With GPI, we can find much larger planets, the size of Jupiter, so the two instruments provide complementary information.”
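
    A rough illustration of the transit method described above: the dip Kepler watches for scales with the planet-to-star area ratio, which is why it favors a different regime than a direct imager such as GPI. The sketch below is generic and uses nominal textbook radii; it is not mission code or data.

    ```python
    # Fractional dip in starlight during a transit, for nominal radii.
    R_SUN = 6.957e8      # meters
    R_JUPITER = 6.991e7  # meters
    R_EARTH = 6.371e6    # meters

    def transit_depth(planet_radius_m, star_radius_m=R_SUN):
        """Planet-to-star area ratio: the fraction of starlight blocked."""
        return (planet_radius_m / star_radius_m) ** 2

    print(f"Jupiter-size planet: {transit_depth(R_JUPITER):.4%} dip")  # about 1%
    print(f"Earth-size planet:   {transit_depth(R_EARTH):.4%} dip")    # about 0.008%
    ```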

    NASA Kepler Space Telescope

    The direct imaging of giant planets permits the use of spectroscopy to estimate their size, temperature, surface gravity, and atmospheric composition. Because different molecules absorb light at different wavelengths, scientists can correlate the light emitted from a planet to the molecules in its atmosphere.

    In November 2013, members of the GPI first-light team celebrated when the system acquired its first images. The team includes: (from left to right) Pascale Hibon, Stephen Goodsell, Markus Hartung, and Fredrik Rantakyrö from Gemini Observatory; Jeffrey Chilcote, UCLA; Jennifer Dunn, National Research Council (NRC) Canada Herzberg Institute of Astrophysics; Sandrine Thomas, NASA Ames Research Center; Macintosh; David Palmer, Lawrence Livermore; Dmitry Savransky, Cornell University; Marshall Perrin, Space Telescope Science Institute; and Naru Sadakuni, Gemini Observatory. (Photograph by Jeff Chilcote, UCLA.)

    Extreme Adaptive Optics

    The heart of GPI is its highly advanced, high-contrast adaptive optics system (sometimes called extreme adaptive optics) that measures and corrects wavefront errors induced by atmospheric air motion and the inevitable tiny flaws in optics. As light passes through the Gemini South telescope, GPI measures its wavefront 1,000 times per second at nearly 2,000 locations. The system corrects the distortions within 1 millisecond by precisely changing the positions of thousands of actuators, which adjusts the shape of two mirrors. As the adaptive optics system operates, GPI typically takes about 60 consecutive, 1-minute exposures and can detect an exoplanet 70 times more rapidly than existing instruments.
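
    The loop described above amounts to a fast measure-reconstruct-correct cycle. The following sketch is a minimal stand-in for one such roughly 1-millisecond cycle, built around a plain leaky integrator; the array sizes, the random reconstructor matrix, and the gain values are illustrative assumptions, not GPI's real-time controller.

    ```python
    import numpy as np

    N_MEAS = 2048   # wavefront measurement points (order of GPI's ~2,000)
    N_ACT = 4096    # tweeter actuators

    rng = np.random.default_rng(0)
    # Hypothetical reconstructor mapping sensor measurements to actuator space.
    R = rng.normal(scale=1.0 / N_MEAS, size=(N_ACT, N_MEAS))

    dm_commands = np.zeros(N_ACT)   # current mirror commands (arbitrary units)
    GAIN, LEAK = 0.4, 0.99          # integrator gain and leak factor

    def ao_step(wavefront_measurements):
        """Advance the correction loop by one ~1 ms cycle."""
        global dm_commands
        error_in_actuator_space = R @ wavefront_measurements
        dm_commands = LEAK * dm_commands - GAIN * error_in_actuator_space
        return dm_commands

    commands = ao_step(rng.normal(size=N_MEAS))   # one simulated cycle
    print(commands.shape)                         # (4096,)
    ```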

    To meet GPI’s stringent requirements, the Livermore team developed several technologies specifically for exoplanet science. A self-optimizing computer system controls the actuators, with computationally efficient algorithms determining the best position for each actuator with nanometer-scale precision. A spatial filter prevents aliasing (artifacts).

    Livermore optical engineer Brian Bauman designed the innovative and compact adaptive optics for GPI. He has also worked on adaptive optics components for vision science and Livermore’s Atomic Vapor Laser Isotope Separation system and has developed simpler systems for telescopes at the Lick Observatory and other observatories. Says Bauman, “We wanted GPI to provide much greater contrast and resolution than had been achieved in an adaptive optics system without producing artifacts that could mask a planet or be mistaken for one.”

    The system corrects aberrations by adjusting the shape of two deformable mirrors. Incoming light from the telescope is relayed to the first mirror, called the woofer. Measuring about 5 centimeters across, this mirror has 69 actuators to correct the low-spatial-frequency components of atmospheric turbulence.

    The woofer passes the corrected light to the tweeter—a 2.56-centimeter-square deformable mirror with 4,096 actuators for finer corrections. The tweeter is a microelectromechanical systems– (MEMS-) based device developed for GPI by Boston Micromachines. It is made of etched silicon, similar to the material used for microchips, rather than reflective glass. The tweeter’s actuators are spaced only 400 micrometers apart; a circular patch 44 actuators in diameter is used to compensate for the high-spatial-frequency components of the atmosphere.

    GPI has 10 times the actuator density of a general-purpose adaptive optics system. Poyneer explains that the more actuators, the more accurately the mirror surface can correct for atmospheric turbulence. “MEMS was the only technology that could give us thousands of actuators and meet our space and power requirements,” she says. “Given the number of actuators, we had to design the system to measure all aberrations at the same resolution.” This precision in controlling the mirrors is accomplished by a wavefront sensor that breaks incoming light into smaller subregions, similar to the receptors on a fly’s compound eye.
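
    The compound-eye description matches a lenslet-style wavefront sensor: each subregion forms a spot, and the spot's displacement from its reference position gives the average local tilt of the wavefront. The sketch below shows that centroid-to-slope step on a synthetic spot; the pixel size, focal length, and spot model are illustrative assumptions, not GPI's sensor parameters.

    ```python
    import numpy as np

    def centroid(image):
        """Intensity-weighted centroid of one subaperture image (row, col)."""
        total = image.sum()
        rows, cols = np.indices(image.shape)
        return (rows * image).sum() / total, (cols * image).sum() / total

    def local_slopes(subap_image, reference_centroid, focal_length, pixel_size):
        """Spot displacement divided by focal length gives the mean tilt (radians)."""
        cy, cx = centroid(subap_image)
        ry, rx = reference_centroid
        return ((cy - ry) * pixel_size / focal_length,
                (cx - rx) * pixel_size / focal_length)

    # Synthetic spot displaced by ~0.5 pixel in x relative to the reference.
    y, x = np.mgrid[0:16, 0:16]
    spot = np.exp(-(((y - 7.5) ** 2) + ((x - 8.0) ** 2)) / 4.0)
    print(local_slopes(spot, (7.5, 7.5), focal_length=5e-3, pixel_size=10e-6))
    ```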

    A major challenge to the increased number of actuators is that existing algorithms required far too much computation to adjust the mirrors as quickly as needed. In response, Poyneer developed a new algorithm that requires 45 times less computation. “GPI must continually perform all of its calculations within 1 millisecond,” says Palmer, who implemented the real-time software that achieves this goal. Remarkably, the system of algorithms is self-optimized. That is, says Poyneer, “A loop monitors how the operations are going and adjusts the control system every 8 seconds. If the atmospheric turbulence gets stronger, the system control will become more aggressive to give the best performance possible.”
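
    One simple way to picture the self-optimizing outer loop Poyneer describes is a slow supervisor that inspects the fast loop's residual every few seconds and nudges the controller gain up in strong turbulence and down in calm conditions. The update rule, thresholds, and limits below are illustrative assumptions; GPI's actual optimizer is considerably more sophisticated.

    ```python
    GAIN_MIN, GAIN_MAX = 0.1, 0.8

    def update_gain(current_gain, residual_rms_nm, target_rms_nm=90.0):
        """Raise the gain when turbulence leaves a large residual; lower it when
        the loop is already quiet, to avoid amplifying measurement noise."""
        if residual_rms_nm > 1.2 * target_rms_nm:
            current_gain *= 1.1    # stronger turbulence: be more aggressive
        elif residual_rms_nm < 0.8 * target_rms_nm:
            current_gain *= 0.95   # already quiet: back off
        return min(max(current_gain, GAIN_MIN), GAIN_MAX)

    gain = 0.4
    for rms in (150.0, 140.0, 95.0, 60.0):   # one sample every ~8 seconds
        gain = update_gain(gain, rms)
        print(f"residual {rms:5.1f} nm -> gain {gain:.2f}")
    ```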

    The mirrors forward the corrected light to a coronagraph, which blocks out much of the light from the parent star being observed, revealing the vastly fainter planets orbiting that star. Relay optics then reform the light onto a lenslet array, and a prism disperses the light into thousands of tiny spectra. The resulting pattern is transferred to a high-speed detector, and a few minutes of postprocessing removes the last remaining noise, or speckles.

    A 2.56-centimeter-square deformable mirror called a tweeter is used for fine-scale correction of the atmosphere. This microelectromechanical systems– (MEMS-) based device has 4,096 actuators and is made of etched silicon, similar to the material used for microchips. (Courtesy of Boston Micromachines.)

    First Light November 2013

    Researchers conducted the first observations with GPI in November 2013, when they trained the Gemini South telescope on two known planetary systems: the four-planet HR8799 system (codiscovered in 2008 by a Livermore-led team at the Gemini and Keck observatories) and the one-planet Beta Pictoris system. A highlight from the November observations was GPI recording the first-ever spectrum of the young planet Beta Pictoris b, which is visible as a small but distinct dot.

    This composite image represents the close environment of Beta Pictoris as seen in near infrared light. This very faint environment is revealed after a very careful subtraction of the much brighter stellar halo. The outer part of the image shows the reflected light on the dust disc, as observed in 1996 with the ADONIS instrument on ESO’s 3.6 m telescope; the inner part is the innermost part of the system, as seen at 3.6 microns with NACO on the Very Large Telescope. The newly detected source is more than 1000 times fainter than Beta Pictoris, aligned with the disc, at a projected distance of 8 times the Earth-Sun distance. Both parts of the image were obtained on ESO telescopes equipped with adaptive optics.
    Date: 21 November 2008. Source: ESO, http://www.eso.org

    ESO 3.6-meter telescope and HARPS at La Silla

    ESO Very Large Telescope at Cerro Paranal

    W. M. Keck Observatory

    Using the instrument’s polarization mode, the first-light team also detected starlight scattered by tiny particles and studied a faint ring of dust orbiting the young star HR4796A. The team released the images at the January 2014 meeting of the American Astronomical Society. “The first images were a factor of 10 better than those taken with the previous generation of instruments,” says Macintosh. “We could see a planet in the raw image, which was pretty amazing. In one minute, we found planets that used to take us an hour to detect.”

    Data from the first-light observations are allowing researchers to refine estimates of the orbit and size of Beta Pictoris b. To analyze the exoplanet, the Livermore team and their international collaborators looked at the two disks of dense gas and debris surrounding the parent star. They found that the planet is not aligned with the main debris disk but instead with an inner warped disk, with which it may interact. “If Beta Pictoris b is warping the disk, that helps us see how the planet-forming disk in our own solar system might have evolved long ago,” says Poyneer.

    Since first light, the Livermore adaptive optics team has been working to improve GPI’s performance by minimizing vibration caused by the coolers that chill the spectrometer to a very low temperature. Vibrations decrease the stability of the parent star on the coronagraph and inject a significant focusing error into the system as the telescope optics shake. In response, the team developed algorithms that effectively cancel the errors in a manner similar to noise-canceling headphones. The filters have reduced pointing vibrations to a mere one-thousandth of an arcsecond and decreased the focusing error by 30 times, from 90 to 3 nanometers.
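
    The noise-canceling-headphones analogy can be made concrete with a narrowband adaptive canceller: estimate the sine and cosine components of a disturbance at an assumed cooler frequency and subtract that estimate from each measurement. The sketch below applies a textbook LMS update to synthetic data; the frequency, loop rate, and filter form are assumptions for illustration, not the GPI team's actual filter design.

    ```python
    import numpy as np

    FS = 1000.0   # loop rate, samples per second (assumed)
    F_VIB = 60.0  # assumed cooler vibration frequency, Hz
    MU = 0.01     # adaptation step size

    t = np.arange(5000) / FS
    disturbance = 50.0 * np.sin(2 * np.pi * F_VIB * t + 0.7)   # nm of focus error
    noise = np.random.default_rng(1).normal(scale=2.0, size=t.size)
    measured = disturbance + noise

    w = np.zeros(2)                      # weights for the sine/cosine reference pair
    residual = np.empty_like(measured)
    for k in range(t.size):
        ref = np.array([np.sin(2 * np.pi * F_VIB * t[k]),
                        np.cos(2 * np.pi * F_VIB * t[k])])
        residual[k] = measured[k] - w @ ref
        w += 2 * MU * residual[k] * ref  # LMS update toward the disturbance

    print(f"RMS before: {measured.std():.1f} nm, after: {residual[-1000:].std():.1f} nm")
    ```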

    In November 2014, the GPI Exoplanet Survey—an international team that includes dozens of leading exoplanet scientists—will begin an 890-hour-long campaign to discover and characterize giant exoplanets orbiting 600 young stars. These planets are located between 5 and 50 astronomical units from their parent stars, or up to 50 times the distance of Earth from the Sun (nearly 150 million kilometers). The observing time is the largest amount allocated to one group at Gemini South and represents 10 to 15 percent of the time available for the next three years. In the meantime, GPI verification and commissioning efforts continue.

    (left) During its first observations, GPI captured this image within 60 seconds. It shows a planet orbiting the star Beta Pictoris, which is 63 light-years from Earth. (right) A series of 30 images was later combined to enhance the signal-to-noise ratio and remove spectral artifacts. The four spots equidistant from the star are fiducials, or reference points. (Image processing by Christian Marois, NRC Canada.)

    GPI also records data using polarization differential imaging to more clearly capture scattered light. Images of the young star HR4796A revealed a narrow ring around the star, which could be dust from asteroids or comets left behind by planet formation. The left image shows normal light scattered by Earth’s turbulent atmosphere, including both the dust ring and the residual light from the central star. The right image shows only polarized light taken with GPI. (Image processing by Marshall Perrin, Space Telescope Science Institute.)

    The Livermore adaptive optics team has improved GPI’s performance by minimizing vibration caused by the coolers that chill the spectrometer. Vibrations inject a large focusing error into the system as the telescope optics shake. The team developed filters that reduced the focusing error by 30 times—from 90 nanometers to 3.

    Adaptive Control of X-Ray Beams

    Building on the adaptive optics expertise gained with GPI, the Laboratory has launched an effort, led by Poyneer, to design, fabricate, and test x-ray deformable mirrors equipped with adaptive optics. “We took some of the best adaptive optics people in the world and put them with our experts in x-ray mirrors,” says physicist Michael Pivovaroff, who initiated the program.

    Livermore researchers previously applied their expertise in x-ray optics to design and fabricate the six advanced mirrors for the Linac Coherent Light Source (LCLS) at the SLAC National Accelerator Laboratory in Menlo Park, California. These mirrors transport the LCLS x-ray beam and control its size and direction. The brightest x-ray source in the world, LCLS can capture stop-action shots of moving molecules with a “shutter speed” measured in femtoseconds, or million-billionths of a second. With a wavelength about the size of an atom, it can image objects as small as the DNA helix. (See S&TR, January/February 2011, Groundbreaking Science with the World’s Brightest X Rays.)

    SLAC LCLS

    Despite the outstanding performance of current x-ray mirrors, further advances in their quality are required to take full advantage of the capabilities of LCLS and newer facilities, such as the Department of Energy’s (DOE’s) National Synchrotron Light Source II at Brookhaven National Laboratory and those under construction in Europe. “DOE is investing billions of dollars building x-ray light sources such as synchrotrons and x-ray lasers,” says Pivovaroff. “Scientists working with those systems need certain spatial and spectral characteristics for their experiments, but every x-ray optic distorts the photons in some way. We don’t want our mirrors to get in the way of the science.”

    Brookhaven National Laboratory NSLS-II

    Combining adaptive optics with x-ray mirrors may lead to three significant benefits. First, active control is a potentially inexpensive way to achieve better surface flatness than is possible by polishing the mirrors alone. Second, the ability to change a mirror’s flatness allows for real-time correction of aberrations in an x-ray beamline. This capability includes self-correction of errors in the mirror itself (such as those caused by heat buildup) and correction of errors introduced by other optics. Finally, adaptive optics–corrected x-ray mirrors could widen the possible attributes of x-ray beams, leading to new kinds of experiments.

    Unlike mirrors used at visible and near-infrared wavelengths, x-ray mirrors must operate at a shallow angle called a grazing incidence. This requirement makes their design and profile quite different from deformable mirrors for astronomy. Traditional x-ray optics are rigid and have a longitudinal, or ribbon, profile up to 1 meter long. If adaptive optics systems can be designed to correct distortions in x-ray beams, next-generation research facilities could offer greater experimental flexibility and achieve close to their theoretical performance.

    “As with visible and infrared light, we want to manipulate the x-ray wavefront with mirrors while preserving coherence,” says Livermore optical engineer Tom McCarville, who was lead engineer for the LCLS x-ray mirrors. “The fabrication tolerances are much greater because x-ray wavelengths are so short. Technologies for diffracting and transmitting x rays are relatively limited compared to those available for visible light. Reflective x-ray technology is, however, mature enough to deploy for transporting x rays from source to experiment. Dynamically controlling the mirror’s surface figure will preserve the x-ray source’s properties during transport and thus enhance the precision of experimental results.”

    Extremely small adjustments to the surface height on the x-ray deformable mirror correct the incoming beam, as depicted in this artist’s rendering (not to scale). Unlike visible light, the x rays can only be reflected off the mirror at a very shallow incoming angle, called a grazing incidence. (Rendering by Kwei-Yu Chu.)

    First X-Ray Deformable Mirror

    With funding from the Laboratory Directed Research and Development (LDRD) Program, the Livermore team designed and built the first grazing-incidence adaptive optics x-ray mirror with demonstrated performance suitable for use at high-intensity DOE light sources. This x-ray deformable mirror, developed with partner Northrop-Grumman AOA Xinetics, was made from a superpolished single-crystal silicon bar measuring 45 centimeters long, 30 millimeters high, and 40 millimeters wide, the same dimensions as the three hard x-ray mirrors built for LCLS.

    A single row of 45 actuators bonded opposite the reflecting surface makes the mirror deformable. These 1-centimeter-wide actuators provide fine-scale control of the mirror’s surface figure (overall shape). Actuators respond to voltage changes by expanding or contracting in width along the mirror’s long axis to bend the reflecting surface. Seven internal temperature sensors and 45 strain gauges monitor the silicon bar, providing a method to self-correct for long-term drifts in the surface figure.

    As with all x-ray optics, the quality of the mirror’s surface is extremely important because the slightest bump or imperfection will scatter x rays. The substrate was thus fabricated and superpolished to nanometer-scale precision before assembly into a deformable mirror. The initial surface figure error for the deformable mirror was 19 nanometers. Although extremely small, this error is substantially above the 1-nanometer level required for best performance in an x-ray beamline.

    To meet that requirement, the team used high-precision visible light measurements of the mirror’s surface to “flatten” the mirror. With this approach, interferometer measurements are processed with specialized control algorithms. Specific voltages are then applied to the actuators to adjust the mirror’s surface. The resulting figure error was only 0.7 nanometers. “We demonstrated the first subnanometer active flattening of a substrate longer than 15 centimeters,” says Poyneer. “It was a very important step in validating our technological approach.”
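
    A sketch of the flattening step just described: characterize how much each actuator bends the surface (an influence matrix), measure the figure error with the interferometer, and solve a least-squares problem for the voltages that best cancel it. The influence matrix and surface map below are synthetic stand-ins, not the mirror's real calibration data or the team's actual algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N_POINTS, N_ACT = 500, 45            # interferometer samples, actuators

    # influence[i, j]: surface change at point i per volt on actuator j (nm/V)
    influence = rng.normal(scale=1.0, size=(N_POINTS, N_ACT))
    true_voltages = rng.normal(scale=0.5, size=N_ACT)
    surface_error = influence @ true_voltages + rng.normal(scale=0.3, size=N_POINTS)

    # Least-squares voltages that best cancel the measured figure error.
    voltages, *_ = np.linalg.lstsq(influence, -surface_error, rcond=None)
    residual = surface_error + influence @ voltages

    print(f"RMS figure error before: {surface_error.std():.2f} nm, "
          f"after flattening: {residual.std():.2f} nm")
    ```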

    For deformable mirrors to be fully effective, scientists must develop better methods to analyze the x-ray beamline. “We need a sensor that won’t distort the beam,” says Pivovaroff. Such a sensor would provide a feedback loop that continuously feeds beam characteristics to the mirror actuators so they compensate for inconsistencies in the beam. Poyneer is working on new diagnostic techniques at Lawrence Berkeley National Laboratory’s Advanced Light Source (ALS), and the Livermore team is scheduled to begin testing the mirror on a beamline at ALS. The long-term goal of that testing will be to repeat the subnanometer flattening experiment, this time using x rays to measure the surface.

    Poyneer is hopeful the adaptive optics research effort will eventually result in a national capability that DOE next-generation x-ray light sources can draw on for new beamlines. She has shared the results with scientists at several DOE high-energy research centers and is working to better understand the needs of beamline engineers and the scientists who use those systems. “There’s a lot of interest and excitement in the community because deformable mirrors let us do better science,” says Pivovaroff. “The performance of our mirror has surprised many people. Controlling the surface of a half-meter-long optic to less than a nanometer is quite an accomplishment.”

    By enabling delivery of more coherent and better-focused x rays, the mirrors are expected to produce sharper images, which could lead to advances in physics, chemistry, and biology. The technology may enable new types of x-ray diagnostics for experiments at the National Ignition Facility.

    In an experiment, high-precision visible light measurements were used to flatten the x-ray deformable mirror to a surface figure error of only 0.7 nanometers average deviation.

    This artist’s concept illustrates the difference in reconstruction quality that adaptive optics could provide if installed at next-generation x-ray beamline facilities. At the top, a partially coherent x-ray beam hits the target object, producing a diffraction pattern on the detector and limiting the accuracy of the recovered image. At the bottom, adaptive optics provide a coherent beam with excellent wavefront quality, which improves resolution of the object. (Rendering by Kwei-Yu Chu.)

    Expanded Educational Outreach

    The Laboratory’s adaptive optics team is also dedicated to training the next generation of scientists and engineers for careers in adaptive optics and is working to disseminate expertise in adaptive optics technology to academia and industry. In a joint project between Lawrence Livermore National Security (the managing contractor for Lawrence Livermore) and UC, two graduate students from the UC Santa Cruz Department of Astronomy and Astrophysics are testing advanced algorithms that could further improve the performance of systems such as GPI. The algorithms are designed to predict wind-blown turbulence and further negate the effects of the atmosphere. Poyneer and astronomer Mark Ammons are mentoring the students, Alex Rudy and Sri Srinath.

    Poyneer says, “GPI has demonstrated how continued work on technology developments can lead to significantly improved instrument performance.” According to Ammons, “An important frontier in astronomy is pushing adaptive optics operation to visible wavelengths, which requires better control. GPI routinely meets these stringent performance requirements.”

    The lessons learned as part of the GPI experience will be critical input for next-generation adaptive optics on large telescopes, such as the W. M. Keck telescopes in Hawaii. Ammons adds, “While adaptive optics were first developed for military purposes, the loop has now closed—the advances made with GPI offer a wide range of potential applications for national security.”

    In addition, the Livermore team is applying its expertise to other fields, as exemplified by progress in the extremely flat x-ray deformable mirror. Thanks to adaptive optics, the universe—from planets to x rays—is coming into greater focus.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     
  • richardmitnick 1:32 pm on September 25, 2014
    Tags: Lawrence Livermore National Laboratory

    From LLNL: “From RAGS to riches” 


    Lawrence Livermore National Laboratory

    09/25/2014
    Breanna Bishop, LLNL, (925) 423-9802, bishop33@llnl.gov

    The Radiochemical Analysis of Gaseous Samples (RAGS) diagnostic is a true trash-to-treasure story, turning debris from the National Ignition Facility’s (NIF) target chamber into valuable data that helps to shape future experiments.

    National Ignition Facility at LLNL

    The RAGS diagnostic, developed for NIF by Sandia National Laboratories and commissioned in 2012, is a cryogenic system designed to collect the gaseous debris from the NIF target chamber after a laser shot, then concentrate, purify and analyze the debris for radioactive gas products. Radiation detectors on the apparatus produce rapid, real-time measurements of the radioactivity content of the gas. Based on the results of the counting, the total number of radioactive atoms that were produced via nuclear reactions during a NIF shot can be determined.
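
    The counting arithmetic implied above reduces to standard decay bookkeeping: convert the measured count rate to an activity, divide by the decay constant to get the number of atoms present, and correct back to shot time. The isotope, detector efficiency, and numbers in this sketch are hypothetical placeholders, not RAGS data.

    ```python
    import math

    def atoms_at_shot_time(count_rate_cps, detector_efficiency,
                           half_life_s, time_since_shot_s):
        decay_constant = math.log(2) / half_life_s        # lambda, 1/s
        activity = count_rate_cps / detector_efficiency   # decays per second
        atoms_now = activity / decay_constant             # N = A / lambda
        return atoms_now * math.exp(decay_constant * time_since_shot_s)

    # Hypothetical example: 1,200 counts/s at 25% efficiency for an isotope
    # with an 11-minute half-life, measured 30 minutes after the shot.
    print(f"{atoms_at_shot_time(1200, 0.25, 11 * 60, 30 * 60):.3e} atoms produced")
    ```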

    Members of the RAGS team with the apparatus. From left to right: Bill Cassata and Carol Velsko, primary RAGS operators and data analysts; Wolfgang Stoeffl, RAGS designer; and Dawn Shaughnessy, principal investigator for the project. Photo by Julie Russell/LLNL

    If the number of target atoms in the fuel capsule and/or hohlraum (the cylinder surrounding the fuel capsule) is known prior to the shot, then the results from RAGS determine the number of reactions that occurred, which in turn is used to determine the flux of particles produced by the capsule as it underwent fusion. This information is used to validate models of NIF capsule performance under certain shot conditions.

    If specific materials are added to the capsule or hohlraum prior to the shot, then reactions related to a particular experiment can be measured. For instance, there have been gas-based experiments designed to measure areal density (a measure of the combined thickness and density of the imploding frozen fuel shell) and mix (a potentially undesirable condition during which spikes of the plastic rocket shell penetrate to the core of the hot fuel and cool it, decreasing the probability of igniting a sustained fusion reaction with energy gain).

    “Radiochemical diagnostics probe reactions that occur within the capsule or hohlraum material. By adding materials into the capsule ablator and subsequently measuring the resulting products, we can explore certain capsule parameters such as fuel-ablator mix,” said radiochemist and RAGS principal investigator Dawn Shaughnessy. “There are plans to add isotopes of xenon gas into capsules specifically for this purpose – to quantify the amount of mix that occurs during a NIF implosion.”

    RAGS can also be used to perform basic nuclear science experiments. Recently, the diagnostic has been employed during shots in which the hohlraum contained small amounts of depleted uranium. Gaseous fission fragments were collected by RAGS, including very short-lived species with half-lives on the order of a few seconds. Based on these observations, there are plans to use RAGS in the future to measure independent fission product yields of gaseous species, which is difficult to do at traditional neutron sources.

    “This opens up the possibility of also using RAGS for fundamental science experiments, such as measuring reaction rates of species relevant to nuclear astrophysics, and measuring independent fission yields,” Shaughnessy said.

    In addition to Shaughnessy as PI, other contributors to RAGS include: Tony Golod and Jay Rouse, who wrote the NIF control software that collects data and operates the diagnostic; Wolfgang Stoeffl, who designed the RAGS apparatus, and Allen Riddle, who built it; Don Jedlovec, who serves as the responsible system engineer; and Carol Velsko and Bill Cassata, who are the primary RAGS operators and data analysts.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     
  • richardmitnick 12:57 pm on September 23, 2014
    Tags: Lawrence Livermore National Laboratory

    From LLNL: “Right on target” 


    Lawrence Livermore National Laboratory

    09/23/2014
    Breanna Bishop, LLNL, (925) 423-9802, bishop33@llnl.gov

    Dozens of employees gathered on Friday to celebrate two important milestones achieved by the NIF & Photon Science (NIF&PS) Directorate’s and Weapons and Complex Integration (WCI) Directorate’s Target Fabrication team.

    Target Fabrication Manager Alex Hamza welcomed the crowd and kicked off the celebration by announcing the first of the milestones: This summer, the target fabrication team built the 10,000th target for the Omega Laser Facility at the University of Rochester’s Laboratory for Laser Energetics (LLE).

    Target Fabrication Manager Alex Hamza welcomed the crowd and kicked off the celebration by announcing two important milestones.
    Photo by Julie Russell/LLNL

    “When I think about NIF, the two things that always come to my mind is the incredible engineering that happens on a large scale and on a small scale. Of course, today we are celebrating the small scale with targets. But the other thing is the partnerships,” said Jeff Wisoff, principal associate director for NIF&PS.

    “The Omega facility has been an incredible partner and LLE is incredibly important to us,” he added. “Omega has provided the testing ground for a lot of things we wanted to do on NIF. It’s a very enabling capability, and our success in building targets for that facility is part of that great partnership.”

    The second milestone announced by Hamza was the completion of the 500th cryogenic target for NIF. This is an important achievement because these millimeter-sized targets are complicated engineering marvels in tiny packages — so complicated that targets were produced at a rate of one per year when the capability first got off the ground in 2005.

    National Ignition Facility

    A beryllium capsule in the hohlraum of a keyhole shock-timing target. Keyhole experiments measure the strength (velocity) and timing of the shock waves from the laser pulse as they transit the capsule. The viewing cone for the Velocity Interferometer System for Any Reflector (VISAR) diagnostic, shown mounted above the hohlraum, is inserted through the side of the hohlraum wall and into the capsule.
    Photo by Jason Laurea

    Design and fabrication are so complex because of the precision required to perform under the extreme conditions experienced during an experiment on NIF: temperatures of 180 million degrees Fahrenheit and pressures of 100 billion atmospheres. Components must be machined to within an accuracy of 1 micron (1 millionth of a meter), and many material structures and features can be no larger than 100 nanometers, which is just 1/1,000th the width of a human hair.

    The capsule must have a smoothness tolerance approaching 1 nanometer, 1/100,000th the thickness of a human hair. Because surface debris can interfere with the uniformity of capsule heating and compression, dust particles greater than 5 microns in diameter on the capsule wall must be eliminated. Finally, the target temperature is held in the range of 18 to 20 kelvins (-427 to -424 degrees Fahrenheit) just before the laser shot so that an incredibly smooth and uniform solid hydrogen fuel layer can be formed.

    Many different target designs exist — for stockpile stewardship purposes, high-energy-density science and more. Today, the target fabrication team has the capability to produce 5-6 cryogenic targets per week and to produce more than 100 different types of targets each year.

    “This is an incredible success story that has been so enabling in moving the science forward,” Wisoff said. “We couldn’t have done this without the incredible partnerships we’ve had with General Atomics and Schafer over the years. That partnership, and the partnerships that extend through our organization, is really what enables us to be successful.”

    Des Pilkington, leader of WCI’s AX Division, was on hand to speak to one of those partnerships.

    “We really push ourselves hard in WCI to think about what the right experiments are and what we want to do to test our ability to demonstrate that we have a predictive capability,” he said. “In Target Fabrication, the response to some of these pushing needs from WCI has been incredible. I’d like to say thank you for everything that you’ve done, and everything you’ve delivered for us so far. I’m really looking forward to an exciting future with exciting challenges for us and for you.”

    Abbas Nikroo, leader of General Atomics’ Target Fabrication Program, echoed those thanks. “It’s been a pleasure working with you guys — top quality people from the S&T team to the engineering team to the people on the floor who do the hard work of assembly,” he said. “I see the transition we’ve made to streamlined production, where we’ve gone from one target a year to 5-6 targets a week. It’s incredible, and is really built on the great work that everyone has done at all levels.”

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     
  • richardmitnick 7:50 am on September 11, 2014
    Tags: Lawrence Livermore National Laboratory

    From LLNL: “New energy record set for multilayer-coated mirrors” 


    Lawrence Livermore National Laboratory

    09/11/2014
    Anne M Stark, LLNL, (925) 422-9799, stark8@llnl.gov

    Multilayer-coated mirrors, if used as focusing optics in the soft gamma-ray photon energy range, can enable and advance a range of scientific and technological applications that would benefit from the large improvements in sensitivity and resolution that true imaging provides.

    In a paper published in a recent online edition of Optics Express, LLNL postdoc Nicolai Brejnholt and colleagues from LLNL, the Technical University of Denmark and the European Synchrotron Radiation Facility demonstrate for the first time that very short-period multilayer coatings deposited on super-polished substrates operate efficiently as reflective optics above 0.6 MeV, nearly a factor of two higher than the previous record of 384 keV, set last year by this same group (Physical Review Letters 111, 027404, 2013).

    Regina Soufli, Marie-Anne Descalle, postdoc Nicolai Brejnholt (shown in photo) and LLNL colleagues and collaborators recently demonstrated that very short-period multilayer coatings deposited on super-polished substrates operate efficiently as reflective optics.

    Multilayer mirrors can be used for two broad classes of applications. First, they can be used in spectroscopy to enhance or suppress certain photon energies. The team is looking into how to use multilayers to examine spent nuclear fuel for non-proliferation missions.

    Second, multilayer mirrors can be used as focusing, imaging optics by applying multilayer coatings to curved substrates. “We have previously made hard X-ray optics for nuclear medicine and astrophysics applications, and we can now consider adapting the same fabrication techniques to work in the soft gamma-ray band,” said Michael Pivovaroff, LLNL co-author.

    The field of astrophysics would benefit the most from gamma-ray focusing optics, including the sub-disciplines of galactic and extragalactic astronomy, solar astronomy, cosmic-ray research and potentially observational cosmology. Gamma-ray optics also have shown promise for nuclear medicine and nuclear non-proliferation applications.

    “We have demonstrated the capability to make highly reflective multilayer thin films with ultra-short period thickness (1-2 nanometers) and stable, ultra-smooth interfaces between the layers, as needed for operation at these extremely high photon energies. We chose tungsten carbide/silicon carbide (WC/SiC) multilayers for this purpose,” said Regina Soufli, another LLNL co-author.
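
    A back-of-the-envelope way to see why such short periods are needed: the first-order Bragg condition ties the photon wavelength to the multilayer period and the grazing angle. The sketch below uses 0.6 MeV photons and a nominal 1.5-nanometer period (within the 1-2 nanometer range quoted above), ignores refraction corrections, and is an order-of-magnitude illustration rather than the paper's analysis.

    ```python
    import math

    HC_EV_NM = 1239.842   # Planck constant times speed of light, in eV*nm

    def bragg_grazing_angle_deg(photon_energy_ev, period_nm, order=1):
        """Bragg angle for the given diffraction order (default: first order)."""
        wavelength_nm = HC_EV_NM / photon_energy_ev
        return math.degrees(math.asin(order * wavelength_nm / (2 * period_nm)))

    angle = bragg_grazing_angle_deg(0.6e6, 1.5)
    print(f"~{angle:.3f} degrees ({angle * 3600:.0f} arcseconds) grazing incidence")
    ```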

    “The measurements at 0.65 MeV showed we had to understand sub-nanometer variations across the 36-square-inch mirror to model the measured performance,” Brejnholt said.

    The team demonstrated that multilayer mirrors in the gamma-ray band operate efficiently and according to well-understood models. The team combined classical, wave interference models with a Monte-Carlo particle simulation code. The latter was used to account for incoherent scattering, a phenomenon that is negligible at lower photon energies but becomes significant in the soft gamma ray range. Incoherent scattering was observed and modeled on multilayer structures for the first time by the LLNL team.

    Other Livermore co-authors include Marie-Anne Descalle, principal investigator of the Laboratory Directed Research and Development (LDRD) project that funded this effort, Mónica Fernández-Perea, Jennifer Alameda, Tom McCarville and Sherry Baker.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     
  • richardmitnick 3:41 pm on August 27, 2014
    Tags: ELI-Beamlines, Lawrence Livermore National Laboratory

    From Livermore Lab: “LLNL synchs up with ELI Beamlines on timing system” 


    Lawrence Livermore National Laboratory

    08/27/2014
    Breanna Bishop, LLNL, (925) 423-9802, bishop33@llnl.gov

    In 2013, Lawrence Livermore National Laboratory (LLNL), through Lawrence Livermore National Security LLC (LLNS), was awarded more than $45 million to develop and deliver a state-of-the-art laser system for the European Union’s Extreme Light Infrastructure Beamlines facility (ELI-Beamlines), under construction in the Czech Republic.

    Tomas Mazanec and Marc-Andre Drouin, from ELI Beamlines, work on synchronizing the HAPLS and ELI timing systems. Photo by Jim Pryatel.

    The ELI Beamlines facility is being built on a brownfield site with sufficient infrastructure. According to the current zoning plan, the area can be used for public amenities, science and research. It therefore provides enough space for the laser center as well as for other buildings of similar use (technology park buildings, spin-off companies or other research facilities).

    When commissioned to its full design performance, the laser system, called the “High repetition-rate Advanced Petawatt Laser System” (HAPLS), will be the world’s highest average power petawatt laser system.

    HAPLS

    Nearly a year into the project, much progress has been made, and all contract milestones to date have been delivered on schedule. Under the same agreement, ELI Beamlines delivers various work packages to LLNL enabling HAPLS control and timing systems to interface with the overarching ELI Beamlines facility control system. In a collaborative effort, researchers and engineers from LLNL’s NIF & Photon Science Directorate work with scientists from the ELI facility to develop, program and configure these systems.

    National Ignition Facility at Livermore

    According to Constantin Haefner, LLNL’s project manager for HAPLS, this joint work is vital. “Working closely together on these collaborative efforts allows us to deliver a laser system most consistent with ELI Beamlines facility requirements. It also allows the ELI-Beamlines team to gain early insight into the laser system architecture and gain operational experience,” he said.

    This summer, that process began. Marc-Andre Drouin and Karel Kasl, control system programmers for ELI, spent three months at LLNL working with the HAPLS integrated control system team. During their time at LLNL, they focused almost exclusively on the ELI-HAPLS timing interface, which allows exact synchronization of HAPLS to the ELI Beamlines master clock.

    “The HAPLS timing system must be able to operate independent of the ELI timing system,” Drouin said. “But, it also needs to be capable of being perfectly synchronized to ELI. That bridge between timing systems is what we have been working on – making sure HAPLS runs very well independently as well as integrating with ELI.”

    Haefner pointed out that while HAPLS is a major component, it becomes a subsystem when it moves to the ELI facility. Once at ELI, HAPLS will integrate with the wider user facility, consisting of target systems, experimental systems, diagnostic systems – all of which have to be timed and fed from a master clock.

    Kasl likened the master clock to a universal clock used by an office. “We brought the clock here, and now everyone in the office is using the clock to synchronize their work,” he said.

    The master clock, built by ELI, was programmed as a bridge between the ELI and HAPLS timing systems. During their time at LLNL, Drouin and Kasl worked on configuring that hardware and writing the software that talks to the clock and to the subcomponents that control a very precise sequence of events.

    Last week, the ELI team finished their three-month stint at LLNL, but will be back in early fall to continue work – and they’re looking forward to it.

    “This unit is going to get integrated with our other systems, so there needs to be an overlap between the two teams,” Kasl said.

    “It’s good experience for us to learn about the internal workings of the HAPLS system,” Drouin added. “Having this inside knowledge of the most integral parts of the laser is a very big advantage for us in the long run.”

    Earlier this year, Jack Naylon and Tomas Mazanec, also from ELI, visited LLNL to contribute to the work.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     
  • richardmitnick 8:41 am on August 20, 2014
    Tags: Lawrence Livermore National Laboratory

    From Livermore Lab: “Livermore researchers create engineered energy absorbing material” 


    Lawrence Livermore National Laboratory

    08/20/2014
    James A Bono, LLNL, (925) 422-9919, bono4@llnl.gov

    Materials like solid gels and porous foams are used for padding and cushioning, but each has its own advantages and limitations. Gels are effective as padding but are relatively heavy; their performance can also be affected by temperature, and they have a limited range of compression due to their lack of porosity. Foams are lighter and more compressible, but their performance is not consistent because the size, shape and placement of the voids (or air pockets) cannot be accurately controlled during the foam manufacturing process.

    To overcome these limitations, a team of engineers and scientists at Lawrence Livermore National Laboratory (LLNL) has used additive manufacturing, also known as 3D printing, to design and fabricate, at the microscale, new cushioning materials with a broad range of programmable properties and behaviors that exceed the limitations of the material’s composition.

    The research is the subject of a paper published in Advanced Functional Materials.

    Livermore researchers, led by engineer Eric Duoss and scientist Tom Wilson, focused on creating a micro-architected cushion using a silicone-based ink that cures to form a rubber-like material after printing. During the printing process, the ink is deposited as a series of horizontally aligned filaments (which can be as fine as a human hair) in a single layer. The second layer of filaments is then placed in the vertical direction. This process repeats itself until the desired height and pore structure are reached.

    LLNL researchers constructed cushions using two different configurations, one in an inline stacked configuration and the other in a staggered configuration (see figure). While both architectures were created out of the same constituent material and have the same degree of porosity, they each exhibited markedly different responses under compression and shear. The stacked architecture is stiffer in compression and, with increased compression, undergoes a buckling instability. The staggered architecture is softer in compression and undergoes more of a bending type of deformation. The stacked structure has solid columns of material beneath it to offer more support, while the staggered structure has voids under each filament that offer much less resistance to compression.
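
    To make the geometric difference concrete, the sketch below generates the (x, z) centers of the horizontally aligned filaments in the two architectures described above: printed directly on top of one another (stacked) or offset by half the filament spacing (staggered). The dimensions and layer counts are arbitrary illustration values, not the printed cushions' actual parameters.

    ```python
    FILAMENT_DIAMETER = 0.25   # mm, arbitrary illustration value
    SPACING = 1.0              # mm between filament centers within a layer

    def filament_centers(layout, n_layers=6, n_filaments=8):
        """(x, z) centers of the x-aligned filaments for 'stacked' or 'staggered'."""
        centers = []
        for layer in range(0, n_layers, 2):          # only the x-aligned layers
            offset = 0.0
            if layout == "staggered" and (layer // 2) % 2 == 1:
                offset = SPACING / 2                 # shift every other x-layer
            z = layer * FILAMENT_DIAMETER
            centers.extend((i * SPACING + offset, z) for i in range(n_filaments))
        return centers

    print(filament_centers("stacked")[:4])       # aligned columns of material
    print(filament_centers("staggered")[8:12])   # second x-layer sits over voids
    ```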

    A silicone cushion with programmable mechanical energy absorption properties was produced through a 3D printing process using a silicone-based ink by Lawrence Livermore National Laboratory researchers.

    With the help of LLNL engineer Todd Weisgraber, the team was able to model and predict the performance of each of the architectures under both compression and shear. This feat would be difficult or impossible to replicate with foams due to their random structure.

    “The ability to dial in a predetermined set of behaviors across a material at this resolution is unique, and it offers industry a level of customization that has not been seen before,” said Eric Duoss, research engineer and lead author.

    The researchers envision using their novel energy absorbing materials in many applications, including shoe and helmet inserts, protective materials for sensitive instrumentation and in aerospace applications to combat the effects of temperature fluctuations and vibration.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     
  • richardmitnick 10:00 pm on August 19, 2014
    Tags: Lawrence Livermore National Laboratory

    From Livermore Lab: “New project is the ACME of addressing climate change” 


    Lawrence Livermore National Laboratory

    08/19/2014
    Anne M Stark, LLNL, (925) 422-9799, stark8@llnl.gov

    High performance computing (HPC) will be used to develop and apply the most complete climate and Earth system model to address the most challenging and demanding climate change issues.

    Eight national laboratories, including Lawrence Livermore, are combining forces with the National Center for Atmospheric Research, four academic institutions and one private-sector company in the new effort. Other participating national laboratories include Argonne, Brookhaven, Lawrence Berkeley, Los Alamos, Oak Ridge, Pacific Northwest and Sandia.

    The project, called Accelerated Climate Modeling for Energy, or ACME, is designed to accelerate the development and application of fully coupled, state-of-the-science Earth system models for scientific and energy applications. The plan is to exploit advanced software and new high performance computing machines as they become available.

    The initial focus will be on three climate change science drivers and corresponding questions to be answered during the project’s initial phase:

    Water Cycle: How do the hydrological cycle and water resources interact with the climate system on local to global scales? How will more realistic portrayals of features important to the water cycle (resolution, clouds, aerosols, snowpack, river routing, land use) affect river flow and associated freshwater supplies at the watershed scale?
    Biogeochemistry: How do biogeochemical cycles interact with global climate change? How do carbon, nitrogen and phosphorus cycles regulate climate system feedbacks, and how sensitive are these feedbacks to model structural uncertainty?
    Cryosphere Systems: How do rapid changes in cryospheric systems, or areas of the earth where water exists as ice or snow, interact with the climate system? Could a dynamical instability in the Antarctic Ice Sheet be triggered within the next 40 years?

    Over a planned 10-year span, the project’s aim is to conduct simulations and modeling on the most sophisticated HPC machines as they become available, i.e., 100-plus petaflop machines and eventually exascale supercomputers. The team initially will use U.S. Department of Energy (DOE) Office of Science Leadership Computing Facilities at Oak Ridge and Argonne national laboratories.

    “The grand challenge simulations are not yet possible with current model and computing capabilities,” said David Bader, LLNL atmospheric scientist and chair of the ACME council. “But we developed a set of achievable experiments that make major advances toward answering the grand challenge questions using a modeling system, which we can construct to run on leading computing architectures over the next three years.”

    To address the water cycle, the project plan (link below) hypothesized that: 1) changes in river flow over the last 40 years have been dominated primarily by land management, water management and climate change associated with aerosol forcing; and 2) during the next 40 years, greenhouse gas (GHG) emissions in a business-as-usual scenario may drive changes to river flow.

    “A goal of ACME is to simulate the changes in the hydrological cycle, with a specific focus on precipitation and surface water in orographically complex regions such as the western United States and the headwaters of the Amazon,” the report states.

    To address biogeochemistry, ACME researchers will examine how more complete treatments of nutrient cycles affect carbon-climate system feedbacks, with a focus on tropical systems, and investigate the influence of alternative model structures for below-ground reaction networks on global-scale biogeochemistry-climate feedbacks.

    For cryosphere, the team will examine the near-term risks of initiating the dynamic instability and onset of the collapse of the Antarctic Ice Sheet due to rapid melting by warming waters adjacent to the ice sheet grounding lines.

    The experiment would be the first fully-coupled global simulation to include dynamic ice shelf-ocean interactions for addressing the potential instability associated with grounding line dynamics in marine ice sheets around Antarctica.

    Other LLNL researchers involved in the program leadership are atmospheric scientist Peter Caldwell (co-leader of the atmospheric model and coupled model task teams) and computer scientists Dean Williams (council member and workflow task team leader) and Renata McCoy (project engineer).

    Initial funding for the effort has been provided by DOE’s Office of Science.

    More information can be found in the Accelerated Climate Modeling For Energy: Project Strategy and Initial Implementation Plan.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     
  • richardmitnick 3:51 pm on August 13, 2014 Permalink | Reply
    Tags: , Lawrence Livermore National Laboratory,   

    From Livermore Lab: “It’s nanotubular: New material could be used for energy storage and conversion” 


    Lawrence Livermore National Laboratory

    08/13/2014
    Anne M Stark

    Lawrence Livermore researchers have made a material that is 10 times stronger and stiffer than traditional aerogels of the same density.

    This ultralow-density, ultrahigh surface area bulk material with an interconnected nanotubular makeup could be used in catalysis, energy storage and conversion, thermal insulation, shock energy absorption and high energy density physics.

    Ultralow-density porous bulk materials have recently attracted renewed interest due to many promising applications.

    Unlocking the full potential of these materials, however, requires realization of mechanically robust architectures with deterministic control over form, cell size, density and composition, which is difficult to achieve by traditional chemical synthesis methods, according to LLNL’s Monika Biener, lead author of a paper appearing on the cover of the July 23 issue of Advanced Materials.

    Lawrence Livermore National Laboratory researchers have made a material that is 10 times stronger and stiffer than traditional aerogels of the same density. The work is detailed in a featured story appearing on the cover of Advanced Materials.

    Biener and colleagues report on the synthesis of ultralow-density, ultrahigh surface area bulk materials with interconnected nanotubular morphology. The team achieved control over density (5 to 400 mg/cm³), pore size (30 nm to 4 µm) and composition by atomic layer deposition (ALD) using nanoporous gold as a tunable template.
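
    Those density figures translate directly into porosity. The short calculation below is a back-of-the-envelope sketch, assuming a hypothetical fully dense coating density of about 3 g/cm³ (similar to ALD alumina; the actual coating materials and their skeletal densities are not given here), simply to show how much of the volume is empty space across the reported range.

        SKELETAL_DENSITY = 3000.0  # mg/cm^3, assumed fully dense coating (ALD alumina-like)

        def porosity(bulk_density_mg_cm3):
            """Fraction of the sample volume that is empty space."""
            return 1.0 - bulk_density_mg_cm3 / SKELETAL_DENSITY

        for rho in (5, 50, 400):  # mg/cm^3, spanning the reported density range
            print(f"{rho:>3} mg/cm^3 -> {porosity(rho):.1%} porous")
        #   5 mg/cm^3 -> 99.8% porous
        # 400 mg/cm^3 -> 86.7% porous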

    “The materials are thermally stable and, by virtue of their narrow unimodal pore size distributions and their thin-walled, interconnected tubular architecture, about 10 times stronger and stiffer than traditional aerogels of the same density,” Biener said.

    The three-dimensional nanotubular network architecture developed by the team opens new opportunities in the fields of energy harvesting, catalysis, sensing and filtration by enabling mass transport through two independent pore systems separated by a nanometer-thick 3D membrane.

    Other Livermore authors include Jianchao Ye, Theodore Baumann, Y. Morris Wang, Swanee Shin, Juergen Biener and Alex Hamza.

    The paper, titled “Ultra-Strong and Low-Density Nanotubular Bulk Materials with Tunable Feature Size,” can be found on the Web.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     
  • richardmitnick 8:45 am on August 1, 2014 Permalink | Reply
    Tags: Lawrence Livermore National Laboratory,   

    From CNBC: “Inside Lawrence Livermore and the arms race for innovation” 

    CNBC logo

    7/31/14
    Heesun Wee. Additional reporting by Brad Quick.

    Consider a supercomputer so fast and powerful that it generates simulated models to better understand everything from irregular human heartbeats to earthquakes. Picture tiny brain implants that can restore sight and possibly memory. Or what about the world’s largest laser, with powerful beams zooming rocket-like across three football fields—research that could lead to future sources of clean energy?

    sequoia
    Sequoia at Livermore

    This is the world inside the Lawrence Livermore National Laboratory, a national security lab 50 miles east of San Francisco.

    Livermore Lab Campus
    Lawrence Livermore National Laboratory campus

    Jeff Wisoff in front of the world’s largest laser at the Lawrence Livermore National Laboratory. Heesun Wee | CNBC

    National labs have been around for decades and are commonly associated with nuclear weapons testing. But inside Livermore’s mile-square campus, some 6,000 employees hover over hundreds of projects that span multiple industries, including oil and gas, health care and transportation.

    Livermore, like other labs, often collaborates with private companies to create solutions such as more fuel-efficient, long-haul trucks, and more resilient airplane components. The lab secured $1.5 billion in funding from multiple sources last year—the majority from the government.

    But in recent years, companies have been ponying up more money. Private industry contributed about $40 million for research at Livermore in 2013. “That will continue to go up,” said Richard Rankin, director of the lab’s industrial partnerships office.

    Labs also are emphasizing they’re open to collaboration. And part of the courtship can be explained by the growing complexity of modern problems. Think cyber and chemical warfare, or securing future energy supplies as climate change barrels down, or treating and managing more American soldiers, returning injured without limbs.

    Just as major energy companies have worked together to drill ever deeper for offshore oil, leading government-funded labs and companies are realizing they can’t go it alone.

    As the world becomes a scarier place, competition also is growing for brain power to solve the most pressing problems. In Silicon Valley, for example, a top science degree means options—research labs of your choosing, maybe an Apple gig, maybe a founding role at a start-up.

    But globally, there’s also demand for talent and big ideas—an innovation arms race, if you will.

    Lawrence Livermore has the world’s third-fastest supercomputer with the help of IBM. But China now holds the number one slot. And while the Livermore Lab has the world’s largest laser, France, China and Russia are pursuing super lasers of their own.

    Don’t laugh at this “mine is bigger, better, faster” game. Initial breakthroughs in science and technology can lead to patent-related revenues, of course. But first-mover advantages can also help secure medicine such as a cancer treatment or an Ebola vaccine. And there are national security consequences to such information. Just recall the 2011 film “Contagion” and the loss of social order, as a coveted vaccine is administered. You can see how this stuff might play out.

    This push to innovate or embrace the “art of the possible,” as one scientist put it, is why websites track the supercomputer race, which China is winning at the moment. “We should be concerned about that,” said Frederick Streitz, director of the lab’s High Performance Computing Innovation Center.

    Added Streitz: “Ideas are power.”

    Inside the lab
    Instruments are viewed inside the target chamber at Lawrence Livermore lab’s National Ignition Facility. Tony Avelar | Bloomberg | Getty Images

    Livermore was founded in 1952, during the height of the Cold War, to tackle national security challenges through science, engineering and technology.

    It was a former naval base, and squat barracks remain on the property. Pilots in training were dunked into a swimming pool.

    The lab feels like a college campus or tech company. Cyclists take a break from research, likely pedaling past one of the many wineries in the Tri-Valley.

    Beyond the region, Livermore stands alongside other leading national labs, including Los Alamos in New Mexico and Oak Ridge in Tennessee.

    The groundwork for the government and private company collaboration was laid by passage of the Federal Technology Transfer Act in 1986. In industry circles, it’s widely referred to as “tech transfer.” And the shift is only intensifying.

    Government-funded U.S. science labs receive about $140 billion annually in taxpayer money. But even the most gee-whiz research is just that: research. Every federal dollar spent creating early-stage inventions in the lab requires $10 of private sector-funded development to generate a useful product.

    Plus, there’s no guaranteed return. Nailing a commercial solution or patent, after months or years of research, can be akin to winning the lottery. The stakes for successful research, meanwhile, are only getting higher.

    Beyond the growing intricacy of scientific problems, there’s a public perception that taxpayer-funded research should yield concrete results. This expectation emerged during the 1980s recession and has intensified in recent years, said Joe Allen, who helped create and pass the technology transfer legislation.

    “Virtually every government is saying that publicly funded research needs to be made into a practical benefit for its taxpayers,” said Allen, now president of Allen & Associates, based in Bethesda, Ohio. The firm specializes in managing public-private partnerships.

    Added Allen: “When taxpayers fund cutting-edge research, they expect more than a white paper. They want to see a product like a new treatment for disease.”

    The lab collaborates with companies ranging from big tech firms like Intel and Hewlett-Packard to smaller start-ups. And successful public-private relationships naturally require work.

    But culture among companies and government-funded labs can vary. Joint efforts mean altering workflows. “It’s hard to change behavior,” Livermore’s Streitz said.

    Scientists are creating tiny implantable devices, capable of restoring sight and possibly memory. Heesun Wee | CNBC

    But challenging, high-risk work can yield potentially big rewards.

    Lab work includes brain-focused research to treat soldiers and other patients for illnesses and injuries such as traumatic brain injury.

    Development of a neural device and bionic eye, or retinal prosthesis, largely have been government funded. The retinal implant received more than $75 million over 10 years. The project was conducted under a Cooperative Research and Development Agreement with private sector company Second Sight in Sylmar, California, and included researchers from several national laboratories.

    Several neural prosthesis projects have received some $8.1 million in federal funding.

    ‘Grand challenges’

    Also housed at Livermore is the National Ignition Facility or NIF—the world’s largest laser. It was built for $3.5 billion, and costs around $330 million annually to operate, including related programs.

    Livermore NIF
    NIF at Livermore

    The facility has many roles, ranging from national security to advancing energy security.

    NIF scientists support nuclear weapons maintenance without underground testing—which has been abandoned. Researchers can instead duplicate the phenomena that occur inside a nuclear device to manage weapons stockpiles.

    Experiments at NIF also are laying the groundwork to generate clean energy. The idea is to use lasers to ignite fusion fuel.

    If all that doesn’t grab you, NIF was used as the set for the “warp core” scene in the 2013 film “Star Trek Into Darkness.”

    “The government can pursue grand challenges that are difficult for private companies to do,” said Jeff Wisoff, NIF’s principal associate director.

    See the full article here.



     
  • richardmitnick 4:28 pm on July 17, 2014 Permalink | Reply
    Tags: , , Lawrence Livermore National Laboratory, ,   

    From Livermore Lab: “Peering into giant planets from in and out of this world” 


    Lawrence Livermore National Laboratory

    07/17/2014
    Anne M Stark, LLNL, (925) 422-9799, stark8@llnl.gov

    Lawrence Livermore scientists for the first time have experimentally re-created the conditions that exist deep inside giant planets, such as Jupiter, Uranus and many of the planets recently discovered outside our solar system.

    The interior of the target chamber at the National Ignition Facility at Lawrence Livermore National Laboratory. The object entering from the left is the target positioner, on which a millimeter-scale target is mounted. Researchers recently used NIF to study the interior state of giant planets. Image by Damien Jemison/LLNL

    Researchers can now re-create and accurately measure material properties that control how these planets evolve over time, information essential for understanding how these massive objects form. This study focused on carbon, the fourth most abundant element in the cosmos (after hydrogen, helium and oxygen), which has an important role in many types of planets within and outside our solar system. The research appears in the July 17 edition of the journal Nature.

    Using the largest laser in the world, the National Ignition Facility at Lawrence Livermore National Laboratory, teams from the Laboratory, the University of California, Berkeley and Princeton University squeezed samples to 50 million times Earth’s atmospheric pressure, which is comparable to the pressures at the center of Jupiter and Saturn. Of NIF’s 192 laser beams, the team used 176, shaping their energy exquisitely as a function of time, to produce a pressure wave that compressed the material for a short period of time. The sample — diamond — is vaporized in less than 10 billionths of a second.
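
    For readers who prefer SI units, the quoted pressure converts as follows (a simple unit check, nothing more):

        ATM_IN_PASCALS = 101_325                 # one standard atmosphere in pascals
        pressure_pa = 50e6 * ATM_IN_PASCALS      # 50 million atmospheres, as quoted above
        print(f"{pressure_pa:.2e} Pa (~{pressure_pa / 1e12:.1f} TPa)")
        # 5.07e+12 Pa, i.e. roughly 5 terapascals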

    Though diamond is the least compressible material known, the researchers were able to compress it to an unprecedented density greater than lead at ambient conditions.

    “The experimental techniques developed here provide a new capability to experimentally reproduce pressure-temperature conditions deep in planetary interiors,” said Ray Smith, LLNL physicist and lead author of the paper.

    Such pressures have been reached before, but only with shock waves that also create high temperatures — hundreds of thousands of degrees or more — that are not realistic for planetary interiors. The technical challenge was keeping temperatures low enough to be relevant to planets. The problem is similar to moving a plow slowly enough to push sand forward without building it up in height. This was accomplished by carefully tuning the rate at which the laser intensity changes with time.
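
    The schematic sketch below contrasts a slowly rising ramp with an abrupt step to the same peak pressure. The functional form, timescale and peak value are assumptions chosen only to illustrate the idea of shaping the drive in time; it is not an actual NIF pulse shape.

        import numpy as np

        t = np.linspace(0.0, 20.0, 201)        # time in nanoseconds (assumed timescale)
        peak = 5.0                             # peak drive pressure in TPa (~50 million atmospheres)

        ramp = peak * (t / t[-1]) ** 3         # gentle, accelerating ramp: compresses without a strong shock
        shock = np.where(t > 1.0, peak, 0.0)   # abrupt jump: a shock that also heats the sample

        i = int(np.argmin(np.abs(t - 5.0)))    # index closest to t = 5 ns
        print(f"ramp  at 5 ns: {ramp[i]:.2f} TPa")   # still far below peak pressure
        print(f"shock at 5 ns: {shock[i]:.2f} TPa")  # already at full pressure (and much hotter)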

    “This new ability to explore matter at atomic scale pressures, where extrapolations of earlier shock and static data become unreliable, provides new constraints for dense matter theories and planet evolution models,” said Rip Collins, another Lawrence Livermore physicist on the team.

    The data described in this work are among the first tests for predictions made in the early days of quantum mechanics, more than 80 years ago, which are routinely used to describe matter at the center of planets and stars. While the agreement between these new data and theory is good, important differences were discovered, suggesting potential hidden treasures in the properties of diamond compressed to such extremes. Future experiments on NIF are focused on further unlocking these mysteries.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     