Tagged: Lawrence Livermore National Laboratory

  • richardmitnick 10:58 am on December 12, 2014
    Tags: Lawrence Livermore National Laboratory

    From LLNL: “Colleges and national labs develop STEM core curriculum” 



    Dec. 11, 2014

    Kenneth K Ma
    ma28@llnl.gov
    925-423-7602

    The success of Lawrence Livermore National Laboratory’s (LLNL) Engineering Technology Program to educate veterans for technical careers has inspired a statewide push to create an educational core curriculum to prepare junior college students for technical jobs at California’s national labs.

    The core curriculum being designed by a consortium of community colleges, national labs and nonprofit educational institutes emphasizes a heavy focus on science, technology, engineering and math (STEM) courses to prepare women, minorities, veterans and other underserved populations for high-paying jobs as technologists.

    The consortium recently met at the Livermore Valley Open Campus’ High Performance Computing and Innovation Center to lay the groundwork for a common STEM educational standard that colleges can adopt, and to develop internships and other employment pipelines for national labs in the state. Those labs include LLNL, Lawrence Berkeley National Laboratory, NASA Ames Research Center and NASA’s Jet Propulsion Laboratory.

    Susan Brady Wells, manager of Workforce Development & Education at Lawrence Berkeley National Laboratory, discusses internships available at LBNL during the statewide STEM core curriculum meeting at the Livermore Valley Open Campus.

    “We believe it’s important to provide underserved communities with a strong STEM curriculum in college because there are so many opportunities for exciting and financially rewarding careers in this area,” said Beth McCormick, LLNL Engineering’s Recruiting and Diversity manager, who helped create the Engineering Technology Program and organized the statewide STEM core consortium meeting. “The national labs need a pipeline of talent to fill hundreds of technical positions in the next five years.”

    Implemented over the summer, the Engineering Technology Program is a 24-month academic program that provides veterans with a technical education at Las Positas College and hands-on training at Lawrence Livermore. Created by LLNL, Las Positas, the Alameda County Workforce Investment Board and the nonprofit Growth Sector, the program is designed to help veterans develop the skills and training for engineering technician careers and establishes a pipeline of qualified candidates for LLNL and other Bay Area employers such as NASA, Sandia and Lawrence Berkeley.

    The STEM consortium seeks to build upon the Engineering Technology Program and other pilot programs that focus on advanced math curricula and a cohort educational model, an approach that has been shown to retain more students in STEM programs.

    Community colleges see high STEM dropout rates because many of the underprivileged students enrolled in these classes were not adequately prepared in math and science at their high schools. To succeed in a high-tech economy, they will need to master these subjects.

    “We want to increase the number of people who are finding employment opportunities through STEM programs,” said John Colborn, director of the Aspen Institute’s Skills for America’s Future, who facilitated the STEM consortium’s meeting. “We want to know that employers who are hiring those people are finding them as good or better than the folks they are able to get through ordinary means. And that these pools of candidates are more diverse and able to contribute to the diverse workforce that the labs are looking for.”

    John Colborn, director of the Aspen Institute’s Skills for America’s Future, facilitates the statewide STEM core consortium’s meeting at the Livermore Valley Open Campus. Photos by Julie Russell/LLNL

    The consortium outlined a STEM educational standard consisting of math up to pre-calculus (12 to 15 units), basic computer programming (three to four units), tech English (three to four units) and engineering/career application (two units). The basic math classes include algebra, trigonometry and geometry, while the tech English course includes giving presentations using technical terminology. The engineering/career application course is an introduction to the different types of engineering. The STEM curriculum can lead to an associate’s degree or prepare students to transition to a four-year college. Along the way, students will have opportunities to apply for internships at national labs that can lead to full-time positions.

    McCormick said the next steps are to support and/or expand program implementation at the eight community colleges that attended the consortium meeting: Las Positas, Cañada, Ohlone, San Jose Evergreen, San Jose, Saddleback, Palomar and Santa Ana.

    Most schools will need to reorganize classes for the STEM program, including some class modification, and potentially create new classes based on employers’ needs in order to receive state and local funding, she said. Funds for internships are available for veterans and displaced workers through local workforce investment boards. Employers may be asked to commit to hiring the student interns.

    The statewide collaboration on STEM education, McCormick said, will allow the consortium to apply for educational grants such as those from the Helmsley Charitable Trust, which has more than $5 billion in assets and supports wide-ranging endeavors, including the advancement of education. The trust prefers donating to causes that have a far-reaching impact.

    The consortium plans to launch an extended pilot program in the fall of 2015 with the new STEM core curriculum.

    “If successful, the STEM core curriculum program will provide Lawrence Livermore and other California employers with technicians in biotech, optics, computer sciences and other areas,” McCormick said. “LLNL’s Engineering Directorate plans to hire 200 to 300 technicians in the next five years.”

    LLNL’s Computation Directorate, in collaboration with the Lab’s chief information officer and Information Communication Services, is planning to launch a similar core curriculum program in information technology and computer science at Las Positas College in 2015.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 5:25 pm on December 3, 2014
    Tags: Lawrence Livermore National Laboratory

    From NIF at LLNL: “Measuring NIF’s enormous shocks” 



    Nov. 21, 2014

    Breanna Bishop
    bishop33@llnl.gov
    925-423-9802


    NIF experiments generate enormous pressures—many millions of atmospheres—in a short time: just a few billionths of a second. When a pressure source of this type is applied to any material, the pressure wave in the material will quickly evolve into a shock front. One of NIF’s most versatile and frequently used diagnostics, the Velocity Interferometer System for Any Reflector (VISAR), measures these shocks, providing vital information for future experiment design and calibration.

    Target Diagnostics Operator Andrew Wong sets up the Velocity Interferometer System for Any Reflector (VISAR) optical diagnostic system for a shock timing shot.

    VISAR was invented in the 1970s by Sandia National Laboratories scientists to study the motion of samples driven by shocks and other kinds of dynamic pressure loading. It has since become a standard measurement tool in many areas where dynamic pressure loading is applied to materials.

    “It is a big challenge to figure out how to apply these enormous pressures without immediately forming a shock wave,” said Peter Celliers, responsible scientist for VISAR. “Instead of trying to avoid forming shocks, many NIF experiments use a sequence of increasing shocks as a convenient way of monitoring the performance of the target as the pressure drive is increased—for example, during a (target) capsule implosion.”

    Target Area Operator Mike Visaya aligns a VISAR transport mirror in preparation for an experiment.

    To measure these shocks, VISAR determines the speed of a moving object by measuring the Doppler shift in a reflected light beam. More specifically, it directs a temporally coherent laser beam at the object, collects the returned reflection, and sends it through a specially configured interferometer. The interferometer produces an interference pattern containing information about the Doppler shift.
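
    In an idealized velocity interferometer, the accumulated fringe shift is directly proportional to velocity: each fringe corresponds to a fixed velocity increment set by the probe wavelength and the interferometer delay. Below is a minimal sketch of that relation; the wavelength and delay are assumed, illustrative values, and the etalon dispersion and window-index corrections of a real VISAR analysis are ignored.

```python
# Minimal sketch: convert a VISAR fringe record to velocity.
# Idealized interferometer: velocity per fringe = wavelength / (2 * tau).
# WAVELENGTH and TAU are assumed, illustrative values.
import numpy as np

WAVELENGTH = 532e-9   # probe laser wavelength (m)
TAU = 100e-12         # interferometer delay (s)

def fringe_to_velocity(fringe_count):
    """Velocity (m/s) from an accumulated fringe shift."""
    vpf = WAVELENGTH / (2.0 * TAU)   # velocity per fringe, 2660 m/s here
    return vpf * np.asarray(fringe_count)

# Example: a reflector that has accumulated 3.5 fringes is moving at
# about 9.3 km/s under these assumptions.
print(fringe_to_velocity(3.5))
```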

    The Doppler shift provides information on how fast the reflecting part of the target is moving. In most cases the reflector is a shock front, which acts like a mirror moving through a transparent material (for example liquid deuterium, quartz or fused silica). In some cases the moving mirror is a physical surface on the back part of a package (called a free surface) that is accelerated during the experiment. In yet other scenarios, the moving mirror could be a reflecting interface embedded in the target behind a transparent window.

    After the light reflected from the target passes through the interferometers, it forms a fringe pattern. With the NIF VISAR design, this light is collected in the form of a two-dimensional image with an optical image relay system. The fringe pattern is superimposed on the image, then projected on the slit of a streak camera. Because the target image is spatially resolved across the slit of the streak camera, this type of VISAR is called a line-imaging VISAR. The spatial and temporal arrangement of the fringe pattern in the output streak record reveals how different parts of the target move during the experiment.

    There is a very close connection between the velocities of the moving parts of the target and the pressure driving the motion. If the velocity is measured accurately, a highly accurate picture of the driving pressure can be formed. This information is vital for understanding the details of target performance.

    “Our simulation models are not accurate enough to calculate the timing of the shocks that produces the best performance without some sort of calibration,” Celliers said. “But by monitoring the shocks with the VISAR, we have precise and detailed information that can be used to tune the laser pulse (the pressure drive) to achieve optimal target performance, and to calibrate the simulation codes.”

    Looking to the future, VISAR will see improvements to its streaked optical pyrometer (SOP), an instrument that can be used to infer the temperature of a hot object by measuring the heat radiated from the object in the form of visible light. The SOP is undergoing modifications to improve its imaging performance and to reduce background levels on the streak camera detectors. This will benefit future equation-of-state experiments where accurate thermal emission data is crucial. This upgrade will be complete in early 2015.
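
    The core relation behind an optical pyrometer is Planck’s law: spectral radiance measured at a known wavelength maps to a brightness temperature. Here is a minimal sketch of that inversion; the radiance and wavelength inputs are assumed values, and a real SOP analysis also folds in emissivity and absolute system calibration.

```python
# Minimal sketch: brightness temperature from measured spectral
# radiance at one wavelength, by inverting Planck's law.
import math

H = 6.626e-34    # Planck constant (J*s)
C = 2.998e8      # speed of light (m/s)
KB = 1.381e-23   # Boltzmann constant (J/K)

def brightness_temperature(radiance, wavelength):
    """Blackbody temperature (K) for `radiance` (W/m^2/sr/m)
    observed at `wavelength` (m)."""
    a = H * C / (wavelength * KB)
    b = 2.0 * H * C**2 / wavelength**5
    return a / math.log(1.0 + b / radiance)

# Example with assumed numbers: radiance of 1e12 W/m^2/sr/m at
# 650 nm corresponds to roughly 3,200 K.
print(brightness_temperature(1.0e12, 650e-9))
```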

    Physicists Dave Farley (left) and Peter Celliers and scientist Curtis Walter watch a live VISAR image as they monitor the deuterium fill of a keyhole capsule in the NIF Control Room during shock-timing experiments.

    Along with Celliers, the VISAR implementation team includes Stephen Azevedo, David Barker, Jeff Baron, Mark Bowers, Aaron Busby, Allen Casey, John Celeste, Hema Chandrasekaran, Kim Christensen, Philip Datte, Jon Eggert, Gene Frieders, Brad Golick, Robin Hibbard, Matthew Hutton, John Jackson, Dan Kalantar, Kenn Knittel, Kerry Krauter, Brandi Lechleiter, Tony Lee, Brendan Lyon, Brian MacGowan, Stacie Manuel, JoAnn Matone, Marius Millot, Jason Neumann, Ed Ng, Brian Pepmeier, Karl Pletcher, Lynn Seppala, Ray Smith, Zack Sober, Doug Speck, Bill Thompson, Gene Vergel de Dios, Abbie Warrick, Phil Watts, Eric Wen, Ziad Zeid and colleagues from National Security Technologies.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 5:07 pm on November 25, 2014
    Tags: Lawrence Livermore National Laboratory

    From LLNL: “Lawrence Livermore researchers develop efficient method to produce nanoporous metals” 



    Nov. 25, 2014

    Kenneth K Ma
    ma28@llnl.gov
    925-423-7602

    Nanoporous metals — foam-like materials that have some degree of air vacuum in their structure — have a wide range of applications because of their superior qualities.

    They possess a high surface area for better electron transfer, which can lead to improved performance of an electrode in an electric double-layer capacitor or battery. Nanoporous metals also offer an increased number of available sites for the adsorption of analytes, a highly desirable feature for sensors.

    Lawrence Livermore National Laboratory (LLNL) and the Swiss Federal Institute of Technology (ETH) researchers have developed a cost-effective and more efficient way to manufacture nanoporous metals over many scales, from nanoscale to macroscale, which is visible to the naked eye.

    The process begins with a four-inch silicon wafer. A metal coating is sputtered across the wafer. Gold, silver and aluminum were used for this research project. However, the manufacturing process is not limited to these metals.

    Next, a mixture of two polymers is added to the metal substrate to create patterns, a process known as diblock copolymer lithography (BCP). The pattern is transformed into a single polymer mask with nanometer-size features. Last, a technique known as anisotropic ion beam milling (IBM) is used to etch through the mask to make an array of holes, creating the nanoporous metal.

    During the fabrication process, the roughness of the metal is continuously examined to ensure that the finished product has good porosity, which is key to creating the unique properties that make nanoporous materials work. The rougher the metal is, the less evenly porous it becomes.

    “During fabrication, our team achieved 92 percent pore coverage with 99 percent uniformity over a 4-in silicon wafer, which means the metal was smooth and evenly porous,” said Tiziana Bond, an LLNL engineer who is a member of the joint research team.

    Tiziana Bond

    The team has defined a metric — based on a parametrized correlation between BCP pore coverage and metal surface roughness — by which the fabrication of nanoporous metals should be stopped when uneven porosity is the known outcome, saving processing time and costs.
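
    To picture how a stop/go rule of this kind can work, the sketch below computes pore coverage from a binarized inspection image and RMS roughness from a height map, then halts processing when roughness exceeds a limit. It is illustrative only: the threshold, units and synthetic data are placeholders, not the paper’s parametrized metric.

```python
# Illustrative sketch (not the paper's actual metric): estimate pore
# coverage and surface roughness, and stop fabrication early when
# roughness predicts uneven porosity. Threshold is a placeholder.
import numpy as np

def pore_coverage(binary_image):
    """Fraction of pixels classified as pore (True) in a 2D bool array."""
    return binary_image.mean()

def rms_roughness(height_map):
    """RMS deviation of a 2D height map (same units as the input)."""
    return np.sqrt(np.mean((height_map - height_map.mean()) ** 2))

def keep_processing(height_map, roughness_limit=2.0):
    """True while the surface is smooth enough to continue."""
    return rms_roughness(height_map) < roughness_limit

# Example with synthetic data: ~92 percent pore coverage, smooth surface.
rng = np.random.default_rng(0)
img = rng.random((512, 512)) < 0.92          # binarized SEM image stand-in
heights = rng.normal(0.0, 0.5, (512, 512))   # height map (nm), assumed scale
print(pore_coverage(img), keep_processing(heights))
```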

    “The real breakthrough is that we created a new technique to manufacture nanoporous metals that is cheap and can be done over many scales avoiding the lift-off technique to remove metals, with real-time quality control,” Bond said. “These metals open the application space to areas such as energy harvesting, sensing and electrochemical studies.”

    The lift-off technique is a method of patterning target materials on the surface of a substrate by using a sacrificial material. One of the biggest problems with this technique is that the metal layer cannot be peeled off uniformly (or at all) at the nanoscale.

    The research team’s findings were reported in an article titled “Manufacturing over many scales: High fidelity macroscale coverage of nanoporous metal arrays via lift-off-free nanofabrication.” It was the cover story in a recent issue of Advanced Materials Interfaces.


    Other applications of nanoporous metals include supporting the development of new metamaterials (engineered materials) for radiation-enhanced filtering and manipulation, including deep ultraviolet light. These applications are possible because nanoporous materials facilitate anomalous enhancement of transmitted (or reflected) light through the tunneling of surface plasmons, a feature widely usable by light-emitting devices, plasmonic lithography, refractive-index-based sensing and all-optical switching.

    The other team members include ETH researcher Ali Ozhan Altun and professor Hyung Gyu Park.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 4:46 pm on November 19, 2014
    Tags: Lawrence Livermore National Laboratory

    From LLNL: “Lawrence Livermore tops Graph 500”



    Nov. 19, 2014

    Don Johnston
    johnston19@llnl.gov
    925-784-3980

    Lawrence Livermore National Laboratory scientists’ search for new ways to solve large, complex national security problems has led to the top ranking on the Graph 500 list and to new techniques for solving large graph problems on small high performance computing (HPC) systems, all the way down to a single server.

    “To fulfill our missions in national security and basic science, we explore different ways to solve large, complex problems, most of which include the need to advance data analytics,” said Dona Crawford, associate director for Computation at Lawrence Livermore. “These Graph 500 achievements are a product of that work performed in collaboration with our industry partners. Furthermore, these innovations are likely to benefit the larger scientific computing community.”

    Photo from left: Robin Goldstone, Dona Crawford and Maya Gokhale with the Graph 500 certificate. Missing is Scott Futral.

    Lawrence Livermore’s Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system, achieved the world’s best performance on the Graph 500 data analytics benchmark, announced Tuesday at SC14. LLNL and IBM computer scientists attained the No. 1 ranking by completing the largest problem scale ever attempted — scale 41 — with a performance of 23.751 teraTEPS (trillions of traversed edges per second). The team employed a technique developed by IBM.

    LLNL Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system

    The Graph 500 offers performance metrics for data intensive computing or ‘big data,’ an area of growing importance to the high performance computing (HPC) community.

    In addition to achieving the top Graph 500 ranking, Lawrence Livermore computer scientists also have demonstrated scalable Graph 500 performance on small clusters and even a single node. To achieve these results, Livermore computational researchers have combined innovative research in graph algorithms and data-intensive runtime systems.

    Robin Goldstone, a member of LLNL’s HPC Advanced Technologies Office, said: “These are really exciting results that highlight our approach of leveraging HPC to solve challenging large-scale data science problems.”

    The results achieved demonstrate, at two different scales, the ability to solve very large graph problems on modest sized computing platforms by integrating flash storage into the memory hierarchy of these systems. Enabling technologies were provided through collaborations with Cray, Intel, Saratoga Speed and Mellanox.

    A scale-40 graph problem, containing 17.6 trillion edges, was solved on 300 nodes of LLNL’s Catalyst cluster. Catalyst, designed in partnership with Intel and Cray, augments a standard HPC architecture with additional capabilities targeted at data intensive computing. Each Catalyst compute node features 128 gigabytes (GB) of dynamic random access memory (DRAM) plus an additional 800 GB of high performance flash storage and uses the LLNL DI-MMAP runtime that integrates flash into the memory hierarchy. With the HavoqGT graph traversal framework, Catalyst was able to store and process the 217 TB scale-40 graph, a feat that is otherwise only achievable on the world’s largest supercomputers. The Catalyst run was No. 4 in size on the list.
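
    These sizes follow from the Graph 500 generator’s conventions: a graph at a given scale has 2^scale vertices and an edge factor of 16. Below is a quick back-of-envelope check of the numbers quoted above; the traversal time is an order-of-magnitude figure only, since the benchmark reports a harmonic-mean rate.

```python
# Back-of-envelope check of the Graph 500 numbers quoted above.
# Benchmark convention: 2**scale vertices, 16 edges per vertex.
scale = 40
vertices = 2 ** scale   # ~1.1 trillion vertices
edges = 16 * vertices   # ~17.6 trillion edges, matching the Catalyst run
print(f"scale {scale}: {vertices:.3e} vertices, {edges:.3e} edges")

# Sequoia's scale-41 result: at 23.751 teraTEPS (traversed edges per
# second), one full breadth-first search takes on the order of 1.5 s.
teps = 23.751e12
print(16 * 2 ** 41 / teps)   # ~1.48 seconds per traversal
```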

    DI-MMAP and HavoqGT also were used to solve a smaller, but equally impressive, scale-37 graph problem on a single server with 50 TB of network-attached flash storage. The server, equipped with four Intel E7-4870 v2 processors and 2 TB of DRAM, was connected to two Altamont XP all-flash arrays from Saratoga Speed Inc. over a high bandwidth Mellanox FDR InfiniBand interconnect. The other scale-37 entries on the Graph 500 list required clusters of 1,024 nodes or larger to process the 2.2 trillion edges.

    “Our approach really lowers the barrier of entry for people trying to solve very large graph problems,” said Roger Pearce, a researcher in LLNL’s Center for Applied Scientific Computing (CASC).

    “These results collectively demonstrate LLNL’s preeminence as a full service data intensive HPC shop, from single server to data intensive cluster to world class supercomputer,” said Maya Gokhale, LLNL principal investigator for data-centric computing architectures.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 6:29 pm on October 29, 2014
    Tags: Lawrence Livermore National Laboratory

    From LLNL: “Tiny carbon nanotube pores make big impact”



    Oct. 29, 2014

    Anne M Stark
    stark8@llnl.gov
    925-422-9799

    A team led by Lawrence Livermore scientists has created a new kind of ion channel consisting of short carbon nanotubes, which can be inserted into synthetic bilayers and live cell membranes to form tiny pores that transport water, protons, small ions and DNA.

    These carbon nanotube “porins” have significant implications for future health care and bioengineering applications. Nanotube porins eventually could be used to deliver drugs to the body, serve as a foundation of novel biosensors and DNA sequencing applications, and be used as components of synthetic cells.

    Researchers have long been interested in developing synthetic analogs of the biological membrane channels that transport ions and molecules with the high efficiency and extreme selectivity found in natural systems. However, those efforts were hampered by the difficulty of working with synthetic materials, and the resulting channels never matched the capabilities of biological proteins.

    Unlike a pill, which is absorbed slowly and delivered to the entire body, carbon nanotubes can pinpoint an exact area to treat without harming surrounding organs.

    “Many good and efficient drugs that treat diseases of one organ are quite toxic to another,” said Aleksandr Noy, an LLNL biophysicist who led the study and is the senior author on the paper appearing in the Oct. 30 issue of the journal Nature. “This is why delivery to a particular part of the body and only releasing it there is much better.”

    From left: Lawrence Livermore National Laboratory scientists Aleksandr Noy, Kyunghoon Kim and Jia Geng hold up a model of a carbon nanotube that can be inserted into live cells, which can pinpoint an exact area to treat without harming other organs. Photo by Julie Russell.

    The Lawrence Livermore team, together with colleagues at the Molecular Foundry at Lawrence Berkeley National Laboratory, the University of California’s Merced and Berkeley campuses, and the University of the Basque Country in Spain, created a much more efficient, biocompatible membrane pore channel out of a carbon nanotube (CNT) — a straw-like molecule that consists of a rolled-up graphene sheet.

    This research showed that despite their structural simplicity, CNT porins display many characteristic behaviors of natural ion channels: they spontaneously insert into the membranes, switch between metastable conductance states, and display characteristic macromolecule-induced blockades. The team also found that, just like in the biological channels, local channel and membrane charges could control the ionic conductance and ion selectivity of the CNT porins.

    “We found that these nanopores are a promising biomimetic platform for developing cell interfaces, studying transport in biological channels, and creating biosensors,” Noy said. “We are thinking about CNT porins as a first truly versatile synthetic nanopore that can create a range of applications in biology and materials science.”

    “Taken together, our findings establish CNT porins as a promising prototype of a synthetic membrane channel with inherent robustness toward biological and chemical challenges and exceptional biocompatibility that should prove valuable for bionanofluidic and cellular interface applications,” said Jia Geng, a postdoc who is the first co-author of the paper.

    Kyunghoon Kim, a postdoc and another co-author, added: “We also expect that our CNT porins could be modified with synthetic ‘gates’ to dramatically alter their selectivity, opening up exciting possibilities for their use in synthetic cells, drug delivery and biosensing.”

    Other LLNL researchers include Ramya Tunuguntla, Kang Rae Cho, Dayannara Munoz and Morris Wang. The team members performed some of the work at the Molecular Foundry, a DOE user facility as a part of its user project.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 2:20 pm on October 22, 2014
    Tags: Human Space Flight, Lawrence Livermore National Laboratory

    From LLNL: “NASA taps Livermore photon scientists for heat-shield research” 



    10/22/2014
    Breanna Bishop, LLNL, (925) 423-9802, bishop33@llnl.gov

    Researchers in Lawrence Livermore National Laboratory’s NIF & Photon Science Directorate are working with NASA Ames Research Center at Moffett Field, California, on the development of technology to simulate re-entry effects on the heat shield for the Orion spacecraft, NASA’s next crewed spaceship. Orion is designed to carry astronauts beyond low Earth orbit to deep-space destinations such as an asteroid and, eventually, Mars.


    The Orion heat shield will have to withstand re-entry temperatures that are too severe for existing reusable thermal protection systems, such as those used on the space shuttles. NASA’s development and characterization of a more robust shield requires that radiant heating capability be added to the Arc Jet Complex at NASA Ames, which develops thermal protection materials and systems in support of the Orion Program Office at NASA Johnson Space Center in Houston and the NASA Human Exploration and Operations Mission Directorate at NASA headquarters in Washington, DC.

    NASA Ames currently owns two 50 kilowatt (kW) commercial fiber laser systems and needs to augment the optical power into the Arc Jet chamber by another 100 to 200 kW. The team at Ames recently approached LLNL to explore an option of using commercially available radiance-conditioned laser diode arrays for this task, similar to the diodes used in the Laboratory’s Diode-Pumped Alkali Laser (DPAL) and E-23/HAPLS laser projects. Their aim is to assess whether such systems can better meet the technical objectives for survival testing. If successful, such diode arrays would offer a dramatically lower-cost solution.

    Technicians install a protective shell onto the Orion crew module for its first test flight this December. Credit: Dimitri Gerondidaki/NASA

    To perform these tests, LLNL is collaborating with Ames on diode array characterizations using an existing diode system developed for LLNL’s laser programs. These tests will allow NASA Ames to assess whether their optical output can meet in-chamber target illumination requirements, and thus inform their choice for a future system.

    While the space shuttles traveled at 17,000 miles per hour, Orion will be re-entering Earth’s atmosphere at 20,000 miles per hour on its first flight test in December. The faster a spacecraft travels through the atmosphere, the more heat it generates. The hottest the space shuttle tiles got was about 2,300 degrees Fahrenheit; the Orion back shell could get as hot as 4,000 degrees. For more about Orion, see the NASA video.
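
    That velocity dependence is steep. In the widely used Sutton-Graves scaling, stagnation-point convective heating grows roughly as the cube of velocity, so holding density and nose radius fixed (and ignoring radiative heating), the two entry speeds compare as follows:

```python
# Rough scaling only: convective heating q ~ sqrt(rho / r_n) * v**3
# (Sutton-Graves form). At equal density and nose radius, the ratio
# of heating rates reduces to the cube of the velocity ratio.
MPH_TO_MS = 0.44704

v_shuttle = 17_000 * MPH_TO_MS   # shuttle re-entry speed (m/s)
v_orion = 20_000 * MPH_TO_MS     # Orion's first-flight re-entry speed (m/s)

print((v_orion / v_shuttle) ** 3)   # ~1.63x the shuttle's heating rate
```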

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 3:04 pm on October 20, 2014
    Tags: Lawrence Livermore National Laboratory

    From LLNL: “Supercomputers link proteins to drug side effects” 



    10/20/2014
    Kenneth K Ma, LLNL, (925) 423-7602, ma28@llnl.gov

    New medications created by pharmaceutical companies have helped millions of Americans alleviate pain and suffering from their medical conditions. However, the drug creation process often misses many side effects that kill at least 100,000 patients a year, according to the journal Nature.

    Lawrence Livermore National Laboratory researchers have discovered a high-tech method of using supercomputers to identify proteins that cause medications to have certain adverse drug reactions (ADR) or side effects. They are using high-performance computers (HPC) to process proteins and drug compounds in an algorithm that produces reliable data outside of a laboratory setting for drug discovery.

    The team recently published its findings in the journal PLOS ONE, in an article titled “Adverse Drug Reaction Prediction Using Scores Produced by Large-Scale Drug-Protein Target Docking on High-Performance Computing Machines.”

    “We need to do something to identify these side effects earlier in the drug development cycle to save lives and reduce costs,” said Monte LaBute, a researcher from LLNL’s Computational Engineering Division and the paper’s lead author.

    It takes pharmaceutical companies roughly 15 years to bring a new drug to the market, at an average cost of $2 billion. A new drug compound entering Phase I (early stage) testing is estimated to have an 8 percent chance of reaching the market, according to the Food and Drug Administration (FDA).

    A typical drug discovery process begins with identifying which proteins are associated with a specific disease. Candidate drug compounds are combined with target proteins in a process known as binding to determine the drug’s effectiveness (efficacy) and/or harmful side effects (toxicity). Target proteins are proteins known to bind with drug compounds in order for the pharmaceutical to work.

    While this method is able to identify side effects with many target proteins, there are myriad unknown “off-target” proteins that may bind to the candidate drug and could cause unanticipated side effects.

    Because it is cost prohibitive to experimentally test a drug candidate against a potentially large set of proteins — and the list of possible off-targets is not known ahead of time — pharmaceutical companies usually only test a minimal set of off-target proteins during the early stages of drug discovery. This results in ADRs remaining undetected through the later stages of drug development, such as clinical trials, and possibly making it to the marketplace.

    There have been several highly publicized medications with off-target protein side effects that have reached the marketplace. For example, Avandia, an anti-diabetic drug, caused heart attacks in some patients; and Vioxx, an anti-inflammatory medication, caused heart attacks and strokes among certain patient populations. Both therapeutics were recalled because of their side effects.

    “There were no indications of side effects of these medications in early testing or clinical trials,” LaBute said. “We need a way to determine the safety of such therapeutics before they reach patients. Our work can help direct such drugs to patients who will benefit the most from them with the least amount of side effects.”

    LaBute and the LLNL research team tackled the problem by using supercomputers and information from public databases of drug compounds and proteins. The latter included protein databases of DrugBank, UniProt and Protein Data Bank (PDB), along with drug databases from the FDA and SIDER, which contain FDA-approved drugs with ADRs.

    The team examined 4,020 off-target proteins from DrugBank and UniProt. Those proteins were indexed against the PDB, which whittled the number down to 409 off-target proteins that have the high-quality 3D crystallographic X-ray diffraction structures essential for analysis in a computational setting.


    The 409 off-target proteins were fed into Livermore HPC software known as VinaLC, along with 906 FDA-approved drug compounds. VinaLC performed molecular docking calculations that bound the drugs to the proteins. A score was given to each drug-protein combination to assess whether effective binding occurred.

    The binding scores were fed into another computer program and combined with 560 FDA-approved drugs with known side effects. An algorithm was used to determine which proteins were associated with certain side effects.
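
    In spirit, this last step is a supervised learning problem: each drug’s vector of docking scores serves as its features, and its known side effects serve as labels. The sketch below illustrates that idea with synthetic data and an off-the-shelf classifier; it is a stand-in for the team’s actual statistical pipeline, not a reproduction of it.

```python
# Minimal sketch (synthetic data, not the paper's pipeline): fit one
# classifier per side effect on drug-by-protein docking scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_drugs, n_proteins = 560, 409   # drugs with known ADRs x off-target proteins
scores = rng.normal(size=(n_drugs, n_proteins))   # docking scores (synthetic)
has_adr = rng.integers(0, 2, size=n_drugs)        # one side-effect label per drug

model = LogisticRegression(max_iter=1000).fit(scores, has_adr)

# Proteins with the largest coefficients are the ones the model
# most strongly associates with this side effect.
top = np.argsort(np.abs(model.coef_[0]))[-5:]
print("most implicated protein indices:", top)
```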

    The Lab team showed that in two categories of disorders — vascular disorders and neoplasms — their computational model of predicting side effects in the early stages of drug discovery using off-target proteins was more predictive than current statistical methods that do not include binding scores.

    In addition to LLNL ADR prediction methods performing better than current prediction methods, the team’s calculations also predicted new potential side effects. For example, they predicted a connection between a protein normally associated with cancer metastasis and vascular disorders such as aneurysms. Their ADR predictions were validated by a thorough review of existing scientific data.

    “We have discovered a very viable way to find off-target proteins that are important for side effects,” LaBute said. “This approach using HPC and molecular docking to find ADRs never really existed before.”

    The team’s findings provide drug companies with a cost-effective and reliable method to screen for side effects, according to LaBute. Their goal is to expand their computational pharmaceutical research to include more off-target proteins for testing and eventually screen every protein in the body.

    “If we can do that, the drugs of tomorrow will have less side effects that can potentially lead to fatalities,” LaBute said. “Optimistically, we could be a decade away from our ultimate goal. However, we need help from pharmaceutical companies, health care providers and the FDA to provide us with patient and therapeutic data.”

    LLNL researchers Monte LaBute (left) and Felice Lightstone (right) were part of a Lab team that recently published an article in PLOS ONE detailing the use of supercomputers to link proteins to drug side effects. Photo by Julie Russell/LLNL

    The LLNL team also includes Felice Lightstone, Xiaohua Zhang, Jason Lenderman, Brian Bennion and Sergio Wong.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 9:04 pm on October 16, 2014
    Tags: Lawrence Livermore National Laboratory

    From LLNL: “Lab, UC Davis partner to personalize cancer medications” 



    10/16/2014
    Stephen P Wampler, LLNL, (925) 423-3107, wampler1@llnl.gov

    Buoyed by several dramatic advances, Lawrence Livermore National Laboratory scientists think they can tackle biological science in a way that couldn’t be done before.

    Over the past two years, Lab researchers have expedited accelerator mass spectrometer sample preparation and analysis time from days to minutes and moved a complex scientific process requiring accelerator physicists into routine laboratory usage.

    Ken Turteltaub, the leader of the Lab’s Biosciences and Biotechnology Division, sees the bio AMS advances as allowing researchers to undertake quantitative assessments of complex biological pathways.

    “We are hopeful that we’ll be able to quantify the individual steps in a metabolic pathway and be able to measure indicators of disease processes and factors important to why people differ in responses to therapeutics, to diet and other factors,” Turteltaub said.

    Graham Bench, the director of the Lab’s Center for Accelerator Mass Spectrometry, anticipates the upgrades will enable Lab researchers “to produce high-density data sets and tackle novel biomedical problems that in the past couldn’t be addressed.”

    Ted Ognibene, a chemist who has worked on AMS for 15 years and who co-developed the technique that accommodates liquid samples, also envisions new scientific work coming forth.

    Ted Ognibene (left), a chemist who co-developed the technique that accommodates liquid samples for accelerator mass spectrometry, peers with biomedical scientist Mike Malfatti at the new biological AMS instrument that has been installed in the Laboratory’s biomedical building. Photo by George Kitrinos

    “We previously had the capability to detect metabolites, but now with the ability to see our results almost immediately for a fraction of the cost, it’s going to enable a lot more fundamental and new science to be done,” Ognibene said.

    Biological AMS is a technique in which carbon-14 is used as a tag to study with extreme precision and sensitivity complex biological processes, such as cancer, molecular damage, drug and toxin behavior, nutrition and other areas.

    Among the biomedical studies that will be funded through the five-year, $7.8 million National Institutes of Health grant for biological AMS work is one to try to develop a test to predict how people will respond to chemotherapeutic drugs.

    Another research project seeks to create an assay that is so sensitive that it can detect one cancer cell among one million healthy cells. If this work is successful, it could be possible to evaluate the metastasis potential of different primary human cancer cells.

    Lab biomedical scientist Mike Malfatti and two researchers from the University of California, Davis Comprehensive Cancer Center (Paul Henderson, an associate professor, and Chong-Xian Pan, a medical oncologist) are using the AMS in a human trial with 50 patients to see how cancer patients respond or don’t respond to the chemotherapeutic drug carboplatin. This drug kills cancer cells by binding to DNA, and is toxic to rapidly dividing cells.

    The three researchers have the patients take a microdose of carboplatin — about 1/100th of a therapeutic dose — that has no toxicity or therapeutic value to evaluate how effectively the drug will bind to a person’s DNA during full dose treatment.

    Within a few days of patients receiving the microdose, the degree of drug binding is checked by blood sample, in which the DNA is isolated from white blood cells, or by tumor biopsy, in which the DNA is isolated from the tumor cells.

    The carboplatin dose is prepared with a carbon-14 tag. The DNA sample is analyzed using AMS and the instrument quantifies the carbon-14 level, with a high level of carbon-14 indicating a high level of drug binding to the DNA.
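
    Once AMS reports how much carbon-14 label is bound, converting that to a binding level is simple arithmetic. The sketch below assumes one carbon-14 atom per drug molecule and an average nucleotide mass of roughly 330 g/mol; the input values are illustrative, not measured data.

```python
# Illustrative arithmetic (assumed units and values, not LLNL's
# analysis code): convert an AMS measurement of bound carbon-14
# label into drug adducts per DNA nucleotide.
AVG_NUCLEOTIDE_MW = 330.0   # g/mol, approximate mean for DNA

def adducts_per_nucleotide(label_amol, dna_mg):
    """Drug molecules bound per nucleotide, assuming one 14C atom
    per drug molecule.

    label_amol : attomoles of labeled drug detected by AMS
    dna_mg     : milligrams of DNA in the sample
    """
    nucleotides_amol = (dna_mg / 1000.0 / AVG_NUCLEOTIDE_MW) * 1e18
    return label_amol / nucleotides_amol

# Example: 100 amol of label in 1 mg of DNA is about 3.3e-11 adducts
# per nucleotide, i.e. roughly one adduct per 30 billion nucleotides.
print(adducts_per_nucleotide(100.0, 1.0))
```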

    “A high degree of binding indicates that you have a high probability of a favorable response to the drug,” Malfatti said. “Conversely, a low degree of binding means it is likely the person’s body won’t respond to the treatment.

    “If we can identify which people will respond to which chemotherapeutic drug, we can tailor the treatment to the individual.

    “There are many negative side effects associated with chemotherapy, such as nausea, loss of appetite, loss of hair and even death. We don’t want someone to receive chemotherapy that’s not going to help them, yet leave them with these negative side effects,” he added.

    Malfatti, Henderson and Pan also are using the AMS in pre-clinical studies to investigate the resistance or receptivity of other commonly used chemotherapeutic agents such as cisplatin, oxaliplatin and gemcitabine.

    Another team of researchers, led by Gaby Loots, a Lab biomedical scientist and an associate professor at the University of California, Merced, wants to use AMS to measure cancer cells labeled with carbon-14 to study the cancer cells’ migration to healthy tissues to determine how likely they are to form metastatic tumors.

    While today’s standard methods can detect tumors that are comprised of thousands of cells, the team would like to develop an assay with a thousand-fold better resolution – to detect one cancer cell among one million healthy ones.

    “The sensitivity of AMS allows us to develop more accurate, quantitative assays with single-cell resolution. Is the cancer completely gone, or do we see one cell worth of cancer DNA?” Loots noted.

    Some of the questions the team would like to answer are why certain cells metastasize, how cells metastasize and what new methods can be developed to prevent metastasis.

    “Tumors shed cells all the time that enter our circulation. We would like to find ways to prevent the circulating tumor cells from forming metastatic tumors,” Loots continued.

    As a part of their research, the team members hope to determine whether cancer cells with stem-cell-like properties form more aggressive tumors.

    “We’re going to separate the cancer cells into stem-cell-like and non-stem-cell-like populations and seek to determine if they behave differently,” said Loots, who is working with fellow Lab biomedical scientists Nick Hum and Nicole Collette.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 4:53 pm on October 8, 2014
    Tags: Lawrence Livermore National Laboratory

    From LLNL: “LLNL to play key role in 10 petawatt laser project” 



    10/08/2014
    Breanna Bishop, LLNL, (925) 423-9802, bishop33@llnl.gov

    Lawrence Livermore National Laboratory has once again been selected to support the development and construction of an ultra-intense laser system for the European Union’s Extreme Light Infrastructure Beamlines (ELI-Beamlines) in the Czech Republic.

    3D CAD rendering of the 10PW laser system. Credit: National Energetics Inc.

    A consortium led by National Energetics Inc., in partnership with Ekspla UAB, was awarded the contract to construct the system, which will be the most powerful laser of its class in the world, capable of producing peak power in excess of 10 petawatts (PW). Due to LLNL’s long-standing expertise in laser research and development, the consortium selected the Laboratory for a $3.5 million subcontract to support development of this system.

    “The award of this project is another clear statement of the internationally leading nature of LLNL’s laser design capability,” said Mike Dunne, LLNL’s director of laser fusion energy. “It also is another great example of where LLNL is helping move forward the global state-of-the-art of high power lasers, working collaboratively with the U.S. and European industry and academia. We are very excited to work with the National Energetics consortium, which has a very compelling approach to reaching the 10 petawatt goal.”

    Under the subcontract, LLNL will be responsible for: contributing to the physics design of the liquid-cooled laser amplifier, allowing it to get to the high energies required for accessing 10 PW power; contributing to the physics and engineering design of the short pulse compressor, which takes the high energy beam and compresses it to the required pulse length; and production of the large aperture diffraction gratings for this compressor.
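
    For scale, peak power is simply pulse energy divided by pulse duration. The energy and pulse length below are assumed, illustrative values, chosen as one combination that reaches the 10 PW class:

```python
# Peak power = pulse energy / pulse duration. The numbers are
# assumed, illustrative values, not the ELI-Beamlines design specs.
energy_j = 1500.0      # pulse energy (J)
duration_s = 150e-15   # pulse length (s), i.e. 150 femtoseconds

peak_power_w = energy_j / duration_s
print(f"{peak_power_w:.2e} W")   # 1.00e+16 W = 10 PW
```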

    This system will be one of the four major beamlines at the new ELI-Beamlines facility. In addition to work on this beamline, LLNL also is responsible for the design, development and installation of a second beamline at the facility, through a more than $45 million contract awarded in 2013. When commissioned to its full design performance, the laser system, called the “High repetition-rate Advanced Petawatt Laser System” (HAPLS), will be the world’s highest average power PW laser system. The two systems will thus offer a highly attractive capability for the new ELI-Beamlines facility, which will appeal to academic and industrial researchers from across the world.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 2:35 pm on September 27, 2014
    Tags: Lawrence Livermore National Laboratory

    From LLNL: “Giant Steps For Adaptive Optics” 



    Science & Technology Review
    September 2014
    Arnie Heller

    LAST November, high in the Chilean Andes, an international team of scientists and engineers, including Lawrence Livermore researchers, celebrated jubilantly in the early morning hours. The cause for their celebration was the appearance of a faint but unmistakable image of a planet 63 light-years from Earth circling a nearby star called Beta Pictoris. The clear image was viewable from a ground-based telescope thanks to one of the most advanced adaptive optics systems in existence, a key element of the newly fielded Gemini Planet Imager (GPI).

    Gemini Planet Imager

    GPI (pronounced gee-pie) is deployed on the 8.1-meter-diameter Gemini South telescope, situated near the summit of Cerro Pachón at an altitude of 2,715 meters. The size of a small car, GPI is mounted behind the primary mirror of the giant telescope. Although the imager is still in its shakedown phase, it is producing the fastest and clearest images of extrasolar planets (exoplanets) ever recorded. GPI is perhaps the most impressive scientific example of Lawrence Livermore’s decades-long preeminence in adaptive optics. This technology uses an observing instrument’s optical components to remove distortions that are induced by the light passing through a turbulent medium, such as Earth’s atmosphere, or by mechanical vibration.

    Gemini South telescope

    More than two decades ago, Livermore scientists were among the first to show how adaptive optics can be used in astronomy to eliminate the effects of atmospheric turbulence, which cause the twinkle we see in stars when viewing them from Earth. Those effects also create blurring in images recorded by ground-based telescopes. Laboratory researchers have since applied adaptive optics to other fields, including lasers and medicine. For example, adaptive optics helped produce extremely high-resolution images of the retina with an instrument that won an R&D 100 Award in 2010. (See S&TR, October/November 2010, A Look inside the Living Eye.) Livermore teams are now working on an adaptive optics system to transport x-ray beams in a new generation of high-energy research facilities. In addition, outreach efforts by the Laboratory are strengthening educational opportunities in this field at U.S. colleges and universities.

    Designed for Exoplanet Imaging

    GPI is the first astronomical instrument designed and optimized for direct exoplanet imaging and analysis. Imaging planets directly is exceedingly difficult because most planets are at least 1 to 10 million times fainter than the parent stars they orbit. One way to improve image quality is to send telescopes into orbit, which boosts research costs enormously. A much less expensive approach is to equip a ground-based telescope with adaptive optics to compensate in real time for the distortions of light caused by Earth’s atmosphere.
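
    To put those contrasts in astronomers’ units, a planet-to-star flux ratio converts to a magnitude difference via delta_m = -2.5 log10(ratio):

```python
# Convert the quoted planet/star flux ratios into magnitude differences.
import math

for ratio in (1e-6, 1e-7):   # planet 1 to 10 million times fainter
    delta_m = -2.5 * math.log10(ratio)
    print(f"flux ratio {ratio:.0e} -> delta m = {delta_m:.1f} mag")
```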

    Livermore computational engineer David Palmer, GPI project manager and leader of its adaptive optics development effort, notes that GPI comprises several interconnected systems and components. In addition to adaptive optics, the imager includes an interferometer, coronagraph, spectrometer, four computers, and an optomechanical structure to which everything is attached. All are packaged into an enclosure 2 cubic meters in volume and flanked on either side by “pods” that hold the accompanying electronics.

    GPI hangs on the back end of Gemini South, a design that sharply constrains the imager’s volume, weight, and power requirements. While in use, it constantly faces the high winds and hostile environment at high altitude. As the telescope slews to track a star, the instrument flexes, making alignment more complicated. Nevertheless, says Palmer, GPI has maintained its alignment “phenomenally well” and performed superbly. “The precision requirements worked up by the GPI design team are almost staggering,” he says, “especially those for the adaptive optics system.”

    Laboratory electrical engineer Lisa Poyneer adds, “GPI features several new approaches that enable us to correct the atmosphere with precision never before achieved.” Poyneer developed the algorithms (mathematical procedures) that control two deformable mirrors and led adaptive optics system testing in the laboratory and at the telescope.

    GPI is an international project with former Livermore astrophysicist Bruce Macintosh (now a professor at Stanford University) serving as principal investigator. The Gemini South telescope is an international partnership as well, involving the U.S., Canada, Australia, Argentina, Brazil, and Chile. Macintosh says the first discussions concerning a ground-based instrument dedicated to the search for exoplanets began in 2001. “A lot of exoplanets were being discovered at that time, but the discoveries didn’t tell us much about the planets themselves,” he says. “There was a clear scientific need to incorporate adaptive optics, and the technology was progressing quickly.”

    After more than eight years in development, GPI components were tested and integrated at the Laboratory for Adaptive Optics at the University of California (UC) at Santa Cruz in 2012 and 2013. The imager was shipped to Chile in August 2013, with first light conducted in November.

    Scientists will use GPI over the next three years to discover and characterize dozens or more exoplanets circling stars located up to 230 light-years from Earth. In addition to resolving exoplanets from their parent stars, GPI uses a spectrometer to probe the composition of each exoplanet’s atmosphere. The instrument also studies disks around young stars with a technique called polarization differential imaging.

    Lawrence Livermore engineer Lisa Poyneer (left) and Stanford University astrophysicist Bruce Macintosh (previously at Livermore) stand in front of the Gemini Planet Imager (GPI), which is installed on the Gemini South telescope in Chile. Two electronic pods (blue) on either side of the main enclosure hold GPI’s electronics. (Photograph by Jeff Chilcote, University of California at Los Angeles [UCLA].)

    Age of Exoplanet Discovery

    The discovery of exoplanets was a historic breakthrough in modern astronomy. More than 1,000 exoplanets have been identified, mainly through indirect techniques that infer a planet’s mass and orbit. Astronomers have been surprised by the diversity of planetary systems that differ from our solar system. GPI is expected to strengthen scientific understanding of how planetary systems form and evolve, how planet orbits change, and what comprises their atmospheres.

    GPI masks the light emitted by a parent star to reveal the faint light of young (up to 1-billion-year-old) giant planets in orbits a few times greater than Earth’s path around the Sun. These young gas giants (the size of Jupiter and larger) are detected through their thermal radiation (about 1.0 to 2.4 micrometers wavelength in the near-infrared region).

    GPI is not sensitive enough to see Earth-sized planets, which are 10,000 times fainter than giant planets. (See S&TR, July/August 2012, A Spectra-Tacular Sight.) However, it complements astronomical instruments that infer a planet’s mass and orbit by measuring the small gravitational tugs exerted on a parent star or, as with NASA’s Kepler Space Telescope, by blocking very small amounts of light emitted by the parent star as the planet passes in front of that star. “These indirect methods tell us a planet is there and a bit about its orbit and mass, but not much else,” says Macintosh. “Kepler can detect tiny planets similar to the size of Earth. With GPI, we can find much larger planets, the size of Jupiter, so the two instruments provide complementary information.”

    NASA’s Kepler space telescope

    The direct imaging of giant planets permits the use of spectroscopy to estimate their size, temperature, surface gravity, and atmospheric composition. Because different molecules absorb light at different wavelengths, scientists can correlate the light emitted from a planet to the molecules in its atmosphere.

    In November 2013, members of the GPI first-light team celebrated when the system acquired its first images. The team includes: (from left to right) Pascale Hibon, Stephen Goodsell, Markus Hartung, and Fredrik Rantakyrö from Gemini Observatory; Jeffrey Chilcote, UCLA; Jennifer Dunn, National Research Council (NRC) Canada Herzberg Institute of Astrophysics; Sandrine Thomas, NASA Ames Research Center; Macintosh; David Palmer, Lawrence Livermore; Dmitry Savransky, Cornell University; Marshall Perrin, Space Telescope Science Institute; and Naru Sadakuni, Gemini Observatory. (Photograph by Jeff Chilcote, UCLA.)

    Extreme Adaptive Optics

    The heart of GPI is its highly advanced, high-contrast adaptive optics system (sometimes called extreme adaptive optics) that measures and corrects wavefront errors induced by atmospheric air motion and the inevitable tiny flaws in optics. As light passes through the Gemini South telescope, GPI measures its wavefront 1,000 times per second at nearly 2,000 locations. The system corrects the distortions within 1 millisecond by precisely changing the positions of thousands of actuators, which adjusts the shape of two mirrors. As the adaptive optics system operates, GPI typically takes about 60 consecutive, 1-minute exposures and can detect an exoplanet 70 times more rapidly than existing instruments.

    To meet GPI’s stringent requirements, the Livermore team developed several technologies specifically for exoplanet science. A self-optimizing computer system controls the actuators, using computationally efficient algorithms to determine the best position for each actuator with nanometer-scale precision. A spatial filter prevents aliasing, which would otherwise introduce artifacts into the wavefront measurement.

    Livermore optical engineer Brian Bauman designed the innovative and compact adaptive optics for GPI. He has also worked on adaptive optics components for vision science and Livermore’s Atomic Vapor Laser Isotope System and has developed simpler systems for telescopes at the Lick Observatory and other observatories. Says Bauman, “We wanted GPI to provide much greater contrast and resolution than had been achieved in an adaptive optics system without producing artifacts that could mask a planet or be mistaken for one.”

    The system corrects aberrations by adjusting the shape of two deformable mirrors. Incoming light from the telescope is relayed to the first mirror, called the woofer. Measuring about 5 centimeters across, this mirror has 69 actuators to correct the low-spatial-frequency components of atmospheric distortion.

    The woofer passes the corrected light to the tweeter—a 2.56-centimeter-square deformable mirror with 4,096 actuators for finer corrections. The tweeter is a microelectromechanical systems– (MEMS-) based device developed for GPI by Boston Micromachines. It is made of etched silicon, similar to the material used for microchips, rather than reflective glass. The tweeter’s actuators are spaced only 400 micrometers apart; a circular patch 44 actuators across is used to compensate for the high-spatial-frequency components of the atmosphere.

    GPI has 10 times the actuator density of a general-purpose adaptive optics system. Poyneer explains that the more actuators, the more accurately the mirror surface can correct for atmospheric turbulence. “MEMS was the only technology that could give us thousands of actuators and meet our space and power requirements,” she says. “Given the number of actuators, we had to design the system to measure all aberrations at the same resolution.” This precision in controlling the mirrors is accomplished by a wavefront sensor that breaks incoming light into smaller subregions, similar to the receptors on a fly’s compound eye.
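    Each subregion of such a sensor produces a small spot image whose displacement encodes the local tilt of the wavefront. The minimal center-of-mass estimator sketched below, which assumes a simple noise-free spot, shows how those displacements become slope measurements; real sensors use calibrated, noise-weighted variants.

```python
# Sketch of lenslet-based wavefront sensing: one subaperture's spot
# displacement is proportional to the local wavefront tilt.
import numpy as np

def subaperture_slopes(spot):
    """Center-of-mass of one lenslet's spot, relative to the subaperture
    center; proportional to the local x/y wavefront tilt."""
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    cy = (ys * spot).sum() / total
    cx = (xs * spot).sum() / total
    center = (np.array(spot.shape) - 1) / 2.0
    return cx - center[1], cy - center[0]

# A spot displaced one pixel to the right yields a positive x slope.
spot = np.zeros((8, 8))
spot[3:5, 4:6] = 1.0
print(subaperture_slopes(spot))  # -> (1.0, 0.0)
```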

    A major challenge posed by the increased number of actuators is that existing algorithms required far too much computation to adjust the mirrors as quickly as needed. In response, Poyneer developed a new algorithm that requires 45 times less computation. “GPI must continually perform all of its calculations within 1 millisecond,” says Palmer, who implemented the real-time software that achieves this goal. Remarkably, the system of algorithms is self-optimizing. That is, says Poyneer, “A loop monitors how the operations are going and adjusts the control system every 8 seconds. If the atmospheric turbulence gets stronger, the system control will become more aggressive to give the best performance possible.”
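    Poyneer’s published reconstruction approach works in the Fourier domain, where an FFT-based estimate costs O(N log N) operations instead of the O(N²) of a brute-force matrix multiply. The sketch below shows the textbook least-squares version of that idea; the real GPI reconstructor adds aperture handling, filtering, and modal offloading that are omitted here.

```python
# Minimal Fourier-domain wavefront reconstruction from slope maps.
import numpy as np

def fourier_reconstruct(sx, sy):
    """Least-squares phase estimate from x/y slope maps via the FFT."""
    n = sx.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k)
    denom = kx**2 + ky**2
    denom[0, 0] = 1.0                   # avoid divide-by-zero at k = 0
    phase_hat = -1j * (kx * np.fft.fft2(sx) + ky * np.fft.fft2(sy)) / denom
    phase_hat[0, 0] = 0.0               # piston term is unobservable
    return np.fft.ifft2(phase_hat).real

# Round-trip check: recover a smooth phase from its finite-difference
# slopes (np.gradient returns d/dy first, then d/dx); the estimate
# matches up to piston and edge effects.
n = 64
yy, xx = np.indices((n, n)) / n
phase = np.sin(2 * np.pi * xx) * np.cos(2 * np.pi * yy)
sy, sx = np.gradient(phase)
estimate = fourier_reconstruct(sx, sy)
```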

    The mirrors forward the corrected light to a coronagraph, which blocks out much of the light from the parent star being observed, revealing the vastly fainter planets orbiting that star. Relay optics then refocus the light onto a lenslet array, and a prism disperses the light into thousands of tiny spectra. The resulting pattern is transferred to a high-speed detector, and a few minutes of postprocessing removes the last remaining noise, or speckles.

    A 2.56-centimeter-square deformable mirror called a tweeter is used for fine-scale correction of the atmosphere. This microelectromechanical systems– (MEMS-) based device has 4,096 actuators and is made of etched silicon, similar to the material used for microchips. (Courtesy of Boston Micromachines.)

    First Light November 2013

    Researchers conducted the first observations with GPI in November 2013, when they trained the Gemini South telescope on two known planetary systems: the four-planet HR8799 system (codiscovered in 2008 by a Livermore-led team at the Gemini and Keck observatories) and the one-planet Beta Pictoris system. A highlight from the November observations was GPI recording the first-ever spectrum of the young planet Beta Pictoris b, which is visible as a small but distinct dot.

    This composite image shows the close environment of Beta Pictoris in near-infrared light, revealed after careful subtraction of the much brighter stellar halo. The outer part shows light reflected off the dust disk, observed in 1996 with the ADONIS instrument on ESO’s 3.6-meter telescope; the inner part shows the innermost region of the system at 3.6 micrometers, as seen with NACO on the Very Large Telescope. The newly detected source is more than 1,000 times fainter than Beta Pictoris and aligned with the disk, at a projected distance of 8 times the Earth-Sun distance. Both parts were obtained with ESO telescopes equipped with adaptive optics. (Source: http://www.eso.org, 21 November 2008.)

    ESO 3.6-meter telescope at La Silla
    ESO Very Large Telescope at Cerro Paranal
    W. M. Keck Observatory

    Using the instrument’s polarization mode, the first-light team also detected starlight scattered by tiny particles and studied a faint ring of dust orbiting the young star HR4796A. The team released the images at the January 2014 meeting of the American Astronomical Society. “The first images were a factor of 10 better than those taken with the previous generation of instruments,” says Macintosh. “We could see a planet in the raw image, which was pretty amazing. In one minute, we found planets that used to take us an hour to detect.”

    Data from the first-light observations are allowing researchers to refine estimates of the orbit and size of Beta Pictoris b. To analyze the exoplanet, the Livermore team and their international collaborators looked at the two disks of dense gas and debris surrounding the parent star. They found that the planet is not aligned with the main debris disk but instead with an inner warped disk, with which it may interact. “If Beta Pictoris b is warping the disk, that helps us see how the planet-forming disk in our own solar system might have evolved long ago,” says Poyneer.

    Since first light, the Livermore adaptive optics team has been working to improve GPI’s performance by minimizing vibration caused by the coolers that chill the spectrometer to a very low temperature. Vibrations reduce the stability of the parent star’s image on the coronagraph and inject a significant focusing error into the system as the telescope optics shake. In response, the team developed algorithms that effectively cancel the errors in a manner similar to noise-canceling headphones. The filters have reduced pointing vibrations to a mere one-thousandth of an arcsecond and decreased the focusing error by a factor of 30, from 90 to 3 nanometers.
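    The headphone analogy can be made literal with an adaptive notch filter: lock onto the known cooler frequency and subtract the best-fitting sinusoid from each measurement. The sketch below uses a simple least-mean-squares update at an assumed 60-hertz vibration; the actual GPI filters are more sophisticated and run inside the real-time loop.

```python
# Adaptive notch sketch of vibration cancellation (assumption: a plain
# LMS update and a 60-Hz vibration; not the GPI implementation).
import numpy as np

FS = 1000.0      # loop rate, Hz (GPI corrects every millisecond)
F_VIB = 60.0     # assumed cooler vibration frequency, Hz
MU = 0.01        # LMS adaptation step

def cancel_vibration(signal):
    """Subtract an adaptively fitted sinusoid at F_VIB from `signal`."""
    t = np.arange(len(signal)) / FS
    ref_c = np.cos(2 * np.pi * F_VIB * t)
    ref_s = np.sin(2 * np.pi * F_VIB * t)
    a = b = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        est = a * ref_c[i] + b * ref_s[i]   # current sinusoid estimate
        err = x - est                       # residual after cancellation
        a += MU * err * ref_c[i]            # adapt the two quadrature
        b += MU * err * ref_s[i]            # amplitudes toward the vibration
        out[i] = err
    return out

# Demo: a 60-Hz vibration on top of a slow drift is largely removed.
t = np.arange(4000) / FS
sig = 0.5 * np.sin(2 * np.pi * F_VIB * t + 0.7) + 0.05 * t / t[-1]
residual = cancel_vibration(sig)
```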

    In November 2014, the GPI Exoplanet Survey—an international team that includes dozens of leading exoplanet scientists—will begin an 890-hour campaign to discover and characterize giant exoplanets orbiting 600 young stars. These planets lie between 5 and 50 astronomical units from their parent stars, that is, up to 50 times Earth’s distance from the Sun (nearly 150 million kilometers). The observing time is the largest amount allocated to one group at Gemini South and represents 10 to 15 percent of the time available for the next three years. In the meantime, GPI verification and commissioning efforts continue.

    (left) During its first observations, GPI captured this image within 60 seconds. It shows a planet orbiting the star Beta Pictoris, which is 63 light-years from Earth. (right) A series of 30 images was later combined to enhance the signal-to-noise ratio and remove spectral artifacts. The four spots equidistant from the star are fiducials, or reference points. (Image processing by Christian Marois, NRC Canada.)

    GPI also records data using polarization differential imaging to more clearly capture scattered light. Images of the young star HR4796A revealed a narrow ring around the star, which could be dust from asteroids or comets left behind by planet formation. The left image shows normal light scattered by Earth’s turbulent atmosphere, including both the dust ring and the residual light from the central star. The right image shows only polarized light taken with GPI. (Image processing by Marshall Perrin, Space Telescope Science Institute.)

    The Livermore adaptive optics team has improved GPI’s performance by minimizing vibration caused by the coolers that chill the spectrometer. Vibrations inject a large focusing error into the system as the telescope optics shake. The team developed filters that reduced the focusing error by a factor of 30, from 90 nanometers to 3 nanometers.

    Adaptive Control of X-Ray Beams

    Building on the adaptive optics expertise gained with GPI, the Laboratory has launched an effort, led by Poyneer, to design, fabricate, and test x-ray deformable mirrors equipped with adaptive optics. “We took some of the best adaptive optics people in the world and put them with our experts in x-ray mirrors,” says physicist Michael Pivovaroff, who initiated the program.

    Livermore researchers previously applied their expertise in x-ray optics to design and fabricate the six advanced mirrors for the Linac Coherent Light Source (LCLS) at the SLAC National Accelerator Laboratory in Menlo Park, California. These mirrors transport the LCLS x-ray beam and control its size and direction. The brightest x-ray source in the world, LCLS can capture stop-action shots of moving molecules with a “shutter speed” measured in femtoseconds, or million-billionths of a second. With a wavelength about the size of an atom, it can image objects as small as the DNA helix. (See S&TR, January/February 2011, Groundbreaking Science with the World’s Brightest X Rays.)

    SLAC Linac Coherent Light Source (LCLS)

    Despite the outstanding performance of current x-ray mirrors, further advances in their quality are required to take full advantage of the capabilities of LCLS and newer facilities, such as the Department of Energy’s (DOE’s) National Synchrotron Light Source II at Brookhaven National Laboratory and those under construction in Europe. “DOE is investing billions of dollars building x-ray light sources such as synchrotrons and x-ray lasers,” says Pivovaroff. “Scientists working with those systems need certain spatial and spectral characteristics for their experiments, but every x-ray optic distorts the photons in some way. We don’t want our mirrors to get in the way of the science.”

    Brookhaven National Laboratory’s National Synchrotron Light Source II (NSLS-II)

    Combining adaptive optics with x-ray mirrors may lead to three significant benefits. First, active control is a potentially inexpensive way to achieve better surface flatness than is possible by polishing the mirrors alone. Second, the ability to change a mirror’s flatness allows for real-time correction of aberrations in an x-ray beamline. This capability includes self-correction of errors in the mirror itself (such as those caused by heat buildup) and correction of errors introduced by other optics. Finally, adaptive optics–corrected x-ray mirrors could widen the possible attributes of x-ray beams, leading to new kinds of experiments.

    Unlike mirrors used at visible and near-infrared wavelengths, x-ray mirrors must operate at a shallow angle called grazing incidence. This requirement makes their design and profile quite different from those of deformable mirrors for astronomy. Traditional x-ray optics are rigid and have a longitudinal, or ribbon, profile up to 1 meter long. If adaptive optics systems can be designed to correct distortions in x-ray beams, next-generation research facilities could offer greater experimental flexibility and achieve close to their theoretical performance.
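    Grazing incidence also explains the ribbon shape: at a fraction of a degree, even a small beam spreads across a long footprint. The back-of-the-envelope calculation below uses assumed, illustrative values for beam size and angle rather than LCLS specifications.

```python
# Back-of-the-envelope grazing-incidence geometry (illustrative values).
import math

beam_height_mm = 0.5        # assumed incoming beam size
grazing_angle_deg = 0.15    # assumed hard-x-ray grazing angle

# Footprint along the mirror = beam height / sin(grazing angle).
footprint_mm = beam_height_mm / math.sin(math.radians(grazing_angle_deg))
print(f"Footprint along the mirror: {footprint_mm:.0f} mm")  # ~191 mm
```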

    “As with visible and infrared light, we want to manipulate the x-ray wavefront with mirrors while preserving coherence,” says Livermore optical engineer Tom McCarville, who was lead engineer for the LCLS x-ray mirrors. “The fabrication tolerances are much tighter because x-ray wavelengths are so short. Technologies for diffracting and transmitting x rays are relatively limited compared to those available for visible light. Reflective x-ray technology is, however, mature enough to deploy for transporting x rays from source to experiment. Dynamically controlling the mirror’s surface figure will preserve the x-ray source’s properties during transport and thus enhance the precision of experimental results.”

    Extremely small adjustments to the surface height on the x-ray deformable mirror correct the incoming beam, as depicted in this artist’s rendering (not to scale). Unlike visible light, x rays can only be reflected off the mirror at a very shallow incoming angle, called a grazing incidence. (Rendering by Kwei-Yu Chu.)

    First X-Ray Deformable Mirror

    With funding from the Laboratory Directed Research and Development (LDRD) Program, the Livermore team designed and built the first grazing-incidence adaptive optics x-ray mirror with demonstrated performance suitable for use at high-intensity DOE light sources. This x-ray deformable mirror, developed with partner Northrop Grumman AOA Xinetics, was made from a superpolished single-crystal silicon bar measuring 45 centimeters long, 30 millimeters high, and 40 millimeters wide, the same dimensions as the three hard x-ray mirrors built for LCLS.

    A single row of 45 actuators bonded opposite the reflecting surface makes the mirror deformable. These 1-centimeter-wide actuators provide fine-scale control of the mirror’s surface figure (overall shape). Actuators respond to voltage changes by expanding or contracting in width along the mirror’s long axis to bend the reflecting surface. Seven internal temperature sensors and 45 strain gauges monitor the silicon bar, providing a method to self-correct for long-term drifts in the surface figure.

    As with all x-ray optics, the quality of the mirror’s surface is extremely important because the slightest bump or imperfection will scatter x rays. The substrate was thus fabricated and superpolished to nanometer-scale precision before assembly into a deformable mirror. The initial surface figure error for the deformable mirror was 19 nanometers. Although extremely small, this error is substantially above the 1-nanometer level required for best performance in an x-ray beamline.

    To meet that requirement, the team used high-precision visible light measurements of the mirror’s surface to “flatten” the mirror. With this approach, interferometer measurements are processed with specialized control algorithms. Specific voltages are then applied to the actuators to adjust the mirror’s surface. The resulting figure error was only 0.7 nanometers. “We demonstrated the first subnanometer active flattening of a substrate longer than 15 centimeters,” says Poyneer. “It was a very important step in validating our technological approach.”
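    Conceptually, the flattening step is a least-squares problem: given how each actuator deforms the surface (its influence function), find the voltages that best cancel the measured figure error. The sketch below illustrates that idea with a placeholder Gaussian influence model and an invented sinusoidal error; the team’s actual algorithms and calibration data are more specialized.

```python
# Least-squares mirror flattening sketch (placeholder influence model,
# illustrative figure error; not the team's control code).
import numpy as np

N_PIX = 450    # interferometer samples along the mirror
N_ACT = 45     # one row of actuators (from the article)

# Influence matrix: column j is the surface change per volt on actuator j,
# modeled here as a Gaussian bump centered on each actuator.
x = np.linspace(0, 1, N_PIX)
centers = np.linspace(0, 1, N_ACT)
influence = np.exp(-((x[:, None] - centers[None, :]) / 0.02) ** 2)

measured_figure_nm = 19.0 * np.sin(6 * np.pi * x)   # invented 19-nm error

# Voltages minimizing || influence @ v + measured_figure ||^2:
v, *_ = np.linalg.lstsq(influence, -measured_figure_nm, rcond=None)
residual_rms = np.std(influence @ v + measured_figure_nm)
print(f"Residual figure error: {residual_rms:.2f} nm rms")
```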

    For deformable mirrors to be fully effective, scientists must develop better methods to analyze the x-ray beamline. “We need a sensor that won’t distort the beam,” says Pivovaroff. Such a sensor would provide a feedback loop that continuously feeds beam characteristics to the mirror actuators so they compensate for inconsistencies in the beam. Poyneer is working on new diagnostic techniques at Lawrence Berkeley National Laboratory’s Advanced Light Source (ALS), and the Livermore team is scheduled to begin testing the mirror on a beamline at ALS. The long-term goal of that testing will be to repeat the subnanometer flattening experiment, this time using x rays to measure the surface.

    Poyneer is hopeful the adaptive optics research effort will eventually result in a national capability that DOE next-generation x-ray light sources can draw on for new beamlines. She has shared the results with scientists at several DOE high-energy research centers and is working to better understand the needs of beamline engineers and the scientists who use those systems. “There’s a lot of interest and excitement in the community because deformable mirrors let us do better science,” says Pivovaroff. “The performance of our mirror has surprised many people. Controlling the surface of a half-meter-long optic to less than a nanometer is quite an accomplishment.”

    By enabling delivery of more coherent and better-focused x rays, the mirrors are expected to produce sharper images, which could lead to advances in physics, chemistry, and biology. The technology may enable new types of x-ray diagnostics for experiments at the National Ignition Facility.

    In an experiment, high-precision visible light measurements were used to flatten the x-ray deformable mirror to a surface figure error of only 0.7 nanometers average deviation.

    This artist’s concept illustrates the difference in reconstruction quality that adaptive optics could provide if installed at next-generation x-ray beamline facilities. At the top, a partially coherent x-ray beam hits the target object, producing a diffraction pattern on the detector and limiting the accuracy of the recovered image. At the bottom, adaptive optics provide a coherent beam with excellent wavefront quality, which improves resolution of the object. (Rendering by Kwei-Yu Chu.)

    Expanded Educational Outreach

    The Laboratory’s adaptive optics team is also dedicated to training the next generation of scientists and engineers for careers in adaptive optics and is working to disseminate expertise in adaptive optics technology to academia and industry. In a joint project between Lawrence Livermore National Security (the managing contractor for Lawrence Livermore) and UC, two graduate students from the UC Santa Cruz Department of Astronomy and Astrophysics are testing advanced algorithms that could further improve the performance of systems such as GPI. The algorithms are designed to predict wind-blown turbulence and further negate the effects of the atmosphere. Poyneer and astronomer Mark Ammons are mentoring the students, Alex Rudy and Sri Srinath.

    Poyneer says, “GPI has demonstrated how continued work on technology developments can lead to significantly improved instrument performance.” According to Ammons, “An important frontier in astronomy is pushing adaptive optics operation to visible wavelengths, which requires better control. GPI routinely meets these stringent performance requirements.”

    The lessons learned as part of the GPI experience will be critical input for next-generation adaptive optics on large telescopes, such as the W. M. Keck telescopes in Hawaii. Ammons adds, “While adaptive optics were first developed for military purposes, the loop has now closed—the advances made with GPI offer a wide range of potential national security applications.”

    In addition, the Livermore team is applying its expertise to other fields, as exemplified by progress in the extremely flat x-ray deformable mirror. Thanks to adaptive optics, the universe—from planets to x rays—is coming into greater focus.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.