Tagged: DOE Office of Science

  • richardmitnick 2:32 pm on October 5, 2018 Permalink | Reply
    Tags: DOE Office of Science, DOE Office of High Energy Physics, ORNL researchers advance quantum computing, science through six DOE awards

    From Oak Ridge National Laboratory: “ORNL researchers advance quantum computing, science through six DOE awards” 


    From Oak Ridge National Laboratory

    October 3, 2018
    Scott Jones, Communications
    jonesg@ornl.gov
    865.241.6491

    Oak Ridge National Laboratory will be working on new projects aimed at accelerating quantum information science. Credit: Andy Sproles/Oak Ridge National Laboratory, U.S. Dept. of Energy.

    ORNL researchers will leverage various microscopy platforms for quantum computing projects. Credit: Genevieve Martin/Oak Ridge National Laboratory, U.S. Dept. of Energy.

    The Department of Energy’s Oak Ridge National Laboratory is the recipient of six awards from DOE’s Office of Science aimed at accelerating quantum information science (QIS), a burgeoning field of research increasingly seen as vital to scientific innovation and national security.

    The awards, which were made in conjunction with the White House Summit on Advancing American Leadership in QIS, will leverage and strengthen ORNL’s established programs in quantum information processing and quantum computing.

    The application of quantum mechanics to computing and the processing of information has enormous potential for innovation across the scientific spectrum. Quantum technologies use units known as qubits to greatly increase the threshold at which information can be transmitted and processed. Whereas traditional “bits” have a value of either 0 or 1, qubits are encoded with values of both 0 and 1, or any combination thereof, at the same time, allowing for a vast number of possibilities for storing data.
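
    To make the bit-versus-qubit comparison concrete, here is a minimal numpy sketch (illustrative only, not ORNL code): n classical bits hold a single n-bit value, while the state of n qubits is described by 2^n complex amplitudes.

```python
import numpy as np

# Illustrative sketch: an n-qubit register is described by 2**n complex
# amplitudes, whereas n classical bits hold exactly one n-bit value.
n = 3

# A classical 3-bit register stores one of 8 values, e.g. 0b101.
classical_value = 0b101

# A 3-qubit register is a normalized vector of 2**3 = 8 amplitudes; here we
# build an equal superposition of all 8 basis states.
state = np.ones(2**n, dtype=complex) / np.sqrt(2**n)

# Measurement probabilities follow the Born rule: |amplitude|**2.
probabilities = np.abs(state) ** 2
print(probabilities)        # each of the 8 outcomes occurs with probability 1/8
print(probabilities.sum())  # 1.0 (the state is normalized)
```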

    While in its infancy, the technology is being harnessed to develop computers that, when mature, will be exponentially more powerful than today’s leading systems. Beyond computing, however, quantum information science shows great promise to advance a vast array of research domains, from encryption to artificial intelligence to cosmology.

    The ORNL awards represent three Office of Science programs.

    “Software Stack and Algorithms for Automating Quantum-Classical Computing,” a new project supported by the Office of Advanced Scientific Computing Research, will develop methods for programming quantum computers. Led by ORNL’s Pavel Lougovski, the team of researchers from ORNL, Johns Hopkins University Applied Physics Lab, University of Southern California, University of Maryland, Georgetown University, and Microsoft, will tackle translating scientific applications into functional quantum programs that return accurate results when executed on real-world faulty quantum hardware. The team will develop an open-source algorithm and software stack that will automate the process of designing, executing, and analyzing the results of quantum algorithms, thus enabling new discovery across many scientific domains with an emphasis on applications in quantum field theory, nuclear physics, condensed matter, and quantum machine learning.
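
    As a rough illustration of what such a stack automates, the hypothetical sketch below runs the design/execute/analyze cycle for the simplest possible case: a one-parameter, single-qubit circuit simulated exactly in numpy, with a classical outer loop searching for the parameter that minimizes a measured energy. The circuit, observable, and shot counts are invented for the example and are not the project's software.

```python
import numpy as np

# Hypothetical sketch of the kind of hybrid quantum-classical loop such a
# software stack automates: a classical optimizer repeatedly prepares a
# parameterized quantum state (design), samples measurements from it
# (execute, simulated here with numpy shot noise), and estimates an energy (analyze).

rng = np.random.default_rng(0)

def prepare_state(theta):
    """Design step: apply Ry(theta) to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def execute_and_analyze(theta, shots=2000):
    """Execute step: sample finite-shot measurements; analyze step: estimate <Z>."""
    psi = prepare_state(theta)
    p0 = abs(psi[0]) ** 2                          # probability of outcome 0
    counts0 = rng.binomial(shots, p0)              # shot noise from a finite, imperfect device
    return (counts0 - (shots - counts0)) / shots   # estimator of <Z>

# Classical outer loop: crude scan over the circuit parameter to minimize the energy.
thetas = np.linspace(0, np.pi, 50)
energies = [execute_and_analyze(t) for t in thetas]
best = thetas[int(np.argmin(energies))]
print(f"best theta ~ {best:.3f} rad, estimated energy ~ {min(energies):.3f}")  # expect ~pi, ~-1
```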

    ORNL’s Christopher M. Rouleau will lead the “Thin Film Platform for Rapid Prototyping Novel Materials with Entangled States for Quantum Information Science” project, funded by Basic Energy Sciences. The project aims to establish an agile AI-guided synthesis platform coupling reactive pulsed laser deposition with quick decision-making diagnostics to enable the rapid exploration of a wide spectrum of candidate thin-film materials for QIS; understand the dynamics of photonic states by combining a novel cathodoluminescence scanning electron microscopy platform with ultrafast laser spectroscopy; and enable understanding of entangled spin states for topological quantum computing by developing a novel scanning tunneling microscopy platform.

    ORNL’s Stephen Jesse will lead “Understanding and Controlling Entangled and Correlated Quantum States in Confined Solid-State Systems Created via Atomic Scale Manipulation,” a new project supported by Basic Energy Sciences that includes collaborators from Harvard and MIT. The goal of the project is to use advanced electron microscopes to engineer novel materials on an atom-by-atom basis for use in QIS. These microscopes, along with other powerful instrumentation, will also be used to assess emerging quantum properties in situ to aid the assembly process. Collaborators from Harvard will provide theoretical and computational effort to design quantum properties on demand using ORNL’s high-performance computing resources.

    ORNL is also partnering with Pacific Northwest National Laboratory, Berkeley Laboratory, and the University of Michigan on a project funded by the Office of Basic Energy Sciences titled “Embedding Quantum Computing into Many-Body Frameworks for Strongly-Correlated Molecular and Materials Systems.” The research team will develop methods for solving problems in computational chemistry for highly correlated electronic states. ORNL’s contribution, led by Travis Humble, will support this collaboration by translating applications of computational chemistry into the language needed for running on quantum computers and testing these ideas on experimental hardware.
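
    One standard way to translate a chemistry problem into the language of quantum computers is the Jordan-Wigner mapping, which rewrites fermionic creation and annihilation operators as strings of Pauli matrices acting on qubits. The sketch below applies it to a toy two-orbital Hamiltonian with made-up integrals; it is a generic illustration, not the collaboration's actual method or code.

```python
import numpy as np

# Jordan-Wigner illustration: two spin-orbitals are mapped to two qubits, and
# the one-body chemistry Hamiltonian becomes a matrix built from Pauli operators.

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(p, n):
    """Jordan-Wigner: a_p = Z x ... x Z x (X + iY)/2 x I x ... x I."""
    ops = [Z] * p + [(X + 1j * Y) / 2] + [I2] * (n - p - 1)
    return kron_all(ops)

n = 2                                   # two spin-orbitals, two qubits
a = [annihilation(p, n) for p in range(n)]

# Toy one-body integrals (hypothetical numbers, purely for illustration).
h = np.array([[-1.0, 0.2],
              [ 0.2, -0.5]])

H = sum(h[p, q] * a[p].conj().T @ a[q] for p in range(n) for q in range(n))

assert np.allclose(H, H.conj().T)       # the qubit Hamiltonian is Hermitian
print(np.linalg.eigvalsh(H).round(4))   # energies a quantum algorithm would target
```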

    ORNL will support multiple projects awarded by the Office of High Energy Physics to develop methods for detecting high-energy particles using quantum information science. They include:

    “Quantum-Enhanced Detection of Dark Matter and Neutrinos,” in collaboration with the University of Wisconsin, Tufts, and San Diego State University. This project will use quantum simulation to calculate detector responses to dark matter particles and neutrinos. A new simulation technique under development will require extensive work in error mitigation strategies to correctly evaluate scattering cross sections and other physical quantities. ORNL’s effort, led by Raphael Pooser, will help develop these simulation techniques and error mitigation strategies for the new quantum simulator device, thus ensuring successful detector calculations.
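
    A common error-mitigation strategy in this setting is zero-noise extrapolation: the same circuit is run at several artificially amplified noise levels and the observable is extrapolated back to the zero-noise limit. Whether this particular technique is the one the project will adopt is an assumption; the sketch below uses a synthetic noise model purely to show the mechanics.

```python
import numpy as np

# Zero-noise extrapolation (ZNE), sketched with synthetic numbers: run the same
# circuit at several artificially amplified noise levels, then extrapolate the
# measured observable back to zero noise. (Assumed technique for illustration only.)

rng = np.random.default_rng(1)
true_value = 0.80                          # the ideal, noise-free observable

def noisy_expectation(noise_scale, decay=0.15):
    """Toy noise model: the signal decays exponentially with the noise scale."""
    return true_value * np.exp(-decay * noise_scale) + rng.normal(0, 0.005)

scales = np.array([1.0, 1.5, 2.0, 3.0])    # noise amplification factors
measured = np.array([noisy_expectation(s) for s in scales])

# Fit log(signal) linearly in the scale and evaluate at scale 0; this is exact
# for the exponential-decay model assumed above (a Richardson-style fit).
slope, intercept = np.polyfit(scales, np.log(np.abs(measured)), 1)
mitigated = np.exp(intercept)

print(f"raw (scale = 1):  {measured[0]:.3f}")
print(f"mitigated (ZNE):  {mitigated:.3f}   (true value {true_value:.3f})")
```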

    “Particle Track Pattern Recognition via Content Addressable Memory and Adiabatic Quantum Optimization: OLYMPUS Experiment Revisited,” a collaboration with the Johns Hopkins Applied Physics Laboratory aimed at identifying rare events found in the data generated by experiments at particle colliders. ORNL principal investigator Travis Humble will apply new ideas for data analysis using experimental quantum computers that target faster response times and greater memory capacity for tracking signatures of high-energy particles.
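
    The general recipe behind adiabatic quantum optimization is to encode the pattern-recognition problem as a QUBO, a quadratic cost over binary variables (here, one per candidate track segment) that a quantum annealer then minimizes. The tiny instance below, with invented coefficients and a brute-force classical solver, shows what such a cost function looks like; it is not the OLYMPUS analysis itself.

```python
import itertools
import numpy as np

# Toy QUBO for track finding: binary variable x_i = 1 means "keep candidate
# segment i". The quadratic cost rewards well-supported segments, penalizes
# segments that conflict, and rewards smoothly connecting pairs.

n = 4  # number of candidate segments (hypothetical)

# Linear terms: negative values reward segments well supported by detector hits.
linear = np.array([-1.0, -0.8, -0.9, -0.2])

# Quadratic terms: positive penalties for segments that share a hit,
# small negative bonuses for pairs that connect smoothly.
quadratic = np.zeros((n, n))
quadratic[0, 1] = quadratic[1, 0] = +2.0   # segments 0 and 1 conflict
quadratic[1, 2] = quadratic[2, 1] = -0.5   # segments 1 and 2 connect smoothly
quadratic[0, 2] = quadratic[2, 0] = -0.5   # segments 0 and 2 connect smoothly

def qubo_cost(x):
    x = np.asarray(x)
    return float(linear @ x + x @ quadratic @ x / 2)

# Brute-force classical solution of the tiny instance (a quantum annealer
# would minimize the same cost over its qubits).
best = min(itertools.product([0, 1], repeat=n), key=qubo_cost)
print("selected segments:", best, "cost:", qubo_cost(best))
```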

    “HEP ML and Optimization Go Quantum,” in collaboration with Fermi National Accelerator Laboratory and Lockheed Martin Corporation, which will investigate how quantum machine learning methods may be applied to solving key challenges in optimization and data analysis. Advances in training machine learning networks using quantum computers promise greater accuracy and faster response times for data analysis. ORNL principal investigators Travis Humble and Alex McCaskey will help develop these new quantum machine learning methods for existing quantum computers by using the XACC programming tools, which offer a flexible framework by which to integrate quantum computing into scientific software.
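
    As one hedged example of what "quantum machine learning" can mean in practice, the sketch below implements a quantum-kernel classifier: data points are encoded into quantum states and classified by their pairwise state overlaps. Everything is simulated exactly with a single qubit in numpy; it does not use the XACC tools and is not the project's code.

```python
import numpy as np

# Quantum-kernel classification sketch: each feature is encoded into a
# single-qubit state, and classification uses the squared overlaps between
# encoded states (the "quantum kernel"), here computed exactly in numpy.

def encode(x):
    """Encode a scalar feature as the single-qubit state Ry(x)|0>."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def kernel(x1, x2):
    """Quantum kernel = squared overlap of the two encoded states."""
    return abs(np.dot(encode(x1), encode(x2))) ** 2

# Toy training data: class A clusters near 0.3, class B clusters near 2.5.
class_a = [0.2, 0.3, 0.4]
class_b = [2.3, 2.5, 2.7]

def classify(x):
    """Assign the class whose training points are, on average, most similar."""
    sim_a = np.mean([kernel(x, t) for t in class_a])
    sim_b = np.mean([kernel(x, t) for t in class_b])
    return "A" if sim_a > sim_b else "B"

print(classify(0.5))   # expected: A
print(classify(2.4))   # expected: B
```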

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 7:08 am on July 21, 2018 Permalink | Reply
    Tags: DOE Office of Science

    From Exascale Computing Project: “ECP Announces New Co-Design Center to Focus on Exascale Machine Learning Technologies” 

    From Exascale Computing Project

    07/20/18

    The Exascale Computing Project has initiated its sixth Co-Design Center, ExaLearn, to be led by Principal Investigator Francis J. Alexander, Deputy Director of the Computational Science Initiative at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory.

    Francis J. Alexander. BNL


    ExaLearn is a co-design center for Exascale Machine Learning (ML) Technologies and is a collaboration initially consisting of experts from eight multipurpose DOE labs.

    Brookhaven National Laboratory (Francis J. Alexander)
    Argonne National Laboratory (Ian Foster)
    Lawrence Berkeley National Laboratory (Peter Nugent)
    Lawrence Livermore National Laboratory (Brian van Essen)
    Los Alamos National Laboratory (Aric Hagberg)
    Oak Ridge National Laboratory (David Womble)
    Pacific Northwest National Laboratory (James Ang)
    Sandia National Laboratories (Michael Wolf)

    Rapid growth in the amount of data and computational power is driving a revolution in machine learning (ML) and artificial intelligence (AI). Beyond the highly visible successes in machine-based natural language translation, these new ML technologies have profound implications for computational and experimental science and engineering and the exascale computing systems that DOE is deploying to support those disciplines.

    To address these challenges, the ExaLearn co-design center will provide exascale ML software for use by ECP Applications projects, other ECP Co-Design Centers and DOE experimental facilities and leadership class computing facilities. The ExaLearn Co-Design Center will also collaborate with ECP PathForward vendors on the development of exascale ML software.

    The timeliness of ExaLearn’s proposed work ties into the critical national need to enhance economic development through science and technology. It is increasingly clear that advances in learning technologies have profound societal implications and that continued U.S. economic leadership requires a focused effort, both to increase the performance of those technologies and to expand their applications. Linking exascale computing and learning technologies represents a timely opportunity to address those goals.

    The practical end product will be a scalable and sustainable ML software framework that allows application scientists and the applied mathematics and computer science communities to engage in co-design for learning. The new knowledge and services to be provided by ExaLearn are imperative for the nation to remain competitive in computational science and engineering by making effective use of future exascale systems.

    “Our multi-laboratory team is very excited to have the opportunity to tackle some of the most important challenges in machine learning at the exascale,” Alexander said. “There is, of course, already a considerable investment by the private sector in machine learning. However, there is still much more to be done in order to enable advances in very important scientific and national security work we do at the Department of Energy. I am very happy to lead this effort on behalf of our collaborative team.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About ECP

    The ECP is a collaborative effort of two DOE organizations – the Office of Science and the National Nuclear Security Administration. As part of the National Strategic Computing initiative, ECP was established to accelerate delivery of a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development to meet the scientific and national security mission needs of DOE in the early-2020s time frame.

    About the Office of Science

    DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit https://science.energy.gov/.

    About NNSA

    Established by Congress in 2000, NNSA is a semi-autonomous agency within the DOE responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear explosive testing; works to reduce the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the United States and abroad. https://nnsa.energy.gov

    The Goal of ECP’s Application Development focus area is to deliver a broad array of comprehensive science-based computational applications that effectively utilize exascale HPC technology to provide breakthrough simulation and data analytic solutions for scientific discovery, energy assurance, economic competitiveness, health enhancement, and national security.

    Awareness of ECP and its mission is growing and resonating—and for good reason. ECP is an incredible effort focused on advancing areas of key importance to our country: economic competitiveness, breakthrough science and technology, and national security. And, fortunately, ECP has a foundation that bodes extremely well for the prospects of its success, with the demonstrably strong commitment of the US Department of Energy (DOE) and the talent of some of America’s best and brightest researchers.

    ECP is composed of about 100 small teams of domain, computer, and computational scientists, and mathematicians from DOE labs, universities, and industry. We are tasked with building applications that will execute well on exascale systems, enabled by a robust exascale software stack, and supporting necessary vendor R&D to ensure the compute nodes and hardware infrastructure are adept and able to do the science that needs to be done with the first exascale platforms.

     
  • richardmitnick 12:28 pm on September 4, 2016 Permalink | Reply
    Tags: DOE Office of Science

    From DOE: “Packaging a wallop” 


    Department of Energy

    ASCRDiscovery

    August 2016
    No writer credit found

    Lawrence Livermore National Laboratory’s time-saving HPC tool eases the way for the next era of scientific simulations.

    Technicians prepare the first row of cabinets for the pre-exascale Trinity supercomputer at Los Alamos National Laboratory, where a team from Lawrence Livermore National Laboratory deployed its new Spack software packaging tool. Photo courtesy of Los Alamos National Laboratory.

    From climate-change predictions to models of the expanding universe, simulations help scientists understand complex physical phenomena. But simulations aren’t easy to deploy. Computational models comprise millions of lines of code and rely on many separate software packages. For the largest codes, configuring and linking these packages can require weeks of full-time effort.

    Recently, a Lawrence Livermore National Laboratory (LLNL) team deployed a multiphysics code with 47 libraries – software packages that today’s HPC programs rely on – on Trinity, the Cray XC40 supercomputer being assembled at Los Alamos National Laboratory. A code that would have taken six weeks to deploy on a new machine required just a day and a half during an early-access period on part of Trinity, thanks to a new tool that automates the hardest parts of the process.

    LANL Cray XC40 Trinity supercomputer

    This leap in efficiency was achieved using the Spack package manager. Package management tools are used frequently to deploy web applications and desktop software, but they haven’t been widely used to deploy high-performance computing (HPC) applications. Few package managers handle the complexities of an HPC environment and application developers frequently resort to building by hand. But as HPC systems and software become ever more complex, automation will be critical to keep things running smoothly on future exascale machines, capable of one million trillion calculations per second. These systems are expected to have an even more complicated software ecosystem.

    “Spack is like an app store for HPC,” says Todd Gamblin, its creator and lead developer. “It’s a bit more complicated than that, but it simplifies life for users in a similar way. Spack allows users to easily find the packages they want, it automates the installation process, and it allows contributors to easily share their own build recipes with others.” Gamblin is a computer scientist in LLNL’s Center for Applied Scientific Computing and works with the Development Environment Group at Livermore Computing. Spack was developed with support from LLNL’s Advanced Simulation and Computing program.
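
    For a sense of what those shared build recipes look like, below is a minimal, hypothetical Spack package file. The package name, versions, checksum, and dependencies are placeholders; real recipes live in Spack's built-in package repository.

```python
# A minimal, hypothetical Spack package recipe of the kind contributors share
# through the repository. All names, versions, and the checksum are placeholders.
from spack import *


class Examplelib(Package):
    """A made-up physics library, used here only to illustrate the recipe format."""

    homepage = "https://example.org/examplelib"
    url = "https://example.org/examplelib-1.0.tar.gz"

    version('1.0', 'placeholder-md5-checksum')

    # Spack resolves these abstract dependencies during concretization.
    depends_on('mpi')
    depends_on('hdf5')

    def install(self, spec, prefix):
        # Standard configure/make build, installed into a Spack-managed prefix.
        configure('--prefix={0}'.format(prefix))
        make()
        make('install')
```

    With a recipe like this available, a user would build the library and its whole dependency tree with a single command such as `spack install examplelib`.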

    Spack’s success relies on contributions from its burgeoning open-source community. To date, 71 scientists at more than 20 organizations are helping expand Spack’s growing repository of software packages, which number more than 500 so far. Besides LLNL, participating organizations include seven national laboratories – Argonne, Brookhaven, Fermilab, Lawrence Berkeley (through the National Energy Research Scientific Computing Center), Los Alamos, Oak Ridge and Sandia – plus NASA, CERN and many other institutions worldwide.

    Spack is more than a repository for sharing applications. In the iPhone and Android app stores, users download pre-built programs that work out of the box. HPC applications often must be built directly on the supercomputer, letting programmers customize them for maximum speed. “You get better performance when you can optimize for both the host operating system and the specific machine you’re running on,” Gamblin says. Spack automates the process of fine-tuning an application and its libraries over many iterations, allowing users to quickly build many custom versions of codes and rapidly converge on a fast one.

    Applications can share libraries when the applications are compatible with the same versions of their libraries (top). But if one application is updated and another is not, the first application won’t work with the second. Spack (bottom) allows multiple versions to coexist on the same system; here, for example, it simply builds a new version of the physics library and installs it alongside the old one. Schematic courtesy of Lawrence Livermore National Laboratory.

    Each new version of a large code may require rebuilding 70 or more libraries, also called dependencies. Traditional package managers typically allow installation of only one version of a package, to be shared by all installed software. This can be overly restrictive for HPC, where codes are constantly changed but must continue to work together. Picture two applications that share two dependencies: one for math and another for physics. They can share because the applications are compatible with the same versions of their dependencies. Suppose that application 2 is updated, and now requires version 2.0 of the physics library, but application 1 still only works with version 1.0. In a typical package manager, this would cause a conflict, because the two versions of the physics package cannot be installed at once. Spack allows multiple versions to coexist on the same system and simply builds a new version of the physics library and installs it alongside the old one.

    This four-package example is simple, Gamblin notes, but imagine a similar scenario with 70 packages, each with conflicting requirements. Most application users are concerned with generating scientific results, not with configuring software. With Spack, they needn’t have detailed knowledge of all packages and their versions, let alone where to find the optimal version of each, to begin the build. Instead, Spack handles the details behind the scenes and ensures that dependencies are built and linked with their proper relationships. It’s like selecting a CD player and finding it’s already connected to a compatible amplifier, speakers and headphones.

    Gamblin and his colleagues call Spack’s dependency configuration process concretization – filling in “the details to make an abstract specification concrete,” Gamblin explains. “Most people, when they say they want to build something, they have a very abstract idea of what they want to build. The main complexity of building software is all the details that arise when you try to hook different packages together.”
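
    The toy sketch below (not Spack's real algorithm) captures the two ideas at play: concretization fills in a concrete version for every abstract requirement, and each fully concretized spec is hashed into its own install prefix, which is what lets the old and new physics libraries coexist side by side. All package names and versions are invented.

```python
import hashlib

# Toy concretization: pick concrete versions that satisfy every requirement,
# then give each fully concrete spec its own hash-based install prefix so
# multiple versions of the same library can coexist on one system.

available = {"physics": ["1.0", "2.0"], "math": ["1.1"]}

def concretize(requirements):
    """Pick the newest available version satisfying each abstract requirement."""
    concrete = {}
    for package, allowed in requirements.items():
        candidates = [v for v in available[package] if v in allowed]
        if not candidates:
            raise ValueError(f"no version of {package} satisfies {allowed}")
        concrete[package] = max(candidates)   # newest compatible version
    return concrete

def install_prefix(concrete):
    """Hash the concrete spec so every distinct build lands in its own directory."""
    spec = ",".join(f"{p}@{v}" for p, v in sorted(concrete.items()))
    return f"/opt/spack/{hashlib.sha1(spec.encode()).hexdigest()[:8]}"

# Application 1 still needs physics 1.0; application 2 was updated to need 2.0.
app1 = concretize({"physics": ["1.0"], "math": ["1.1"]})
app2 = concretize({"physics": ["2.0"], "math": ["1.1"]})

print(app1, install_prefix(app1))   # physics@1.0 in one prefix
print(app2, install_prefix(app2))   # physics@2.0 in a different prefix; both coexist
```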

    During concretization, the package manager runs many checks, flagging inconsistencies among packages, such as conflicting versions. Spack also compares the user’s expectations against the properties of the actual codes and their versions and calls out and helps to resolve any mismatches. These automated checks save untold hours of frustration, avoiding cases in which a package wouldn’t have run properly.

    The complexity of building modern HPC software leads some scientists to avoid using libraries in their codes. They opt instead to write complex algorithms themselves, Gamblin says. This is time consuming and can lead to sub-optimal performance or incorrect implementations. Package management simplifies the process of sharing code, reducing redundant effort and increasing software reuse.

    Most important, Spack enables users to focus on the science they set out to do. “Users really want to be able to install an application and get it working quickly,” Gamblin says. “They’re trying to do science, and Spack frees them from the meta-problem of building and configuring the code.”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions.

     
  • richardmitnick 9:40 pm on November 24, 2011 Permalink | Reply
    Tags: DOE Office of Science, INL

    Advocate for Basic Research at D.O.E. Labs and NASA After the Deficit Reduction Debacle in Washington 

    The recent deficit super committee debacle in Washington means possible debilitating budget cuts to your D.O.E. labs and NASA missions. Please get ready to write, email, or phone your congressional representatives and senators. It’s your tax dollars, folks.

    Here’s what’s at risk:


    D.O.E.:


    Argonne


    Ames


    Berkeley


    Brookhaven


    Fermilab


    INL


    Jefferson


    Livermore


    Los Alamos


    NSCL


    Oak Ridge


    Pacific Northwest


    Princeton Plasma Physics


    Sandia


    SLAC

    US/LHC


    The many other aspects of the D.O.E. Office of Science


    NASA:


    Hubble (Yes, there is still a budget for Hubble)


    Fermi

    Goddard


    Chandra

    Herschel


    JPL

    Kepler

    SOFIA


    Spitzer


    Webb


    WISE

    All of the other missions, current and future.

    Everything is at risk. The U.S. future as a leader in basic scientific research is at risk. Remember the Superconducting Super Collider? Killed off in 1993 by the dimwitted (then Democrat dominated) Congress?

    The tax dollars are yours. Visit the D.O.E lab and NASA mission web sites. Look around. See if you think that these are worthy of your tax dollars.

    Read back through past entries in this blog and you will see that it is not all High Energy Physics, Astronomy, and rocket science. It is also Biology, Chemistry, Medicine, Genetics, Clean and Renewable Energy, Ecology, Climate, you name it; our great labs and NASA missions are there to help make our lives better.

     
  • richardmitnick 2:29 pm on November 3, 2011 Permalink | Reply
    Tags: DOE Office of Science

    From Argonne Lab: “Batteries get a quick charge with new anode technology” 

    News from Argonne National Laboratory

    Jared Sagoff
    November 2, 2011.

    “A team of researchers at the U.S. Department of Energy’s Argonne National Laboratory, led by Argonne nanoscientist Tijana Rajh and battery expert Christopher Johnson, discovered that nanotubes composed of titanium dioxide can switch their phase as a battery is cycled, gradually boosting their operational capacity. Laboratory tests showed that new batteries produced with this material could be recharged up to half of their original capacity in less than 30 seconds.

    By switching out conventional graphite anodes for ones composed of the titanium nanotubes, Rajh and her colleagues witnessed a surprising phenomenon. As the battery cycled through several charges and discharges, its internal structure began to orient itself in a way that dramatically improved the battery’s performance.

    ‘We did not expect this to happen when we first started working with the material, but the anode spontaneously adopted the best structure,’ Rajh said. ‘There’s an internal kind of plasticity to the system that allows it to change as the battery gets cycled.’

    According to Argonne nanoscientist Hui Xiong, who worked with Rajh to develop the new anode material, titanium dioxide seemed like it would be unlikely to adequately substitute for graphite. ‘We started with a material that we never thought would have provided a functional use, and it turned into something that gave us the best result possible,’ she said.”

    See the full article here.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.


     