Tagged: D.O.E.

  • richardmitnick 9:51 am on May 23, 2019
    Tags: D.O.E., Sandia is planning another pair of launches this August.

    From Sandia Lab: “Sandia launches a bus into space” 

    From Sandia Lab

    May 23, 2019

    HOT SHOT sounding rocket program picks up flight pace.
    A sounding rocket designed and launched by Sandia National Laboratories lifts off from the Kauai Test Facility in Hawaii on April 24. (Photo by Mike Bejarano and Mark Olona)

    Sandia National Laboratories recently launched a bus into space. Not the kind with wheels that go round and round, but the kind of device that links electronic devices (a USB cable, short for “universal serial bus,” is one common example).

    The bus was among 16 total experiments aboard two sounding rockets that were launched as part of the National Nuclear Security Administration’s HOT SHOT program, which conducts scientific experiments and tests developing technologies on non-weaponized rockets. The respective flights took place on April 23 and April 24 at the Kauai Test Facility in Hawaii.

    The pair of flights marked an increase in the program’s tempo.

    “Sandia’s team was able to develop, fabricate, and launch two distinct payloads in less than 11 months,” said Nick Leathe, who oversaw the payload development. The last HOT SHOT flight — a single rocket launched in May 2018 — took 16 months to develop.

    Sandia, Lawrence Livermore National Laboratory, Kansas City National Security Campus, and the U.K.-based Atomic Weapons Establishment provided experiments for this series of HOT SHOTs.

    The rockets also featured several improvements over the one launched last year, including new sensors to measure pressure, temperature, and acceleration. These additions provided researchers with more detail about the conditions their experiments endured while traveling through the atmosphere.

    The experimental bus, for example, was tested to find out whether its components would be robust enough to operate during a rocket launch. The new technology was designed expressly for power distribution in national security applications and could make other electronics easier to upgrade. It includes Sandia-developed semiconductors and was made to withstand intense radiation.

    Sandia is planning another pair of launches this August. The name HOT SHOT comes from the term “high operational tempo,” which refers to the relatively high frequency of flights. A brisk flight schedule allows scientists and engineers to perform multiple tests in a highly specialized test environment in quick succession.

    For the recent flight tests, one Sandia team prepared two experiments, one for each flight, to observe in different ways the dramatic temperature and pressure swings that are normal in rocketry but difficult to reproduce on the ground. The researchers are aiming to improve software that models these conditions for national security applications, and they are now analyzing the flight data for discrepancies between what they observed and what their software predicted. Differences could lead to scientific insights that would help refine the program.
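
    The comparison the researchers describe boils down to computing residuals between flight telemetry and model output. Below is a minimal sketch of that kind of check in Python; the numbers are made-up placeholders, not Sandia data, and the variable names are hypothetical.

```python
import numpy as np

# Hypothetical telemetry-vs-model comparison; all values are illustrative.
time_s = np.linspace(0, 120, 7)                          # seconds after launch
observed_temp_K = np.array([290, 340, 420, 520, 610, 660, 680])
predicted_temp_K = np.array([290, 330, 400, 510, 620, 670, 690])

residual = observed_temp_K - predicted_temp_K            # discrepancy per sample
rmse = np.sqrt(np.mean(residual.astype(float) ** 2))     # overall model error

worst = residual[np.abs(residual).argmax()]
print(f"largest discrepancy: {int(worst):+d} K")
print(f"RMSE: {rmse:.1f} K")
```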

    Some experiments also studied potential further improvements for HOT SHOT itself, including additively manufactured parts that could be incorporated into future flights and instruments measuring rocket vibration.

    The sounding rockets are designed to achieve an altitude of about 1.2 million feet and to fly about 220 nautical miles down range into the Pacific Ocean. Sandia uses refurbished, surplus rocket engines, making these test flights more economical than conventional flight tests common at the end of a technology’s development.
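
    For readers who think in metric units, the quoted flight profile converts as follows; the conversion factors are standard and the figures are the article's.

```python
# Convert the quoted flight profile to kilometers.
FT_PER_KM = 3280.84      # feet per kilometer
KM_PER_NMI = 1.852       # kilometers per nautical mile

apogee_ft = 1.2e6        # "about 1.2 million feet"
downrange_nmi = 220      # "about 220 nautical miles"

print(f"apogee:    {apogee_ft / FT_PER_KM:.0f} km")       # ~366 km
print(f"downrange: {downrange_nmi * KM_PER_NMI:.0f} km")  # ~407 km
```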

    The HOT SHOT program enables accelerated cycles of learning for engineers and experimentalists. “Our goal is to take a 10-year process and truncate it to three years without losing quality in the resulting technologies. HOT SHOT is the first step in that direction,” said Todd Hughes, NNSA’s HOT SHOT Federal Program Manager.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Sandia Campus
    Sandia National Laboratory

    Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.


     
  • richardmitnick 11:38 am on September 29, 2018
    Tags: Actinide chemistry, Computational chemistry, D.O.E., Microsoft Quantum Development Kit, NWChem an open source high-performance computational chemistry tool funded by DOE

    From Pacific Northwest National Lab: “PNNL’s capabilities in quantum information sciences get boost from DOE grant and new Microsoft partnership” 

    From Pacific Northwest National Lab

    September 28, 2018
    Susan Bauer, PNNL,
    susan.bauer@pnnl.gov
    (509) 372-6083


    On Monday, September 24, the U.S. Department of Energy announced $218 million in funding for dozens of research awards in the field of Quantum Information Science. Nearly $2 million was awarded to DOE’s Pacific Northwest National Laboratory for a new quantum computing chemistry project.

    “This award will be used to create novel computational chemistry tools to help solve fundamental problems in catalysis, actinide chemistry, and materials science,” said principal investigator Karol Kowalski. “By collaborating with the quantum computing experts at Lawrence Berkeley National Laboratory, Oak Ridge National Laboratory, and the University of Michigan, we believe we can help reshape the landscape of computational chemistry.”

    Kowalski’s proposal was chosen along with 84 others to further the nation’s research in QIS and lay the foundation for the next generation of computing and information processing as well as an array of other innovative technologies.

    While Kowalski’s work will take place over the next three years, computational chemists everywhere will experience a more immediate upgrade to their capabilities in computational chemistry made possible by a new PNNL-Microsoft partnership.

    “We are working with Microsoft to combine their quantum computing software stack with our expertise on high-performance computing approaches to quantum chemistry,” said Sriram Krishnamoorthy who leads PNNL’s side of this collaboration.

    Microsoft will soon release an update to the Microsoft Quantum Development Kit that will include a new chemical simulation library developed in collaboration with PNNL. The library is used in conjunction with NWChem, an open source, high-performance computational chemistry tool funded by DOE. Together, the chemistry library and NWChem will help enable quantum solutions, allowing researchers and developers to study chemical problems at a level of accuracy and scale not practical before.
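
    As a concrete point of reference, NWChem itself is driven by plain-text input decks. The sketch below writes and runs a minimal closed-shell SCF calculation from Python. The geometry and keywords follow NWChem's documented input format, but this is only an illustration of how the classical tool is invoked, not the PNNL-Microsoft integration, and it assumes an nwchem binary on the PATH.

```python
import subprocess
from pathlib import Path

# Minimal NWChem input deck: water, STO-3G basis, closed-shell SCF energy.
deck = """\
start h2o_scf
title "minimal closed-shell SCF example"
geometry units angstrom
  O  0.000  0.000  0.000
  H  0.000  0.757  0.587
  H  0.000 -0.757  0.587
end
basis
  * library sto-3g
end
task scf energy
"""

Path("h2o.nw").write_text(deck)

# Assumes an 'nwchem' executable on PATH; quantum-workflow integrations would
# post-process the electronic-structure data rather than stop at the energy.
subprocess.run(["nwchem", "h2o.nw"], check=True)
```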

    “Researchers everywhere will be able to tackle chemistry challenges with an accuracy and at a scale we haven’t experienced before,” said Nathan Baker, director of PNNL’s Advanced Computing, Mathematics, and Data Division. Wendy Shaw, the lab’s division director for physical sciences, agrees with Baker. “Development and applications of quantum computing to catalysis problems has the ability to revolutionize our ability to predict robust catalysts that mimic features of naturally occurring, high-performing catalysts, like nitrogenase,” said Shaw about the application of QIS to her team’s work.

    PNNL’s aggressive focus on quantum information science is driven by a research interest in the capability and by national priorities. In September, the White House published the National Strategic Overview for Quantum Information Science and hosted a summit on the topic. Through their efforts, researchers hope to unleash quantum’s unprecedented processing power and challenge traditional limits for scaling and performance.

    In addition to the new DOE funding, PNNL is also pushing work in quantum conversion through internal investments. Researchers are determining which software architectures allow for efficient use of QIS platforms, designing QIS systems for specific technologies, imagining what scientific problems can best be solved using QIS systems, and identifying materials and properties to build quantum systems. The effort is cross-disciplinary; PNNL scientists from its computing, chemistry, physics, and applied mathematics domains are all collaborating on quantum research and pushing to apply their discoveries. “The idea for this internal investment is that PNNL scientists will take that knowledge to build capabilities impacting catalysis, computational chemistry, materials science, and many other areas,” said Krishnamoorthy.

    Krishnamoorthy wants QIS to be among the priorities that researchers think about applying to all of PNNL’s mission areas. With continued investment from the DOE and partnerships with industry leaders like Microsoft, that just might happen.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Pacific Northwest National Laboratory (PNNL) is one of the United States Department of Energy National Laboratories, managed by the Department of Energy’s Office of Science. The main campus of the laboratory is in Richland, Washington.

    PNNL scientists conduct basic and applied research and development to strengthen U.S. scientific foundations for fundamental research and innovation; prevent and counter acts of terrorism through applied research in information analysis, cyber security, and the nonproliferation of weapons of mass destruction; increase the U.S. energy capacity and reduce dependence on imported oil; and reduce the effects of human activity on the environment. PNNL has been operated by Battelle Memorial Institute since 1965.


     
  • richardmitnick 12:44 pm on March 9, 2018
    Tags: D.O.E., ECP LANL Cray XC40 Trinity supercomputer

    From ECP: What is Exascale Computing and Why Do We Need It? 


    Exascale Computing Project


    Los Alamos National Lab


    The Trinity supercomputer, with both Xeon Haswell and Xeon Phi Knights Landing processors, is the seventh fastest supercomputer on the TOP500 list and number three on the High Performance Conjugate Gradients (HPCG) Benchmark project.

    Meeting national security science challenges with reliable computing

    As part of the National Strategic Computing Initiative (NSCI), the Exascale Computing Project (ECP) was established to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development to meet the scientific and national security mission needs of the U.S. Department of Energy (DOE) in the mid-2020s time frame.

    The goal of ECP is to deliver breakthrough modeling and simulation solutions that analyze more data in less time, providing insights and answers to the most critical U.S. challenges in scientific discovery, energy assurance, economic competitiveness and national security.

    The Trinity Supercomputer at Los Alamos National Laboratory was recently named as a top 10 supercomputer on two lists: it made number three on the High Performance Conjugate Gradients (HPCG) Benchmark project, and is number seven on the TOP500 list.

    “Trinity has already made unique contributions to important national security challenges, and we look forward to Trinity having a long tenure as one of the most powerful supercomputers in the world,” said John Sarrao, associate director for Theory, Simulation and Computation at Los Alamos.

    Trinity, a Cray XC40 supercomputer at the Laboratory, was recently upgraded with Intel “Knights Landing” Xeon Phi processors, which propelled it from 8.10 petaflops six months ago to 14.14 petaflops.

    The Trinity Supercomputer Phase II project was completed during the summer of 2017, and the computer became fully operational during an unclassified “open science” run; it has now transitioned to classified mode. Trinity is designed to provide increased computational capability for the National Nuclear Security Administration in support of increasing geometric and physics fidelities in nuclear weapons simulation codes, while maintaining expectations for total time to solution.

    The capabilities of Trinity are required for supporting the NNSA Stockpile Stewardship program’s certification and assessments to ensure that the nation’s nuclear stockpile is safe, secure and effective.

    The Trinity project is managed and operated by Los Alamos National Laboratory and Sandia National Laboratories under the Alliance for Computing at Extreme Scale (ACES) partnership. The system is located at the Nicholas Metropolis Center for Modeling and Simulation at Los Alamos and covers approximately 5,200 square feet of floor space.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    LANL campus

    What is exascale computing?
    Exascale computing refers to computing systems capable of at least one exaflop, or a billion billion calculations per second (10^18). That is 50 times faster than the most powerful supercomputers in use today and represents a thousand-fold increase over the first petascale computer, which came into operation in 2008. How we use these large-scale simulation resources is the key to solving some of today’s most pressing problems, including clean energy production, nuclear reactor lifetime extension and nuclear stockpile aging.
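
    The arithmetic behind those comparisons is easy to check. The sketch below assumes an illustrative workload of 10^21 operations to show the thousand-fold gap in turnaround between a 2008-era petascale machine and an exascale one.

```python
# Turnaround time for the same job at petascale vs. exascale.
PETAFLOP = 1e15   # operations/second, first petascale system (2008)
EXAFLOP = 1e18    # operations/second, exascale target

workload_ops = 1e21   # hypothetical simulation size

print(f"petascale: {workload_ops / PETAFLOP / 3600:7.1f} hours")  # ~277.8 h
print(f"exascale:  {workload_ops / EXAFLOP / 3600:7.1f} hours")   # ~0.3 h
```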

    The Los Alamos role

    In the run-up to developing exascale systems, at Los Alamos we will be taking the lead on a co-design center, the Co-Design Center for Particle-Based Methods: From Quantum to Classical, Molecular to Cosmological. The ultimate goal is the creation of scalable open exascale software platforms suitable for use by a variety of particle-based simulations.

    Los Alamos is leading the Exascale Atomistic capability for Accuracy, Length and Time (EXAALT) application development project. EXAALT will develop a molecular dynamics simulation platform that will fully utilize the power of exascale. The platform will allow users to choose the point in accuracy, length or time-space that is most appropriate for the problem at hand, trading the cost of one over another. The EXAALT project will be powerful enough to address a wide range of materials problems. For example, during its development, EXAALT will examine the degradation of UO2 fission fuel and plasma damage in tungsten under fusion first-wall conditions.
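
    At the heart of any molecular dynamics platform like EXAALT is a time-stepping loop. The sketch below is a generic velocity-Verlet integrator in Python, shown only to make the method concrete; it is not EXAALT code, and the two-particle harmonic “bond” stands in for real interatomic potentials.

```python
import numpy as np

def velocity_verlet(pos, vel, force_fn, mass, dt, steps):
    """Generic velocity-Verlet integrator: the core loop of most MD codes."""
    f = force_fn(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass   # half-kick with current forces
        pos += dt * vel              # drift
        f = force_fn(pos)            # recompute forces at new positions
        vel += 0.5 * dt * f / mass   # half-kick with new forces
    return pos, vel

def spring(pos):
    # Toy potential: two particles joined by a unit-stiffness spring.
    d = pos[1] - pos[0]
    return np.array([d, -d])

pos = np.array([0.0, 1.5])
vel = np.zeros(2)
pos, vel = velocity_verlet(pos, vel, spring, mass=1.0, dt=0.01, steps=1000)
print(pos, vel)   # the pair oscillates about its center of mass
```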

    In addition, Los Alamos and partnering organizations will be involved in key software development proposals that cover many components of the software stack for exascale systems, including programming models and runtime libraries, mathematical libraries and frameworks, tools, lower-level system software, data management and I/O, as well as in situ visualization and data analysis.

    A collaboration of partners

    ECP is a collaborative effort of two DOE organizations—the Office of Science and the National Nuclear Security Administration (NNSA). DOE formalized this long-term strategic effort under the guidance of key leaders from six DOE and NNSA National Laboratories: Argonne, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge and Sandia. The ECP leads the formalized project management and integration processes that bridge and align the resources of the DOE and NNSA laboratories, allowing them to work with industry more effectively.

     
  • richardmitnick 11:57 am on November 8, 2017
    Tags: At higher temperature snapshots at different times show the moments pointing in different random directions, D.O.E., Magnetic moments, Rules of attraction

    From ORNL OLCF via D.O.E.: “Rules of attraction” 


    Oak Ridge National Laboratory

    OLCF

    November 8, 2017
    No writer credit

    A depiction of magnetic moments obtained using the hybrid WL-LSMS modeling technique inside nickel (Ni) as the temperature is increased from left to right. At low temperature (left), the magnetic moments of the Ni atoms all point in one direction and align. At higher temperature (right), snapshots at different times show the moments pointing in different, random directions, and the individual atoms no longer perfectly align. Image courtesy of Oak Ridge National Laboratory.

    The atoms inside materials are not always perfectly ordered, as usually depicted in models. In magnetic, ferroelectric (or showing electric polarity) and alloy materials, there is competition between random arrangement of the atoms and their desire to align in a perfect pattern. The change between these two states, called a phase transition, happens at a specific temperature.

    Markus Eisenbach, a computational scientist at the Department of Energy’s Oak Ridge National Laboratory, heads a group of researchers who’ve set out to model the behavior of these materials using first principles – from fundamental physics without preset conditions that fit external data.

    “We’re just scratching the surface of comprehending the underlying physics of these three classes of materials, but we have an excellent start,” Eisenbach says. “The three are actually overlapping in that their modes of operation involve disorder, thermal excitations and resulting phase transitions – from disorder to order – to express their behavior.”

    Eisenbach says he’s fascinated by “how magnetism appears and then disappears at varying temperatures. Controlling magnetism from one direction to another has implications for magnetic recording, for instance, and all sorts of electric machines – for example, motors in automobiles or generators in wind turbines.”

    The researchers’ models also could help find strong, versatile magnets that don’t use rare earth elements as an ingredient. Located at the bottom of the periodic table, these 17 materials come almost exclusively from China and, because of their limited source, are considered critical. They are a mainstay in the composition of many strong magnets.

    Eisenbach and his collaborators, a group that includes his ORNL team and Yang Wang of the Pittsburgh Supercomputing Center, are in the second year of a DOE INCITE (Innovative and Novel Computational Impact on Theory and Experiment) award to model all three materials at the atomic level. They’ve been awarded 100 million processor hours on ORNL’s Titan supercomputer and already have impressive results in magnetics and alloys. Titan is housed at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science user facility.

    The researchers tease out atomic-scale behavior using, at times, a hybrid code that combines Wang-Landau (WL) Monte Carlo and locally self-consistent multiple scattering (LSMS) methods. WL is a statistical approach that samples the atomic energy landscape in terms of finite-temperature effects; LSMS determines the energy values. With LSMS alone, they’ve calculated the ground state magnetic properties of an iron-platinum particle. And without making any assumption beyond the chemical composition, they’ve determined the temperature at which copper-zinc alloy goes from a disordered state to an ordered one.
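
    To make the Wang-Landau idea concrete, here is a minimal, self-contained WL estimator of the density of states for a tiny 2D Ising model. This is an illustration of the sampling scheme only, not the WL-LSMS code; the hybrid method replaces the toy spin-flip energy below with first-principles LSMS energies.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4                                   # 4x4 periodic Ising lattice, J = 1
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    # Sum over nearest-neighbor bonds via periodic shifts.
    return -int(np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

lnG = {}                                # running estimate of ln g(E)
f = 1.0                                 # ln(modification factor), reduced per stage
E = energy(spins)

while f > 1e-4:
    for _ in range(20000):
        i, j = rng.integers(L, size=2)
        # Energy change from flipping spin (i, j): 2 * s_ij * (neighbor sum).
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        Enew = E + 2 * int(spins[i, j]) * int(nn)
        # Accept with probability g(E)/g(Enew): drives a flat histogram in E.
        if rng.random() < np.exp(min(0.0, lnG.get(E, 0.0) - lnG.get(Enew, 0.0))):
            spins[i, j] *= -1
            E = Enew
        lnG[E] = lnG.get(E, 0.0) + f    # update density-of-states estimate
    f /= 2                              # shrink ln f; real codes also test histogram flatness

print(sorted(lnG.items())[:3])          # ln g(E) near the ground state
```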

    Moreover, Eisenbach has co-authored two materials science papers in the past year, one in Leadership Computing, the other a letter in Nature, in which he and colleagues reported using the three-dimensional coordinates of a real iron-platinum nanoparticle with 6,560 iron and 16,627 platinum atoms to find its magnetic properties.

    “We’re combining the efficiency of WL sampling, the speed of the LSMS and the computing power of Titan to provide a solid first-principles thermodynamics description of magnetism,” Eisenbach says. “The combination also is giving us a realistic treatment of alloys and functional materials.”

    Alloys are composed of at least two metals. Brass, for instance, is an alloy of copper and zinc. Magnets, of course, are used in everything from credit cards to MRI machines and in electric motors. Ferroelectric materials, such as barium titanate and zirconium titanate, form what’s known as an electric moment, in a transition phase, when temperatures drop beneath the ferroelectric Curie temperature – the point below which the atoms align, producing spontaneous polarization (or, in magnets, magnetization). The term – named for the French physicist Pierre Curie, who in the late 19th century described how magnetic materials respond to temperature changes – applies to both ferroelectric and ferromagnetic transitions. Eisenbach and his collaborators are interested in both phenomena.

    Eisenbach is particularly intrigued by high-entropy alloys, a relatively new sub-class discovered a decade ago that may hold useful mechanical properties. Conventional alloys have a dominant element – for instance, iron in stainless steel. High-entropy alloys, on the other hand, evenly spread out their elements on a crystal lattice. They don’t get brittle when chilled, remaining pliable at extremely low temperatures.

    To understand the configuration of high-entropy alloys, Eisenbach uses the analogy of a chess board sprinkled with black and white beads. In an ordered material, black beads occupy black squares and white beads, white squares. In high-entropy alloys, however, the beads are scattered randomly across the lattice regardless of color until the material reaches a low temperature, much lower than normal alloys, when it almost grudgingly orders itself.

    Eisenbach and his colleagues have modelled a material as large as 100,000 atoms using the Wang-Landau/LSMS method. “If I want to represent disorder, I want a simulation that calculates for hundreds if not thousands of atoms, rather than just two or three,” he says.

    To model an alloy, the researchers first deploy the Schrodinger equation to determine the state of electrons in the atoms. “Solving the equation lets you understand the electrons and their interactions, which is the glue that holds the material together and determines their physical properties.”

    A material’s properties and energies are computed from many hundreds of thousands of calculations spanning possible configurations and varying temperatures. The resulting rendering lets modelers determine at what temperature a material loses or gains its magnetism, or at what temperature an alloy goes from a disordered state to a perfectly ordered one.

    Eisenbach eagerly awaits the arrival of the Summit supercomputer – five to six times more powerful than Titan – to OLCF in late 2018.

    Two views of Summit:

    ORNL IBM Summit Supercomputer

    ORNL IBM Summit supercomputer depiction

    “Ultimately, we can do larger simulations and possibly look at even more complex disordered materials with more components and widely varying compositions, where the chemical disorder might lead to qualitatively new physical behaviors.”

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to processing an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.
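
    The division of labor described above can be felt even without a GPU. In the numpy sketch below, the vectorized sweep plays the role of the wide, simple, data-parallel work GPUs excel at, while the interpreted loop stands in for serial per-element processing; it is an analogy only, and the timings will vary by machine.

```python
import time
import numpy as np

x = np.random.rand(10_000_000)

t0 = time.perf_counter()
y = np.sqrt(x) * 2.0 + 1.0     # one data-parallel sweep over every element
t1 = time.perf_counter()

s = 0.0
t2 = time.perf_counter()
for v in x[:100_000]:          # sequential interpreted loop over just 1% of the data
    s += v ** 0.5 * 2.0 + 1.0
t3 = time.perf_counter()

print(f"vectorized sweep, full array:  {t1 - t0:.3f} s")
print(f"interpreted loop, 1% of array: {t3 - t2:.3f} s")
```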

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 6:15 pm on September 18, 2017
    Tags: D.O.E.

    From BNL: “Three Brookhaven Lab Scientists Selected to Receive Early Career Research Program Funding” 

    Brookhaven Lab

    August 15, 2017 [Just caught up with this via social media.]
    Karen McNulty Walsh,
    kmcnulty@bnl.gov
    (631) 344-8350
    Peter Genzer,
    genzer@bnl.gov
    (631) 344-3174

    Three scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have been selected by DOE’s Office of Science to receive significant research funding through its Early Career Research Program.

    The program, now in its eighth year, is designed to bolster the nation’s scientific workforce by providing support to exceptional researchers during the crucial early career years, when many scientists do their most formative work. The three Brookhaven Lab recipients are among a total of 59 recipients selected this year after a competitive review of about 700 proposals.

    The scientists are each expected to receive grants of up to $2.5 million over five years to cover their salary plus research expenses. A list of the 59 awardees, their institutions, and titles of research projects is available on the Early Career Research Program webpage.

    This year’s Brookhaven Lab awardees include:

    Sanjaya Senanayake

    Brookhaven Lab chemist Sanjaya D. Senanayake was selected by DOE’s Office of Basic Energy Sciences to receive funding for “Unraveling Catalytic Pathways for Low Temperature Oxidative Methanol Synthesis from Methane.” His overarching goal is to study and improve catalysts that enable the conversion of methane (CH4), the primary component of natural gas, directly into methanol (CH3OH), a valuable chemical intermediate and potential renewable fuel.

    This research builds on the recent discovery of a single step catalytic process for this reaction that proceeds at low temperatures and pressures using inexpensive earth abundant catalysts. The reaction promises to be more efficient than current multi-step processes, which are energy-intensive, and a significant improvement over other attempts at one-step reactions where higher temperatures convert most of the useful hydrocarbon building blocks into carbon monoxide and carbon dioxide rather than methanol. With Early Career funding, Senanayake’s team will explore the nature of the reaction, and build on ways to further improve catalytic performance and specificity.

    The project will exploit unique capabilities of facilities at Brookhaven Lab, particularly at the National Synchrotron Light Source II (NSLS-II), that make it possible to study catalysts in real-world reaction environments (in situ) using x-ray spectroscopy, electron imaging, and other in situ methods.

    BNL NSLS-II

    Experiments using well defined model surfaces and powders will reveal atomic level catalytic structures and reaction dynamics. When combined with theoretical modeling, these studies will help the scientists identify the essential interactions that take place on the surface of the catalyst. Of particular interest are the key features that activate stable methane molecules through “soft” oxidative activation of C-H bonds so methane can be converted to methanol using oxygen (O2) and water (H2O) as co-reactants.
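
    For reference, the net stoichiometry of the direct methane-to-methanol conversion described above is the textbook partial-oxidation balance. Water, named as a co-reactant in the article, takes part in the mechanism but cancels from the overall balance shown here.

```latex
% Net balance for catalytic partial oxidation of methane to methanol.
\[
  2\,\mathrm{CH_4} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{CH_3OH}
\]
```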

    This work will establish and experimentally validate principles that can be used to design improved catalysts for synthesizing fuel and other industrially relevant chemicals from abundant natural gas.

    “I am grateful for this funding and the opportunity to pursue this promising research,” Senanayake said. “These fundamental studies are an essential step toward overcoming key challenges for the complex conversion of methane into valued chemicals, and for transforming the current model catalysts into practical versions that are inexpensive, durable, selective, and efficient for commercial applications.”

    Sanjaya Senanayake earned his undergraduate degree in material science and Ph.D. in chemistry from the University of Auckland in New Zealand in 2001 and 2006, respectively. He worked as a research associate at Oak Ridge National Laboratory from 2005 to 2008, and served as a local scientific contact at beamline U12a at the National Synchrotron Light Source (NSLS) at Brookhaven Lab from 2005 to 2009. He joined the Brookhaven staff as a research associate in 2008 and was promoted to assistant chemist and then associate chemist in 2014, while serving as the spokesperson for NSLS Beamline X7B. He has co-authored over 100 peer-reviewed publications in the fields of surface science and catalysis, and has expertise in the synthesis, characterization, and reactivity of catalysts and reactions essential for energy conversion. He is an active member of the American Chemical Society, North American Catalysis Society, the American Association for the Advancement of Science, and the New York Academy of Science.

    Alessandro Tricoli

    Brookhaven Lab physicist Alessandro Tricoli will receive Early Career Award funding from DOE’s Office of High Energy Physics for a project titled “Unveiling the Electroweak Symmetry Breaking Mechanism at ATLAS and at Future Experiments with Novel Silicon Detectors.”

    CERN/ATLAS detector

    His work aims to improve, through precision measurements, the search for exciting new physics beyond what is currently described by the Standard Model [SM], the reigning theory of particle physics.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    The discovery of the Higgs boson at the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) in Switzerland confirmed how the quantum field associated with this particle generates the masses of other fundamental particles, providing key insights into electroweak symmetry breaking—the mass-generating “Higgs mechanism.”

    CERN ATLAS Higgs Event

    But at the same time, despite direct searches for “new physics” signals that cannot be explained by the SM, scientists have yet to observe any evidence for such phenomena at the LHC—even though they know the SM is incomplete (for example it does not include an explanation for gravity).

    Tricoli’s research aims to make precision measurements that test fundamental predictions of the SM and to identify anomalies that may lead to such discoveries. He focuses on the analysis of data from the LHC’s ATLAS experiment to comprehensively study electroweak interactions between the Higgs and particles called W and Z bosons. Any discovery of anomalies in such interactions could signal new physics at very high energies not directly accessible by the LHC.

    This method of probing physics beyond the SM will become even more stringent once the high-luminosity upgrade of ATLAS, currently underway, is completed for longer-term LHC operations planned to begin in 2026.

    Tricoli’s work will play an important role in the upgrade of ATLAS’s silicon detectors, using novel state-of-the-art technology capable of precision particle tracking and timing so that the detector will be better able to identify primary particle interactions and tease out signals from background events. Designing these next-generation detector components could also have a profound impact on the development of future instruments that can operate in high radiation environments, such as in future colliders or in space.

    “This award will help me build a strong team around a research program I feel passionate about at ATLAS and the LHC, and for future experiments,” Tricoli said.

    “I am delighted and humbled by the challenge given to me with this award to take a step forward in science.”

    Alessandro Tricoli received his undergraduate degree in physics from the University of Bologna, Italy, in 2001, and his Ph.D. in particle physics from Oxford University in 2007. He worked as a research associate at Rutherford Appleton Laboratory in the UK from 2006 to 2009, and as a research fellow and then staff member at CERN from 2009 to 2015, receiving commendations on his excellent performance from both institutions. He joined Brookhaven Lab as an assistant physicist in 2016. A co-author on multiple publications, he has expertise in silicon tracker and detector design and development, as well as the analysis of physics and detector performance data at high-energy physics experiments. He has extensive experience tutoring and mentoring students, as well as coordinating large groups of physicists involved in research at ATLAS.

    Chao Zhang

    Brookhaven Lab physicist Chao Zhang was selected by DOE’s Office of High Energy Physics to receive funding for a project titled, “Optimization of Liquid Argon TPCs for Nucleon Decay and Neutrino Physics.” Liquid Argon TPCs (for Time Projection Chambers) form the heart of many large-scale particle detectors designed to explore fundamental mysteries in particle physics.

    Among the most compelling is the question of why there’s a predominance of matter over antimatter in our universe. Though scientists believe matter and antimatter were created in equal amounts during the Big Bang, equal amounts would have annihilated one another, leaving only light. The fact that we now have a universe made almost entirely of matter means something must have tipped the balance.

    A US-hosted international experiment scheduled to start collecting data in the mid-2020s, called the Deep Underground Neutrino Experiment (DUNE), aims to explore this mystery through the search for two rare but necessary conditions for the imbalance: 1) evidence that some processes produce an excess of matter over antimatter, and 2) a sizeable difference in the way matter and antimatter behave.

    FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA


    FNAL DUNE Argon tank at SURF


    Surf-Dune/LBNF Caverns at Sanford



    SURF building in Lead SD USA

    The DUNE experiment will look for signs of these conditions by studying how protons (one of the two “nucleons” that make up atomic nuclei) decay as well as how elusive particles called neutrinos oscillate, or switch identities, among three known types.
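
    The identity-switching probability DUNE will measure is often summarized by the standard two-flavor vacuum formula below, a textbook approximation; DUNE’s full analysis involves three flavors plus matter effects. Here θ is the mixing angle, Δm² the squared-mass splitting in eV², L the baseline in km, and E the neutrino energy in GeV.

```latex
% Two-flavor vacuum oscillation probability (textbook approximation).
\[
  P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,
  \sin^2\!\left(\frac{1.27\,\Delta m^2\, L}{E}\right)
\]
```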

    The DUNE experiment will make use of four massive 10-kiloton detector modules, each with a Liquid Argon Time Projection Chamber (LArTPC) at its core. Chao’s aim is to optimize the performance of the LArTPCs to fully realize their potential to track and identify particles in three dimensions, with a particular focus on making them sensitive to the rare proton decays. His team at Brookhaven Lab will establish a hardware calibration system to ensure their ability to extract subtle signals using specially designed cold electronics that will sit within the detector. They will also develop software to reconstruct the three-dimensional details of complex events, and analyze data collected at a prototype experiment (ProtoDUNE, located at Europe’s CERN laboratory) to verify that these methods are working before incorporating any needed adjustments into the design of the detectors for DUNE.

    “I am honored and thrilled to receive this distinguished award,” said Chao. “With this support, my colleagues and I will be able to develop many new techniques to enhance the performance of LArTPCs, and we are excited to be involved in the search for answers to one of the most intriguing mysteries in science, the matter-antimatter asymmetry in the universe.”

    Chao Zhang received his B.S. in physics from the University of Science and Technology of China in 2002 and his Ph.D. in physics from the California Institute of Technology in 2010, continuing as a postdoctoral scholar there until joining Brookhaven Lab as a research associate in 2011. He was promoted to physics associate III in 2015. He has actively worked on many high-energy neutrino physics experiments, including DUNE, MicroBooNE, Daya Bay, PROSPECT, JUNO, and KamLAND, co-authoring more than 40 peer-reviewed publications with a total of over 5,000 citations. He has expertise in the fields of neutrino oscillations, reactor neutrinos, nucleon decays, liquid scintillator and water-based liquid scintillator detectors, and liquid argon time projection chambers. He is an active member of the American Physical Society.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition
    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 4:35 pm on July 26, 2017
    Tags: D.O.E.

    From ESnet: “SLAC, AIC and Zettar Move Petabyte Datasets at Unprecedented Speed via ESnet” 


    ESnet

    2017-07-26
    ESNETWORK

    Twice a year, ESnet staff meet with managers and researchers associated with each of the DOE Office of Science program offices to look toward the future of networking requirements and then take the planning steps to keep networking capabilities out in front of those demands.

    Network engineers and researchers at DOE national labs take a similar forward-looking approach. Earlier this year, DOE’s SLAC National Accelerator Laboratory (SLAC) teamed up with AIC and Zettar and tapped into ESnet’s 100G backbone network to repeatedly transfer 1-petabyte files in 1.4 days over a 5,000-mile portion of ESnet’s production network. Even with the transfer bandwidth capped at 80Gbps, the milestone demo resulted in transfer rates five times faster than other technologies. The demo data accounted for a third of all ESnet traffic during the tests. Les Cottrell of SLAC presented the results at the ESnet Site Coordinators meeting (ESCC) held at Lawrence Berkeley National Laboratory in May 2017.
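
    The quoted figures are easy to sanity-check: at the stated 80 Gbps cap, moving a decimal petabyte takes about 1.16 days of pure wire time, consistent with the reported 1.4 days once protocol and storage overheads are included.

```python
# Wire time for 1 PB at an 80 Gbps cap.
petabyte_bits = 1e15 * 8    # decimal petabyte, in bits
cap_bps = 80e9              # bandwidth cap, bits/second

seconds = petabyte_bits / cap_bps
print(f"{seconds / 86400:.2f} days")   # ~1.16 days, vs. 1.4 days reported end-to-end
```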


    Read the AIC/Zettar news release.

    Read the story in insideHPC.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    Created in 1986, the U.S. Department of Energy’s (DOE’s) Energy Sciences Network (ESnet) is a high-performance network built to support unclassified science research. ESnet connects more than 40 DOE research sites—including the entire National Laboratory system, supercomputing facilities and major scientific instruments—as well as hundreds of other science networks around the world and the Internet.

     
  • richardmitnick 3:19 pm on May 9, 2017
    Tags: $3.9 Million to Help Industry Address High Performance Computing Challenges, D.O.E.

    From ORNL via energy.gov: “Energy Department Announces $3.9 Million to Help Industry Address High Performance Computing Challenges” 


    Oak Ridge National Laboratory

    ENERGY.GOV

    May 8, 2017
    Today, the U.S. Department of Energy announced nearly $3.9 million for 13 projects designed to stimulate the use of high performance supercomputing in U.S. manufacturing. The Office of Energy Efficiency and Renewable Energy (EERE) Advanced Manufacturing Office’s High Performance Computing for Manufacturing (HPC4Mfg) program enables innovation in U.S. manufacturing through the adoption of high performance computing (HPC) to advance applied science and technology relevant to manufacturing. HPC4Mfg aims to increase the energy efficiency of manufacturing processes, advance energy technology, and reduce energy’s impact on the environment through innovation.

    The 13 new project partnerships include application of world-class computing resources and expertise of the national laboratories including Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, National Renewable Energy Laboratory, and Argonne National Laboratory. These projects will address key challenges in U.S. manufacturing proposed in partnership with companies and improve energy efficiency across the manufacturing industry through applied research and development of energy technologies.

    Each of the 13 newly selected projects will receive up to $300,000 to support work performed by the national lab partners and allow the partners to use HPC compute cycles.

    The 13 projects selected for awards are led by:

    7AC Technologies
    8 Rivers Capital
    Applied Materials, Inc.
    Arconic Inc.*
    Ford Motor Company
    General Electric Global Research Center*
    LanzaTech
    Samsung Semiconductor, Inc.
    Sierra Energy
    The Timken Company
    United Technologies Research Corporation

    *Awarded two projects

    Read more about the individual projects.

    The Advanced Manufacturing Office (AMO) recently published a draft of its Multi-year Program Plan that identifies the technology, research and development, outreach, and crosscutting activities that AMO plans to focus on over the next five years. Some of the technical focus areas in the plan align with the high-priority, energy-related manufacturing activities that the HPC4Mfg program also aims to address.

    Led by Lawrence Livermore National Laboratory, with Lawrence Berkeley National Laboratory and Oak Ridge National Laboratory as strong partners, the HPC4Mfg program has a diverse portfolio of small and large companies, consortiums, and institutes within varying industry sectors that span the country. Established in 2015, it currently supports 28 projects that range from improved turbine blades for aircraft engines and reduced heat loss in electronics, to steel-mill energy efficiency and improved fiberglass production.

    ORNL Cray XK7 Titan Supercomputer

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 1:37 pm on January 12, 2017
    Tags: Argo project, D.O.E., Hobbes project, XPRESS project

    From ASCRDiscovery via D.O.E.: “Upscale computing” 

    DOE Main

    Department of Energy

    ASCRDiscovery

    January 2017
    No writer credit

    National labs lead the push for operating systems that let applications run at exascale.

    Image courtesy of Sandia National Laboratories.

    For high-performance computing (HPC) systems to reach exascale – a billion billion calculations per second – hardware and software must cooperate, with orchestration by the operating system (OS).

    But getting from today’s computing to exascale requires an adaptable OS – maybe more than one. Computer applications “will be composed of different components,” says Ron Brightwell, R&D manager for scalable systems software at Sandia National Laboratories. “There may be a large simulation consuming lots of resources, and some may integrate visualization or multi-physics.” That is, applications might not use all of an exascale machine’s resources in the same way. Plus, an OS aimed at exascale also must deal with changing hardware. HPC “architecture is always evolving,” often mixing different kinds of processors and memory components in heterogeneous designs.

    As computer scientists consider scaling up hardware and software, there’s no easy answer for when an OS must change. “It depends on the application and what needs to be solved,” Brightwell explains. On top of that variability, he notes, “scaling down is much easier than scaling up.” So rather than try to grow an OS from a laptop to an exascale platform, Brightwell thinks the other way. “We should try to provide an exascale OS and runtime environment on a smaller scale – starting with something that works at a higher scale and then scale down.”

    To explore the needs of an OS and conditions to run software for exascale, Brightwell and his colleagues conducted a project called Hobbes, which involved scientists at four national labs – Oak Ridge (ORNL), Lawrence Berkeley, Los Alamos and Sandia – plus seven universities. To perform the research, Brightwell – with Terry Jones, an ORNL computer scientist, and Patrick Bridges, a University of New Mexico associate professor of computer science – earned an ASCR Leadership Computing Challenge allocation of 30 million processor hours on Titan, ORNL’s Cray XK7 supercomputer.

    ORNL Cray XK7 Titan Supercomputer

    The Hobbes OS supports multiple software stacks working together, as indicated in this diagram of the Hobbes co-kernel software stack. Image courtesy of Ron Brightwell, Sandia National Laboratories.

    Brightwell made a point of including the academic community in developing Hobbes. “If we want people in the future to do OS research from an HPC perspective, we need to engage the academic community to prepare the students and give them an idea of what we’re doing,” he explains. “Generally, OS research is focused on commercial things, so it’s a struggle to get a pipeline of students focusing on OS research in HPC systems.”

    The Hobbes project involved a variety of components, but for the OS side, Brightwell describes it as trying to understand applications as they become more sophisticated. They may have more than one simulation running in a single OS environment. “We need to be flexible about what the system environment looks like,” he adds, so with Hobbes, the team explored using multiple OSs in applications running at extreme scale.

    As an example, Brightwell notes that the Hobbes OS envisions multiple software stacks working together. The OS, he says, “embraces the diversity of the different stacks.” An exascale system might let data analytics run on multiple software stacks, but still provide the efficiency needed in HPC at extreme scales. This requires a computer infrastructure that supports simultaneous use of multiple, different stacks and provides extreme-scale mechanisms, such as reducing data movement.

    Part of Hobbes also studied virtualization, which uses a subset of a larger machine to simulate a different computer and operating system. “Virtualization has not been used much at extreme scale,” Brightwell says, “but we wanted to explore it and the flexibility that it could provide.” Results from the Hobbes project indicate that virtualization for extreme scale can provide performance benefits at little cost.

    Other HPC researchers besides Brightwell and his colleagues are exploring OS options for extreme-scale computing. For example, Pete Beckman, co-director of the Northwestern-Argonne Institute of Science and Engineering at Argonne National Laboratory, runs the Argo project.

    A team of 25 collaborators from Argonne, Lawrence Livermore National Laboratory and Pacific Northwest National Laboratory, plus four universities, created Argo, an OS that starts with a single Linux-based OS and adapts it to extreme scale.

    When comparing the Hobbes OS to Argo, Brightwell says, “we think that without getting in that Linux box, we have more freedom in what we do, other than design choices already made in Linux. Both of these OSs are likely trying to get to the same place but using different research vehicles to get there.” One distinction: The Hobbes project uses virtualization to explore the use of multiple OSs working on the same simulation at extreme scale.

    As the scale of computation increases, an OS must also support new ways of managing a system’s resources. To explore some of those needs, Thomas Sterling, director of Indiana University’s Center for Research in Extreme Scale Technologies, developed ParalleX, an advanced execution model for computations. Brightwell leads a separate project called XPRESS to support the ParalleX execution model. Rather than computing’s traditional static methods, ParalleX implementations use dynamic adaptive techniques.

    More work is always necessary as computation works toward extreme scales. “The important thing in going forward from a runtime and OS perspective is the ability to evaluate technologies that are developing in terms of applications,” Brightwell explains. “For high-end applications to pursue functionality at extreme scales, we need to build that capability.” That’s just what Hobbes and XPRESS – and the ongoing research that follows them – aim to do.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions.

     
  • richardmitnick 8:37 pm on September 14, 2015
    Tags: D.O.E.

    From D.O.E.: “Women @ Energy: Ingrid Fang” 

    DOE Main

    Department of Energy


    Ingrid Fang is an engineering analyst at Fermi National Accelerator Laboratory (Fermilab). She attended Northern Illinois University, where she earned a master of science degree in mechanical engineering. Ingrid says she enjoys her part in helping make history.

    1) What inspired you to work in STEM?

    I had 10 years of professional ballet training, starting when I was 4. I traded my ballet slippers for an engineering degree to please my dad. Dad is the biggest fan of Fermilab and its groundbreaking work. I turn an idea into a reality by analyzing complex experimental equipment under mechanical and thermal loads.

    2) What excites you about your work at the Department of Energy?

    Working with ultra-dedicated and competent professionals.

    3) How can our country engage more women, girls, and other underrepresented groups in STEM?

    Studies have shown that when told that men score better on math tests than women, women tend to score worse. When told that this isn’t true, the two genders score equally well. I think women and girls should not focus on this internal bias toward underrating their intelligence. There are so many role models to follow if they empower themselves with an intense desire. Madame Curie and her daughter set a good example for me.

    4) Do you have tips you’d recommend for someone looking to enter your field of work?

    A career can be intimidating at first, a great unknown hidden by your own anxiety and inexperience. But with passion and determination, you can succeed. Through this process you will build strong character, which money cannot buy. And when you look back on your life, those memories will be your most treasured, because those experiences defined who you really are.

    5) When you have free time, what are your hobbies?

    I like to read and study everything about life and its meaning. I am always searching to better myself, so I can make a positive contribution to our society, and in turn make a better life for everyone around me. I love to dance, and have volunteered to teach ballet to girls six years old and up. I have helped grownups get on the dance floor to feel young and alive again. I’ve helped people in poor health to find a meaning in their lives. I love to help people in need.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions.

     