Tagged: Exascale computing

  • richardmitnick 2:39 pm on December 9, 2022
    Tags: "Nuclear Physics Gets a Boost for High-Performance Computing", Exascale computing, Jefferson Lab’s Center for Theoretical and Computational Physics

    From The DOE’s Thomas Jefferson National Accelerator Facility: “Nuclear Physics Gets a Boost for High-Performance Computing”


    12.6.22
    Kandice Carter
    Jefferson Lab Communications Office
    kcarter@jlab.org

    Jefferson Lab’s Data Center, JLab photo: Aileen Devlin

    The Frontier Supercomputer, OLCF at The DOE’s Oak Ridge National Lab photo.

    Efforts to harness the power of supercomputers to better understand the hidden worlds inside the nucleus of the atom recently received a big boost. A project led by the DOE’s Thomas Jefferson National Accelerator Facility is one of three to split $35 million in grants from the DOE via a partnership program of DOE’s Scientific Discovery through Advanced Computing (SciDAC).

    Each of the projects is a joint effort between DOE’s Nuclear Physics (NP) and Advanced Scientific Computing Research (ASCR) programs under the SciDAC partnership program.

    Making the Most of Advanced Computational Resources

    As supercomputers become ever more powerful, scientists need advanced tools to take full advantage of their capabilities. For example, the Oak Ridge Leadership Computing Facility (OLCF) at DOE’s Oak Ridge National Lab now hosts Frontier, the world’s first public exascale supercomputer, which has demonstrated a capability of 1 exaFLOPS: one billion-billion (10^18) calculations per second.

    “Nuclear physics is a rich, diverse and exciting area of research explaining the origins of visible matter. And in nuclear physics, high-performance computing is a critically important tool in our efforts to unravel the origins of nuclear matter in our universe,” said Robert Edwards, a senior staff scientist and deputy group leader of Jefferson Lab’s Center for Theoretical and Computational Physics.

    Edwards is the principal investigator for one of the three projects. His project, “Fundamental nuclear physics at the exascale and beyond,” will build a solid foundation of software resources for nuclear physicists to address key questions regarding the building blocks of the visible universe. The project seeks to help nuclear physicists tease out answers to questions about the basic properties of particles, such as the ubiquitous proton.

    “One of the key research questions that we hope to one day answer is what is the origin of a particle’s mass, what is the origin of its spin, and what are the emerging properties of a dense system of particles?” explained Edwards.

    The $13 million project includes key scientists based at six DOE national labs and two universities: Jefferson Lab, The DOE’s Argonne National Lab, The DOE’s Brookhaven National Lab, Oak Ridge National Lab, The DOE’s Lawrence Berkeley National Lab, The DOE’s Los Alamos National Lab, The Massachusetts Institute of Technology and The College of William & Mary.

    It aims to optimize the software tools needed for calculations of quantum chromodynamics (“QCD”). QCD is the theory that describes the structure of protons and neutrons – the particles that make up atomic nuclei – and provides insight into other particles that help build our universe. Protons are built of smaller particles called quarks held together by a force-carrying glue that manifests as gluon particles. What’s not clear is how the proton’s properties arise from quarks and gluons.

    “The evidence points to the mass of quarks as extremely tiny, only 1% [of the proton’s mass]. The rest is from the glue. So, what part does glue play in that internal structure?” he said.

    Modeling the Subatomic Universe

    The goal of the supercomputer calculations is to mimic how quarks and gluons experience the real world at their own teensy scale in a way that can be calculated by computers. To do that, the nuclear physicists use supercomputers to first generate a snapshot of the environment inside a proton where these particles live for the calculations. Then, they mathematically drop in some quarks and glue and use supercomputers to predict how they interact. Averaging over thousands of these snapshots gives physicists a way to emulate the particles’ lives in the real world.
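
    To make the “average over thousands of snapshots” idea concrete, here is a schematic C++ sketch of ensemble averaging. It is not Jefferson Lab’s production lattice-QCD software; the random numbers below stand in for gauge-field snapshots, and measure_observable() is a hypothetical placeholder for the physics measured on each one.

    // Schematic sketch of averaging an observable over gauge-field "snapshots".
    // Real lattice-QCD workflows generate configurations with Monte Carlo and
    // compute quark propagators on each; here a random number stands in for a
    // configuration and measure_observable() is a hypothetical placeholder.
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Hypothetical observable measured on one configuration (placeholder physics).
    double measure_observable(double config) { return std::cos(config); }

    int main() {
        const int n_configs = 10000;  // "thousands of snapshots"
        std::mt19937 rng(42);
        std::normal_distribution<double> gauge(0.0, 1.0);

        std::vector<double> samples;
        samples.reserve(n_configs);
        for (int i = 0; i < n_configs; ++i)
            samples.push_back(measure_observable(gauge(rng)));

        // Ensemble average and a naive standard error over the snapshots.
        double mean = 0.0;
        for (double s : samples) mean += s;
        mean /= n_configs;

        double var = 0.0;
        for (double s : samples) var += (s - mean) * (s - mean);
        const double std_err = std::sqrt(var / (n_configs - 1) / n_configs);

        std::printf("ensemble average = %.4f +/- %.4f\n", mean, std_err);
        return 0;
    }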

    Solutions from these calculations will provide input for experiments taking place today at Jefferson Lab’s Continuous Electron Beam Accelerator Facility (CEBAF)[below] and Brookhaven Lab’s Relativistic Heavy Ion Collider (RHIC).

    CEBAF and RHIC are both DOE Office of Science user facilities.

    “While we did not base this proposal on the requirements of the future Electron-Ion Collider, many of the problems that we are trying to address now, such as code infrastructures and methodology, will impact the EIC,” Edwards added.

    The project will use a four-pronged approach to help streamline these calculations for better use on supercomputers, while also preparing for ever-more-powerful machines to come online.

    The first two approaches relate to generating the quarks’ and gluons’ little slice of the universe. The researchers aim to make this task easier for computers by streamlining the process with upgraded software and by breaking it down into smaller chunks of calculations that are easier for a computer to handle. The second of these approaches will then bring in machine learning to see whether the existing algorithms can be improved by additional computer modeling.

    The third approach involves exploring and testing out new techniques for the portion of the calculations that model how quarks and gluons interact in their computer-generated universe.

    The fourth and last approach will collect all of the information from the first three prongs and begin to scale them for use on next-generation supercomputers.

    All three SciDAC projects awarded grants by DOE span efforts in nuclear physics research. Together, the projects address fundamental questions about the nature of nuclear matter, including the properties of nuclei, nuclear structure, nucleon imaging, and discovering exotic states of quarks and gluons.

    “The SciDAC partnership projects deploy high-performance computing and enable world-leading science discoveries in our nuclear physics facilities,” said Timothy Hallman, DOE’s associate director of science for NP.

    The funding announced by DOE totals $35 million over five years, with $7.2 million in Fiscal Year 2022 and outyear funding contingent on congressional appropriations.

    See the full article here.

    Comments are invited and will be appreciated, especially if the reader finds any errors which I can correct. Use “Reply”.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    JLab campus
    The DOE’s Thomas Jefferson National Accelerator Facility is supported by The Office of Science of the U.S. Department of Energy. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

    Jefferson Science Associates, LLC, a joint venture of the Southeastern Universities Research Association, Inc. and PAE Applied Technologies, manages and operates the Thomas Jefferson National Accelerator Facility for the U.S. Department of Energy’s Office of Science.

    History

    The DOE’s Thomas Jefferson National Accelerator Facility was established in 1984 (with initial funding from the Department of Energy) as the Continuous Electron Beam Accelerator Facility (CEBAF); the name was changed to Thomas Jefferson National Accelerator Facility in 1996. Full funding for construction was appropriated by the US Congress in 1986, and on February 13, 1987, construction of the main component, the CEBAF accelerator, began. First beam was delivered to the experimental area on July 1, 1994. The design beam energy of 4 GeV was achieved in 1995. The laboratory dedication took place on May 24, 1996, at which event the name was also changed. Full initial operations, with all three initial experimental areas online at the design energy, were achieved on June 19, 1998. On August 6, 2000, CEBAF reached an “enhanced design energy” of 6 GeV. In 2001, planning began for an upgrade to a 12 GeV electron beam and for construction of a fourth experimental hall. The plans progressed through various DOE Critical Decision stages during the 2000s, with final DOE acceptance in 2008 and construction of the 12 GeV upgrade beginning in 2009. On May 18, 2012, the original 6 GeV CEBAF accelerator was shut down so that accelerator components could be replaced for the 12 GeV upgrade. 178 experiments were completed with the original CEBAF.

    In addition to the accelerator, the laboratory has housed and continues to house a free electron laser (FEL) instrument. The construction of the FEL started 11 June 1996. It achieved first light on June 17, 1998. Since then, the FEL has been upgraded numerous times, increasing its power and capabilities substantially.

    Jefferson Lab was also involved in the construction of the Spallation Neutron Source (SNS) at DOE’s Oak Ridge National Laboratory. Jefferson Lab built the SNS superconducting accelerator and helium refrigeration system. The accelerator components were designed and produced in 2000–2005.

    Accelerator

    The laboratory’s main research facility is the CEBAF accelerator, which consists of a polarized electron source and injector and a pair of superconducting RF linear accelerators that are 7/8-mile (1400 m) in length and connected to each other by two arc sections that contain steering magnets.

    As the electron beam makes up to five successive orbits, its energy is increased up to a maximum of 6 GeV (the original CEBAF machine first operated in 1995 at the design energy of 4 GeV before reaching an “enhanced design energy” of 6 GeV in 2000; since then, the facility has been upgraded to 12 GeV). This leads to a design that resembles a racetrack when compared with the classical ring-shaped accelerators found at sites such as The European Organization for Nuclear Research [CERN] or DOE’s Fermi National Accelerator Laboratory. Effectively, CEBAF is a linear accelerator, similar to The DOE’s SLAC National Accelerator Laboratory at Stanford University, that has been folded up to a tenth of its normal length.

    The design of CEBAF allows the electron beam to be continuous rather than the pulsed beam typical of ring-shaped accelerators. (There is some beam structure, but the pulses are very much shorter and closer together.) The electron beam is directed onto three potential targets (see below). One of the distinguishing features of Jefferson Lab is the continuous nature of the electron beam, with a bunch length of less than 1 picosecond. Another is Jefferson Lab’s use of superconducting Radio Frequency (SRF) technology, which uses liquid helium to cool niobium to approximately 4 K (−452.5 °F), removing electrical resistance and allowing the most efficient transfer of energy to an electron. To achieve this, Jefferson Lab houses the world’s largest liquid helium refrigerator, and it was one of the first large-scale implementations of SRF technology. The accelerator is built 8 meters below the Earth’s surface, or approximately 25 feet, and the walls of the accelerator tunnels are 2 feet thick.

    The beam ends in four experimental halls, labelled Hall A, Hall B, Hall C, and Hall D. Each hall contains specialized spectrometers to record the products of collisions between a stationary target and either the electron beam or real photons derived from it. This allows physicists to study the structure of the atomic nucleus, specifically the interaction of the quarks that make up the protons and neutrons of the nucleus.

    With each revolution around the accelerator, the beam passes through each of the two LINAC accelerators, but through a different set of bending magnets in semi-circular arcs at the ends of the linacs. The electrons make up to five passes through the linear accelerators.

    When a nucleus in the target is hit by an electron from the beam, an “interaction”, or “event”, occurs, scattering particles into the hall. Each hall contains an array of particle detectors that track the physical properties of the particles produced by the event. The detectors generate electrical pulses that are converted into digital values by analog-to-digital converters (ADCs), time to digital converters (TDCs) and pulse counters (scalers).

    This digital data is gathered and stored so that the physicist can later analyze the data and reconstruct the physics that occurred. The system of electronics and computers that perform this task is called a data acquisition system.
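
    As a rough illustration of the digitization step described above, the sketch below bundles hypothetical ADC and TDC values into an event record for later analysis. The struct and field names are invented for illustration; this is not Jefferson Lab’s actual data acquisition software.

    // Minimal, hypothetical sketch of digitized detector data: pulse amplitudes
    // (ADC), pulse times (TDC) and the channel that fired are collected into an
    // event record that offline reconstruction software would read back later.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct DigitizedHit {
        uint16_t channel;  // detector channel that fired
        uint16_t adc;      // pulse amplitude, digitized by an ADC
        uint32_t tdc;      // pulse time, digitized by a TDC
    };

    struct Event {
        uint64_t event_number;           // sequential "event" label
        std::vector<DigitizedHit> hits;  // everything recorded for this event
    };

    int main() {
        Event evt{1, {{12, 812, 40315}, {47, 233, 40321}}};
        std::printf("event %llu with %zu hits\n",
                    static_cast<unsigned long long>(evt.event_number),
                    evt.hits.size());
        return 0;
    }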

    12 GeV upgrade

    In June 2010, construction began on a $338 million upgrade to add an end station, Hall D, on the opposite end of the accelerator from the other three halls, as well as to double the beam energy to 12 GeV. Concurrently, an addition was constructed to the Test Lab, where the SRF cavities used in CEBAF and in other accelerators worldwide are manufactured.

    In May 2014, the upgrade achieved a new record for beam energy, at 10.5 GeV, delivering beam to Hall D.

    In December 2016, the CEBAF accelerator delivered full-energy electrons as part of commissioning activities for the ongoing 12 GeV Upgrade project. Operators of the Continuous Electron Beam Accelerator Facility delivered the first batch of 12 GeV electrons (12.065 GeV) to its newest experimental hall complex, Hall D.

    In September 2017, the DOE issued official notification of formal approval of the 12 GeV upgrade project’s completion and the start of operations. By spring 2018, all four research areas were successfully receiving beam and performing experiments. On May 2, 2018, the CEBAF 12 GeV Upgrade Dedication Ceremony took place.

    In December 2018, the CEBAF accelerator delivered electron beams to all four experimental halls simultaneously for physics-quality production running.

     
  • richardmitnick 10:57 am on November 14, 2022
    Tags: "Compile-o-matic", "OpenMP" (open multiprocessing) compilers, Exascale computing

    From The DOE’s ASCR Discovery: “Compile-o-matic” 


    November 2022

    Computer scientist automates the most vexing aspects of traditional scientific software development for supercomputers.

    Computational tools that go by Enzyme, Polygeist, Tapir and Zygote might make the world of scientific software development sound fun. Johannes Doerfert, of The DOE’s Lawrence Livermore National Laboratory, knows better.

    Johannes Doerfert.

    As a computer scientist, Doerfert – who recently moved from The DOE’s Argonne National Laboratory, where he received a 2022 DOE Early Career Award – enjoys delving into the technical details of runtime systems and “OpenMP” (open multiprocessing) compilers. But he realizes that scientists from other fields might prefer to avoid those finer points and concentrate on their research instead.

    Compilers translate code from one programming language to another. But they never really made it easy for users to convert their code to run faster in parallel rather than in a series of steps. Runtime is when a program operates; this typically follows compile time.

    “We’re trying to help people avoid manually optimizing their parallelism,” says Doerfert, whose Early Career Award will advance the work over the next five years. “People have had to do it manually if it were doable at all.”
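
    For readers unfamiliar with what “manually optimizing parallelism” looks like in practice, here is a minimal example, assuming a standard OpenMP-capable C++ compiler: the programmer has to add the directive (and, here, a reduction clause) by hand to turn a serial loop into a parallel one.

    // Without the pragma this loop runs serially; with it, the compiler and the
    // OpenMP runtime split the iterations across threads and combine the partial
    // sums. Build with an OpenMP flag, e.g. clang++ -fopenmp sum.cpp
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1'000'000;
        std::vector<double> x(n, 1.0);

        double sum = 0.0;
        #pragma omp parallel for reduction(+ : sum)
        for (int i = 0; i < n; ++i)
            sum += x[i];

        std::printf("sum = %.1f\n", sum);  // expect 1000000.0
        return 0;
    }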

    The DOE early-career honor closely followed the Hans Meuer Award, which Doerfert shared with Atmn Patel, a former intern at The DOE’s Argonne National Laboratory and now a Northwestern University graduate student. They received the prize for “Remote OpenMP Offloading,” deemed the most outstanding paper submitted at the ISC High Performance 2022 Conference in Hamburg, Germany.

    Doerfert aims to tailor scientific software development for non-expert users who can’t afford the decades it would take to manually perform basic computational tasks. It’s an ongoing problem that arises whenever a new machine comes online, with the introduction of a new parallel programming model, with every major software update or with every new scientist who dares to pursue her work on a supercomputer.

    “Let’s build the tools properly once and for all,” Doerfert says. “We have to make this entire environment of compilation and runtimes better to support the problems that they have and help them to get where they want to go.”

    Doerfert received his Ph.D. at Saarland University in Germany in 2018. At a social event while still a graduate student, he pitched his ideas to Hal Finkel, who was then the lead for compiler technology and programming at Argonne National Laboratory. Doerfert later joined Finkel’s Argonne group as a postdoctoral scientist.

    Just before Finkel left Argonne to become a program manager for The DOE’s Office of Advanced Scientific Computing Research in October 2020, he told Doerfert, “I’ll give you all of the compiler projects. Good luck.” (Doerfert likes to tell the story that way but admits to some poetic license.)

    Doerfert began contributing to the LLVM compiler project in 2014. Although no longer used as an acronym, LLVM formerly stood for low-level virtual machine.

    “Johannes essentially shepherds much of OpenMP, runs numerous LLVM workshops, and more,” said William Moses, an MIT Ph.D. student (see sidebar, “Compiling achievements”). He says he and Doerfert have devoted much of their work to optimizing parallel code, writing new parallel-specific optimizations and applying “parallelism from one framework to a different target or piece of hardware.”

    Doerfert’s achievements in research and mentoring, Moses says, “inspire me to try to do the same.”

    People have used that research on some of the world’s most powerful machines.

    Doerfert has ensured that all of DOE’s supercomputing facilities have a recent copy of LLVM. The Perlmutter supercomputer at The DOE’s Lawrence Berkeley National Laboratory, for instance, has benefited from the OpenMP offloading feature, which enables users to move their data and computation to another device. So have the DOE’s Crusher and Polaris, test machines for exascale computing, capable of a quintillion calculations per second.
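
    Below is a minimal sketch of what OpenMP offloading looks like from the user’s side, assuming a compiler built with device-offload support. Real applications on Perlmutter, Crusher or Polaris are far more elaborate, but the same standard target and map directives move the loop and its data to an attached GPU.

    // The "target" construct moves the computation to a device such as a GPU
    // (falling back to the host if none is available); the map clauses say which
    // arrays to copy in and which to copy back out. Build with an offload-capable
    // compiler, e.g. clang++ -fopenmp -fopenmp-targets=<device-triple> saxpy.cpp
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<float> x(n, 1.0f), y(n, 2.0f);
        const float a = 3.0f;
        float* xp = x.data();
        float* yp = y.data();

        #pragma omp target teams distribute parallel for \
                map(to : xp[0:n]) map(tofrom : yp[0:n])
        for (int i = 0; i < n; ++i)
            yp[i] = a * xp[i] + yp[i];

        std::printf("y[0] = %.1f (expect 5.0)\n", y[0]);
        return 0;
    }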

    Crusher helped scientists prepare their codes for the Oak Ridge Leadership Computing Facility’s Frontier, which became the world’s first operational exascale supercomputer earlier this year.

    Polaris does likewise for the Aurora exascale machine, which will soon begin operating at the Argonne Leadership Computing Facility.

    Doerfert’s DOE national lab career has given him the chance to work with many talented people who are interested in solving real-world, big-picture problems. What’s more, he enjoys developing compiler technologies that help scientists solve their problems.

    “As long as it’s fun,” he says, “I might stick around a bit longer.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of The U.S. Department of Energy

    The United States Department of Energy (DOE) is a cabinet-level department of the United States Government concerned with the United States’ policies regarding energy and safety in handling nuclear material. Its responsibilities include the nation’s nuclear weapons program; nuclear reactor production for the United States Navy; energy conservation; energy-related research; radioactive waste disposal; and domestic energy production. It also directs research in genomics; the Human Genome Project originated in a DOE initiative. DOE sponsors more research in the physical sciences than any other U.S. federal agency, the majority of which is conducted through its system of National Laboratories. The agency is led by the United States Secretary of Energy, and its headquarters are located in Southwest Washington, D.C., on Independence Avenue in the James V. Forrestal Building, named for James Forrestal, as well as in Germantown, Maryland.

    Formation and consolidation

    In 1942, during World War II, the United States started the Manhattan Project, a project to develop the atomic bomb, under the eye of the U.S. Army Corps of Engineers. After the war, in 1946, the Atomic Energy Commission (AEC) was created to control the future of the project. The Atomic Energy Act of 1946 also created the framework for the first National Laboratories. Among other nuclear projects, the AEC produced fabricated uranium fuel cores at locations such as the Fernald Feed Materials Production Center in Cincinnati, Ohio. In 1974, the AEC gave way to the Nuclear Regulatory Commission, which was tasked with regulating the nuclear power industry, and the Energy Research and Development Administration, which was tasked with managing the nuclear weapon, naval reactor, and energy development programs.

    The 1973 oil crisis called attention to the need to consolidate energy policy. On August 4, 1977, President Jimmy Carter signed into law The Department of Energy Organization Act of 1977 (Pub.L. 95–91, 91 Stat. 565, enacted August 4, 1977), which created the Department of Energy. The new agency, which began operations on October 1, 1977, consolidated the Federal Energy Administration; the Energy Research and Development Administration; the Federal Power Commission; and programs of various other agencies. Former Secretary of Defense James Schlesinger, who served under Presidents Nixon and Ford during the Vietnam War, was appointed as the first secretary.

    President Carter created the Department of Energy with the goal of promoting energy conservation and developing alternative sources of energy. He wanted the country to be less dependent on foreign oil and to reduce the use of fossil fuels. With international energy’s future uncertain for America, Carter acted quickly to have the department come into action within the first year of his presidency. This was an extremely important issue of the time, as the oil crisis was causing shortages and inflation. During the Three Mile Island accident, Carter was able to intervene with the help of the department, making changes within the Nuclear Regulatory Commission to fix its management and procedures. This was possible because nuclear energy and weapons are the responsibility of the Department of Energy.

    Recent

    On March 28, 2017, a supervisor in the Office of International Climate and Clean Energy asked staff to avoid the phrases “climate change,” “emissions reduction,” or “Paris Agreement” in written memos, briefings or other written communication. A DOE spokesperson denied that phrases had been banned.

    In a May 2019 press release concerning natural gas exports from a Texas facility, the DOE used the term “freedom gas” to refer to natural gas. The phrase originated from a speech made by Secretary Rick Perry in Brussels earlier that month. Washington Governor Jay Inslee decried the term as “a joke”.

    Facilities
    Supercomputing

    The Department of Energy operates a system of national laboratories and technical facilities for research and development, as follows:

    Ames Laboratory
    Argonne National Laboratory
    Brookhaven National Laboratory
    Fermi National Accelerator Laboratory
    Idaho National Laboratory
    Lawrence Berkeley National Laboratory
    Lawrence Livermore National Laboratory
    Los Alamos National Laboratory
    National Renewable Energy Laboratory
    Oak Ridge National Laboratory
    Pacific Northwest National Laboratory
    Princeton Plasma Physics Laboratory
    Sandia National Laboratories
    Savannah River National Laboratory
    SLAC National Accelerator Laboratory
    Thomas Jefferson National Accelerator Facility
    Other major DOE facilities include:
    Albany Research Center
    Bannister Federal Complex
    Bettis Atomic Power Laboratory – focuses on the design and development of nuclear power for the U.S. Navy
    Kansas City Plant
    Knolls Atomic Power Laboratory – operates for Naval Reactors Program Research under the DOE (not a National Laboratory)
    National Petroleum Technology Office
    Nevada Test Site
    New Brunswick Laboratory
    Office of Fossil Energy
    Office of River Protection
    Pantex
    Radiological and Environmental Sciences Laboratory
    Y-12 National Security Complex
    Yucca Mountain nuclear waste repository
    Other:

    Pahute Mesa Airstrip – Nye County, Nevada, in supporting Nevada National Security Site

     
  • richardmitnick 9:27 pm on October 31, 2022
    Tags: "LLNL scientists eagerly anticipate El Capitan’s potential impact", El Capitan promises more than 15 times the peak compute capability of LLNL’s current flagship supercomputer (the 125-petaflop IBM/NVIDIA Sierra, currently No. 5 in the world), Exascale computing

    From The DOE’s Lawrence Livermore National Laboratory: “LLNL scientists eagerly anticipate El Capitan’s potential impact” 


    10.18.22
    Jeremy Thomas
    thomas244@llnl.gov
    925-422-5539

    While Lawrence Livermore National Laboratory is eagerly awaiting the arrival of its first exascale-class supercomputer, El Capitan, physicists and computer scientists running scientific applications on testbeds for the machine are getting a taste of what to expect.

    “I’m not exactly sure we’ve wrapped our head around exactly about how much compute power [El Capitan] is going to have, because it is so much of a jump from what we have now,” said Brian Ryujin, a computer scientist in the Applications, Simulations, and Quality (ASQ) division of LLNL’s Computing directorate. “I’m very interested to see what our users will do with it, because this machine is going to be simply enormous.”

    Ryujin is one of the LLNL researchers who are using the third generation of early access (EAS3) machines for El Capitan — Hewlett Packard Enterprise (HPE)/AMD systems with predecessor nodes to those that will make up El Capitan — to port codes over to the future exascale system. Despite being a mere fraction of El Capitan’s size and containing earlier generation components, the EAS3 systems rzVernal, Tenaya and Tioga currently rank among the top 200 of the world’s most powerful supercomputers. All three contain HPE Cray EX235a accelerator blades with 3rd generation AMD EPYC 64-core CPUs and AMD Instinct MI250X accelerators, nodes identical to those that make up the DOE’s Oak Ridge National Laboratory’s Frontier system, which holds the No. 1 spot on the Top500 List and the title of the world’s first exascale system.

    By incorporating next-generation processors — including AMD’s cutting-edge MI300A accelerated processing units (APUs) — and thousands more nodes than the EAS3 machines, El Capitan promises more than 15 times the peak compute capability of LLNL’s current flagship supercomputer, the 125-petaflop IBM/NVIDIA Sierra, surpassing two exaFLOPS (2 quintillion calculations per second) at peak.

    “El Capitan has the potential of enabling more than 10x increase in problem throughput,” said Teresa Bailey, associate program director for computation physics in the Weapon Simulation and Computing program. “This will enable 3D ensembles, which will allow LLNL to perform previously unimagined uncertainty quantification (UQ) and machine learning (ML) studies.”

    For months, Ryujin has been running the multi-physics code Ares on the EAS3 platforms, and if the code’s performance to date is any indication, El Capitan’s advantages over Sierra might be nothing short of astronomical.

    “Having a very healthy amount of memory gives us a lot more flexibility on how we run calculations and really opens up the possibilities for bigger and more complex multi-physics problems,” Ryujin said. “The really exciting thing is that we’re going to be able to run much more efficiently on El Capitan. I expect El Capitan to be used for great multi-physics problems, in addition to kind of the bread-and-butter calculations that we’ve been doing on Sierra.”

    LLNL Weapons and Complex Integration (WCI) computational physicist Aaron Skinner and ASQ computer scientist Tom Stitt have been using rzVernal to run MARBL, a multi-physics (magneto-radiation hydrodynamics) code focused on inertial confinement fusion and pulsed power science. As one of the newer codes at LLNL, researchers are developing many aspects of MARBL, including adding more modeling capabilities and making it perform on El Capitan and other next-generation machines.

    Skinner said he and other physicists are running a large number and variety of “highly turbulent” physics calculations on rzVernal that are extremely sensitive to very small spatial scales, so increased resolution and higher dimensionality are highly desired. The EAS machine’s expanded memory has at least doubled MARBL’s performance compared to Sierra on a per-node basis. Additionally, the ability to “oversubscribe” GPUs (assign multiple tasks to single GPUs) has resulted in additional performance increases, according to Stitt.

    “The huge increase in available memory, which was a bottleneck on Sierra, and the more powerful GPUs, is really exciting,” Stitt said. “Even though rzVernal is a small machine, it has eight times the memory per node, so we can run a very big experiment on a small number of nodes and get that allocation a lot more easily. The simulation running now is the highest resolution we’ve ever been able to do for this type of problem.”

    Having a bigger and faster Advanced Technology system (ATS) in El Capitan will mean that physicists, who have traditionally used large ensembles of 1D and 2D calculations to form surrogate models, will now be able to create surrogates from large ensembles of 2D and 3D calculations, expanding the design space and simulating physics to a degree they haven’t been able to before, researchers said.

    “If a machine comes along that allows you to do 2.5 times better resolution for basically the same cost, then you can get a lot more science done out of that same amount of resources. It allows physicists to do a better job at what they’re trying to do, and sometimes it opens doors that were not previously possible,” Skinner explained.

    Going beyond 2D, El Capitan also will shift the idea of regularly running 3D multi-physics simulations from a “pipe dream” to reality, researchers said. When the exascale era arrives at LLNL, researchers will be able to model physics with a level of detail and realism not possible before, unlocking new pathways to scientific discovery.

    “As we really get into exascale, it’s not inconceivable anymore that we could start doing massive ensembles of 3D models,” Skinner said. “The physics really do change as you increase the dimensionality of the models. There are physical phenomena, especially in turbulent flows, that just can’t be properly modeled in a lower-dimensional simulation; that really does require that three-dimensional aspect.”

    In addition to MARBL, Skinner, Stitt and computational physicist/project lead Rob Rieben recently used 80 of the AMD MI250X GPUs on rzVernal to run a radiation-hydrodynamics simulation that modeled a high-energy density experiment done at the Omega Laser Facility.

    The researchers were impressed to discover the code, which was developed for Sierra, ran well on the EAS machine without any additional changes. LLNL codes largely rely on the RAJA Portability Suite to attain performance portability across diverse GPU- and CPU-based systems, a strategy that has given them confidence in the portability of the codes for El Capitan.
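
    The following is a simplified sketch of the RAJA idiom, modeled on RAJA’s public forall examples rather than on the Ares or MARBL sources: the loop body is written once, and the execution-policy template parameter selects the backend, which is what lets one code base target both Sierra-class and El Capitan-class GPUs.

    // The same loop body can be dispatched sequentially, to OpenMP threads or to
    // a GPU backend by changing only the execution policy type.
    #include "RAJA/RAJA.hpp"
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1000;
        std::vector<double> a(n, 2.0), b(n, 3.0), c(n, 0.0);
        double* ap = a.data();
        double* bp = b.data();
        double* cp = c.data();

        // Swap RAJA::seq_exec for e.g. RAJA::omp_parallel_for_exec or
        // RAJA::hip_exec<256> (build permitting) without touching the loop body.
        using policy = RAJA::seq_exec;
        RAJA::forall<policy>(RAJA::RangeSegment(0, n), [=](int i) {
            cp[i] = ap[i] * bp[i];
        });

        std::printf("c[0] = %.1f (expect 6.0)\n", c[0]);
        return 0;
    }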

    With a background in astrophysics, star-forming clouds and supernovae, Skinner added that he’s looking forward to using El Capitan, whose processors are designed to integrate with artificial intelligence (AI) and machine learning-assisted data analysis, to combine AI with simulation — a process LLNL has dubbed “cognitive simulation.”

    The technique could create more accurate and more predictive surrogate models for complex multi-physics problems such as inertial confinement fusion (ICF) at the National Ignition Facility [below], which in 2021 set a record for fusion yield in an experiment and brought the world to the threshold of ignition. In short, Skinner said, physicists will get better answers to their questions and potentially — in the case of ICF science — save millions of dollars on fusion target fabrication.

    “What really makes me smile is getting the computer to act like something that can’t easily be experimented on in a laboratory, and the more computing power you can throw at it, the more realistically it behaves,” Skinner said. “I’m really excited about the doors this is going to open up, and the new approaches to scientific discovery that are starting to be explored and enabled by machines like El Capitan that we couldn’t even envision doing before. We’re entering a time where we don’t have to limit ourselves anymore.”

    Ryujin, who is seeing a similar doubling or greater in node-to-node performance over Sierra with the Ares code on rzVernal and Tenaya, said El Capitan will allow scientists to get rapid turnaround on their modeling and simulation jobs. It will enable scientists to address problems that take massive amounts of resources and to run orders of magnitude more simulations at once without interfering with other jobs, opening up new possibilities for uncertainty quantification, parameter studies, design exploration and evaluations of models across large sets of experiments, he added.

    “The sheer size of the machine is going to be something to look forward to, both for throughput and the ability to do massive problems,” Ryujin said. “Each generation of nodes is getting significantly more powerful and more capable than the previous generation, so we’ll be able to do the same calculations that we used to do on far fewer resources. El Capitan is going to be a gigantic machine, and so these simulation jobs that were really huge before, are going to require just a small percentage of the machine.”

    Ryujin said scientists are eagerly calculating how large their computing runs will get on El Capitan, and he is excited at the prospect of researchers at the National Nuclear Security Administration’s three national security laboratories [Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories] being able to model phenomena at resolutions they never could before.

    “One of the joys of working in this space is getting to use these huge machines and cutting-edge technology, so I think it’s also just very cool to be able to get to build and develop and run on these really advanced architectures,” Ryujin said. “I’m looking forward to running record-setting calculations; I really like getting the feedback from our physicists that we ran the biggest calculation ever because every time we do that, we learn something, it spawns something new in the program and sparks new directions of inquiry.”

    Bailey, who oversees the code development teams for El Capitan, said her goal is to develop a useful computational capability for the machine’s future users.

    “The best part of my job is hearing when they use our entire High Performance Computing capability, both the machines and the codes, to solve a complex problem or learn something new about the underlying physics of the systems they are modeling,” she explained. “We are working hard now so that our users will be able to make significant breakthroughs using El Capitan.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The DOE’s Lawrence Livermore National Laboratory (LLNL) is an American federal research facility in Livermore, California, United States, founded by the University of California-Berkeley in 1952. A Federally Funded Research and Development Center (FFRDC), it is primarily funded by The U.S. Department of Energy and managed and operated by Lawrence Livermore National Security, LLC (LLNS), a partnership of the University of California, Bechtel, BWX Technologies, AECOM, and Battelle Memorial Institute in affiliation with the Texas A&M University System. In 2012, the laboratory had the synthetic chemical element livermorium named after it.

    LLNL is self-described as “a premier research and development institution for science and technology applied to national security.” Its principal responsibility is ensuring the safety, security and reliability of the nation’s nuclear weapons through the application of advanced science, engineering and technology. The Laboratory also applies its special expertise and multidisciplinary capabilities to preventing the proliferation and use of weapons of mass destruction, bolstering homeland security and solving other nationally important problems, including energy and environmental security, basic science and economic competitiveness.

    The National Ignition Facility (NIF) is a large laser-based inertial confinement fusion (ICF) research device located at The DOE’s Lawrence Livermore National Laboratory in Livermore, California. NIF uses lasers to heat and compress a small amount of hydrogen fuel with the goal of inducing nuclear fusion reactions. NIF’s mission is to achieve fusion ignition with high energy gain, and to support nuclear weapon maintenance and design by studying the behavior of matter under the conditions found within nuclear weapons. NIF is the largest and most energetic ICF device built to date, and the largest laser in the world.

    Construction on the NIF began in 1997 but management problems and technical delays slowed progress into the early 2000s. Progress after 2000 was smoother, but compared to initial estimates, NIF was completed five years behind schedule and was almost four times more expensive than originally budgeted. Construction was certified complete on 31 March 2009 by the U.S. Department of Energy, and a dedication ceremony took place on 29 May 2009. The first large-scale laser target experiments were performed in June 2009 and the first “integrated ignition experiments” (which tested the laser’s power) were declared completed in October 2010.

    Bringing the system to its full potential was a lengthy process that was carried out from 2009 to 2012. During this period a number of experiments were worked into the process under the National Ignition Campaign, with the goal of reaching ignition just after the laser reached full power, sometime in the second half of 2012. The Campaign officially ended in September 2012, at about 1⁄10 the conditions needed for ignition. Experiments since then have pushed this closer to 1⁄3, but considerable theoretical and practical work is required if the system is ever to reach ignition. Since 2012, NIF has been used primarily for materials science and weapons research.

    National Ignition Facility (NIF) at LLNL

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration


     
  • richardmitnick 11:31 am on October 19, 2022
    Tags: "Harnessing the power of the world’s fastest computer", "PIConGPU": Particle-in-Cell algorithm, Exascale computing, UD Prof. Sunita Chandrasekaran and students play key roles in exascale computing.

    From The University of Delaware: “Harnessing the power of the world’s fastest computer”


    10.18.22
    Tracey Bryant

    UD Prof. Sunita Chandrasekaran and students play key roles in exascale computing.

    UD’s Sunita Chandrasekaran, David L. and Beverly J.C. Mills Career Development Chair in the Department of Computer and Information Sciences, and her students have been working to ensure that key software will be ready to run on Frontier — the fastest computer in the world — when it “opens for business” to the scientific community in 2023.

    From fast food to rapid COVID tests, the world has an unrelenting “need for speed.”

    The fastest drive-thru in the U.S. this year, with the shortest average service time from placing your order to getting your food, was Taco Bell at 221.99 seconds.

    The fastest car, the Bugatti Chiron Super Sport 300+, sped into the record books at 304.7 miles per hour in 2019 and, as of this writing, still holds the title.

    And then there is Frontier, the supercomputer at the U.S. Department of Energy’s Oak Ridge National Lab in Oak Ridge, Tennessee. In May 2022, it was named the fastest computer in the world, clocking in at 1.1 exaflops, which is more than a quintillion calculations per second. That’s a whole lot of math problems to solve — more than 1,000,000,000,000,000,000 of them — in the blink of an eye, a feat that earned Frontier the coveted status as the first computer to achieve exascale computing power.

    Scientists are eager to harness Frontier for a broad range of studies, from mapping the brain to creating more realistic climate models, exploring fusion energy, improving our understanding of new materials at the nanoscience level, bolstering national security, and achieving a clearer, deeper view of the universe, from particle physics to star formation. And that’s barely scratching the surface.

    At the University of Delaware, Sunita Chandrasekaran, associate professor and David L. and Beverly J.C. Mills Career Development Chair in the Department of Computer and Information Sciences, and her students have been working to ensure that key software will be ready to run on Frontier when the exascale computer is “open for business” to the scientific community in 2023.

    Because existing computer codes don’t automatically port over to exascale, she has worked with a team of researchers in the U.S. and at HZDR in Germany to stress-test a workhorse plasma simulation application called PIConGPU (“Particle-in-Cell” on GPUs).

    A key tool in plasma physics, the Particle-in-Cell algorithm describes the dynamics of a plasma — matter rich in charged particles (ions and electrons) — by computing the motion of these charged particles in the electromagnetic fields governed by Maxwell’s equations. (James Maxwell was a 19th-century physicist best known for using four equations to describe electromagnetic theory. Albert Einstein said Maxwell’s impact on physics was the most profound since Sir Isaac Newton.) Such tools are critical to evolving radiation therapies for cancer, as well as expanding the use of X-rays to probe the structure of materials.
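
    To show the structure of the algorithm, here is a heavily simplified, one-dimensional electrostatic sketch of a single particle-in-cell step. PIConGPU itself is a full 3D electromagnetic GPU code; the grid size, weighting scheme and numbers below are arbitrary illustration values.

    // One schematic PIC cycle: deposit particle charge onto a grid, solve for the
    // field on the grid, gather the field back to the particles, then push them.
    #include <cstdio>
    #include <vector>

    int main() {
        const int nx = 64;                          // grid cells
        const double dx = 1.0, dt = 0.1, qm = -1.0; // cell size, time step, q/m
        std::vector<double> rho(nx, 0.0), E(nx, 0.0);
        std::vector<double> xpos = {10.2, 20.7, 40.5};  // particle positions
        std::vector<double> vel  = { 0.5, -0.3,  0.1};  // particle velocities

        // 1) Charge deposition: nearest-grid-point weighting.
        for (double xp : xpos)
            rho[static_cast<int>(xp / dx) % nx] += 1.0;

        // 2) Field solve: integrate dE/dx = rho along the grid (units folded in).
        for (int i = 1; i < nx; ++i)
            E[i] = E[i - 1] + rho[i - 1] * dx;

        // 3) Gather and push: advance each particle using the field in its cell.
        for (size_t p = 0; p < xpos.size(); ++p) {
            const int cell = static_cast<int>(xpos[p] / dx) % nx;
            vel[p]  += qm * E[cell] * dt;
            xpos[p] += vel[p] * dt;
        }

        std::printf("particle 0 now at x = %.2f with v = %.2f\n", xpos[0], vel[0]);
        return 0;
    }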

    “I tell my students, imagine your laptop connected to millions of other laptops and being able to harness all of that power,” Chandrasekaran said. “But then in comes exascale — that’s a 1 followed by 18 zeros. Think about how big and powerful such a massive system can be. Such a system could potentially light up an entire city.”

    Sunita Chandrasekaran and her research group have been working on coding and testing that will help validate tools for the world’s newest, biggest, fastest supercomputer, called Frontier. Pictured are doctoral students Fabian Mora and Mauricio Ferrato; Christian Munley, undergraduate student in physics; Jaydon Reap and Michael Carr, undergraduate students in computer and information sciences; Nolan Baker, undergraduate student in computer engineering; and Thomas Huber, who recently graduated with his master’s degree in computer science.

    Executing instructions on an exascale system requires a “different programming framework” from other systems, Chandrasekaran explained, given its architectural design, which consists of many parallel processing units and high-performance graphics processing units.

    Overall, Frontier contains 9,408 central processing units (CPUs), 37,632 graphics processing units (GPUs) and 8,730,112 cores, all connected by more than 90 miles of networking cables. All of this computing power helped Frontier leap the exascale barrier, and Chandrasekaran is working to ensure that the software will make the leap, too.

    To take advantage of the system’s specialized architecture, she and her fellow researchers are working to make sure the computer code in high-priority software is up to Frontier’s speed — and that it’s bug-free. These are key components of the Exascale Computing Project’s SOLLVE effort, which Chandrasekaran now leads. It is a collaboration of The DOE’s Brookhaven National Laboratory, The DOE’s Argonne National Laboratory, The DOE’s Oak Ridge National Laboratory, The DOE’s Lawrence Livermore National Laboratory, Georgia Tech and UD.

    “Our team has been working together since 2017 to stress-test the software to improve the system,” Chandrasekaran said, noting that the work involves collaborations with several compiler developers that provide implementations for Frontier.

    “The machine is so new that the tools we need for operating it are also immature,” Chandrasekaran said. “Our goal is to have programs ready for scientists to use. We assist by filing bugs, offering fixes, testing beta versions, and helping vendors prepare robust tools for the scientists to use.”

    UD students debug vital programming tools

    Thomas Huber, who earned his bachelor’s degree at UD, worked on the project with Chandrasekaran for more than two years before graduating with his master of science in computer and information sciences from the University this past May. A native of Linwood, New Jersey, he is now employed as a software engineer at Cornelis Networks, a computer hardware company.

    “When we started working on this a few years ago, we knew we had Frontier coming at exascale speed, and that required getting a ton of people together to work on the 20 or so core applications that had been deemed mission critical,” Huber said. “All of this software needs to run flawlessly.”

    Thanks to this unique opportunity that Chandrasekaran made possible, Huber gained valuable research and real-world experience. He also trained four undergrads on the project, as they worked together to validate that OpenMP, a popular programming tool, could run on Frontier.

    As the group’s work progressed in assessing the compilers that provide implementations for novel programming features, they found a few bugs, and then a few more bugs. That’s when they decided to start a GitHub repository — GitHub is a software development platform — to share their findings and open-source code as part of ECP–SOLLVE.

    “We started a GitHub to review the OpenMP specification releases. They come out every few years, and they are like new features — 600 pages of what you can and can’t do,” Huber said. “Most importantly, the section at the end states all the differences among the versions of the program. We take the list of all the new features and go through and create test cases for all of them. We write code that no one else has written before, and we make all of our code public.”

    Huber estimates that the UD team, in collaboration with Oak Ridge National Lab, has written 500 or so tests, and 50,000 lines of code, so far.

    “The whole thing with high-performance computing is parallel programming,” Huber said. “Imagine you’re in a ton of traffic heading to a toll booth with only one EZ pass lane. Parallel programming allows you to split into many EZ pass lanes. OpenMP allows you to do that parallel work and run extremely fast. What we’ve done with OpenMP ensures that scientists and others will be able to use the program on Frontier. We’re the guinea pigs for it.”
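
    In the spirit of the test cases Huber describes, the sketch below exercises one offloading feature (a reduction inside a target region) and verifies the result on the host, so the test fails loudly if a compiler or runtime mishandles the feature. It is a hypothetical example, not code taken from the actual SOLLVE validation and verification suite.

    // Run a reduction on the device, then check the answer on the host.
    #include <cstdio>

    int main() {
        const int n = 1024;
        int sum = 0;

        #pragma omp target teams distribute parallel for reduction(+ : sum) \
                map(tofrom : sum)
        for (int i = 0; i < n; ++i)
            sum += 1;

        if (sum != n) {
            std::printf("FAIL: expected %d, got %d\n", n, sum);
            return 1;
        }
        std::printf("PASS\n");
        return 0;
    }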

    Huber was attracted to the research through the Vertically Integrated Program (VIP) in the College of Engineering. Chandrasekaran was the group leader for the project. He stuck around for a semester, got to work on a research paper (“That was amazing,” he said) and met colleagues who became best friends. They even won a poster competition.

    He credits Chandrasekaran for engaging him in the field.

    “Being so enthusiastic and emphasizing how important this stuff is to helping researchers, and the real world, she made the difference,” Huber said. “She’s a top-tier professor in high-performance computing.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Delaware campus

    The University of Delaware is a public land-grant research university located in Newark, Delaware. It is the largest university in Delaware. It offers three associate’s programs, 148 bachelor’s programs, 121 master’s programs (with 13 joint degrees), and 55 doctoral programs across its eight colleges. The main campus is in Newark, with satellite campuses in Dover, the Wilmington area, Lewes, and Georgetown. It is considered a large institution with approximately 18,200 undergraduate and 4,200 graduate students. It is a privately governed university which receives public funding for being a land-grant, sea-grant, and space-grant state-supported research institution.

    The University of Delaware is classified among “R1: Doctoral Universities – Very high research activity”. According to The National Science Foundation, UD spent $186 million on research and development in 2018, ranking it 119th in the nation. It is recognized with the Community Engagement Classification by the Carnegie Foundation for the Advancement of Teaching.

    The University of Delaware is one of only four schools in North America with a major in art conservation. In 1923, it was the first American university to offer a study-abroad program.

    The University of Delaware traces its origins to a “Free School,” founded in New London, Pennsylvania in 1743. The school moved to Newark, Delaware by 1765, becoming the Newark Academy. The academy trustees secured a charter for Newark College in 1833 and the academy became part of the college, which changed its name to Delaware College in 1843. While it is not considered one of the colonial colleges because it was not a chartered institution of higher education during the colonial era, its original class of ten students included George Read, Thomas McKean, and James Smith, all three of whom went on to sign the Declaration of Independence. Read also later signed the United States Constitution.

    Science, Technology and Advanced Research (STAR) Campus

    On October 23, 2009, The University of Delaware signed an agreement with Chrysler to purchase a shuttered vehicle assembly plant adjacent to the university for $24.25 million as part of Chrysler’s bankruptcy restructuring plan. The university has developed the 272-acre (1.10 km^2) site into the Science, Technology and Advanced Research (STAR) Campus. The site is the new home of the University of Delaware’s College of Health Sciences, which includes teaching and research laboratories and several public health clinics. The STAR Campus also includes research facilities for the University of Delaware’s vehicle-to-grid technology, as well as Delaware Technology Park, SevOne, CareNow, Independent Prosthetics and Orthotics, and the East Coast headquarters of Bloom Energy. The Ammon Pinizzotto Biopharmaceutical Innovation Center, the new home of the UD-led National Institute for Innovation in Manufacturing Biopharmaceuticals, was slated to open in 2020. Also, Chemours opened its global research and development facility, known as the Discovery Hub, on the STAR Campus in 2020. The new Newark Regional Transportation Center on the STAR Campus will serve passengers of Amtrak and regional rail.

    Academics

    The university is organized into nine colleges:

    Alfred Lerner College of Business and Economics
    College of Agriculture and Natural Resources
    College of Arts and Sciences
    College of Earth, Ocean and Environment
    College of Education and Human Development
    College of Engineering
    College of Health Sciences
    Graduate College
    Honors College

    There are also five schools:

    Joseph R. Biden, Jr. School of Public Policy and Administration (part of the College of Arts & Sciences)
    School of Education (part of the College of Education & Human Development)
    School of Marine Science and Policy (part of the College of Earth, Ocean and Environment)
    School of Nursing (part of the College of Health Sciences)
    School of Music (part of the College of Arts & Sciences)

     
  • richardmitnick 7:58 pm on June 30, 2022
    Tags: "ExaSMR Models Small Modular Reactors Throughout Their Operational Lifetime", Current advanced reactor design approaches leverage decades of experimental and operational experience with the US nuclear fleet., Exascale computing, Exascale supercomputers give us a tool to model SMRs with higher resolution than possible on smaller supercomputers., ExaSMR integrates the most reliable and high-confidence numerical methods for modeling operational reactors., Investing in computer design capability means we can better evaluate and refine the designs to come up with the most efficacious solutions., Many different designs are being studied for next-generation reactors., The ExaSMR team has adapted their algorithms and code to run on GPUs to realize an orders-of-magnitude increase in performance., The proposed SMR designs are generally simpler and require no human intervention or external power or the application of external force to shut down., We are already seeing significant improvements now on pre-exascale systems.

    From The DOE’s Exascale Computing Project: “ExaSMR Models Small Modular Reactors Throughout Their Operational Lifetime” 


    June 8, 2022 [Just now in social media.]
    Rob Farber

    Technical Introduction

    Small modular reactors (SMRs) are advanced nuclear reactors that can be incrementally added to a power grid to provide carbon-free energy generation to match increasing energy demand.[1],[2] Their small size and modular design make them a more affordable option because they can be factory assembled and transported to an installation site as prefabricated units.

    Compared to existing nuclear reactors, proposed SMR designs are generally simpler and require no human intervention or external power or the application of external force to shut down. SMRs are designed to rely on passive systems that utilize physical phenomena, such as natural circulation, convection, gravity, and self-pressurization to eliminate or significantly lower the potential for unsafe releases of radioactivity in case of an accident.[3] Computer models are used to ensure that the SMR passive systems can safely operate the reactor regardless of the reactor’s operational mode—be it at idle, during startup, or running at full power.

    Current advanced reactor design approaches leverage decades of experimental and operational experience with the US nuclear fleet and are informed by calibrated numerical models of reactor phenomena. The exascale SMR (ExaSMR) project generates datasets of virtual reactor design simulations based on high-fidelity, coupled physics models for reactor phenomena that are truly predictive and reflect as much ground truth as experimental and operational reactor data.[4]

    An Integrated Toolkit

    The Exascale Computing Project’s (ECP’s) ExaSMR team is working to build a highly accurate, exascale-capable integrated toolkit that couples high-fidelity neutronics and computational fluid dynamics (CFD) codes to model the operational behavior of SMRs over the complete reactor lifetime. This includes accurately modeling the full-core multiphase thermal hydraulics and the fuel depletion. Even with exascale performance, reduced-order mesh numerical methodologies are required to achieve sufficient accuracy with reasonable runtimes to make these simulations tractable.

    According to Steven Hamilton (Figure 2), a senior researcher at The DOE’s Oak Ridge National Laboratory (ORNL) and PI of the project, ExaSMR integrates the most reliable and high-confidence numerical methods for modeling operational reactors.

    Specifically, ExaSMR is designed to leverage exascale systems to accurately and efficiently model the reactor’s neutron state with Monte Carlo (MC) neutronics and the reactor’s thermal fluid heat transfer efficiency with high-resolution CFD.[5] The ExaSMR team’s goal is to achieve very high spatial accuracy using models that contain 40 million spatial elements and exhibit 22 billion degrees of freedom.[6]

    Hamilton notes that high-resolution models are essential because they are used to reflect the presence of spacer grids and the complex mixing promoted by mixing vanes (or the equivalent) in the reactor. The complex fluid flows around these regions in the reactor (Figure 1) require high spatial resolution so engineers can understand the neutron distribution and the reactor’s thermal heat transfer efficiency. Of particular interest is the behavior of the reactor during low-power conditions as well as the initiation of coolant flow circulation through the SMR reactor core and its primary heat exchanger during startup.

    1
    Figure 1. Complex fluid flows and momentum cause swirling.

    To make the simulations run in reasonable times even when using an exascale supercomputer, the results of the high-accuracy model are adapted so they can be utilized in a reduced-order methodology. This methodology is based on momentum sources that can mimic the mixing caused by the vanes in the reactor.[7] Hamilton notes, “Essentially, we use the full core simulation on a small model that is replicated over the reactor by mapping to a coarser mesh. This coarser mesh eliminates the time-consuming complexity of the mixing vane calculations while still providing an accurate-enough representation for the overall model.” The data from the resulting virtual reactor simulations are used to fill in critical gaps in experimental and operational reactor data. These results give engineers the ability to accelerate the currently cumbersome advanced reactor concept-to-design-to-build cycle that has constrained the nuclear energy industry for decades. ExaSMR can also provide an avenue for validating existing industry design and regulatory tools.[8]

    2
    Figure 2. Steven Hamilton, PI of the ExaSMR project and Senior researcher at ORNL.

    “The importance,” Hamilton states, “is that many different designs are being studied for next-generation reactors. Investing in computer design capability means we can better evaluate and refine the designs to come up with the most efficacious solutions. Exascale supercomputers give us a tool to model SMRs with higher resolution than possible on smaller supercomputers. These resolution improvements make our simulations more predictive of the phenomena we are modeling. We are already seeing significant improvements now on pre-exascale systems and expect a similar jump in performance once we are running on the actual exascale hardware.” He concludes by noting, “Many scientists believe that nuclear is the only carbon-free energy source that is suitable for bulk deployment to meet primary energy needs with a climate-friendly technology.”

    The First Full-Core, Pin-Resolved CFD Simulations

    To achieve their goal of generating high-fidelity, coupled-physics models for truly predictive reactor models, the team must overcome limitations in computing power that have constrained past efforts to modeling only specific regions of a reactor core.[9] To this end, the ExaSMR team has adapted their algorithms and code to run on GPUs to realize an orders-of-magnitude increase in performance when running a challenge problem on the pre-exascale Summit supercomputer.

    Hamilton explains, “We were able to perform the simulations between 170× and 200× faster on the Summit supercomputer compared to the previous Titan ORNL supercomputer.

    “Much of this is owed to ECP’s investment in the ExaSMR project and the Center for Efficient Exascale Discretizations (CEED) along with larger, higher performance GPU hardware. The CEED project has been instrumental for improving the algorithms we used in this simulation.”

    In demonstrating this new high watermark in performance, the team also performed (to their knowledge) the first-ever full-core, pin-resolved CFD simulation that modeled coolant flow around the fuel pins in a light water reactor core. These fluid flows play a critical role in determining the reactor’s safety and performance. Hamilton notes, “This full-core spacer grid and mixing vane (SGMV) simulation provides a high degree of spatial resolution that allows simultaneous capture of local and global effects. Capturing the effect of mixing vanes on flow and heat transfer is vital to predictive simulations.”

    The complexity of these flows can be seen in the streamlines in Figure 1. Note the transition from parallel to rotating flow caused by the simulated CFD momentum sources.

    A Two-Step Approach to Large-Scale Simulations

    A two-step approach was taken to implement a GPU-oriented CFD code using Reynolds-Averaged Navier-Stokes (RANS) equations to model the behavior in this SGMV challenge problem.

    1. Small simulations are performed using the more accurate yet computationally expensive large eddy simulation (LES) code. Hamilton notes these are comparatively small and do not need to be performed on the supercomputer.
    2. The accurate LES results are then imposed on a coarser mesh, which is used for modeling the turbulent flow at scale on the supercomputer’s GPUs. The RANS approach is needed because the Reynolds number in the core is expected to be high.[10] A toy sketch of this mesh-transfer step follows below.
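
    As a minimal, purely illustrative sketch of that transfer step, the snippet below block-averages a fine-grid field (standing in for a time-averaged LES result) onto a coarser mesh. The grid sizes, the velocity profile, and the averaging operator are invented for this toy and are not the ExaSMR team’s actual data structures or codes.

```python
# Toy 1-D version of step 2: transfer a time-averaged "LES" field onto the
# coarser mesh used by the full-core RANS model. Purely illustrative.
import numpy as np

fine_points = 4096            # stand-in for the detailed LES resolution
coarsen_by = 64               # each coarse cell covers 64 fine points

x_fine = np.linspace(0.0, 1.0, fine_points)
u_les = 1.0 + 0.1 * np.sin(8 * np.pi * x_fine)   # pretend time-averaged LES velocity

# Block-average the LES field onto the coarse mesh (one value per coarse cell).
u_coarse = u_les.reshape(-1, coarsen_by).mean(axis=1)

print(f"fine cells: {u_les.size}, coarse cells: {u_coarse.size}")
print(f"bulk velocity preserved? {np.isclose(u_les.mean(), u_coarse.mean())}")
```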

    Jun Fang, an author of the study in which these results were published, reflects on the importance of these pre-exascale results by observing, “As we advance toward exascale computing, we will see more opportunities to reveal large-scale dynamics of these complex structures in regimes that were previously inaccessible, thereby giving us real information that can reshape how we approach the challenges in reactor designs.”[11]

    The basis for this optimism is reflected in the strong-scaling behavior of NekRS, a GPU-enabled branch of the Nek5000 CFD code contributed by the ExaSMR team.[12] NekRS utilizes optimized finite-element flow solver kernels from the libParanumal library developed by CEED. The ExaSMR code is portable owing in part to the team’s use of the ECP-supported, exascale-capable OCCA performance portability library. The OCCA library provides programmers with the ability to write portable kernels that can run on a variety of hardware platforms or be translated to backend-specific code such as OpenCL and CUDA. (A simple strong-scaling calculation is sketched after Figure 3.)

    3
    Figure 3. NekRS strong scaling on Summit.
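
    Strong scaling, the behavior plotted in Figure 3, simply asks how much faster a fixed-size problem runs as more nodes are added. The bookkeeping can be sketched in a few lines; the node counts and runtimes below are made-up placeholders, not Summit measurements.

```python
# Strong scaling: same problem size, increasing node count.
node_counts = [64, 128, 256, 512]
runtimes_s  = [1000.0, 520.0, 280.0, 160.0]   # hypothetical wall-clock times

t_ref, n_ref = runtimes_s[0], node_counts[0]
for n, t in zip(node_counts, runtimes_s):
    speedup = t_ref / t
    efficiency = speedup / (n / n_ref)
    print(f"{n:4d} nodes: speedup {speedup:5.2f}x, parallel efficiency {efficiency:5.1%}")
```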

    Development of Novel Momentum Sources to Model Auxiliary Structures in the Core

    Even with the considerable computational capability of exascale hardware, the team was forced to develop a reduced-order methodology that mimics the mixing of the vanes to make the full core simulation tractable. “This methodology,” Hamilton notes, “allows the impact of mixing vanes on flow to be captured without requiring an explicit model of vanes. The objective is to model the fluid flow without the need of an expensive body-fitted mesh.” Instead, as noted in the paper, “The effects of spacer grid, mixing vanes, springs, dimples, and guidance/maintaining vanes are taken into account in the form of momentum sources and pressure drop.”[13]
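
    The momentum-source idea can be illustrated with a deliberately tiny toy: rather than meshing the vanes, a calibrated body force is added to the momentum equation only in the cells where the vanes would sit. Everything below (the 1-D grid, the force magnitude, the time step) is hypothetical and meant only to show the structure of such a source term, not the NekRS or Nek5000 implementation.

```python
# Toy momentum source: a body force applied over the "vane" region of a
# coarse 1-D mesh, so the vanes never need to be meshed explicitly.
import numpy as np

ncells, dt, nsteps = 200, 1e-3, 500
u = np.ones(ncells)                      # axial velocity, initially uniform

source = np.zeros(ncells)
source[90:110] = 5.0                     # calibrated momentum source where vanes would sit

for _ in range(nsteps):
    u += dt * source                     # momentum equation reduced to du/dt = S

print(f"velocity upstream of the vane region: {u[:90].mean():.2f}")
print(f"velocity inside the vane region:      {u[90:110].mean():.2f}")
```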

    Validation of the Challenge Results

    To ensure adequate accuracy of the reduced-order methodology, the team carefully calibrates the momentum sources against detailed LES of spacer grids performed with Nek5000.[14] Nek5000 was chosen as the reference because it is a trusted, well-validated code in the literature.

    “The combination of RANS (full core) and LES,” the team wrote in their paper, “forms a flexible strategy that balances both efficiency and the accuracy.” Furthermore, “Continuous validation and verification studies have been conducted over years for Nek5000 for various geometries of interest to nuclear engineers, including the rod bundles with spacer grid and mixing vanes.”[15]

    Expanding on the text in the paper, Hamilton points out that “the momentum source method (MSM) was implemented in NekRS using the same approach developed in Nek5000, thereby leveraging as much as possible the same routines.”

    Validation of the simulation results includes the demonstration of the momentum sources shown in Figure 1 as well as validation of the pressure drop. Both are discussed in detail in the team’s peer-reviewed paper, which includes a numerical quantification of results by various figures of merit. Based on the success reflected in the validation metrics, the team concludes that they “clearly demonstrated that the RANS momentum sources developed can successfully reproduce the time-averaged macroscale flow physics revealed by the high-fidelity LES reference.”[16]

    The Groundwork has been Laid to Expand the Computational Domain

    Improved software, GPU acceleration, and reduced-order mesh numerical methodologies have laid the groundwork for further development of the integrated ExaSMR toolkit. In combination with operational exascale hardware, these advances will let the ExaSMR team expand its capability to simulate and study the coupled neutronics and thermal–hydraulics behavior of these small reactors.

    The implications are significant: the passive design and ease of installation mean that SMRs offer a path by which the United States and the world can meet essential carbon-neutral climate goals while also addressing the need to augment existing electricity generation capacity.

    This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the US Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.

    [1] https://www.iaea.org/newscenter/news/what-are-small-modular-reactors-smrs

    [2] https://www.energy.gov/ne/articles/4-key-benefits-advanced-small-modular-reactors

    [3] https://www.iaea.org/newscenter/news/what-are-small-modular-reactors-smrs

    [4] https://www.ornl.gov/project/exasmr-coupled-monte-carlo-neutronics-and-fluid-flow-simulation-small-modular-reactors

    [5] https://www.ornl.gov/project/exasmr-coupled-monte-carlo-neutronics-and-fluid-flow-simulation-small-modular-reactors

    [6] https://www.exascaleproject.org/research-project/exasmr/

    [7] https://www.sciencedirect.com/science/article/abs/pii/S0029549321000959?via%3Dihub

    [8] https://www.exascaleproject.org/research-project/exasmr/

    [9] https://www.ans.org/news/article-2968/argonneled-team-models-fluid-dynamics-of-entire-smr-core/

    [10] https://www.sciencedirect.com/science/article/abs/pii/S0029549321000959?via%3Dihub

    [11] https://www.ans.org/news/article-2968/argonneled-team-models-fluid-dynamics-of-entire-smr-core/

    [12] https://www.exascaleproject.org/research-project/exasmr/

    [13] https://www.sciencedirect.com/science/article/abs/pii/S0029549321000959?via%3Dihub

    [14] https://www.sciencedirect.com/science/article/abs/pii/S0029549321000959?via%3Dihub

    [15] https://www.sciencedirect.com/science/article/abs/pii/S0029549321000959?via%3Dihub

    [16] https://www.osti.gov/biblio/1837194-feasibility-full-core-pin-resolved-cfd-simulations-small-modular-reactor-momentum-sources

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    About The DOE’s Exascale Computing Project
    The ECP is a collaborative effort of two DOE organizations – The DOE’s Office of Science and The DOE’s National Nuclear Security Administration. As part of the National Strategic Computing initiative, ECP was established to accelerate delivery of a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development to meet the scientific and national security mission needs of DOE in the early-2020s time frame.

    About the Office of Science

    The DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit https://science.energy.gov/.

    About The NNSA

    Established by Congress in 2000, NNSA is a semi-autonomous agency within the DOE responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear explosive testing; works to reduce the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the United States and abroad. https://nnsa.energy.gov

    The Goal of ECP’s Application Development focus area is to deliver a broad array of comprehensive science-based computational applications that effectively utilize exascale HPC technology to provide breakthrough simulation and data analytic solutions for scientific discovery, energy assurance, economic competitiveness, health enhancement, and national security.

    Awareness of ECP and its mission is growing and resonating—and for good reason. ECP is an incredible effort focused on advancing areas of key importance to our country: economic competitiveness, breakthrough science and technology, and national security. And, fortunately, ECP has a foundation that bodes extremely well for the prospects of its success, with the demonstrably strong commitment of the US Department of Energy (DOE) and the talent of some of America’s best and brightest researchers.

    ECP is composed of about 100 small teams of domain, computer, and computational scientists, and mathematicians from DOE labs, universities, and industry. We are tasked with building applications that will execute well on exascale systems, enabled by a robust exascale software stack, and supporting necessary vendor R&D to ensure the compute nodes and hardware infrastructure are adept and able to do the science that needs to be done with the first exascale platforms.

     
  • richardmitnick 4:10 pm on May 30, 2022 Permalink | Reply
    Tags: "Frontier supercomputer debuts as world’s fastest-breaking exascale barrier", , , , Exascale computing, , , ,   

    From The DOE’s Oak Ridge National Laboratory: “Frontier supercomputer debuts as world’s fastest-breaking exascale barrier” 

    From The DOE’s Oak Ridge National Laboratory

    May 30, 2022

    Media Contacts:

    Sara Shoemaker
    shoemakerms@ornl.gov,
    865.576.9219

    Secondary Media Contact
    Katie Bethea
    Oak Ridge Leadership Computing Facility
    betheakl@ornl.gov
    757.817.2832


    Frontier: The World’s First Exascale Supercomputer Has Arrived

    The Frontier supercomputer [below] at the Department of Energy’s Oak Ridge National Laboratory earned the top ranking today as the world’s fastest on the 59th TOP500 list, with 1.1 exaflops of performance. The system is the first to achieve an unprecedented level of computing performance known as exascale, a threshold of a quintillion calculations per second.

    Frontier features a theoretical peak performance of 2 exaflops, or two quintillion calculations per second, making it ten times more powerful than ORNL’s Summit system [below]. The system leverages ORNL’s extensive expertise in accelerated computing and will enable scientists to develop critically needed technologies for the country’s energy, economic and national security, helping researchers address problems of national importance that were impossible to solve just five years ago.

    “Frontier is ushering in a new era of exascale computing to solve the world’s biggest scientific challenges,” ORNL Director Thomas Zacharia said. “This milestone offers just a preview of Frontier’s unmatched capability as a tool for scientific discovery. It is the result of more than a decade of collaboration among the national laboratories, academia and private industry, including DOE’s Exascale Computing Project, which is deploying the applications, software technologies, hardware and integration necessary to ensure impact at the exascale.”

    Rankings were announced at the International Supercomputing Conference 2022 in Hamburg, Germany, which gathers leaders from around the world in the field of high-performance computing, or HPC. Frontier’s speeds surpassed those of any other supercomputer in the world, including ORNL’s Summit, which is also housed at ORNL’s Oak Ridge Leadership Computing Facility, a DOE Office of Science user facility.

    Frontier, an HPE Cray EX supercomputer, also claimed the number one spot on the Green500 list, which rates energy use and efficiency by commercially available supercomputing systems, with 62.68 gigaflops of performance per watt. Frontier rounded out the twice-yearly rankings with the top spot in a newer category, mixed-precision computing, that rates performance in formats commonly used for artificial intelligence, with a performance of 6.88 exaflops.

    The work to deliver, install and test Frontier began during the COVID-19 pandemic, as shutdowns around the world strained international supply chains. More than 100 members of a public-private team worked around the clock, from sourcing millions of components to ensuring deliveries of system parts on deadline to carefully installing and testing 74 HPE Cray EX supercomputer cabinets, which include more than 9,400 AMD-powered nodes and 90 miles of networking cables.

    “When researchers gain access to the fully operational Frontier system later this year, it will mark the culmination of work that began over three years ago involving hundreds of talented people across the Department of Energy and our industry partners at HPE and AMD,” ORNL Associate Lab Director for computing and computational sciences Jeff Nichols said. “Scientists and engineers from around the world will put these extraordinary computing speeds to work to solve some of the most challenging questions of our era, and many will begin their exploration on Day One.”

    3

    Frontier’s overall performance of 1.1 exaflops translates to more than one quintillion floating point operations per second, or flops, as measured by the High-Performance Linpack Benchmark test. Each flop represents a possible calculation, such as addition, subtraction, multiplication or division.

    Frontier’s early performance on the Linpack benchmark amounts to more than seven times that of Summit at 148.6 petaflops. Summit continues as an impressive, highly ranked workhorse machine for open science, listed at number four on the TOP500.
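
    The “more than seven times” figure follows directly from the Linpack numbers quoted above; a one-line check using the article’s rounded values gives roughly 7.4×.

```python
# Back-of-the-envelope check using the Linpack numbers quoted in the article.
frontier_rmax = 1.1e18     # 1.1 exaflops
summit_rmax   = 148.6e15   # 148.6 petaflops

print(f"Frontier / Summit = {frontier_rmax / summit_rmax:.1f}x")   # ~7.4x
```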

    Frontier’s mixed-precision computing performance clocked in at roughly 6.88 exaflops, or more than 6.8 quintillion flops per second, as measured by the High-Performance Linpack-Accelerator Introspection, or HPL-AI, test. The HPL-AI test measures calculation speeds in the computing formats typically used by the machine-learning methods that drive advances in artificial intelligence.

    Detailed simulations relied on by traditional HPC users to model such phenomena as cancer cells, supernovas, the coronavirus or the atomic structure of elements require 64-bit precision, a computationally demanding form of computing accuracy. Machine-learning algorithms typically require much less precision — sometimes as little as 32-, 24- or 16-bit accuracy — and can take advantage of special hardware in the graphics processing units, or GPUs, relied on by machines like Frontier to reach even faster speeds.
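
    A generic numerical example, unrelated to any specific Frontier workload, shows why the choice of precision matters: summing the same 10,000 values in 64-bit and in 16-bit floating point gives very different answers.

```python
# The same accumulation carried out in float64 and float16.
import numpy as np

values = np.full(10_000, 0.1)

sum64 = np.sum(values.astype(np.float64))
sum16 = np.float16(0.0)
for v in values.astype(np.float16):
    sum16 = np.float16(sum16 + v)          # force 16-bit accumulation

print(f"float64 sum: {sum64:.6f}")          # close to the exact answer, 1000
print(f"float16 sum: {float(sum16):.1f}")   # far off: the running sum stalls due to rounding
```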

    ORNL and its partners continue to execute the bring-up of Frontier on schedule. Next steps include continued testing and validation of the system, which remains on track for final acceptance and early science access later in 2022 and for opening to full science at the beginning of 2023.

    4
    Credit: Laddy Fields/ORNL, U.S. Dept. of Energy.

    FACTS ABOUT FRONTIER

    The Frontier supercomputer’s exascale performance is enabled by some of the world’s most advanced pieces of technology from HPE and AMD:

    Frontier has 74 HPE Cray EX supercomputer cabinets, which are purpose-built to support next-generation supercomputing performance and scale, once open for early science access.

    Each node contains one optimized EPYC™ processor and four AMD Instinct™ accelerators, for a total of more than 9,400 CPUs and more than 37,000 GPUs in the entire system. These nodes provide developers with easier capabilities to program their applications, due to the coherency enabled by the EPYC processors and Instinct accelerators.

    HPE Slingshot, the world’s only high-performance Ethernet fabric designed for next-generation HPC and AI solutions, including larger, data-intensive workloads, addresses demands for higher speed and congestion control so that applications run smoothly and performance is boosted.

    An I/O subsystem from HPE that will come online this year to support Frontier and the OLCF. The I/O subsystem features an in-system storage layer and Orion, a Lustre-based enhanced center-wide file system that is also the world’s largest and fastest single parallel file system, based on the Cray ClusterStor E1000 storage system. The in-system storage layer will employ compute-node local storage devices connected via PCIe Gen4 links to provide peak read speeds of more than 75 terabytes per second, peak write speeds of more than 35 terabytes per second, and more than 15 billion random-read input/output operations per second. The Orion center-wide file system will provide around 700 petabytes of storage capacity and peak write speeds of 5 terabytes per second.

    As a next-generation supercomputing system and the world’s fastest for open science, Frontier is also energy-efficient, due to its liquid-cooled capabilities. This cooling system promotes a quieter data center by removing the need for a noisier, air-cooled system.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


    Established in 1942, The DOE’s Oak Ridge National Laboratory is the largest science and energy national laboratory in the Department of Energy system (by size) and third largest by annual budget. It is located in the Roane County section of Oak Ridge, Tennessee. Its scientific programs focus on materials, neutron science, energy, high-performance computing, systems biology and national security, sometimes in partnership with the state of Tennessee, universities and other industries.

    ORNL has several of the world’s top supercomputers, including Summit [below], ranked by the TOP500 as one of Earth’s most powerful.

    ORNL OLCF IBM AC922 Summit supercomputer, formerly No. 1 on the TOP500.

    The lab is a leading neutron and nuclear power research facility that includes the Spallation Neutron Source and High Flux Isotope Reactor.

    ORNL Spallation Neutron Source annotated.

    It hosts the Center for Nanophase Materials Sciences, the BioEnergy Science Center, and the Consortium for Advanced Simulation of Light Water Nuclear Reactors.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    Areas of research

    ORNL conducts research and development activities that span a wide range of scientific disciplines. Many research areas have a significant overlap with each other; researchers often work in two or more of the fields listed here. The laboratory’s major research areas are described briefly below.

    Chemical sciences – ORNL conducts both fundamental and applied research in a number of areas, including catalysis, surface science and interfacial chemistry; molecular transformations and fuel chemistry; heavy element chemistry and radioactive materials characterization; aqueous solution chemistry and geochemistry; mass spectrometry and laser spectroscopy; separations chemistry; materials chemistry including synthesis and characterization of polymers and other soft materials; chemical biosciences; and neutron science.
    Electron microscopy – ORNL’s electron microscopy program investigates key issues in condensed matter, materials, chemical and nanosciences.
    Nuclear medicine – The laboratory’s nuclear medicine research is focused on the development of improved reactor production and processing methods to provide medical radioisotopes, the development of new radionuclide generator systems, the design and evaluation of new radiopharmaceuticals for applications in nuclear medicine and oncology.
    Physics – Physics research at ORNL is focused primarily on studies of the fundamental properties of matter at the atomic, nuclear, and subnuclear levels and the development of experimental devices in support of these studies.
    Population – ORNL provides federal, state and international organizations with a gridded population database, called LandScan, for estimating ambient population. LandScan is a raster image, or grid, of population counts that provides human population estimates every 30 x 30 arc seconds, which translates roughly to population estimates for 1-kilometer-square windows, or grid cells, at the equator, with cell width decreasing at higher latitudes. Though many population datasets exist, LandScan is regarded as the best spatial population dataset with global coverage. Updated annually (although data releases are generally one year behind the current year), it offers continuous, refreshed population values based on the most recent information. LandScan data are accessible through GIS applications and a USAID public domain application called Population Explorer.

     
  • richardmitnick 2:10 pm on September 28, 2021 Permalink | Reply
    Tags: "The co-evolution of particle physics and computing", , , Exascale computing, , , ,   

    From Symmetry: “The co-evolution of particle physics and computing” 

    Symmetry Mag

    From Symmetry

    09/28/21
    Stephanie Melchor

    1
    Illustration by Sandbox Studio, Chicago with Ariel Davis.

    Over time, particle physics and astrophysics and computing have built upon one another’s successes. That co-evolution continues today.

    In the mid-twentieth century, particle physicists were peering deeper into the history and makeup of the universe than ever before. Over time, their calculations became too complex to fit on a blackboard—or to farm out to armies of human “computers” doing calculations by hand.

    To deal with this, they developed some of the world’s earliest electronic computers.

    Physics has played an important role in the history of computing. The transistor—the switch that controls the flow of electrical signal within a computer—was invented by a group of physicists at Bell Labs. The incredible computational demands of particle physics and astrophysics experiments have consistently pushed the boundaries of what is possible. They have encouraged the development of new technologies to handle tasks from dealing with avalanches of data to simulating interactions on the scales of both the cosmos and the quantum realm.

    But this influence doesn’t just go one way. Computing plays an essential role in particle physics and astrophysics as well. As computing has grown increasingly more sophisticated, its own progress has enabled new scientific discoveries and breakthroughs.

    2
    Illustration by Sandbox Studio, Chicago with Ariel Davis.

    Managing an onslaught of data

    In 1973, scientists at DOE’s Fermi National Accelerator Laboratory (US) in Illinois got their first big mainframe computer: a 7-year-old hand-me-down from DOE’s Lawrence Berkeley National Laboratory (US). Called the CDC 6600, it weighed about 6 tons. Over the next five years, Fermilab added five more large mainframe computers to its collection.

    Then came the completion of the Tevatron—at the time, the world’s highest-energy particle accelerator—which would provide the particle beams for numerous experiments at the lab.

    _________________________________________________________________________________________________________

    FNAL/Tevatron map

    Tevatron Accelerator

    FNAL/Tevatron

    FNAL/Tevatron CDF detector

    FNAL/Tevatron DØ detector

    ______________________________________________________________________________________________________________

    By the mid-1990s, two four-story particle detectors would begin selecting, storing and analyzing data from millions of particle collisions at the Tevatron per second. Called the Collider Detector at Fermilab and the DØ detector, these new experiments threatened to overpower the lab’s computational abilities.

    In December of 1983, a committee of physicists and computer scientists released a 103-page report highlighting the “urgent need for an upgrading of the laboratory’s computer facilities.” The report said the lab “should continue the process of catching up” in terms of computing ability, and that “this should remain the laboratory’s top computing priority for the next few years.”

    Instead of simply buying more large computers (which were incredibly expensive), the committee suggested a new approach: They recommended increasing computational power by distributing the burden over clusters or “farms” of hundreds of smaller computers.

    Thanks to Intel’s 1971 development of a new commercially available microprocessor the size of a domino, computers were shrinking. Fermilab was one of the first national labs to try the concept of clustering these smaller computers together, treating each particle collision as a computationally independent event that could be analyzed on its own processor.
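
    The essence of the “farm” approach is that each collision event is computationally independent, which today maps naturally onto separate worker processes or nodes. Here is a minimal sketch; the event IDs and the per-event analysis function are invented placeholders, not any experiment’s actual reconstruction code.

```python
# Sketch of farming out independent collision events to worker processes.
from multiprocessing import Pool
import random

def analyze_event(event_id: int) -> float:
    """Stand-in for reconstructing one collision event."""
    rng = random.Random(event_id)
    return sum(rng.random() for _ in range(1000))   # fake per-event result

if __name__ == "__main__":
    events = range(10_000)
    with Pool(processes=8) as pool:                 # the "farm" of workers
        results = pool.map(analyze_event, events)
    print(f"analyzed {len(results)} independent events")
```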

    Like many new ideas in science, it wasn’t accepted without some pushback.

    Joel Butler, a physicist at Fermilab who was on the computing committee, recalls, “There was a big fight about whether this was a good idea or a bad idea.”

    A lot of people were enchanted with the big computers, he says. They were impressive-looking and reliable, and people knew how to use them. And then along came “this swarm of little tiny devices, packaged in breadbox-sized enclosures.”

    The computers were unfamiliar, and the companies building them weren’t well-established. On top of that, it wasn’t clear how well the clustering strategy would work.

    As for Butler? “I raised my hand [at a meeting] and said, ‘Good idea’—and suddenly my entire career shifted from building detectors and beamlines to doing computing,” he chuckles.

    Not long afterward, innovation that sparked for the benefit of particle physics enabled another leap in computing. In 1989, Tim Berners-Lee, a computer scientist at European Organization for Nuclear Research [Organisation européenne pour la recherche nucléaire] [Europäische Organisation für Kernforschung](CH) [CERN], launched the World Wide Web to help CERN physicists share data with research collaborators all over the world.

    To be clear, Berners-Lee didn’t create the internet—that was already underway in the form of the ARPANET, developed by the US Department of Defense.

    3
    ARPANET

    But the ARPANET connected only a few hundred computers, and it was difficult to share information across machines with different operating systems.

    The web Berners-Lee created was an application that ran on the internet, like email, and started as a collection of documents connected by hyperlinks. To get around the problem of accessing files between different types of computers, he developed HTML (HyperText Markup Language), a programming language that formatted and displayed files in a web browser independent of the local computer’s operating system.

    Berners-Lee also developed the first web browser, allowing users to access files stored on the first web server (Berners-Lee’s computer at CERN).

    4
    NCSA MOSAIC Browser

    3
    Netscape.

    He implemented the concept of a URL (Uniform Resource Locator), specifying how and where to access desired web pages.

    What started out as an internal project to help particle physicists share data within their institution fundamentally changed not just computing, but how most people experience the digital world today.

    Back at Fermilab, cluster computing wound up working well for handling the Tevatron data. Eventually, it became industry standard for tech giants like Google and Amazon.

    Over the next decade, other US national laboratories adopted the idea, too. DOE’s SLAC National Accelerator Laboratory (US)—then called Stanford Linear Accelerator Center—transitioned from big mainframes to clusters of smaller computers to prepare for its own extremely data-hungry experiment, BaBar.

    SLAC National Accelerator Laboratory(US) BaBar

    Both SLAC and Fermilab also were early adopters of Berners-Lee’s web server. The labs set up the first two websites in the United States, paving the way for this innovation to spread across the continent.

    In 1989, in recognition of the growing importance of computing in physics, Fermilab Director John Peoples elevated the computing department to a full-fledged division. The head of a division reports directly to the lab director, making it easier to get resources and set priorities. Physicist Tom Nash formed the new Computing Division, along with Butler and two other scientists, Irwin Gaines and Victoria White. Butler led the division from 1994 to 1998.

    High-performance computing in particle physics and astrophysics

    These computational systems worked well for particle physicists for a long time, says Berkeley Lab astrophysicist Peter Nugent. That is, until Moore’s Law started grinding to a halt.

    Moore’s Law is the idea that the number of transistors in a circuit will double, making computers faster and cheaper, every two years. The term was first coined in the mid-1970s, and the trend reliably proceeded for decades. But now, computer manufacturers are starting to hit the physical limit of how many tiny transistors they can cram onto a single microchip.
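
    As a formula, Moore’s Law is just exponential doubling. The sketch below projects the idealized trend from the roughly 2,300 transistors of the first commercially available microprocessor in 1971; treat it as the textbook curve, not a statement about any particular chip.

```python
# Moore's Law as a formula: transistor count doubling every two years.
def transistors(year, base_year=1971, base_count=2300):
    """Idealized transistor count, starting from ~2,300 in 1971."""
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors (idealized trend)")
```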

    Because of this, says Nugent, particle physicists have been looking to take advantage of high-performance computing instead.

    Nugent says high-performance computing is “something more than a cluster, or a cloud-computing environment that you could get from Google or AWS, or at your local university.”

    What it typically means, he says, is that you have high-speed networking between computational nodes, allowing them to share information with each other very, very quickly. When you are computing on up to hundreds of thousands of nodes simultaneously, it massively speeds up the process.

    On a single traditional computer, he says, 100 million CPU hours translates to more than 11,000 years of continuous calculations. But for scientists using a high-performance computing facility at Berkeley Lab, DOE’s Argonne National Laboratory (US) or DOE’s Oak Ridge National Laboratory (US), 100 million hours is a typical, large allocation for one year at these facilities.
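
    The single-computer comparison is straightforward arithmetic.

```python
# 100 million CPU-hours executed serially on one core.
cpu_hours = 100e6
years = cpu_hours / (24 * 365)
print(f"{cpu_hours:.0e} CPU-hours is about {years:,.0f} years of continuous computing")
# roughly 11,400 years, i.e. "more than 11,000 years"
```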

    Although astrophysicists have always relied on high-performance computing for simulating the birth of stars or modeling the evolution of the cosmos, Nugent says they are now using it for their data analysis as well.

    This includes rapid image-processing computations that have enabled the observations of several supernovae, including SN 2011fe, captured just after it began. “We found it just a few hours after it exploded, all because we were able to run these pipelines so efficiently and quickly,” Nugent says.

    According to Berkeley Lab physicist Paolo Calafiura, particle physicists also use high-performance computing for simulations—for modeling not the evolution of the cosmos, but rather what happens inside a particle detector. “Detector simulation is significantly the most computing-intensive problem that we have,” he says.

    Scientists need to evaluate multiple possibilities for what can happen when particles collide. To properly correct for detector effects when analyzing particle detector experiments, they need to simulate more data than they collect. “If you collect 1 billion collision events a year,” Calafiura says, “you want to simulate 10 billion collision events.”

    Calafiura says that right now, he’s more worried about finding a way to store all of the simulated and actual detector data than he is about producing it, but he knows that won’t last.

    “When does physics push computing?” he says. “When computing is not good enough… We see that in five years, computers will not be powerful enough for our problems, so we are pushing hard with some radically new ideas, and lots of detailed optimization work.”

    That’s why The Department of Energy’s Exascale Computing Project aims to build, in the next few years, computers capable of performing a quintillion (that is, a billion billion) operations per second. The new computers will be 1000 times faster than the current fastest computers.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer, to be built at DOE’s Argonne National Laboratory.

    The exascale computers will also be used for other applications ranging from precision medicine to climate modeling to national security.

    Machine learning and quantum computing

    Innovations in computer hardware have enabled astrophysicists to push the kinds of simulations and analyses they can do. For example, Nugent says, the introduction of graphics processing units [GPUs] has sped up astrophysicists’ ability to do calculations used in machine learning, leading to an explosive growth of machine learning in astrophysics.

    With machine learning, which uses algorithms and statistics to identify patterns in data, astrophysicists can simulate entire universes in microseconds.

    Machine learning has been important in particle physics as well, says Fermilab scientist Nhan Tran. “[Physicists] have very high-dimensional data, very complex data,” he says. “Machine learning is an optimal way to find interesting structures in that data.”

    The same way a computer can be trained to tell the difference between cats and dogs in pictures, it can learn how to identify particles from physics datasets, distinguishing between things like pions and photons.

    Tran says using computation this way can accelerate discovery. “As physicists, we’ve been able to learn a lot about particle physics and nature using non-machine-learning algorithms,” he says. “But machine learning can drastically accelerate and augment that process—and potentially provide deeper insight into the data.”
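
    As a toy version of that cats-versus-dogs analogy, one could train an off-the-shelf classifier on a couple of reconstructed features and ask it to separate two particle classes. The features, their distributions, and the labels below are entirely synthetic; this illustrates the general workflow, not a real pion/photon discriminator.

```python
# Synthetic two-class "particle identification" with a standard classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Pretend feature 0 is a shower-width-like quantity and feature 1 an energy-shape-like one.
photons = rng.normal(loc=[1.0, 2.0], scale=0.5, size=(n, 2))
pions   = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(n, 2))

X = np.vstack([photons, pions])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = "photon", 1 = "pion"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```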

    And while teams of researchers are busy building exascale computers, others are hard at work trying to build another type of supercomputer: the quantum computer.

    Remember Moore’s Law? Previously, engineers were able to make computer chips faster by shrinking the size of electrical circuits, reducing the amount of time it takes for electrical signals to travel. “Now our technology is so good that literally the distance between transistors is the size of an atom,” Tran says. “So we can’t keep scaling down the technology and expect the same gains we’ve seen in the past.”

    To get around this, some researchers are redefining how computation works at a fundamental level—like, really fundamental.

    The basic unit of data in a classical computer is called a bit, which can hold one of two values: 1, if it has an electrical signal, or 0, if it has none. But in quantum computing, data is stored in quantum systems—things like electrons, which have either up or down spins, or photons, which are polarized either vertically or horizontally. These data units are called “qubits.”

    Here’s where it gets weird. Through a quantum property called superposition, qubits have more than just two possible states. An electron can be up, down, or in a variety of stages in between.

    What does this mean for computing? A collection of three classical bits can exist in only one of eight possible configurations: 000, 001, 010, 100, 011, 110, 101 or 111. But through superposition, three qubits can be in all eight of these configurations at once. A quantum computer can use that information to tackle problems that are impossible to solve with a classical computer.
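
    The counting argument can be written out directly: three classical bits occupy one of 2^3 = 8 configurations at a time, while three qubits are described by eight complex amplitudes that can all be non-zero at once. The snippet below is a generic textbook illustration, not tied to any particular quantum hardware.

```python
# Three classical bits vs. a three-qubit state vector.
import itertools
import numpy as np

configs = ["".join(bits) for bits in itertools.product("01", repeat=3)]
print(f"{len(configs)} classical configurations: {configs}")

# Equal superposition over all 8 basis states (what three Hadamard gates
# applied to |000> would produce).
state = np.full(8, 1 / np.sqrt(8), dtype=complex)
probabilities = np.abs(state) ** 2
print(f"probability of each outcome: {probabilities[0]:.3f} (sums to {probabilities.sum():.1f})")
```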

    Fermilab scientist Aaron Chou likens quantum problem-solving to throwing a pebble into a pond. The ripples move through the water in every possible direction, “simultaneously exploring all of the possible things that it might encounter.”

    In contrast, a classical computer can only move in one direction at a time.

    But this makes quantum computers faster than classical computers only when it comes to solving certain types of problems. “It’s not like you can take any classical algorithm and put it on a quantum computer and make it better,” says University of California, Santa Barbara physicist John Martinis, who helped build Google’s quantum computer.

    Although quantum computers work in a fundamentally different way than classical computers, designing and building them wouldn’t be possible without traditional computing laying the foundation, Martinis says. “We’re really piggybacking on a lot of the technology of the last 50 years or more.”

    The kinds of problems that are well suited to quantum computing are intrinsically quantum mechanical in nature, says Chou.

    For instance, Martinis says, consider quantum chemistry. Solving quantum chemistry problems with classical computers is so difficult, he says, that 10 to 15% of the world’s supercomputer usage is currently dedicated to the task. “Quantum chemistry problems are hard for the very reason why a quantum computer is powerful”—because to complete them, you have to consider all the different quantum-mechanical states of all the individual atoms involved.

    Because making better quantum computers would be so useful in physics research, and because building them requires skills and knowledge that physicists possess, physicists are ramping up their quantum efforts. In the United States, the National Quantum Initiative Act of 2018 called for The National Institute of Standards and Technology (US), The National Science Foundation (US) and The Department of Energy (US) to support programs, centers and consortia devoted to quantum information science.

    Coevolution requires cooperation

    In the early days of computational physics, the line between who was a particle physicist and who was a computer scientist could be fuzzy. Physicists used commercially available microprocessors to build custom computers for experiments. They also wrote much of their own software—ranging from printer drivers to the software that coordinated the analysis between the clustered computers.

    Nowadays, roles have somewhat shifted. Most physicists use commercially available devices and software, allowing them to focus more on the physics, Butler says. But some people, like Anshu Dubey, work right at the intersection of the two fields. Dubey is a computational scientist at DOE’s Argonne National Laboratory (US) who works with computational physicists.

    When a physicist needs to computationally interpret or model a phenomenon, sometimes they will sign up a student or postdoc in their research group for a programming course or two and then ask them to write the code to do the job. Although these codes are mathematically complex, Dubey says, they aren’t logically complex, making them relatively easy to write.

    A simulation of a single physical phenomenon can be neatly packaged within fairly straightforward code. “But the real world doesn’t want to cooperate with you in terms of its modularity and encapsularity,” she says.

    Multiple forces are always at play, so to accurately model real-world complexity, you have to use more complex software—ideally software that doesn’t become impossible to maintain as it gets updated over time. “All of a sudden,” says Dubey, “you start to require people who are creative in their own right—in terms of being able to architect software.”

    That’s where people like Dubey come in. At Argonne, Dubey develops software that researchers use to model complex multi-physics systems—incorporating processes like fluid dynamics, radiation transfer and nuclear burning.

    Hiring computer scientists for research projects in physics and other fields of science can be a challenge, Dubey says. Most funding agencies specify that research money can be used for hiring students and postdocs, but not paying for software development or hiring dedicated engineers. “There is no viable career path in academia for people whose careers are like mine,” she says.

    In an ideal world, universities would establish endowed positions for a team of research software engineers in physics departments with a nontrivial amount of computational research, Dubey says. These engineers would write reliable, well-architected code, and their institutional knowledge would stay with a team.

    Physics and computing have been closely intertwined for decades. However the two develop—toward new analyses using artificial intelligence, for example, or toward the creation of better and better quantum computers—it seems they will remain on this path together.

    See the full article here .



    Please help promote STEM in your local schools.


    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 11:57 am on August 26, 2021 Permalink | Reply
    Tags: "Motion detectors", , , , , Earthquake Simulation (EQSIM) project, Earthquake simulators angle to use exascale computers to detail site-specific ground movement., Exascale computing, , , , The San Francisco Bay area serves as EQSIM’s subject for testing computational models of the Hayward fault., The University of Nevada-Reno (US)   

    From DOE’s ASCR Discovery (US) : “Motion detectors” 

    From DOE’s ASCR Discovery (US)

    DOE’s Lawrence Berkeley National Laboratory (US)-led earthquake simulators angle to use exascale computers to detail site-specific ground movement.

    1
    Models can now couple ground-shaking duration and intensity along the Hayward Fault with damage potential to skyscrapers and smaller residential and commercial buildings (red = most damaging, green = least). Image courtesy of David McCallen/Berkeley Lab.

    This research team wants to make literal earthshaking discoveries every day.

    “Earthquakes are a tremendous societal problem,” says David McCallen, a senior scientist at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory who heads the Earthquake Simulation (EQSIM) project. “Whether it’s the Pacific Northwest or the Los Angeles Basin or San Francisco or the New Madrid Zone in the Midwest, they’re going to happen.”

    A part of the DOE’s Exascale Computing Project, the EQSIM collaboration comprises researchers from Berkeley Lab, DOE’s Lawrence Livermore National Laboratory and The University of Nevada-Reno (US).

    The San Francisco Bay area serves as EQSIM’s subject for testing computational models of the Hayward fault. Considered a major threat, the steadily creeping fault runs throughout the East Bay area.

    “If you go to Hayward and look at the sidewalks and the curbs, you see little offsets because the earth is creeping,” McCallen says. As the earth moves it stores strain energy in the rocks below. When that energy releases, seismic waves radiate from the fault, shaking the ground. “That’s what you feel when you feel an earthquake.”

    The Hayward fault ruptures every 140 or 150 years, on average. The last rupture came in 1868 – 153 years ago.

    2
    Historically speaking, the Bay Area may be due for a major earthquake along the Hayward Fault. Image courtesy of Geological Survey (US).

    “Needless to say, we didn’t have modern seismic instruments measuring that rupture,” McCallen notes. “It’s a challenge having no data to try to predict what the motions will be for the next earthquake.”

    That data dearth led earth scientists to try a work-around. They assumed that data taken from earthquakes elsewhere around the world would apply to the Hayward fault.

    That helps to an extent, McCallen says. “But it’s well-recognized that earthquake motions tend to be very specific in a region and at any specific site as a result of the geologic setting.” That has prompted researchers to take a new approach: focusing on data most relevant to a specific fault like Hayward.

    “If you have no data, that’s hard to do,” McCallen says. “That’s the promise of advanced simulations: to understand the site-specific character of those motions.”

    Part of the project has advanced earthquake models’ computational workflow from start to finish. This includes syncing regional-scale models with structural ones to resolve the three-dimensional complexity of earthquake wave forms as they strike buildings and infrastructure.

    “We’re coupling multiple codes to be able to do that efficiently,” McCallen says. “We’re at the phase now where those advanced algorithm developments are being finished.”

    Developing the workflow presents many challenges to ensure that every step is efficient and effective. The software tools that DOE is developing for exascale platforms have helped optimize EQSIM’s ability to store and retrieve massive datasets.

    The process includes creating a computational representation of Earth that may contain 200 billion grid points. (If those grid points were seconds, that would equal 6,400 years.) With simulations this size, McCallen says, inefficiencies become obvious immediately. “You really want to make sure that the way you set up that grid is optimized and matched closely to the natural variation of the Earth’s geologic properties.”
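
    The seconds-to-years comparison is easy to verify.

```python
# 200 billion grid points, counted off at one per second.
grid_points = 200e9
years = grid_points / (60 * 60 * 24 * 365)
print(f"{grid_points:.0e} seconds is about {years:,.0f} years")
# roughly 6,300 years; the article rounds this to 6,400
```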

    The project’s earthquake simulations cut across three disciplines. The process starts with seismology. That covers the rupture of an earthquake fault and seismic wave propagation through highly varied rock layers. Next, the waves arrive at a building. “That tends to transition into being both a geotechnical and a structural-engineering problem,” McCallen notes. Geotechnical engineers can analyze quake-affected soils’ complex behavior near the surface. Finally, seismic waves impinge upon a building and the soil island that supports it. That’s the structural engineer’s domain.

    EQSIM researchers have already improved their geophysics code’s performance to simulate Bay Area ground motions at a regional scale. “We’re trying to get to what we refer to as higher-frequency resolution. We want to generate the ground motions that have the dynamics in them relevant to engineered structures.”

    Early simulations at 1 or 2 hertz – vibration cycles per second – couldn’t approximate the ground motions at 5 to 10 hertz that rock buildings and bridges. Using the DOE’s Oak Ridge National Laboratory’s Summit supercomputer, EQSIM has now surpassed 5 hertz for the entire Bay Area. More work remains to be done at the exascale, however, to simulate the area’s geologic structure at the 10-hertz upper end.

    Livermore’s SW4 code for 3-D seismic modeling served as EQSIM’s foundation. The team boosted the code’s speed and efficiency to optimize performance on massively parallel machines, which deploy many processors to perform multiple calculations simultaneously. Even so, an earthquake simulation can take 20 to 30 hours to complete, but the team hopes to reduce that time by harnessing the full power of exascale platforms – performing a quintillion operations a second – that DOE is completing this year at its leadership computing facilities. The first exascale systems will operate at 5 to 10 times the capability of today’s most powerful petascale systems.

    The potential payoff, McCallen says: saved lives and reduced economic loss. “We’ve been fortunate in this country in that we haven’t had a really large earthquake in a long time, but we know they’re coming. It’s inevitable.”

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of The U.S. Department of Energy.

    The United States Department of Energy (DOE)(US) is a cabinet-level department of the United States Government concerned with the United States’ policies regarding energy and safety in handling nuclear material. Its responsibilities include the nation’s nuclear weapons program; nuclear reactor production for the United States Navy; energy conservation; energy-related research; radioactive waste disposal; and domestic energy production. It also directs research in genomics; the Human Genome Project originated in a DOE initiative. DOE sponsors more research in the physical sciences than any other U.S. federal agency, the majority of which is conducted through its system of National Laboratories. The agency is led by the United States Secretary of Energy, and its headquarters are located in Southwest Washington, D.C., on Independence Avenue in the James V. Forrestal Building, named for James Forrestal, as well as in Germantown, Maryland.

    Formation and consolidation

    In 1942, during World War II, the United States started the Manhattan Project, a project to develop the atomic bomb, under the eye of the U.S. Army Corps of Engineers. After the war, in 1946, the Atomic Energy Commission (AEC) was created to control the future of the project. The Atomic Energy Act of 1946 also created the framework for the first National Laboratories. Among other nuclear projects, the AEC produced fabricated uranium fuel cores at locations such as Fernald Feed Materials Production Center in Cincinnati, Ohio. In 1974, the AEC gave way to the Nuclear Regulatory Commission, which was tasked with regulating the nuclear power industry, and the Energy Research and Development Administration, which was tasked with managing the nuclear weapon, naval reactor, and energy development programs.

    The 1973 oil crisis called attention to the need to consolidate energy policy. On August 4, 1977, President Jimmy Carter signed into law The Department of Energy Organization Act of 1977 (Pub.L. 95–91, 91 Stat. 565, enacted August 4, 1977), which created the Department of Energy(US). The new agency, which began operations on October 1, 1977, consolidated the Federal Energy Administration; the Energy Research and Development Administration; the Federal Power Commission; and programs of various other agencies. Former Secretary of Defense James Schlesinger, who served under Presidents Nixon and Ford during the Vietnam War, was appointed as the first secretary.

    President Carter created the Department of Energy with the goal of promoting energy conservation and developing alternative sources of energy. He wanted the United States not to be dependent on foreign oil and to reduce the use of fossil fuels. With international energy’s future uncertain for America, Carter acted quickly to have the department come into action during the first year of his presidency. This was an extremely important issue of the time, as the oil crisis was causing shortages and inflation. With the Three Mile Island disaster, Carter was able to intervene with the help of the department; in that case, he made changes within the Nuclear Regulatory Commission to fix its management and procedures. This was possible because nuclear energy and weapons are the responsibility of the Department of Energy.

    Recent

    On March 28, 2017, a supervisor in the Office of International Climate and Clean Energy asked staff to avoid the phrases “climate change,” “emissions reduction,” or “Paris Agreement” in written memos, briefings or other written communication. A DOE spokesperson denied that phrases had been banned.

    In a May 2019 press release concerning natural gas exports from a Texas facility, the DOE used the term “freedom gas” to refer to natural gas. The phrase originated in a speech made by Secretary Rick Perry in Brussels earlier that month. Washington Governor Jay Inslee derided the term as “a joke”.

    Facilities

    The Department of Energy operates a system of national laboratories and technical facilities for research and development, as follows:

    Ames Laboratory
    Argonne National Laboratory
    Brookhaven National Laboratory
    Fermi National Accelerator Laboratory
    Idaho National Laboratory
    Lawrence Berkeley National Laboratory
    Lawrence Livermore National Laboratory
    Los Alamos National Laboratory
    National Energy Technology Laboratory
    National Renewable Energy Laboratory
    Oak Ridge National Laboratory
    Pacific Northwest National Laboratory
    Princeton Plasma Physics Laboratory
    Sandia National Laboratories
    Savannah River National Laboratory
    SLAC National Accelerator Laboratory
    Thomas Jefferson National Accelerator Facility

    Other major DOE facilities include:
    Albany Research Center
    Bannister Federal Complex
    Bettis Atomic Power Laboratory – focuses on the design and development of nuclear power for the U.S. Navy
    Kansas City Plant
    Knolls Atomic Power Laboratory – conducts research for the Naval Reactors Program under the DOE (not a National Laboratory)
    National Petroleum Technology Office
    Nevada Test Site
    New Brunswick Laboratory
    Office of Fossil Energy
    Office of River Protection
    Pantex
    Radiological and Environmental Sciences Laboratory
    Y-12 National Security Complex
    Yucca Mountain nuclear waste repository
    Other:

    Pahute Mesa Airstrip – Nye County, Nevada, supporting the Nevada National Security Site

     
  • richardmitnick 2:00 pm on June 15, 2021 Permalink | Reply
    Tags: "Forthcoming revolution will unveil the secrets of matter", , , , European High Performance Computer Joint Undertaking (EU), Exaflop computers, Exascale computing, ,   

    From CNRS-The National Center for Scientific Research [Centre national de la recherche scientifique] (FR) : “Forthcoming revolution will unveil the secrets of matter” 

    From CNRS-The National Center for Scientific Research [Centre national de la recherche scientifique] (FR)

    06.15.2021
    Martin Koppe

    1
    ©Sikov /Stock.Adobe.com

    Provided suitably adapted software can be developed, exascale computing – a new generation of supercomputers – will offer massive power to model the properties of molecules and materials while taking into account their fundamental interactions and quantum mechanics. The TREX-Targeting Real Chemical accuracy at the EXascale (EU) project is set to meet the challenge.

    One quintillion operations per second. Exaflop computers – from the prefix exa-, meaning 10^18, and flops, the number of floating-point operations a computer can perform in one second – will offer this colossal computing power, as long as specifically designed programs and codes are available. An international race is thus underway to produce these impressive machines and to take full advantage of their capacities. The European Commission is financing ambitious projects that are preparing the way for exascale, that is, any form of high-performance computing that reaches an exaflop. The Targeting Real chemical accuracy at the EXascale (TREX)[1] programme focuses on highly precise computing methods in the fields of chemistry and materials physics.
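
    To make these orders of magnitude concrete, here is a minimal illustrative sketch in Python (not code from the CNRS article or the TREX project) comparing the Jean Zay figure quoted in the caption below with a notional 1-exaflop machine:

        # Orders of magnitude quoted in the article, in floating-point operations per second.
        PETA = 10**15                 # 1 petaflop
        EXA = 10**18                  # 1 exaflop = one quintillion operations per second

        jean_zay = 28 * PETA          # Jean Zay after its summer 2020 extension
        exascale_target = 1 * EXA     # a notional 1-exaflop machine

        print(f"Jean Zay:        {jean_zay:.1e} operations per second")
        print(f"Exascale target: {exascale_target:.1e} operations per second")
        print(f"Speed-up factor: {exascale_target / jean_zay:.0f}x")  # roughly 36x

    In other words, a single exascale machine delivers roughly 36 times the peak arithmetic throughput of the 28-petaflop Jean Zay system.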

    2
    Compute nodes of the Jean Zay supercomputer, the first French supercomputer to converge intensive computation and artificial intelligence. After its extension in the summer of 2020, it attained 28 petaflops, or 28 million billion operations per second, thanks to its 86,344 cores supported by 2,696 GPU accelerators.
    © Cyril FRESILLON / IDRIS / CNRS Photothèque.

    Officially inaugurated in October 2020, TREX is part of the broader European High Performance Computing Joint Undertaking (EuroHPC JU) (EU), whose goal is to ensure that Europe is a player alongside the United States and China in exascale computing. “The Japanese have already achieved exascale by lowering computational precision,” enthuses Anthony Scemama, a researcher at the LCPQ-Laboratoire de Chimie et Physique Quantiques (FR),[2] and one of the two CNRS coordinators of TREX. “A great deal of work remains to be done on codes if we want to take full advantage of these future machines.”

    Exascale computing will probably use GPUs as well as traditional processors, or CPUs. These graphics processors were originally developed for video games, but they have enjoyed increasing success in data-intensive computing applications. Here again, their use will entail rewriting programs to fully harness their power for those applications that will need it.

    “Chemistry researchers already have various computing techniques for producing simulations, such as modelling the interaction of light with a molecule,” Scemama explains. “TREX focuses on cases where the computing methods for a realistic and predictive description of the physical phenomena controlling chemical reactions are too costly.”

    “TREX is an interdisciplinary project that also includes physicists,” stresses CNRS researcher and project coordinator Michele Casula, at the Institute of Mineralogy, Material Physics and Cosmochemistry [Institut de minéralogie, de physique des matériaux et de cosmochimie] (FR).[3] “Our two communities need computing methods that are powerful enough to accurately predict the behaviour of matter, which often requires far too much computation time for conventional computers.”

    The TREX team has identified several areas for applications. First of all, and surprising though it may seem, the physicochemical properties of water have not yet been modelled with sufficient accuracy. The best ab initio simulations – those based on fundamental interactions – are off by a few degrees when estimating its boiling point.

    Improved water models will enable researchers to simulate more effectively the behaviour of proteins, which operate in aqueous environments. The applications being developed in connection with the TREX project could therefore have a significant impact on research in biology and pharmacy. For example, nitrogenases, enzymes that make essential contributions to life, transform nitrogen gas into ammonia, a form that can be used by organisms; yet the theoretical description of the physicochemical mechanisms at work in these enzymes is not accurate enough under current models. Exascale computing should also improve experts’ understanding of highly correlated materials such as superconductors, which are characterised by the substantial interactions between the electrons they are made of.

    “The microscopic understanding of their functioning remains an unresolved issue, one that has nagged scientists ever since the 1980s,” Casula points out. “It is one of the major open problems in condensed matter physics. When mastered, these materials will, among other things, be able to transport electricity with no loss of energy.” 2D materials are also involved, especially those used in solar panels to convert light into power.

    “To model matter using quantum mechanics means relying on equations that become exponentially more complex, such as the Schrödinger equation, whose number of coordinates grows with the size of the system,” Casula adds. “In order to solve them in simulations, we either have to use quantum computers, or further explore the power of conventional silicon chips with exascale computing, along with suitable algorithms.”
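
    To see where this growth comes from, the time-independent Schrödinger equation for N interacting electrons can be written in its generic textbook form (an illustrative LaTeX sketch; the external potential V_ext and the notation are standard, not taken from the TREX codes):

        \hat{H}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N) = E\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N),
        \qquad
        \hat{H} = -\frac{\hbar^2}{2m}\sum_{i=1}^{N}\nabla_i^2
                  + \sum_{i<j}\frac{e^2}{4\pi\varepsilon_0\,\lvert\mathbf{r}_i-\mathbf{r}_j\rvert}
                  + \sum_{i=1}^{N} V_{\mathrm{ext}}(\mathbf{r}_i)

    The wavefunction Ψ depends on 3N spatial coordinates, so discretising each coordinate on M grid points requires of the order of M^(3N) values. This exponential growth is what makes deterministic solutions intractable for realistic systems and motivates either quantum hardware or stochastic methods such as the Quantum Monte Carlo approach described below.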

    To achieve this, TREX members are counting on Quantum Monte Carlo (QMC), and are developing libraries to integrate it into existing codes. “We are fortunate to have a method that perfectly matches exascale machines,” Scemama exclaims. QMC is particularly effective at numerically calculating observables – the quantum counterparts of classical physical quantities – that bring into play quantum interactions between many particles.

    3
    Modelling of electron trajectories in an aggregate of water, created by the QMC programme developed at the LCPQ in Toulouse (southwestern France). © Anthony Scemama / Laboratoire de Chimie et Physique Quantiques.

    “The full computation of these observables is too complex,” Casula stresses. “Accurately estimating them using deterministic methods could take more time than the age of the Universe. Simply put, QMC will not solve everything, but instead provides a statistical sampling of results. Exaflop computers could draw millions of samples per second, and thanks to statistical tools such as the central limit theorem, the more of these values we have, the closer we get to the actual result. We can thus obtain an approximation that is accurate enough to help researchers, all within an acceptable amount of time.”
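
    As a toy illustration of the statistical principle Casula describes (a generic Monte Carlo sketch in Python, not the TREX or QMC code itself), the standard error of a sample mean shrinks like 1/sqrt(N) as the number of samples grows, which is what makes massive sampling on exascale machines so attractive:

        import math
        import random

        def mc_estimate(f, sampler, n_samples):
            """Return the sample mean of f under `sampler` and its estimated standard error."""
            values = [f(sampler()) for _ in range(n_samples)]
            mean = sum(values) / n_samples
            variance = sum((v - mean) ** 2 for v in values) / (n_samples - 1)
            return mean, math.sqrt(variance / n_samples)

        # Toy "observable": the mean of x^2 for x uniform on [0, 1]; the exact answer is 1/3.
        # By the central limit theorem the error bar shrinks roughly as 1/sqrt(N).
        random.seed(0)
        for n in (10**2, 10**4, 10**6):
            mean, err = mc_estimate(lambda x: x * x, random.random, n)
            print(f"N = {n:>9,d}  estimate = {mean:.5f} +/- {err:.5f}")

    In real QMC calculations the samples are electron configurations drawn from a trial wavefunction rather than uniform random numbers, but the statistical reasoning (and the embarrassingly parallel structure that maps so well onto exascale machines) is the same.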

    With regard to the study of matter, an exascale machine can provide a good description of the electron cloud and its interaction with nuclei. That is not the only advantage. “When configured properly, these machines may use thirty times more energy than classical supercomputers, but in return will produce a thousand times more computing power,” Scemama believes. “Researchers could launch very costly calculations, and use the results to build simpler models for future use.”

    The TREX team nevertheless insists that above all else, it creates technical and predictive tools for other researchers, who will then seek to develop concrete applications. Ongoing exchanges have made it possible to share best practices and feedback among processor manufacturers, physicists, chemists, researchers in high-performance computing, and TREX’s two computing centres.

    Footnotes:

    1.
    In addition to the CNRS, the project includes the University of Versailles Saint-Quentin-en-Yvelines [Université de Versailles Saint-Quentin-en-Yvelines – UVSQ] (FR); the University of Twente [Universiteit Twente] (NL); the University of Vienna [Universität Wien] (AT); the Lodz University of Technology [Politechnika Łódzka] (PL); the International School for Advanced Studies [Scuola Internazionale Superiore di Studi Avanzati] (IT); the MPG Institutes (DE); and the Slovak University of Technology in Bratislava [Slovenská technická univerzita v Bratislave] (STU) (SK); as well as the Cineca (IT) and Jülich Supercomputing Centre [Forschungszentrum Jülich] (DE) supercomputing centres, and the companies MEGWARE [Deutsche Megware] Computer HPC Systems & Solutions (DE) and Trust-IT Services | Phidias (FR).
    2.
    Laboratoire de chimie et physique quantiques (CNRS / Université Toulouse III – Paul Sabatier).
    3.
    CNRS / National Museum of Natural History [Muséum National d’Histoire Naturelle] (MNHN) (FR) / Sorbonne University [Sorbonne Université] (FR).

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    CNRS-The National Center for Scientific Research [Centre national de la recherche scientifique](FR) is the French state research organisation and is the largest fundamental science agency in Europe.

    In 2016, it employed 31,637 staff, including 11,137 tenured researchers, 13,415 engineers and technical staff, and 7,085 contractual workers. It is headquartered in Paris and has administrative offices in Brussels; Beijing; Tokyo; Singapore; Washington D.C.; Bonn; Moscow; Tunis; Johannesburg; Santiago de Chile; Israel; and New Delhi.

    The CNRS was ranked No. 3 in 2015 and No. 4 in 2017 by the Nature Index, which measures the largest contributors to papers published in 82 leading journals.

    The CNRS operates on the basis of research units, which are of two kinds: “proper units” (UPRs) are operated solely by the CNRS, and “joint units” (UMRs – French: Unité mixte de recherche) are run in association with other institutions, such as universities or INSERM. Members of joint research units may be either CNRS researchers or university employees (maîtres de conférences or professeurs). Each research unit has a numeric code attached and is typically headed by a university professor or a CNRS research director. A research unit may be subdivided into research groups (“équipes”). The CNRS also has support units, which may, for instance, supply administrative, computing, library, or engineering services.

    In 2016, the CNRS had 952 joint research units, 32 proper research units, 135 service units, and 36 international units.

    The CNRS is divided into 10 national institutes:

    Institute of Chemistry (INC)
    Institute of Ecology and Environment (INEE)
    Institute of Physics (INP)
    Institute of Nuclear and Particle Physics (IN2P3)
    Institute of Biological Sciences (INSB)
    Institute for Humanities and Social Sciences (INSHS)
    Institute for Computer Sciences (INS2I)
    Institute for Engineering and Systems Sciences (INSIS)
    Institute for Mathematical Sciences (INSMI)
    Institute for Earth Sciences and Astronomy (INSU)

    The National Committee for Scientific Research, which is in charge of the recruitment and evaluation of researchers, is divided into 47 sections (e.g. section 41 is mathematics, section 7 is computer science and control, and so on). Research groups are affiliated with one primary institute and an optional secondary institute; the researchers themselves belong to one section. For administrative purposes, the CNRS is divided into 18 regional divisions (including four for the Paris region).

    Some selected CNRS laboratories

    APC laboratory
    Centre d’Immunologie de Marseille-Luminy
    Centre d’Etude Spatiale des Rayonnements
    Centre européen de calcul atomique et moléculaire
    Centre de Recherche et de Documentation sur l’Océanie
    CINTRA (joint research lab)
    Institut de l’information scientifique et technique
    Institut de recherche en informatique et systèmes aléatoires
    Institut d’astrophysique de Paris
    Institut de biologie moléculaire et cellulaire
    Institut Jean Nicod
    Laboratoire de Phonétique et Phonologie
    Laboratoire d’Informatique, de Robotique et de Microélectronique de Montpellier
    Laboratory for Analysis and Architecture of Systems
    Laboratoire d’Informatique de Paris 6
    Laboratoire d’informatique pour la mécanique et les sciences de l’ingénieur
    Observatoire océanologique de Banyuls-sur-Mer
    SOLEIL
    Mistrals

     