Tagged: ORNL

  • richardmitnick 3:38 pm on December 19, 2019
    Tags: ORNL, Simulations on Summit

    From Oak Ridge National Laboratory: “With ADIOS, Summit processes celestial data at scale of massive future telescope” 


    December 19, 2019
    Scott S Jones
    jonesg@ornl.gov
    865.241.6491

    Researchers
    Scott A Klasky
    klasky@ornl.gov
    865.241.9980

    Ruonan Wang
    wangr1@ornl.gov
    865.574.8984

    Norbert Podhorszki
    pnb@ornl.gov
    865.574.7159

    For nearly three decades, scientists and engineers across the globe have worked on the Square Kilometre Array (SKA), a project focused on designing and building the world’s largest radio telescope.

SKA Square Kilometre Array

    Although the SKA will collect enormous amounts of precise astronomical data in record time, scientific breakthroughs will only be possible with systems able to efficiently process that data.

    Because construction of the SKA is not scheduled to begin until 2021, researchers cannot collect enough observational data to practice analyzing the huge quantities experts anticipate the telescope will produce. Instead, a team from the International Centre for Radio Astronomy Research (ICRAR) in Australia, the Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) in the United States, and the Shanghai Astronomical Observatory (SHAO) in China recently used Summit, the world’s most powerful supercomputer, to simulate the SKA’s expected output. Summit is located at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at ORNL.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    An artist rendering of the SKA’s low-frequency, cone-shaped antennas in Western Australia. Credit: SKA Project Office.

    “The Summit supercomputer provided a unique opportunity to test a simple SKA dataflow at the scale we are expecting from the telescope array,” said Andreas Wicenec, director of Data Intensive Astronomy at ICRAR.

    To process the simulated data, the team relied on the ORNL-developed Adaptable IO System (ADIOS), an open-source input/output (I/O) framework led by ORNL’s Scott Klasky, who also leads the laboratory’s scientific data group. ADIOS is designed to speed up simulations by increasing the efficiency of I/O operations and to facilitate data transfers between high-performance computing systems and other facilities, which would otherwise be a complex and time-consuming task.
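The core idea behind ADIOS is to decouple what a simulation writes from how and where the bytes travel, so the backend (file, staging area, network transport) can be swapped without touching simulation code. The sketch below illustrates that pattern in plain Python; the `StreamWriter` class and its methods are invented for illustration and are not the real ADIOS API.

```python
import io
import struct

class StreamWriter:
    """Toy sketch of ADIOS-style streaming I/O (hypothetical API).

    The application declares variables once, then writes timesteps;
    the sink can be swapped without changing any simulation code."""

    def __init__(self, sink):
        self.sink = sink        # any writable binary stream
        self.variables = {}     # variable name -> element count

    def define_variable(self, name, count):
        self.variables[name] = count

    def write_step(self, name, values):
        if len(values) != self.variables[name]:
            raise ValueError("shape mismatch for " + name)
        encoded = name.encode()
        # minimal self-describing record: name length, name, count, payload
        self.sink.write(struct.pack("<I", len(encoded)))
        self.sink.write(encoded)
        self.sink.write(struct.pack("<I", len(values)))
        self.sink.write(struct.pack("<%dd" % len(values), *values))

buf = io.BytesIO()              # stand-in for a file or network transport
writer = StreamWriter(buf)
writer.define_variable("visibilities", 4)
writer.write_step("visibilities", [0.1, 0.2, 0.3, 0.4])
```

Replacing `io.BytesIO()` with a file handle or a socket changes where the data go without altering a single write call, which is the kind of decoupling that lets an I/O framework move data between supercomputers and other facilities.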

    The SKA simulation on Summit marks the first time radio astronomy data have been processed at such a large scale and proves that scientists have the expertise, software tools, and computing resources that will be necessary to process and understand real data from the SKA.

    “The scientific data group is dedicated to researching next-generation technology that can be developed and deployed for the most scientifically demanding applications on the world’s fastest computers,” Klasky said. “I am proud of all the hard work the ADIOS team and the SKA scientists have done with ICRAR, ORNL, and SHAO.”

    Using two types of radio receivers, the telescope will detect radio light waves emanating from galaxies, the surroundings of black holes, and other objects of interest in outer space to help astronomers answer fundamental questions about the universe. Studying these weak, elusive waves requires an army of antennas.

The first phase of the SKA will have more than 130,000 low-frequency, cone-shaped antennas located in Western Australia and about 200 higher-frequency, dish-shaped antennas located in South Africa. The international project team will eventually manage close to a million antennas to conduct unprecedented studies of astronomical phenomena.

    To emulate the Western Australian portion of the SKA, the researchers ran two models on Summit—one of the antenna array and one of the early universe—through a software simulator designed by scientists from the University of Oxford that mimics the SKA’s data collection. The simulations generated 2.6 petabytes of data at 247 gigabytes per second.
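Those two figures imply a sustained run of roughly three hours, a quick consistency check (decimal units assumed, 1 PB = 1,000,000 GB):

```python
# Rough consistency check of the quoted simulation figures
data_pb = 2.6            # total data generated, petabytes
rate_gb_s = 247.0        # sustained rate, gigabytes per second

seconds = data_pb * 1_000_000 / rate_gb_s
hours = seconds / 3600   # roughly a three-hour sustained run at full rate
```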

    “Generating such a vast amount of data with the antenna array simulator requires a lot of power and thousands of graphics processing units to work properly,” said ORNL software engineer Ruonan Wang. “Summit is probably the only computer in the world that can do this.”

    Although the simulator typically runs on a single computer, the team used a specialized workflow management tool Wang helped ICRAR develop called the Data Activated Flow Graph Engine (DALiuGE) to efficiently scale the modeling capability up to 4,560 compute nodes on Summit. DALiuGE has built-in fault tolerance, ensuring that minor errors do not impede the workflow.
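DALiuGE's actual engine is far more sophisticated, but the fault-tolerance property described here, that one failing task should not abort the whole job, can be illustrated with a minimal retrying task runner. The names and structure below are hypothetical and are not DALiuGE's API.

```python
def run_graph(tasks, max_retries=2):
    """Run named tasks, retrying transient failures so that one bad
    task does not abort the whole workflow (illustrative sketch)."""
    results, failed = {}, []
    for name, fn in tasks.items():
        for attempt in range(max_retries + 1):
            try:
                results[name] = fn()
                break
            except Exception:
                if attempt == max_retries:
                    failed.append(name)   # record the failure, keep going
    return results, failed

state = {"calls": 0}

def flaky():
    # fails on the first call, succeeds on the retry
    state["calls"] += 1
    if state["calls"] < 2:
        raise RuntimeError("transient error")
    return "ok"

results, failed = run_graph({"a": lambda: 1, "b": flaky, "c": lambda: 3})
```

With retries in place, the transient failure in task "b" is absorbed and the whole workflow still completes, rather than one error making "the entire job fall apart."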

“The problem with traditional resources is that one problem can make the entire job fall apart,” Wang said. Wang earned his doctorate at the University of Western Australia, which manages ICRAR along with Curtin University.

    The intense influx of data from the array simulations resulted in a performance bottleneck, which the team solved by reducing, processing, and storing the data using ADIOS. Researchers usually plug ADIOS straight into the I/O subsystem of a given application, but the simulator’s unusually complicated software meant the team had to customize a plug-in module to make the two resources compatible.

    “This was far more complex than a normal application,” Wang said.

Wang began working on ADIOS1, the first iteration of the tool, six years ago during his time at ICRAR. Now, he serves as one of the main developers of the latest version, ADIOS2. His team aims to position ADIOS as a superior storage resource for the next generation of astronomy data and the default I/O solution for future telescopes beyond even the SKA’s gargantuan scope.

    “The faster we can process data, the better we can understand the universe,” he said.

    Funding for this work comes from DOE’s Office of Science.

    The International Centre for Radio Astronomy Research (ICRAR) is a joint venture between Curtin University and The University of Western Australia with support and funding from the State Government of Western Australia. ICRAR is helping to design and build the world’s largest radio telescope, the Square Kilometre Array.

See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 6:24 pm on December 16, 2019
    Tags: "GODDESS detector sees the origins of elements", ATLAS (Argonne Tandem Linear Accelerator System), Insight into astrophysical nuclear reactions that produce the elements heavier than hydrogen, ORNL, ORRUBA (Oak Ridge Rutgers University Barrel Array), Products of nuclear transmutations spotted with unprecedented detail

    From Oak Ridge National Laboratory and Rutgers University: “GODDESS detector sees the origins of elements” 


    December 17, 2019
    Dawn M Levy
    levyd@ornl.gov
    865.576.6448

    ORNL GODDESS Detector

    GODDESS is shown coupled to GRETINA with experimenters, from left, Heather Garland, Chad Ummel and Gwen Seymour, all of Rutgers University, and Rajesh Ghimire of University of Tennessee–Knoxville and ORNL; and from left (back row), Josh Hooker of UTK and Steven Pain of ORNL. Credit: Andrew Ratkiewicz/Oak Ridge National Laboratory, U.S. Dept. of Energy

    Products of nuclear transmutations are spotted with unprecedented detail.

    Ancient Greeks imagined that everything in the natural world came from their goddess Physis; her name is the source of the word physics. Present-day nuclear physicists at the Department of Energy’s Oak Ridge National Laboratory have created a GODDESS of their own—a detector providing insight into astrophysical nuclear reactions that produce the elements heavier than hydrogen (this lightest of elements was created right after the Big Bang).

    Researchers developed a state-of-the-art charged particle detector at ORNL called the Oak Ridge Rutgers University Barrel Array, or ORRUBA, to study reactions with beams of astrophysically important radioactive nuclei.

    Schematic of how ORRUBA would be coupled to the 100-unit Gammasphere Compton-suppressed Ge detector array. The barrel array would be augmented by up to 4 annular strip detectors to be placed at forward and backward angles in the laboratory. All electronics signals and preamplifier boxes would be downstream of ORRUBA and before the quadrupole magnet of the Fragment Mass Analyzer. Provided by Ratkiewicz and Shand.

    Recently, its silicon detectors were upgraded and tightly packed to prepare it to work in concert with large germanium-based gamma-ray detectors, such as Gammasphere, and the next-generation gamma-ray tracking detector system, GRETINA. The result is GODDESS—Gammasphere/GRETINA ORRUBA: Dual Detectors for Experimental Structure Studies. [Watch a time-lapse video below of one day of work to couple GODDESS with Gammasphere for the first time.]


    GODDESS day 4 video

    With millimeter position resolution, GODDESS records emissions from reactions taking place as energetic beams of radioactive nuclei gain or lose protons and neutrons and emit gamma rays or charged particles, such as protons, deuterons, tritons, helium-3 or alpha particles.

    “The charged particles in the silicon detectors tell us how the nucleus was formed, and the gamma rays tell us how it decayed,” explained Steven Pain of ORNL’s Physics Division. “We merge the two sets of data and use them as if they were one detector for a complete picture of the reaction.”

    Earlier this year, Pain led more than 50 scientists from 12 institutions in GODDESS experiments to understand the cosmic origins of the elements. He is principal investigator of two experiments and co-principal investigator of a third. Data analysis of the complex experiments is expected to take two years.

    “Almost all heavy stable nuclei in the universe are created through unstable nuclei reacting and then coming back to stability,” Pain said.

    A century of nuclear transmutation

    In 1911 Ernest Rutherford was astounded to observe that alpha particles—heavy and positively charged—sometimes bounced backward. He concluded they must have hit something extremely dense and positively charged—possible only if almost all an atom’s mass were concentrated in its center. He had discovered the atomic nucleus. He went on to study the nucleons—protons and neutrons—that make up the nucleus and that are surrounded by shells of orbiting electrons.

    One element can turn into another when nucleons are captured, exchanged or expelled. When this happens in stars, it’s called nucleosynthesis. Rutherford stumbled upon this process in the lab through an anomalous result in a series of particle-scattering experiments. The first artificial nuclear transmutation reacted nitrogen-14 with an alpha particle to create oxygen-17 and a proton. The feat was published in 1919, seeding advances in the newly invented cloud chamber, discoveries about short-lived nuclei (which make up 90% of nuclei), and experiments that continue to this day as a top priority for physics.

    “A century ago, the first nuclear reaction of stable isotopes was inferred by human observers counting flashes of light with a microscope,” noted Pain, who is Rutherford’s “great-great-grandson” in an academic sense: his PhD thesis advisor was Wilton Catford, whose advisor was Kenneth Allen, whose advisor was William Burcham, whose advisor was Rutherford. “Today, advanced detectors like GODDESS allow us to explore, with great sensitivity, reactions of the difficult-to-access unstable radioactive nuclei that drive the astrophysical explosions generating many of the stable elements around us.”

    Understanding thermonuclear runaway

    One experiment Pain led focused on phosphorus-30, which is important for understanding certain thermonuclear runaways. “We’re looking to understand nucleosynthesis in nova explosions—the most common stellar explosions,” he said. A nova occurs in a binary system in which a white dwarf gravitationally pulls hydrogen-rich material from a nearby “companion” star until thermonuclear runaway occurs and the white dwarf’s surface layer explodes. The ashes of these explosions change the chemical composition of the galaxy.

    University of Tennessee graduate student Rajesh Ghimire is analyzing the data from the phosphorus experiment, which transferred a neutron from deuterium in a target onto an intense beam of the short-lived radioactive isotope phosphorus-30. The particle and gamma-ray detectors spotted what emerged, correlating times, places and energies of proton and gamma ray production.
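The time correlation described above can be sketched as a simple coincidence search: pair each charged-particle hit with any gamma ray arriving inside a short window. A real analysis also matches positions and energies and handles far larger event rates; the window and hit times below are arbitrary placeholder values.

```python
def coincidences(particle_times, gamma_times, window_ns=50.0):
    """Pair each charged-particle hit with every gamma ray arriving
    within window_ns nanoseconds: a simplified stand-in for the
    particle/gamma time correlation in a detector analysis."""
    pairs = []
    for tp in particle_times:
        for tg in gamma_times:
            if abs(tg - tp) <= window_ns:
                pairs.append((tp, tg))
    return pairs

protons = [100.0, 5000.0]          # hit times in ns (made-up values)
gammas = [120.0, 3000.0, 5040.0]
matched = coincidences(protons, gammas)
# the 3000.0 ns gamma matches no proton and would be treated as background
```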

    The phosphorus-30 nucleus strongly affects the ratios of most of the heavier elements produced during a nova explosion. If the phosphorus-30 reactions are understood, the elemental ratios can be used to measure the peak temperature that the nova reached. “That’s an observable that somebody with a telescope could see,” Pain said.

    Illuminating heavy-element creation

    The second experiment Pain led transmuted a much heavier isotope, tellurium-134. “This nucleus is involved in the rapid neutron capture process, or r process, which is the way that half the elements heavier than iron are formed in the universe,” Pain related. It occurs in an environment with many free neutrons—perhaps supernovae or neutron star mergers. “We know it happens, because we see the elements around us, but we still don’t know exactly where and how it occurs.”

    Understanding r-process nucleosynthesis will be a major activity at the Facility for Rare Isotope Beams (FRIB), a DOE Office of Science user facility scheduled to open at Michigan State University (MSU) in 2022. FRIB will enable discoveries about rare isotopes, nuclear astrophysics and fundamental interactions, and applications in medicine, homeland security and industry.

    “The r process is a very, very complicated network of reactions; many, many pieces go into it,” Pain emphasized. “You can’t do one experiment and have the answer.”

    The tellurium-134 experiment starts with radioactive californium made at ORNL and installed at the Argonne Tandem Linear Accelerator System (ATLAS), a DOE Office of Science user facility at Argonne National Laboratory.

    Argonne Tandem Linear Accelerator System (ATLAS)

    The californium fissions spontaneously, with tellurium-134 among the products. A beam of tellurium-134 is accelerated into a deuterium target and absorbs a neutron, spitting out a proton in the process. “Tellurium-134 comes in, but tellurium-135 goes out,” Pain summed up.

    “We detect that proton in the silicon detectors of GODDESS. The tellurium-135 continues down the beam line. The energy and angle of the proton tell us about the tellurium-135 we’ve created—it could be in its ground state or in any one of a number of excited states. The excited states decay by emitting a gamma ray.” The germanium detectors reveal the energy of the gamma rays with unprecedented resolution to show how the nucleus decayed. Then the nucleus enters a gas detector, creating a track of ionized gas from which the removed electrons are collected. Measuring the energy deposited in different regions of the detector allows researchers to definitively identify the nucleus.

    Rutgers graduate student Chad Ummel is focusing on the experiment’s analysis. Said Pain, “We’re trying to understand the role of this tellurium-134 nucleus in the r process in different potential astrophysical sites. The reaction flow in this network of neutron capture reactions affects the abundances of the elements created. We need to understand this network to understand the origin of the heavy elements.”

    Future of the GODDESS

    The researchers will continue developing equipment and techniques for current use of GODDESS at Argonne and MSU and future use at FRIB, which will give unprecedented access to many unstable nuclei currently out of reach. Future experiments will employ two strategies.

    One uses fast beams of nuclei that have been fragmented into other nuclei. Pain likens the diverse nuclear products to a whole zoo hurtling down the beam line in chaos. The fast-moving nuclei pass through a series of magnets that select desired “zebras” and discard unwanted “giraffes,” “gnus” and “hippos.”

    The other approach stops the ions with a material, re-ionizes them, then reaccelerates them before they can radioactively decay. Explained Pain, “It allows you to corral all zebras, calm them down, then orderly bring them out in the direction, rate and speed that you want.”

    Taming the elements that make planets and people possible—that’s indeed the domain of a physics GODDESS.

    DOE’s Office of Science supports Pain’s research. DOE’s National Nuclear Security Administration funded some past detector research.

See the full article here.



     
  • richardmitnick 7:51 am on October 31, 2019
    Tags: "How will we land on Mars?", FUN3D (computational fluid dynamics code), NASA expects humans to voyage to Mars by the mid- to late 2030s, ORNL, Retropropulsion-powered descent to Mars' surface

    From Science Node: “How will we land on Mars?” 


October 23, 2019
    Katie Elyce Jones

    The type of vehicle that will carry people to the Red Planet is shaping up to be “like a two-story house you’re trying to land on another planet. The heat shield on the front of the vehicle is just over 16 meters in diameter, and the vehicle itself, during landing, weighs tens of metric tons. It’s huge,” said Ashley Korzun, a research aerospace engineer at NASA’s Langley Research Center.

    Safe descent. NASA research team uses Summit supercomputer to simulate a retropropulsion-powered descent to Mars’ surface. Courtesy Oak Ridge Leadership Computing Facility.

    A vehicle for human exploration will weigh considerably more than the familiar, car-sized rovers like Curiosity, which have been deployed to the planetary surface by parachute.

    NASA Mars Curiosity Rover

    “You can’t use parachutes to land very large payloads on the surface of Mars,” Korzun said. “The physics just breaks down. You have to do something else.”

    NASA expects humans to voyage to Mars by the mid- to late 2030s, so engineers have been at the drafting board for some time. Now, they have a promising solution in retropropulsion, or engine-powered deceleration.

    “Instead of pushing you forward, retropropulsion engines slow you down, like brakes,” Korzun said.

    Led by Eric Nielsen, a senior research scientist at NASA Langley, a team of scientists and engineers including Korzun is using Summit, the world’s fastest supercomputer at the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL), to simulate retropropulsion for landing humans on Mars.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

A vehicle delivering humans to Mars will weigh much more than rovers like Curiosity, which have been successfully deployed. Landing the heavier craft is an engineering challenge. Courtesy NASA.

    “We’re able to demonstrate pretty revolutionary performance on Summit relative to what we were accustomed to with a conventional computing approach,” Nielsen said.

The team uses its computational fluid dynamics (CFD) code, FUN3D, to model the vehicle’s Martian descent. CFD applications use large systems of equations to simulate the small-scale interactions of fluids (including gases) during flow and turbulence—in this case, to capture the aerodynamic effects created by the landing vehicle and the atmosphere.
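As a toy illustration of what a CFD code discretizes, unrelated to FUN3D's actual unstructured-grid numerics, here is a first-order upwind scheme for the simplest model flow equation, 1D linear advection (u_t + c*u_x = 0):

```python
def upwind_advect(u, c, dx, dt, steps):
    """First-order upwind scheme for 1D linear advection, c > 0.
    A deliberately tiny stand-in for the huge nonlinear systems a
    production CFD code solves on billions of grid elements."""
    assert 0 < c * dt / dx <= 1.0       # CFL stability condition
    u = list(u)
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u)):
            new[i] = u[i] - c * dt / dx * (u[i] - u[i - 1])
        new[0] = u[0]                   # fixed inflow boundary
        u = new
    return u

# a step profile is carried to the right; with c*dt/dx == 1 the
# scheme reduces to an exact one-cell shift per step
u0 = [1.0] * 5 + [0.0] * 5
u1 = upwind_advect(u0, c=1.0, dx=1.0, dt=1.0, steps=3)
```

A production code solves the compressible Navier-Stokes equations with turbulence models on unstructured 3D grids, but the same pattern holds: update every cell from its neighbors, every timestep, under a stability constraint, which is why billions of elements demand a machine like Summit.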

    “FUN3D and the computing capability itself have been completely game-changing, allowing us to move forward with technology development for retropropulsion, which has applications on Earth, the Moon, and Mars,” Korzun said.

    Sticking the landing

    NASA has already successfully deployed eight landers on Mars, including mobile science laboratories equipped with cameras, sensors, and communications devices—and researchers are familiar with the planet’s other-worldly challenges.

    The Martian atmosphere is about 100 times thinner (less dense) than Earth’s, which results in a speedy descent from orbit—about 6 to 7 minutes rather than the 35- to 40-minute reentry time for Earth.

    “We can’t match all of the relevant physics in ground or flight testing on Earth, so we’re very reliant on computational capability,” Korzun said. “This is really the first opportunity—at this level of fidelity and resolution—that we’ve been able to see what happens to the vehicle as it slows down with its engines on.”

    During retropropulsion, the vehicle is sensitive to large variations in aerodynamic forces, which can impact engine performance and the crew’s ability to control and land the vehicle at a targeted location.

Snapshot of total temperature distribution at supersonic speed. Total temperature allows researchers to visualize the extent of the exhaust plumes, which are much hotter than the surrounding atmosphere. Courtesy NASA.

    The team needs a powerful supercomputer like the 200-petaflop Summit to simulate the entire vehicle as it navigates a range of atmospheric and engine conditions.

    To predict what will happen in the Martian atmosphere and how the engines should be designed and controlled for the crew’s success and safety, researchers need to investigate unsteady and turbulent flows across length and time scales—from centimeters to kilometers and from fractions of a second to minutes.

    To accurately replicate these faraway conditions, the team must model the large dimensions of the lander and its engines, the local atmospheric conditions, and the conditions of the engines along the descent trajectory.

    On Summit, the team is modeling the lander at multiple points in its 6- to 7-minute descent. To characterize the flow behaviors across speeds ranging from supersonic to subsonic, researchers run ensembles (suites of individual simulations) to resolve fluid dynamics at a resolution of up to 10 billion elements with as much as 200 terabytes of information stored per run.

    “One of the primary benefits of Summit for us is the sheer speed of the machine,” Nielsen said.

    Celestial speed

    Nielsen’s team spent several years optimizing FUN3D—a code that has advanced aerodynamic modeling for several decades—for new GPU technology using CUDA, a programming platform that serves as an intermediary between GPUs and traditional programming languages like C++.

    By leveraging the speed of Summit’s GPUs, Nielsen’s team reports a 35-times increase in performance per compute node.

    “We would typically wait 5 to 6 months to get an equivalent answer using CPU technology in a capacity environment, meaning lots of smaller runs. On Summit, we’re getting those answers in about 4 to 5 days,” he said. “Moreover, Summit enables us to perform 5 or 6 such simulations simultaneously, ultimately reducing turnaround time from 2 or 3 years to a work week.”
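Taking midpoints of the quoted ranges shows the numbers are self-consistent: roughly a 37x wall-clock speedup, and with several simulations running concurrently the campaign shrinks to about a work week. The midpoint values below are assumptions made only for this check, not measured figures.

```python
# Midpoints of the ranges quoted in the article
cpu_months = 5.5          # "5 to 6 months" per answer on CPUs
gpu_days = 4.5            # "4 to 5 days" per answer on Summit

speedup = cpu_months * 30 / gpu_days       # ~37x wall-clock

concurrent = 5.5                           # "5 or 6" simultaneous runs
campaign_cpu_years = 2.5                   # "2 or 3 years" of CPU turnaround
campaign_days = campaign_cpu_years * 365 / speedup / concurrent
# comes out near a work week, matching the quote
```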

    The research team includes visualization specialists at NASA’s Ames Research Center, who take the quantitative data and transform it into an action shot of what is happening.

    “The visualization is a big takeaway from the Summit capability, which has enabled us to capture very small flow structures as well as really large flow structures,” Korzun said. “I can see what is happening right at the rocket engine nozzle exit, as well as tens of meters ahead in the direction the vehicle is traveling.”

    As the team members continue to collect new Summit data, they are thinking about the next steps to designing a human exploration vehicle for Mars.

    “Even though we are returning to the Moon, NASA’s long-term objective is the human exploration of the surface of Mars. These results are informing testing, such as wind tunnel testing, that we’ll be doing in the next couple of years,” Korzun said. “So this data will be useful for a very long time.”

See the full article here.



    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 4:01 pm on October 26, 2019
    Tags: "Cold hard data: ORNL data scientists support historic Arctic expedition", ARM's instruments are expected to produce about 250 terabytes (TB) of data, Krassovski represents the DOE's Atmospheric Radiation Measurement (ARM) user facility, Misha Krassovski a computer scientist at ORNL is one of the 60-some scientific personnel who embarked on the first leg of the largest polar expedition of all time, MOSAiC is the largest polar expedition of all time and will produce demanding quantities of data, ORNL, ORNL staff in the field and the lab collect store and process the data to share with collaborators around the world, The goal is to have the data processed and accessible within a week, The instruments that ARM provided for MOSAiC will help create the most detailed record of Arctic atmosphere ever, The Polarstern a German research vessel has settled into the ice for a yearlong float

From Oak Ridge National Laboratory: “Cold, hard data: ORNL data scientists support historic Arctic expedition” 


    October 25, 2019

    Misha B Krassovski


    MOSAiC, the largest polar expedition of all time, will produce demanding quantities of data. ORNL staff in the field and the lab collect, store and process it to share with collaborators around the world.

    In the vast frozen whiteness of the central Arctic, the Polarstern, a German research vessel, has settled into the ice for a yearlong float.

    When the ship arrived at a scrupulously chosen ice floe in early October, and the dark sea water lapping against its hull began to freeze—locking it into place—passengers on board celebrated by venturing onto the ice. Some took photos and even kicked a soccer ball around the location of their new home. For the better part of a year, a web of structures and instruments will sprawl out from the ship to form a research camp—the northernmost little city in the world.

    Misha Krassovski, a computer scientist who works at the Department of Energy’s Oak Ridge National Laboratory, joined the festivities, but only for about 10 minutes.

    Then he scrambled back on board, into a ribbed metal shipping container holding much of the ship’s network systems. Some of the equipment needed attention.

Krassovski is one of the 60-some scientific personnel who embarked on the first leg of the largest polar expedition of all time, called the Multidisciplinary Drifting Observatory for the Study of Arctic Climate, or MOSAiC. During the yearlong expedition, the Polarstern will drift through the Arctic, frozen in ice, as around 600 experts from collaborating institutions around the world rotate on board to study the Arctic climate system, the planet’s most rapidly warming region.

    Krassovski represents one of those institutions—DOE’s Atmospheric Radiation Measurement (ARM) user facility. ARM is a resource managed by nine national laboratories that enables climate and atmospheric research through its permanent observatories and, in the case of MOSAiC, its mobile campaigns, instruments and data infrastructure.

His job was to set up ARM’s central computer—the “site data system”—and to make sure data stream to it flawlessly from the more than 50 instruments ARM has provided for the mission. Those data will be shipped periodically to ARM’s Data Center, located at ORNL, where they’ll be freely accessible to anyone.

    “Data center in a can”

    The instruments that ARM provided for MOSAiC will help create the most detailed record of Arctic atmosphere ever. They’ll collect data on parameters such as aerosol concentrations, precipitation and humidity, to name a few.

    “You’ve got your instruments in the field, and you need systems to communicate with those instruments to pull the data off. We’re responsible for making sure that those systems are online,” said ORNL’s Cory Stuart, who manages all the site data systems for ARM’s mobile campaigns. “I’ve heard people say we’ve got a data center in a can.”

    During the course of the MOSAiC expedition, ARM’s instruments are expected to produce about 250 terabytes (TB) of data. For context, many newer laptops can store around one TB.

    “It’s like 250 times your typical laptop,” ARM Data Center director Giri Prakash said. “It’s quite a bit of diverse data, and we are fully ready to handle it.”

    This isn’t too unusual for ARM, which boasts 1.8 petabytes (around 1,800 times your laptop) of atmospheric data in its collection and regularly handles large sets of data from the field. The challenge, in this case, is that the treacherous environmental conditions in the Arctic will make it more difficult to transfer much of that information before the campaign is complete. While the plan does include shipping data back to the U.S. on disks throughout the deployment, ARM still must be prepared to store all 250 TB on the onsite system to minimize the chance of losing any data.

The onsite system is a set of servers that occupies one rack, which stands about 6 feet tall by 2 feet wide and includes a storage array with 96 hard drives. Data from all the instruments flow via network or serial communication channels to the instrument computers, which are either physical systems, like a laptop, or virtual machines—software-emulated computers running on servers. From there, the site data system pulls the data into the local storage system.
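A quick sizing check on that storage array: spreading 250 TB across 96 drives needs about 2.6 TB per drive raw, and somewhat more once redundancy overhead is included. The parity layout below is an assumed example for illustration; the article does not describe the actual redundancy scheme.

```python
# Sizing check for the onsite storage array
total_tb = 250
drives = 96

raw_per_drive = total_tb / drives          # minimum capacity, no redundancy
usable_fraction = 10 / 12                  # assumed: 2 parity drives per 12
needed_per_drive = total_tb / (drives * usable_fraction)
# ~2.6 TB raw, ~3.1 TB with the assumed parity overhead
```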

    The ARM team did their homework to ensure a smooth setup. They tested the system at Los Alamos National Laboratory months before the expedition began and again, dockside, just before the Polarstern set sail in September. The goal was to eliminate any surprises.

    “My hope for him [Krassovski] was that it would be a really cool experience, but that he’d be really bored,” Stuart said, smiling. “Because with those data systems the hope is that they come up, and they run, and you don’t have to do much.”

    Krassovski didn’t have time to get bored. In addition to troubleshooting the network systems, he kept busy helping with cable support poles, shelters, tents, flags and other items that must be installed before scientists can set up equipment and begin taking measurements. One day he helped build 250 supports for electrical cables that will run across the ice.

    “It all requires a lot of people, and volunteers are always appreciated,” Krassovski said.

    That supporting attitude is what landed him a spot on the Polarstern in the first place: Krassovski normally does not work with ARM. He volunteered for MOSAiC when conflicting schedules prevented Stuart and other members of the ARM team from going. Though he’s done similar work for ORNL’s Environmental Sciences Division in other frigid locations, such as northern Minnesota and Alaska, jumping in with a different group meant learning an entirely new data system very quickly.

    “This is a fantastic example of inter-program collaboration,” Stuart said. “Misha [Krassovski] is a rock star.”

    Sharing with the world

    Krassovski is currently aboard another research icebreaker headed back to port in Tromsø, Norway. When he arrives at the end of October, the ARM Data Center’s involvement in MOSAiC will be far from over. Once he delivers the first USB hard drives to Oak Ridge, the goal is to have the data processed and accessible within a week.

    “As soon as it gets here, we do all the processing and make it available as quickly as possible,” Prakash said. “We are ready for that, and we practiced it.”

    While ARM data are readily accessible to scientists and other users worldwide, Prakash has been working with other international collaborators, such as the Alfred Wegener Institute, the German institution leading the expedition, to increase the visibility of the data to all MOSAiC participants.

    “We are prepared and excited to do our job so the researchers can do their wonderful science,” Prakash said.

    MOSAiC is supported by DOE’s Office of Science through ARM, a DOE Office of Science user facility, as well as through partial direct funding for the campaign.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 10:17 am on October 14, 2019 Permalink | Reply
    Tags: "Supercomputing the Building Blocks of the Universe", , , ORNL, ,   

    From insideHPC: “Supercomputing the Building Blocks of the Universe” 

    From insideHPC

    October 13, 2019

    In this special guest feature, ORNL profiles researcher Gaute Hagen, who uses the Summit supercomputer to model scientifically interesting atomic nuclei.

    Gaute Hagen uses ORNL’s Summit supercomputer to model scientifically interesting atomic nuclei. To validate models, he and other physicists compare computations with experimental observations. Credit: Carlos Jones/ORNL

    At the nexus of theory and computation, physicist Gaute Hagen of the Department of Energy’s Oak Ridge National Laboratory runs advanced models on powerful supercomputers to explore how protons and neutrons interact to “build” an atomic nucleus from scratch. His fundamental research improves predictions about nuclear energy, nuclear security and astrophysics.

    “How did matter that forms our universe come to be?” asked Hagen. “How does matter organize itself based on what we know about elementary particles and their interactions? Do we fully understand how these particles interact?”

    The lightest nuclei, hydrogen and helium, formed during the Big Bang. Heavier elements, up to iron, are made in stars by progressively fusing those lighter nuclei. The heaviest nuclei form in extreme environments when lighter nuclei rapidly capture neutrons and undergo beta decays.

    For example, building nickel-78, a neutron-rich nucleus that is especially strongly bound, or “doubly magic,” requires 28 protons and 50 neutrons interacting through the strong force. “To solve the Schrödinger equation for such a huge system is a tremendous challenge,” Hagen said. “It is only possible using advanced quantum mechanical models and serious computing power.”
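
    The "huge system" here is the quantum many-body problem. Schematically (this is the standard nuclear-physics form, not spelled out in the article):

```latex
% A-body Schroedinger equation for a nucleus with A = Z + N nucleons
% (A = 78 for nickel-78); V_ij and V_ijk are two- and three-nucleon forces.
\hat{H}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_A) = E\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_A),
\qquad
\hat{H} = \sum_{i=1}^{A}\frac{\hat{p}_i^{\,2}}{2m}
        + \sum_{i<j}\hat{V}_{ij}
        + \sum_{i<j<k}\hat{V}_{ijk}
```

    The size of the space the wave function lives in grows exponentially with the number of nucleons, which is why brute force fails for A = 78 and why the "advanced quantum mechanical models and serious computing power" Hagen mentions are essential.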

    Through DOE’s Scientific Discovery Through Advanced Computing program, Hagen participates in the NUCLEI project to calculate nuclear structure and reactions from first principles; its collaborators represent 7 universities and 5 national labs. Moreover, he is the lead principal investigator of a DOE Innovative and Novel Computational Impact on Theory and Experiment award of time on supercomputers at Argonne and Oak Ridge National Laboratories for computations that complement part of the physics addressed under NUCLEI.

    Theoretical physicists build models and run them on supercomputers to simulate the formation of atomic nuclei and study their structures and interactions. Theoretical predictions can then be compared with data from experiments at new facilities producing increasingly neutron-rich nuclei. If the observations are close to the predictions, the models are validated.

    ‘Random walk’

    “I never planned to become a physicist or end up at Oak Ridge,” said Hagen, who hails from Norway. “That was a random walk.”

    Graduating from high school in 1994, he planned to follow in the footsteps of his father, an economics professor, but his grades were not good enough to get into the top-ranked Norwegian School of Economics in Bergen. A year of mandatory military service in the King’s Guard gave Hagen fresh perspective on his life. At 20, he entered the University of Bergen and earned a bachelor’s degree in the philosophy of science. Wanting to continue for a doctorate, but realizing he lacked the math and science background that would aid his dissertation, he signed up for classes in those fields—and a scientist was born. He went on to earn a master’s degree in nuclear physics.

    Entering a PhD program, he used pen and paper or simple computer codes for calculations of the Schrödinger equation pertaining to two or three particles. One day his advisor introduced him to University of Oslo professor Morten Hjorth-Jensen, who used advanced computing to solve physics problems.

    “The fact that you could use large clusters of computers in parallel to solve for several tens of particles was intriguing to me,” Hagen said. “That changed my whole perspective on what you can do if you have the right resources and employ the right methods.”

    Hagen finished his graduate studies in Oslo, working with Hjorth-Jensen and taking his computing class. In 2005, collaborators of his new mentor—ORNL’s David Dean and the University of Tennessee’s Thomas Papenbrock—sought a postdoctoral fellow. A week after receiving his doctorate, Hagen found himself on a plane to Tennessee.

    For his work at ORNL, Hagen used a numerical technique to describe systems of many interacting particles, such as atomic nuclei containing protons and neutrons. He collaborated with experts worldwide who were specializing in different aspects of the challenge and ran his calculations on some of the world’s most powerful supercomputers.

    “Computing had taken such an important role in the work I did that having that available made a big difference,” he said. In 2008, he accepted a staff job at ORNL.

    That year Hagen found another reason to stay in Tennessee—he met the woman who became his wife. She works in TV production and manages a vintage boutique in downtown Knoxville.

    Hagen, his wife and stepson spend some vacations at his father’s farm by the sea in northern Norway. There the physicist enjoys snowboarding, fishing and backpacking, “getting lost in remote areas, away from people, where it’s quiet and peaceful. Back to the basics.”

    Summiting

    Hagen won a DOE early career award in 2013. Today, his research employs applied mathematics, computer science and physics, and the resulting descriptions of atomic nuclei enable predictions that guide earthly experiments and improve understanding of astronomical phenomena.

    A central question he is trying to answer is: What is the size of a nucleus? The difference between the radii of neutron and proton distributions—called the “neutron skin”—has implications for the equation of state of neutron matter and neutron stars.
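
    Concretely, the neutron skin thickness is conventionally defined as the difference of root-mean-square radii of the two distributions (the standard definition; the article gives only the informal description):

```latex
\Delta R_{\mathrm{skin}}
  = \sqrt{\langle r_n^{2}\rangle} - \sqrt{\langle r_p^{2}\rangle}
```

    In neutron-rich nuclei such as calcium-48 the excess neutrons push this quantity positive, and its size constrains how stiff neutron matter is.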

    In 2015, a team led by Hagen predicted properties of the neutron skin of the calcium-48 nucleus; the results were published in Nature Physics. In progress or planned are experiments by others to measure various neutron skins. The COHERENT experiment at ORNL’s Spallation Neutron Source did so for argon-40 by measuring how neutrinos—particles that interact only weakly with nuclei—scatter off of this nucleus. Studies of parity-violating electron scattering on lead-208 and calcium-48—topics of the PREX2 and CREX experiments, respectively—are planned at Thomas Jefferson National Accelerator Facility.

    One recent calculation in a study Hagen led solved a 50-year-old puzzle about why beta decays of atomic nuclei are slower than expected based on the beta decays of free neutrons. Other calculations explore isotopes to be made and measured at DOE’s Facility for Rare Isotope Beams, under construction at Michigan State University, when it opens in 2022.

    Hagen’s team has made several predictions about neutron-rich nuclei observed at experimental facilities worldwide. For example, 2016 predictions for the magicity of nickel-78 were confirmed at RIKEN in Japan and published in Nature this year. Now the team is developing methods to predict behavior of neutron-rich isotopes beyond nickel-78 to find out how many neutrons can be added before a nucleus falls apart.

    “Progress has exploded in recent years because we have methods that scale more favorably with the complexity of the system, and we have ever-increasing computing power,” Hagen said. At the Oak Ridge Leadership Computing Facility, he has worked on Jaguar (1.75 peak petaflops), Titan (27 peak petaflops) and Summit [above] (200 peak petaflops) supercomputers. “That’s changed the way that we solve problems.”

    ORNL OLCF Jaguar Cray Linux supercomputer

    ORNL Cray XK7 Titan Supercomputer, once the fastest in the world, to be decommissioned

    His team currently calculates the probability of a process called neutrino-less double-beta decay in calcium-48 and germanium-76. This process has yet to be observed but if seen would imply the neutrino is its own anti-particle and open a path to physics beyond the Standard Model of Particle Physics.

    Looking to the future, Hagen eyes “superheavy” elements—lead-208 and beyond. Superheavies have never been simulated from first principles.

    “Lead-208 pushes everything to the limits—computing power and methods,” he said. “With this next generation computer, I think simulating it will be possible.”

    See the full article here .


    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 2:09 pm on October 2, 2019 Permalink | Reply
    Tags: In their experiments the scientists first used genomic data from a group of human oral bacteria TM7 or Saccharibacteria., Mircea Podar, ORNL, Podar and his team developed a method that relies on antibody engineering to identify and isolate specific microbes from complex human oral microbiome samples, The grand challenge of uncultivated microbial “dark matter” in which the vast majority of microorganisms remain unstudied in the laboratory., The method leverages what scientists already know about predicting which proteins in a target microbe are typically located on a cell’s membrane., We can answer fundamental questions about many uncultured microbes if we can culture them.   

    From Oak Ridge National Laboratory: “ORNL scientists shed light on microbial ‘dark matter’ with new approach” 


    From Oak Ridge National Laboratory

    September 30, 2019
    Sara S Shoemaker
    shoemakerms@ornl.gov
    865-576-9219

    Mircea Podar

    Scientists at the U.S. Department of Energy’s Oak Ridge National Laboratory have demonstrated a way to isolate and grow targeted bacteria using genomic data, making strides toward resolving the grand challenge of uncultivated microbial “dark matter” in which the vast majority of microorganisms remain unstudied in the laboratory.

    Despite the importance of microorganisms to environmental and human health, only about half of the microbes within the human body have been grown in the lab for experimentation, while only a tiny fraction from most open environments have been cultured. Microbes can be extraordinarily difficult to grow simply because so little is known about them.

    Over the past 20 years, scientists have made strides in understanding microbial life by sequencing the genome of microbes sampled in the field. However, extracting that genetic material kills the organism. In addition to inferring the characteristics of microbes from sequence data, scientists want to be able to study live organisms and prove theories about their form and function.

    “One may think that we should be able to figure out what a microbe is and how to grow it just from the sequence data. But the problem is that we still don’t know how to read a lot of that information. It’s an enormous, multi-dimensional puzzle. You can make hypotheses, but until you can culture that organism you can only speculate at its physiology,” said ORNL microbiologist Mircea Podar.

    Podar and his team developed a method that relies on antibody engineering to identify and isolate specific microbes from complex human oral microbiome samples, as outlined in Nature Biotechnology.


    The method leverages what scientists already know about predicting which proteins in a target microbe are typically located on a cell’s membrane. Computational structure modeling was used to predict regions in those proteins that can serve as antigens. The scientists then generated antibodies that naturally seek out and bind to those specific antigens, and added a fluorescent tag that, when illuminated, identified the target cells, which were then successfully isolated.

    Reverse genomics method leads to success

    The ORNL researchers did not have a natural source for the antigens since the microbes had not been previously grown. So they used sequence data to help create the antigens—essentially reversing the order of how genomic information is typically used in microbiology, Podar explained.

    In their experiments, the scientists first used genomic data from a group of human oral bacteria, TM7 or Saccharibacteria, previously associated with periodontal and inflammatory bowel disease. They successfully isolated three different species of TM7. In follow-up work, they also isolated a representative of a different uncultivated bacterial group, SR1, using the same strategy.

    “It’s been more than a decade since we began sequencing of uncultured microbes and found that while some are easy to grow, most are difficult. As a result, some scientists have become resigned to just relying on sequencing to analyze microbes,” Podar said. “This is the first approach that lets us be selective as to which microbes we target, and it doesn’t matter how few may be in the sample. It should be universally applicable to any microbial environment.”

    “We can answer fundamental questions about many uncultured microbes if we can culture them,” Podar said. In his self-described work of “growing wild things,” Podar has cultivated a Yellowstone microbe that grows symbiotically with another microbe in an acidic, near-boiling hot spring. His work has also contributed to the recent discovery of two genes that enable microbes to convert mercury into toxic methylmercury.

    Cultivating knowledge of form and function

    The next frontier is to expand cultivation on the many other lineages of life that we know about only from sequence data, Podar said. “Cultivation of multiple microbes at the same time is crucial. Then we can see who is interacting with whom, how they communicate with each other and relate to each other as helpers or in competition, for instance. These are things we cannot get from sequencing, and it’s one of the aspects we found about the TM7 in the oral microbiome.”

    Among the vital questions the scientists want to answer are how microbes sometimes form symbionts for survival, and why some thrive in certain conditions while others don’t. “We want to understand how specific microbes evolved and what they may be doing for various microbiomes,” Podar said. “We could apply that knowledge to human health, to how microbes benefit plants, and to microbes that could help us clean up contaminated areas.”

    The work was funded by the National Institute of Dental and Craniofacial Research of the U.S. National Institutes of Health, as well as by ORNL’s laboratory-directed research program and a National Science Foundation Graduate Research Fellowship. The researchers used the Compute and Data Environment for Science high-performance computing resource at ORNL.

    See the full article here .



     
  • richardmitnick 2:11 pm on September 24, 2019 Permalink | Reply
    Tags: , , CADES, ORNL,   

    From Oak Ridge National Laboratory: “ORNL develops, deploys AI capabilities across research portfolio” 


    From Oak Ridge National Laboratory

    September 24, 2019

    Scott S Jones
    jonesg@ornl.gov
    865-241-6491

    Processes like manufacturing aircraft parts, analyzing data from doctors’ notes and identifying national security threats may seem unrelated, but at the U.S. Department of Energy’s Oak Ridge National Laboratory, artificial intelligence is improving all of these tasks. To accelerate promising AI applications in diverse research fields, ORNL has established a labwide AI Initiative, and its success will help to ensure U.S. economic competitiveness and national security.

    Led by ORNL AI Program Director David Womble, this internal investment brings the lab’s AI expertise, computing resources and user facilities together to facilitate analyses of massive datasets that would otherwise be unmanageable. Multidisciplinary research teams are advancing AI and high-performance computing to tackle increasingly complex problems, including designing novel materials, diagnosing and treating diseases and enhancing the cybersecurity of U.S. infrastructure.

    “AI has the potential to revolutionize science and engineering, and it is exciting to be part of this,” Womble said. “With its world-class scientists and facilities, ORNL will make significant contributions.”

    Across the lab, experts in data science are applying AI tools known as machine learning algorithms (which allow computers to learn from data and predict outcomes) and deep learning algorithms (which use neural networks inspired by the human brain to uncover patterns of interest in datasets) to accelerate breakthroughs across the scientific spectrum. As part of the initiative, ORNL researchers are developing new technologies to complement and expand these capabilities, establishing AI as a force for improving both fundamental and applied science applications.
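
    As a toy illustration of "learn from data and predict outcomes" (not any specific ORNL algorithm), the core loop shared by classical machine learning and deep learning is gradient descent on a model's parameters:

```python
# Toy "learning from data": fit y = w * x to example pairs by gradient
# descent on mean squared error. Illustrative only, not an ORNL method.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # generated from y = 2x
w = 0.0                                        # model parameter to learn
for _ in range(200):
    # gradient of mean((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad                           # take a small step downhill
print(round(w, 3))                             # -> 2.0, the recovered slope
```

    Deep learning runs this same loop over millions of parameters arranged in neural-network layers, which is where hardware like Summit's pays off.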

    Home to the world’s most powerful and smartest supercomputer, Summit, ORNL is particularly well-suited for AI research.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    The IBM system debuted in June 2018 and resides at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility located at ORNL.

    With hardware optimized for AI applications, Summit provides an ideal platform for applying machine learning and deep learning to groundbreaking research. The system’s increased memory bandwidth allows AI algorithms to run at faster speeds and obtain more accurate results.

    Other AI-enabled machines include the NVIDIA DGX-2 systems located at ORNL’s Compute and Data Environment for Science.


    These appliances allow researchers to tackle data-intensive problems using unique AI strategies and to run smaller-scale simulations in preparation for later work on Summit.

    “AI is rapidly changing the way computational scientists do research, and ORNL’s history of leadership in computing and data makes it the perfect setting in which to advance the state of the art,” said Associate Laboratory Director for Computing and Computational Sciences Jeff Nichols. “While Summit’s rapid training of AI networks is already assisting researchers across the scientific spectrum in realizing the potential of AI, we have begun preparing for the post-Summit world via Frontier, a second-generation AI system that will provide new capabilities for machine learning, deep learning and data analytics.”

    Although ORNL researchers are applying the lab’s unique combination of AI expertise and powerful computing resources to address a range of scientific challenges, three areas in particular are poised to deliver major early results: additive manufacturing, health care and cyber-physical security.

    Additive manufacturing, or 3D printing, enables researchers at the Manufacturing Demonstration Facility, a DOE Office of Energy Efficiency and Renewable Energy User Facility located at ORNL, to develop reliable, energy-efficient plastic and metal parts at low cost. Using AI, they can consistently create high-quality, specialized aerospace components. AI can instantly locate cracks and other defects before they become problems, thereby reducing costs and time to market.

    Additionally, AI makes it possible for the machines to detect and repair errors in real time during the process of binder jetting, in which a liquid binding agent fuses together layers of powder particles.

    Researchers at ORNL are also optimizing AI techniques to analyze patient data from medical tests, doctors’ notes and other health records. These techniques use language processing to identify patterns among notes from different doctors, extracting previously inaccessible insights from mountains of data. When combined with results from X-rays and other relevant tests, these results could improve health care providers’ ability to diagnose and treat problems ranging from post-traumatic stress disorder to cancer.

    For example, ORNL Health Data Sciences Institute Director Gina Tourassi uses AI to automatically compile and analyze data and determine which factors are responsible for the development of certain diseases. Her team is running machine learning algorithms on Summit to scan millions of medical documents in pursuit of these types of insights.

    Cybersecurity platforms such as “Situ” monitor thousands of events per second to detect anomalies that human analysts would not be able to find. Situ sorts through massive amounts of raw network data, freeing up network operators to focus on small, manageable amounts of activity to investigate potential threats and make more informed decisions.
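
    A heavily simplified sketch of this kind of streaming anomaly detection. Situ's actual algorithms are not described in the article; the running z-score rule and class below are illustrative only.

```python
# Illustrative streaming anomaly detector: flag values that deviate from
# the running mean by more than k standard deviations, using Welford's
# online algorithm so no event history needs to be stored.
import math

class RunningAnomalyDetector:
    def __init__(self, k: float = 3.0):
        self.k, self.n, self.mean, self.m2 = k, 0, 0.0, 0.0

    def update(self, x: float) -> bool:
        """Return True if x is anomalous relative to the history so far."""
        anomalous = False
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            anomalous = std > 0 and abs(x - self.mean) > self.k * std
        # Welford's online update of mean and variance
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous
```

    Fed a steady stream of event counts near 100 per second, the detector stays quiet; a sudden burst of 500 trips the threshold, which is the kind of narrowing-down that lets operators focus on a manageable slice of activity.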

    And through partnerships with power companies, ORNL has also used AI to improve the security of power grids by monitoring data streams and identifying suspicious activity.

    To date, ORNL researchers have earned two R&D 100 Awards and 10 patents for work related to AI research and algorithm development. The lab plans to recruit additional AI experts to continue building on this foundation.

    To ensure that U.S. researchers maintain leadership in R&D innovation and continue revolutionizing science with AI, ORNL also provides professional development opportunities including the Artificial Intelligence Summer Institute, which pairs students with ORNL researchers to solve science problems using AI, and the Data Learning Users Group, which allows OLCF users and ORNL staff to practice using deep learning techniques.

    ORNL also collaborates with the University of Tennessee, Knoxville, to support the Bredesen Center Ph.D. program in data science and engineering, a curriculum that combines data science with scientific specialties ranging from materials science to national security.

    See the full article here .



     
  • richardmitnick 10:27 am on May 7, 2019 Permalink | Reply
    Tags: , , , , ORNL, , , , TRC- Translational Research Capability   

    From Oak Ridge National Laboratory: “New research facility will serve ORNL’s growing mission in computing, materials R&D” 


    From Oak Ridge National Laboratory

    May 7, 2019
    Bill H Cabage
    cabagewh@ornl.gov
    865-574-4399

    Pictured in this early conceptual drawing, the Translational Research Capability planned for Oak Ridge National Laboratory will follow the design of research facilities constructed during the laboratory’s modernization campaign.

    Energy Secretary Rick Perry, Congressman Chuck Fleischmann and lab officials today broke ground on a multipurpose research facility that will provide state-of-the-art laboratory space for expanding scientific activities at the Department of Energy’s Oak Ridge National Laboratory.

    The new Translational Research Capability, or TRC, will be purpose-built for world-leading research in computing and materials science and will serve to advance the science and engineering of quantum information.

    “Through today’s groundbreaking, we’re writing a new chapter in research at the Translational Research Capability Facility,” said U.S. Secretary of Energy Rick Perry. “This building will be the home for advances in Quantum Information Science, battery and energy storage, materials science, and many more. It will also be a place for our scientists, researchers, engineers, and innovators to take on big challenges and deliver transformative solutions.”

    With an estimated total project cost of $95 million, the TRC, located in the central ORNL campus, will accommodate sensitive equipment, multipurpose labs, heavy equipment and inert environment labs. Approximately 75 percent of the facility will contain large, modularly planned and open laboratory areas with the rest as office and support spaces.

    “This research and development space will advance and support the multidisciplinary mission needs of the nation’s advanced computing, materials research, fusion science and physics programs,” ORNL Director Thomas Zacharia said. “The new building represents a renaissance in the way we carry out research, allowing more flexible alignment of our research activities to the needs of frontier research.”

    The flexible space will support the lab’s growing fundamental materials research to advance future quantum information science and computing systems. The modern facility will provide atomic fabrication and materials characterization capabilities to accelerate the development of novel quantum computing devices. Researchers will also use the facility to pursue advances in quantum modeling and simulation, leveraging a co-design approach to develop algorithms along with prototype quantum systems.

    The new laboratories will provide noise isolation, electromagnetic shielding and low vibration environments required for multidisciplinary research in quantum information science as well as materials development and performance testing for fusion energy applications. The co-location of the flexible, modular spaces will enhance collaboration among projects.

    At approximately 100,000 square feet, the TRC will be similar in size and appearance to another modern ORNL research facility, the Chemical and Materials Sciences Building, which was completed in 2011 and is located nearby.

    The facility’s design and location will also conform to sustainable building practices with an eye toward encouraging collaboration among researchers. The TRC will be centrally located in the ORNL main campus area on a brownfield tract that was formerly occupied by one of the laboratory’s earliest, Manhattan Project-era structures.

    ORNL began a modernization campaign shortly after UT-Battelle arrived in 2000 to manage the national laboratory. The new construction has enabled the laboratory to meet growing space and infrastructure requirements for rapidly advancing fields such as scientific computing while vacating legacy spaces with inherent high operating costs, inflexible infrastructure and legacy waste issues.

    The construction is supported by the Science Laboratory Infrastructure program of the DOE Office of Science.

    See the full article here .



     
  • richardmitnick 10:08 am on May 7, 2019 Permalink | Reply
    Tags: AMD Radeon, , DOE’s Exascale Computing Project, ORNL, ORNL Cray Frontier Shasta based Exascale supercomputer   

    From Oak Ridge National Laboratory: “U.S. Department of Energy and Cray to Deliver Record-Setting Frontier Supercomputer at ORNL” 


    From Oak Ridge National Laboratory

    May 7, 2019
    Morgan L McCorkle
    mccorkleml@ornl.gov
    865-574-7308

    Exascale system expected to be world’s most powerful computer for science and innovation.

    The U.S. Department of Energy today announced a contract with Cray Inc. to build the Frontier supercomputer at Oak Ridge National Laboratory, which is anticipated to debut in 2021 as the world’s most powerful computer with a performance of greater than 1.5 exaflops.

    ORNL Cray Frontier Shasta based Exascale supercomputer with Slingshot interconnect featuring high-performance AMD EPYC CPU and AMD Radeon Instinct GPU technology


    Scheduled for delivery in 2021, Frontier will accelerate innovation in science and technology and maintain U.S. leadership in high-performance computing and artificial intelligence. The total contract award is valued at more than $600 million for the system and technology development. The system will be based on Cray’s new Shasta architecture and Slingshot interconnect and will feature high-performance AMD EPYC CPU and AMD Radeon Instinct GPU technology.

    By solving calculations up to 50 times faster than today’s top supercomputers—exceeding a quintillion, or 10^18, calculations per second—Frontier will enable researchers to deliver breakthroughs in scientific discovery, energy assurance, economic competitiveness, and national security. As a second-generation AI system—following the world-leading Summit system deployed at ORNL in 2018—Frontier will provide new capabilities for deep learning, machine learning and data analytics for applications ranging from manufacturing to human health.
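    As a rough sanity check on these figures (illustrative arithmetic only; the "up to 50 times" claim compares Frontier against the broader field of then-current top systems, not Summit alone): 1.5 exaflops is 1.5 × 10^18 operations per second, about 7.5 times Summit's 200-petaflop figure and more than 50 times Titan's 27 petaflops.

    ```python
    # Illustrative arithmetic only, using peak figures quoted publicly.
    FRONTIER_FLOPS = 1.5e18   # Frontier target: > 1.5 exaflops
    SUMMIT_FLOPS = 200e15     # Summit: ~200 petaflops
    TITAN_FLOPS = 27e15       # Titan: ~27 petaflops

    print(f"Frontier vs. Summit: {FRONTIER_FLOPS / SUMMIT_FLOPS:.1f}x")  # 7.5x
    print(f"Frontier vs. Titan:  {FRONTIER_FLOPS / TITAN_FLOPS:.0f}x")   # 56x
    ```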

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    “Frontier’s record-breaking performance will ensure our country’s ability to lead the world in science that improves the lives and economic prosperity of all Americans and the entire world,” said U.S. Secretary of Energy Rick Perry. “Frontier will accelerate innovation in AI by giving American researchers world-class data and computing resources to ensure the next great inventions are made in the United States.”

    Since 2005, Oak Ridge National Laboratory has deployed Jaguar, Titan, and Summit [above], each the world’s fastest computer in its time.

    ORNL OCLF Jaguar Cray Linux supercomputer

    ORNL Cray XK7 Titan Supercomputer, once the fastest in the world, now No.9 on the TOP500

    The combination of traditional processors with graphics processing units to accelerate the performance of leadership-class scientific supercomputers is an approach pioneered by ORNL and its partners and successfully demonstrated through ORNL’s No.1 ranked Titan and Summit supercomputers.

    “ORNL’s vision is to sustain the nation’s preeminence in science and technology by developing and deploying leadership computing for research and innovation at an unprecedented scale,” said ORNL Director Thomas Zacharia. “Frontier follows the well-established computing path charted by ORNL and its partners that will provide the research community with an exascale system ready for science on day one.”

    Researchers with DOE’s Exascale Computing Project are developing exascale scientific applications today on ORNL’s 200-petaflop Summit system and will seamlessly transition their scientific applications to Frontier in 2021. In addition, the lab’s Center for Accelerated Application Readiness is now accepting proposals from scientists to prepare their codes to run on Frontier.

    Researchers will harness Frontier’s powerful architecture to advance science in such applications as systems biology, materials science, energy production, additive manufacturing and health data science. Visit the Frontier website to learn more about what researchers plan to accomplish in these and other scientific fields.

    Frontier will offer best-in-class traditional scientific modeling and simulation capabilities while also leading the world in artificial intelligence and data analytics. Closely integrating artificial intelligence with data analytics and modeling and simulation will drastically reduce the time to discovery by automatically recognizing patterns in data and guiding simulations beyond the limits of traditional approaches.

    “We are honored to be part of this historic moment as we embark on supporting extreme-scale scientific endeavors to deliver the next U.S. exascale supercomputer to the Department of Energy and ORNL,” said Peter Ungaro, president and CEO of Cray. “Frontier will incorporate foundational new technologies from Cray and AMD that will enable the new exascale era—characterized by data-intensive workloads and the convergence of modeling, simulation, analytics, and AI for scientific discovery, engineering and digital transformation.”

    Frontier will incorporate several novel technologies co-designed specifically to deliver a balanced scientific capability for the user community. The system will be composed of more than 100 Cray Shasta cabinets with high-density compute blades powered by HPC- and AI-optimized AMD EPYC processors and Radeon Instinct GPU accelerators purpose-built for the needs of exascale computing. The new accelerator-centric compute blades will support a 4:1 GPU-to-CPU ratio, with high-speed AMD Infinity Fabric links and coherent memory between them within the node. Each node will have one Cray Slingshot interconnect network port for every GPU, streamlining communication between the GPUs and the network to enable optimal performance for high-performance computing and AI workloads at exascale.
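    The per-node layout described above can be captured in a short sketch (public figures only; the field names are my own, and per-blade and per-cabinet counts were not disclosed in this announcement):

    ```python
    # Sketch of Frontier's stated per-node configuration (announcement figures
    # only; names here are illustrative, not Cray/AMD terminology).
    from dataclasses import dataclass

    @dataclass
    class FrontierNode:
        cpus: int = 1             # one AMD EPYC CPU per node
        gpus: int = 4             # four AMD Radeon Instinct GPUs (4:1 ratio)
        slingshot_ports: int = 4  # one Slingshot network port per GPU

    node = FrontierNode()
    assert node.gpus == 4 * node.cpus         # the stated 4:1 GPU-to-CPU ratio
    assert node.slingshot_ports == node.gpus  # one network port per GPU
    ```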

    To make this performance seamless for developers to harness, Cray and AMD are co-designing and developing enhanced GPU programming tools optimized for performance, productivity and portability. These will include new capabilities in the Cray Programming Environment and AMD’s ROCm open compute platform, integrated together into the Cray Shasta software stack for Frontier.

    “AMD is proud to be working with Cray, Oak Ridge National Laboratory and the Department of Energy to push the boundaries of high performance computing with Frontier,” said Lisa Su, AMD president and CEO. “Today’s announcement represents the power of collaboration between private industry and public research institutions to deliver groundbreaking innovations that scientists can use to solve some of the world’s biggest problems.”

    Frontier leverages a decade of exascale technology investments by DOE. The contract award includes technology development funding, a center of excellence, several early-delivery systems, the main Frontier system, and multi-year systems support. The Frontier system is expected to be delivered in 2021, and acceptance is anticipated in 2022.

    Frontier will be part of the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility. ORNL is managed by UT–Battelle for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit https://science.energy.gov/.

    See the full article here.




  • richardmitnick 12:23 pm on April 12, 2019 Permalink | Reply
    Tags: "Scaling Deep Learning for Scientific Workloads on the #1 Summit Supercomputer", ORNL, ORNL Cray XK7 Titan Supercomputer once the fastest in the world now No.9 on the TOP500, ORNL IBM AC922 SUMMIT supercomputer No.1 on the TOP500

    From insideHPC: “Scaling Deep Learning for Scientific Workloads on the #1 Summit Supercomputer” 

    From insideHPC

    April 11, 2019
    Rich Brueckner


    In this video from GTC 2018, Jack Wells from ORNL presents: Scaling Deep Learning for Scientific Workloads on Summit.

    Jack Wells is the Director of Science for the Oak Ridge Leadership Computing Facility (OLCF).

    “HPC centers have traditionally been configured for simulation workloads, but deep learning is increasingly applied alongside simulation on scientific datasets. These frameworks do not always fit well with job schedulers, large parallel file systems, and MPI backends. We’ll discuss examples of how deep learning workflows are being deployed on next-generation systems at the Oak Ridge Leadership Computing Facility. We’ll share benchmarks comparing natively compiled code versus containers on Power systems like Summit, as well as best practices for deploying deep learning models on HPC resources for scientific workflows.”
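    Much of the scheduler and MPI-backend friction the abstract mentions centers on the allreduce step of data-parallel training, in which each worker's gradients are averaged across all ranks every iteration. A minimal pure-Python stand-in for that averaging (real workflows perform this over MPI or NCCL through frameworks such as Horovod):

    ```python
    # Minimal illustration of allreduce-style gradient averaging used in
    # data-parallel deep learning (pure-Python stand-in, no MPI involved).
    def allreduce_average(worker_grads):
        """Average per-parameter gradients across all workers."""
        n_workers = len(worker_grads)
        return [sum(g) / n_workers for g in zip(*worker_grads)]

    # Three workers, each holding gradients for the same two parameters.
    grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
    print(allreduce_average(grads))  # [3.0, 4.0]
    ```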

    The biggest problems in science require supercomputers of unprecedented capability. That’s why the US Department of Energy’s Oak Ridge National Laboratory (ORNL) launched Summit, a system 8 times more powerful than ORNL’s previous top-ranked system Titan. Summit is providing scientists with incredible computing power to solve challenges in energy, artificial intelligence, human health, and other research areas, that were simply out of reach until now. These discoveries will help shape our understanding of the universe, bolster US economic competitiveness, and contribute to a better future.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Summit Specifications:
    Application Performance: 200 PF (currently #1 on the TOP500)
    Number of Nodes: 4,608
    Node performance: 42 TF
    Memory per Node: 512 GB DDR4 + 96 GB HBM2
    NV memory per Node: 1600 GB
    Total System Memory: >10 PB DDR4 + HBM2 + Non-volatile
    Processors:
    2 IBM POWER9 CPUs per node (9,216 total)
    6 NVIDIA Volta GPUs per node (27,648 total)

    File System: 250 PB, 2.5 TB/s, GPFS
    Power Consumption: 13 MW
    Interconnect: Mellanox EDR 100G InfiniBand
    Operating System: Red Hat Enterprise Linux (RHEL) version 7.4
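    The published figures above are internally consistent, which makes for a quick cross-check (illustrative; the 200 PF headline is the rounded application-performance number):

    ```python
    # Cross-checking Summit's published per-node and system-wide figures.
    NODES = 4608
    NODE_TF = 42                       # peak TF per node
    CPUS_PER_NODE, GPUS_PER_NODE = 2, 6

    assert NODES * CPUS_PER_NODE == 9216    # total IBM POWER9 CPUs
    assert NODES * GPUS_PER_NODE == 27648   # total NVIDIA Volta GPUs
    print(f"Aggregate peak: ~{NODES * NODE_TF / 1000:.0f} PF")  # ~194 PF (quoted as 200 PF)
    ```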

    Jack Wells is the Director of Science for the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science national user facility, and the Titan supercomputer, located at Oak Ridge National Laboratory (ORNL).

    ORNL Cray XK7 Titan Supercomputer, once the fastest in the world, now No.9 on the TOP500.

    Wells is responsible for the scientific outcomes of the OLCF’s user programs. He previously led both ORNL’s Computational Materials Sciences group in the Computer Science and Mathematics Division and the Nanomaterials Theory Institute in the Center for Nanophase Materials Sciences. Prior to joining ORNL as a Wigner Fellow in 1997, Wells was a postdoctoral fellow in the Institute for Theoretical Atomic and Molecular Physics at the Harvard-Smithsonian Center for Astrophysics. Wells holds a Ph.D. in physics from Vanderbilt University and has authored or co-authored over 100 scientific papers and edited one book, spanning nanoscience, materials science and engineering, nuclear and atomic physics, computational science, applied mathematics, and novel analytics measuring the impact of scientific publications.

    See the full article here.


    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     