Tagged: OLCF-Oak Ridge Leadership Computing Facility

  • richardmitnick 4:33 pm on July 1, 2021 Permalink | Reply
    Tags: "Department of Energy Awards 22 Million Node-Hours of Computing Time to Support Cutting-Edge Research", Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC) program, , , , , , , OLCF-Oak Ridge Leadership Computing Facility, , ,   

    From U.S. Department of Energy Office of Science: “Department of Energy Awards 22 Million Node-Hours of Computing Time to Support Cutting-Edge Research” 

    DOE Main

    From U.S. Department of Energy Office of Science

    Department of Energy Awards 22 Million Node-Hours of Computing Time to Support Cutting-Edge Research
    The U.S. Department of Energy’s (DOE) Office of Science today announced that 22 million node-hours of computing time have been awarded to 41 scientific projects under the Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC) program. The projects, with applications ranging from nuclear forensics to advanced energy systems to climate change, will use DOE supercomputers to uncover unique insights about scientific problems that would be impossible to obtain through experimental approaches alone.

    Selected projects will receive computational time, also known as node-hours, on one or multiple DOE supercomputers to conduct research that would take years to complete on a standard desktop computer. A node-hour is the usage of one node (or computing unit) on a supercomputer for one hour. A project allocated 1,000,000 node-hours could run a simulation on 1,000 compute nodes for 1,000 hours – vastly reducing the total amount of time required to complete the simulation. The three supercomputers available through the program – the Oak Ridge Leadership Computing Facility’s “Summit” system at DOE’s Oak Ridge National Laboratory (US), the Argonne Leadership Computing Facility’s “Theta” system at DOE’s Argonne National Laboratory (US), and the National Energy Research Scientific Computing Center’s “Cori” system at DOE’s Lawrence Berkeley National Laboratory (US) – are among the fastest computers in the nation. Oak Ridge National Laboratory’s “Summit” currently ranks as the second-fastest computer in the world.
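    To make the node-hour arithmetic concrete, here is a minimal sketch (the node counts below are hypothetical examples, not figures from any awarded project) showing how a fixed allocation trades nodes against wall-clock time:

```python
# Illustrative only: how a fixed node-hour allocation converts to wall-clock
# time at different (hypothetical) node counts.
allocation_node_hours = 1_000_000

for nodes in (100, 1_000, 4_000):
    hours = allocation_node_hours / nodes
    print(f"{nodes:>5} nodes -> {hours:>8,.0f} hours (~{hours / 24:,.1f} days)")
```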

    “The Department of Energy is committed to providing the advanced scientific tools needed to move U.S. science forward. Supercomputers allow us to explore scientific problems in ways we haven’t been able to in the past – modeling dangerous, large, or costly experiments, safely and quickly,” said Barb Helland, DOE Associate Director for Advanced Scientific Computing Research in the Office of Science. “The ALCC awards are just one example of how the DOE’s investments in supercomputing benefit researchers all across our nation to advance our nation’s scientific competitiveness, accelerate clean energy options, and to understand and mitigate the impacts of climate change.”

    The ASCR Leadership Computing Challenge (ALCC) program supports efforts to broaden community access to DOE’s computing facilities. ALCC focuses on high-risk, high-payoff simulations in areas directly related to the DOE mission and seeks to broaden the community of researchers who use DOE’s advanced computing resources. The 2021 awardees receive compute time at DOE’s high-performance computing facilities at Oak Ridge National Laboratory in Tennessee, Argonne National Laboratory in Illinois, and the National Energy Research Scientific Computing Center (US) at Lawrence Berkeley National Laboratory in California. Of the 41 projects, 3 are from industry, 19 are led by universities, and 19 are led by national laboratories.
    The projects cover a variety of topics, including:
    • Climate change research, including improving climate models, studying the effects of turbulence in oceans, characterizing the impact of low-level jets on wind farms, improving the simulation of biochemical processes, and simulating clouds on a global scale.
    • Energy research, including AI and deep learning prediction for fusion energy systems, modeling materials for energy storage, studying wind turbine mechanics, and research into the properties of lithium battery electrolytes.
    • Medical research, such as deep learning for medical natural language processing, modeling cancer screening strategies, and modeling cancer initiation pathways.
    Learn more about the 2021 ALCC awardees by visiting the ASCR website. The ALCC application period will re-open for the 2022-23 allocation cycle in Fall 2021.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition
    The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions.

    Science Programs Organization

    The Office of Science manages its research portfolio through six program offices:

    Advanced Scientific Computing Research
    Basic Energy Sciences
    Biological and Environmental Research
    Fusion Energy Sciences
    High Energy Physics
    Nuclear Physics

    The Science Programs organization also includes the following offices:

    The Department of Energy’s Small Business Innovation Research and Small Business Technology Transfer Programs, which the Office of Science manages for the Department;
    The Workforce Development for Teachers and Students program, which sponsors programs that help develop the next generation of scientists and engineers to support the DOE mission, administer programs, and conduct research; and
    The Office of Project Assessment, which provides independent advice to the SC leadership regarding those activities essential to constructing and operating major research facilities.

     
  • richardmitnick 1:10 pm on May 24, 2021 Permalink | Reply
    Tags: "Scientists Tap Supercomputing to Study Exotic Matter in Stars", , OLCF-Oak Ridge Leadership Computing Facility   

    From Oak Ridge Leadership Computing Facility (US) at DOE’s Oak Ridge National Laboratory (US) : “Scientists Tap Supercomputing to Study Exotic Matter in Stars” 

    From Oak Ridge Leadership Computing Facility (US)

    at

    DOE’s Oak Ridge National Laboratory (US)

    May 6, 2021
    Rachel McDowell

    A team at Stony Brook University (US) used ORNL’s Summit supercomputer to model x-ray burst flames spreading across the surface of dense neutron stars.

    At the heart of some of the smallest and densest stars in the universe lies nuclear matter that might exist in never-before-observed exotic phases. Neutron stars, which form when the cores of massive stars collapse in a luminous supernova explosion, are thought to contain matter at energies greater than what can be achieved in particle accelerator experiments, such as the ones at the CERN Large Hadron Collider (CH) and the Relativistic Heavy Ion Collider (US).

    Although scientists cannot recreate these extreme conditions on Earth, they can use neutron stars as ready-made laboratories to better understand exotic matter. Simulating neutron stars, many of which are only 12.5 miles in diameter but boast around 1.4 to 2 times the mass of our sun, can provide insight into the matter that might exist in their interiors and give clues as to how it behaves at such densities.

    A team of nuclear astrophysicists led by Michael Zingale at Stony Brook University is using the Oak Ridge Leadership Computing Facility’s (OLCF’s) IBM AC922 Summit, the nation’s fastest supercomputer, to model a neutron star phenomenon called an x-ray burst—a thermonuclear explosion that occurs on the surface of a neutron star when its gravitational field pulls a sufficiently large amount of matter off a nearby star. Now, the team has modeled a 2D x-ray burst flame moving across the surface of a neutron star to determine how the flame acts under different conditions. Simulating this astrophysical phenomenon provides scientists with data that can help them better measure the radii of neutron stars, a value that is crucial to studying the physics in the interior of neutron stars. The results were published in The Astrophysical Journal.

    “Astronomers can use x-ray bursts to measure the radius of a neutron star, which is a challenge because it’s so small,” Zingale said. “If we know the radius, we can determine a neutron star’s properties and understand the matter that lives at its center. Our simulations will help connect the physics of the x-ray burst flame burning to observations.”

    The group found that different initial models and physics led to different results. In the next phase of the project, the team plans to run one large 3D simulation based on the results from the study to obtain a more accurate picture of the x-ray burst phenomenon.

    Switching physics

    Neutron star simulations require a massive amount of physics input and therefore a massive amount of computing power. Even on Summit, researchers can only afford to model a small portion of the neutron star surface.

    A dense neutron star (right) pulling matter off a nearby star (left). Image credit: Colby Earles, ORNL.

    To accurately understand the flame’s behavior, Zingale’s team used Summit to model the flame for various features of the underlying neutron star. The team’s simulations were completed under an allocation of computing time under the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. The team varied surface temperatures and rotation rates, using these as proxies for different accretion rates—or how quickly the star increases in mass as it accumulates additional matter from a nearby star.

    Alice Harpole, a postdoctoral researcher at Stony Brook University and lead author on the paper, suggested that the team model a hotter crust, leading to unexpected results.

    “One of the most exciting results from this project was what we saw when we varied the temperature of the crust in our simulations,” Harpole said. “In our previous work, we used a cooler crust. I thought it might make a difference to use a hotter crust, but actually seeing the difference that the increased temperature produced was very interesting.”

    Massive computing, more complexity

    The team modeled the x-ray burst flame phenomenon on the OLCF’s Summit at the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL). Nicole Ford, an intern in the Science Undergraduate Laboratory Internship Program at DOE’s Lawrence Berkeley National Laboratory (LBNL) (US), ran complementary simulations on the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC).

    The OLCF and NERSC are DOE Office of Science user facilities located at ORNL and LBNL, respectively.

    With simulations of 9,216 grid cells in the horizontal direction and 1,536 cells in the vertical direction, the effort required a massive amount of computing power. After the team completed the simulations, team members tapped the OLCF’s Rhea system to analyze and plot their results.

    On Summit, the team used the Castro code—which is capable of modeling explosive astrophysical phenomena—built on the Adaptive Mesh Refinement for Exascale (AMReX) library, which allowed team members to achieve varying resolutions at different parts of the grid. AMReX is one of the libraries being developed by the Exascale Computing Project (US), an effort to adapt scientific applications to run on DOE’s upcoming exascale systems, including the OLCF’s Frontier.
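    Adaptive mesh refinement concentrates resolution where the solution changes rapidly, such as at a burning front, while leaving smooth regions coarse. The sketch below is a generic illustration of one common tagging criterion (gradient magnitude above a threshold); it is not Castro's or AMReX's actual interface, and the field, grid size, and threshold are invented for the example.

```python
import numpy as np

def flag_for_refinement(field, spacing, threshold):
    """Flag cells whose gradient magnitude exceeds a threshold.
    A generic AMR-style tagging criterion, not the actual AMReX interface."""
    grad_y, grad_x = np.gradient(field, spacing, spacing)
    return np.hypot(grad_x, grad_y) > threshold

# Hypothetical 2D temperature-like field with a sharp front near x = 0.5.
n = 256
spacing = 1.0 / (n - 1)
x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
temperature = np.tanh((x - 0.5) / 0.02)

tags = flag_for_refinement(temperature, spacing, threshold=5.0)
print(f"{tags.sum()} of {tags.size} cells flagged for finer resolution")
```

    In AMR frameworks, cells flagged this way are typically grouped into finer grid patches that are then evolved at higher resolution.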

    Exascale systems will be capable of computing in the exaflops range, or 10^18 calculations per second.

    AMReX provides a framework for parallelization on supercomputers, but Castro wasn’t always capable of taking advantage of the GPUs that make Summit so attractive for scientific research. The team attended OLCF-hosted hackathons at DOE’s Brookhaven National Laboratory (US) and ORNL to get help with porting the code to Summit’s GPUs.

    “The hackathons were incredibly useful to us in understanding how we could leverage Summit’s GPUs for this effort,” Zingale said. “When we transitioned from CPUs to GPUs, our code ran 10 times faster. This allowed us to make fewer approximations and perform more physically realistic and longer simulations.”

    The team said that the upcoming 3D simulation they plan to run will not only require GPUs—it will eat up nearly all of the team’s INCITE time for the entire year.

    “We need to get every ounce of performance we can,” Zingale said. “Luckily, we have learned from these 2D simulations what we need to do for our 3D simulation, so we are prepared for our next big endeavor.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Established in 1942, DOE’s Oak Ridge National Laboratory (US) is the largest science and energy national laboratory in the Department of Energy system (by size) and third largest by annual budget. It is located in the Roane County section of Oak Ridge, Tennessee. Its scientific programs focus on materials, neutron science, energy, high-performance computing, systems biology and national security, sometimes in partnership with the state of Tennessee, universities and other industries.

    ORNL has several of the world’s top supercomputers, including Summit, ranked by the TOP500 as Earth’s second-most powerful.

    IBM AC922 Summit supercomputer, formerly No. 1 on the TOP500. Credit: Carlos Jones, DOE’s Oak Ridge National Laboratory (US).

    The lab is a leading neutron and nuclear power research facility that includes the Spallation Neutron Source and High Flux Isotope Reactor.

    It hosts the Center for Nanophase Materials Sciences, the BioEnergy Science Center, and the Consortium for Advanced Simulation of Light Water Nuclear Reactors.

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    Areas of research

    ORNL conducts research and development activities that span a wide range of scientific disciplines. Many research areas have a significant overlap with each other; researchers often work in two or more of the fields listed here. The laboratory’s major research areas are described briefly below.

    Chemical sciences – ORNL conducts both fundamental and applied research in a number of areas, including catalysis, surface science and interfacial chemistry; molecular transformations and fuel chemistry; heavy element chemistry and radioactive materials characterization; aqueous solution chemistry and geochemistry; mass spectrometry and laser spectroscopy; separations chemistry; materials chemistry including synthesis and characterization of polymers and other soft materials; chemical biosciences; and neutron science.
    Electron microscopy – ORNL’s electron microscopy program investigates key issues in condensed matter, materials, chemical and nanosciences.
    Nuclear medicine – The laboratory’s nuclear medicine research is focused on the development of improved reactor production and processing methods to provide medical radioisotopes, the development of new radionuclide generator systems, the design and evaluation of new radiopharmaceuticals for applications in nuclear medicine and oncology.
    Physics – Physics research at ORNL is focused primarily on studies of the fundamental properties of matter at the atomic, nuclear, and subnuclear levels and the development of experimental devices in support of these studies.
    Population – ORNL provides federal, state and international organizations with a gridded population database, called LandScan, for estimating ambient population. LandScan is a raster image, or grid, of population counts that provides human population estimates every 30 x 30 arc seconds, which translates roughly to population estimates for 1-kilometer-square grid cells at the equator, with cell width decreasing at higher latitudes (see the short calculation below). Though many population datasets exist, LandScan is considered the best spatial population dataset that also offers global coverage. Updated annually (although data releases are generally one year behind the current year), it offers continuously refreshed population values based on the most recent information. LandScan data are accessible through GIS applications and a USAID public domain application called Population Explorer.
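    A minimal sketch of that cell-width calculation, assuming a spherical Earth of radius roughly 6,371 km (the latitudes below are arbitrary examples):

```python
import math

EARTH_RADIUS_KM = 6371.0       # mean spherical radius, an approximation
CELL_ARC_SECONDS = 30.0

cell_radians = math.radians(CELL_ARC_SECONDS / 3600.0)

for latitude_deg in (0, 30, 45, 60):
    width_km = EARTH_RADIUS_KM * cell_radians * math.cos(math.radians(latitude_deg))
    print(f"latitude {latitude_deg:>2} deg: cell width ~ {width_km:.2f} km")
```

    At the equator this gives roughly 0.93 km per cell, shrinking with the cosine of latitude, consistent with the "about 1 kilometer" description above.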

    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to process an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    With a peak performance of 200,000 trillion calculations per second (200 petaflops), Summit is eight times more powerful than ORNL’s previous top-ranked system, Titan. For certain scientific applications, Summit is also capable of more than three billion billion mixed precision calculations per second, or 3.3 exaops. Summit provides unprecedented computing power for research in energy, advanced materials and artificial intelligence (AI), among other domains, enabling scientific discoveries that were previously impractical or impossible.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 9:14 am on November 13, 2020 Permalink | Reply
    Tags: "UMass Dartmouth professors to use fastest supercomputer in the nation for research", , , OLCF-Oak Ridge Leadership Computing Facility, , , ,   

    From UMass Dartmouth: “UMass Dartmouth professors to use fastest supercomputer in the nation for research” 

    From UMass Dartmouth

    November 12, 2020
    Ryan Merrill
    508-910-6884
    rmerrill1@umassd.edu

    Professor Sigal Gottlieb and Professor Gaurav Khanna have been awarded time on Oak Ridge National Lab’s Summit supercomputer.

    ORNL IBM AC922 Summit supercomputer, formerly No. 1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy.


    Oak Ridge National Lab’s Summit supercomputer is the fastest in America and Professor Sigal Gottlieb (Mathematics) and Professor Gaurav Khanna (Physics) are getting a chance to test its power.

    The system, built by IBM, can perform 200 quadrillion calculations in one second. Funded by the U.S. Department of Energy, the Summit supercomputer comprises 9,216 POWER9 processors and 27,648 Nvidia Tesla graphics processing units, and consumes 13 MW of power.
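    Those totals are consistent with Summit's widely reported layout of about 4,608 compute nodes, each pairing two POWER9 CPUs with six GPUs. The node count in the quick check below is an assumption drawn from public descriptions of the machine, not from this article; the snippet simply divides the published totals:

```python
# Dividing the article's totals by Summit's widely reported node count
# (about 4,608 nodes) to recover the per-node layout; illustrative only.
total_cpus = 9_216
total_gpus = 27_648
reported_nodes = 4_608   # not stated in the article; assumed from public specs

print(f"CPUs per node: {total_cpus / reported_nodes:.0f}")   # -> 2
print(f"GPUs per node: {total_gpus / reported_nodes:.0f}")   # -> 6
print(f"GPUs per CPU:  {total_gpus / total_cpus:.0f}")       # -> 3
```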

    Gottlieb and Khanna, alongside their colleague Zachary Grant of Oak Ridge National Lab, were awarded 880,000 core-hours of supercomputing time on Summit. They received the maximum Director’s Discretionary allocation, which is equivalent to $132,200 of funding according to the Department of Energy. Their research project, titled “Mixed-Precision WENO Method for Hyperbolic PDE Solutions,” involves implementing and evaluating different computational methods for black hole simulations.
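    A quick check of the implied valuation, using only the two figures quoted above: 880,000 core-hours valued at $132,200 works out to roughly 15 cents per core-hour.

```python
# Implied valuation from the figures quoted in the article.
core_hours = 880_000
stated_value_usd = 132_200
print(f"Implied rate: ${stated_value_usd / core_hours:.3f} per core-hour")  # ~ $0.150
```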

    Their proposal for supercomputing time was successful, in part, due to excellent preliminary results that were generated using UMass Dartmouth’s own C.A.R.N.i.E supercomputer and MIT’s Satori supercomputer, which Khanna had access to via UMass Dartmouth’s membership in the Massachusetts Green High Performance Computing Consortium (MGHPCC). The Satori supercomputer is similar in design to Summit but almost two orders of magnitude smaller in size.

    Gottlieb and Khanna are the Co-Directors for UMass Dartmouth’s Center for Scientific Computing & Visualization Research and Grant was a former student of Gottlieb’s in the Engineering & Applied Sciences Ph.D. program.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Mission Statement

    UMass Dartmouth distinguishes itself as a vibrant, public research university dedicated to engaged learning and innovative research resulting in personal and lifelong student success. The University serves as an intellectual catalyst for economic, social, and cultural transformation on a global, national, and regional scale.
    Vision Statement

    UMass Dartmouth will be a globally recognized premier research university committed to inclusion, access, advancement of knowledge, student success, and community engagement.

    The University of Massachusetts Dartmouth (UMass Dartmouth or UMassD) is one of five campuses and operating subdivisions of the University of Massachusetts. It is located in North Dartmouth, Massachusetts, United States, in the center of the South Coast region, between the cities of New Bedford to the east and Fall River to the west. Formerly Southeastern Massachusetts University, it was merged into the University of Massachusetts system in 1991.

    The campus has an overall student body of 8,647 students (school year 2016-2017), including 6,999 undergraduates and 1,648 graduate/law students. As of the 2017 academic year, UMass Dartmouth recorded 399 full-time faculty on staff. For the fourth consecutive year, UMass Dartmouth received a top-20 national ranking from the President’s Higher Education Community Service Honor Roll for its civic engagement.

    The university also includes the University of Massachusetts School of Law, as the trustees of the state’s university system voted during 2004 to purchase the nearby Southern New England School of Law (SNESL), a private institution that was accredited regionally but not by the American Bar Association (ABA).
    UMass School of Law at Dartmouth opened its doors in September 2010, accepting all current SNESL students with a C or better average as transfer students, and achieved (provisional) ABA accreditation in June 2012. The law school achieved full accreditation in December 2016.

    In 2011, UMass Dartmouth became the first university in the world to have a sustainability report that met the top level of the world’s most comprehensive, credible, and widely used standard (the GRI’s G3.1 standard). In 2013, UMass Dartmouth became the first university in the world whose annual sustainability report achieved an A+ application level according to the Global Reporting Initiative G3.1 standard (by having the sources of data used in its annual sustainability report verified by an independent third party).

     
  • richardmitnick 9:53 am on February 4, 2020 Permalink | Reply
    Tags: "Closely spaced hydrogen atoms could facilitate superconductivity in ambient conditions", , , OLCF-Oak Ridge Leadership Computing Facility, , ,   

    From Oak Ridge National Laboratory: “Closely spaced hydrogen atoms could facilitate superconductivity in ambient conditions” 


    From Oak Ridge National Laboratory

    February 3, 2020

    Paul L Boisvert
    boisvertpl@ornl.gov
    865.576.9047

    Illustration of a zirconium vanadium hydride atomic structure at near ambient conditions as determined using neutron vibrational spectroscopy and the Titan supercomputer at Oak Ridge National Laboratory. The lattice is composed of vanadium atoms (in gold) and zirconium atoms (in white) enclosing hydrogen atoms (in red). Three hydrogen atoms are shown interacting at surprisingly small hydrogen-hydrogen atomic distances, as short as 1.6 angstroms. These smaller spacings between the atoms might allow packing significantly more hydrogen into the material to a point where it begins to superconduct. Credit: Jill Hemman/Oak Ridge National Laboratory, U.S. Dept. of Energy.

    An international team of researchers has discovered the hydrogen atoms in a metal hydride material are much more tightly spaced than had been predicted for decades — a feature that could possibly facilitate superconductivity at or near room temperature and pressure.

    Such a superconducting material, carrying electricity without any energy loss due to resistance, would revolutionize energy efficiency in a broad range of consumer and industrial applications.

    The scientists conducted neutron scattering experiments at the Department of Energy’s Oak Ridge National Laboratory on samples of zirconium vanadium hydride at atmospheric pressure and at temperatures from -450 degrees Fahrenheit (5 K) to as high as -10 degrees Fahrenheit (250 K) — much higher than the temperatures where superconductivity is expected to occur in these conditions.
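    Those Fahrenheit figures follow from the standard kelvin-to-Fahrenheit conversion, F = (K - 273.15) * 9/5 + 32; a quick check of the two endpoints quoted above:

```python
def kelvin_to_fahrenheit(kelvin):
    return (kelvin - 273.15) * 9.0 / 5.0 + 32.0

for kelvin in (5, 250):
    print(f"{kelvin:>3} K = {kelvin_to_fahrenheit(kelvin):.1f} F")
# 5 K is about -450.7 F and 250 K is about -9.7 F, matching the rounded values in the text.
```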

    Their findings, published in the Proceedings of the National Academy of Sciences, detail the first observations of such small hydrogen-hydrogen atomic distances in the metal hydride, as small as 1.6 angstroms, compared to the 2.1 angstrom distances predicted for these metals.

    This interatomic arrangement is remarkably promising since the hydrogen contained in metals affects their electronic properties. Other materials with similar hydrogen arrangements have been found to start superconducting, but only at very high pressures.

    The research team included scientists from the Empa research institute (Swiss Federal Laboratories for Materials Science and Technology), the University of Zurich, Polish Academy of Sciences, the University of Illinois at Chicago, and ORNL.

    “Some of the most promising ‘high-temperature’ superconductors, such as lanthanum decahydride, can start superconducting at about 8.0 degrees Fahrenheit, but unfortunately also require enormous pressures as high as 22 million pounds per square inch, or nearly 1,400 times the pressure exerted by water at the deepest part of Earth’s deepest ocean,” said Russell J. Hemley, Professor and Distinguished Chair in the Natural Sciences at the University of Illinois at Chicago. “For decades, the ‘holy grail’ for scientists has been to find or make a material that superconducts at room temperature and atmospheric pressure, which would allow engineers to design it into conventional electrical systems and devices. We’re hopeful that an inexpensive, stable metal like zirconium vanadium hydride can be tailored to provide just such a superconducting material.”

    Researchers had probed the hydrogen interactions in the well-studied metal hydride with high-resolution, inelastic neutron vibrational spectroscopy on the VISION beamline at ORNL’s Spallation Neutron Source.

    ORNL Spallation Neutron Source

    However, the resulting spectral signal, including a prominent peak at around 50 millielectronvolts, did not agree with what the models predicted.

    The breakthrough in understanding occurred after the team began working with the Oak Ridge Leadership Computing Facility to develop a strategy for evaluating the data.

    The OLCF at the time was home to Titan, one of the world’s fastest supercomputers, a Cray XK7 system that operated at speeds up to 27 petaflops (27 quadrillion floating point operations per second).

    ORNL Titan Cray XK7 Supercomputer

    “ORNL is the only place in the world that boasts both a world-leading neutron source and one of the world’s fastest supercomputers,” said Timmy Ramirez-Cuesta, team lead for ORNL’s chemical spectroscopy team. “Combining the capabilities of these facilities allowed us to compile the neutron spectroscopy data and devise a way to calculate the origin of the anomalous signal we encountered. It took an ensemble of 3,200 individual simulations, a massive task that occupied around 17% of Titan’s immense processing capacity for nearly a week — something a conventional computer would have required ten to twenty years to do.”
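    A back-of-the-envelope check on that comparison (the desktop speed below is an assumption for illustration, not a figure from the article): 17 percent of Titan's 27 petaflops sustained for a week represents a fixed amount of arithmetic, which can then be divided by an assumed workstation speed.

```python
# Rough plausibility check; the 5-teraflop desktop figure is assumed.
titan_petaflops = 27.0
fraction_used = 0.17
days_on_titan = 7.0

work_petaflop_days = titan_petaflops * fraction_used * days_on_titan  # ~32 PF-days

desktop_petaflops = 0.005  # ~5 teraflops, a generously fast workstation
desktop_years = work_petaflop_days / desktop_petaflops / 365.0
print(f"~{desktop_years:.0f} years on the assumed desktop")  # ~18 years
```

    That lands inside the "ten to twenty years" range quoted above, given the assumed workstation speed.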

    These computer simulations, along with additional experiments ruling out alternative explanations, proved conclusively that the unexpected spectral intensity occurs only when distances between hydrogen atoms are closer than 2.0 angstroms, which had never been observed in a metal hydride at ambient pressure and temperature. The team’s findings represent the first known exception to the Switendick criterion in a bimetallic alloy, a rule that holds that, for stable hydrides at ambient temperature and pressure, the hydrogen-hydrogen distance is never less than 2.1 angstroms.

    “An important question is whether or not the observed effect is limited specifically to zirconium vanadium hydride,” said Andreas Borgschulte, group leader for hydrogen spectroscopy at Empa. “Our calculations for the material—when excluding the Switendick limit — were able to reproduce the peak, supporting the notion that in vanadium hydride, hydrogen-hydrogen pairs with distances below 2.1 angstroms do occur.”

    In future experiments, the researchers plan to add more hydrogen to zirconium vanadium hydride at various pressures to evaluate the material’s potential for electrical conductivity. ORNL’s Summit supercomputer — which at 200 petaflops is over 7 times faster than Titan and since June 2018 has been No. 1 on the TOP500 List, a semiannual ranking of the world’s fastest computing systems — could provide the additional computing power that will be required to analyze these new experiments.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    The research was supported by the Department of Energy’s Office of Science and the National Nuclear Security Administration, the National Science Foundation, Rutherford Appleton Laboratory, Empa and the Swiss National Science Foundation, the University of Zurich, and the National Centre for Research and Development in Warsaw, Poland. oClimax neutron data software, part of the ICEMAN project funded by the Laboratory Directed Research and Development program at ORNL, was used to analyze and interpret the inelastic neutron scattering spectra.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 9:12 am on July 10, 2019 Permalink | Reply
    Tags: Globus Data Transfer, OLCF-Oak Ridge Leadership Computing Facility

    From insideHPC: “Argonne Team Breaks Record with 2.9 Petabytes Globus Data Transfer” 

    From insideHPC

    Today the Globus research data management service announced the largest single file transfer in its history: a team led by Argonne National Laboratory scientists moved 2.9 petabytes of data as part of a research project involving three of the largest cosmological simulations to date.


    “Storage is in general a very large problem in our community — the Universe is just very big, so our work can often generate a lot of data,” explained Katrin Heitmann, Argonne physicist and computational scientist and an Oak Ridge Leadership Computing Facility (OLCF) Early Science user.

    “Using Globus to easily move the data around between different storage solutions and institutions for analysis is essential.”

    The data in question was stored on the Summit supercomputer at OLCF, currently the world’s fastest supercomputer according to the Top500 list published June 18, 2019. Globus was used to move the files from disk to tape, a key use case for researchers.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    “Due to its uniqueness, the data is very precious and the analysis will take time,” said Dr. Heitmann. “The first step after the simulations were finished was to make a backup copy of the data to HPSS, so we can move the data back and forth between disk and tape and thus carry out the analysis in steps. We use Globus for this work due to its speed, reliability, and ease of use.”

    “With exascale imminent, AI on the rise, HPC systems proliferating, and research teams more distributed than ever, fast, secure, reliable data movement and management are now more important than ever,” said Ian Foster, Globus co-founder and director of Argonne’s Data Science and Learning Division. “We tend to take these functions for granted, and yet modern collaborative research would not be possible without them.”

    “Globus has underpinned groundbreaking research for decades. We could not be prouder of our role in helping scientists do their world-changing work, and we’re happy to see projects like this one continue to push the boundaries of what Globus can achieve. Congratulations to Dr. Heitmann and team!”

    When it comes to data transfer performance, “the most important part is reliability,” says Dr. Heitmann. “It is basically impossible for me as a user to check the very large amounts of data upon arrival after a transfer has finished. The analysis of the data often uses a subset of the data, so it would take quite a while until bad data would be discovered and at that point we might not have the data anymore at the source. So the reliability aspects of Globus are key.”

    “Of course, speed is also important. If the transfers were very slow, given the amount of data we transfer, we would have had a problem. So it’s good to be able to rely on Globus for fast data movement as well. We are also grateful to Oak Ridge for access to Summit and for their excellent setup of data transfer nodes enabling the use of Globus for HPSS transfers. This work would not have been possible otherwise.”
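    For a sense of what "fast" has to mean at this scale (the sustained rates below are assumptions for illustration; the article does not state the achieved throughput), here is a small sketch of how long 2.9 petabytes takes to move at various sustained rates:

```python
# Illustrative only: time to move 2.9 PB at assumed sustained network rates.
petabytes = 2.9
total_bits = petabytes * 1e15 * 8          # decimal petabytes -> bits

for gbps in (10, 40, 100):
    seconds = total_bits / (gbps * 1e9)
    print(f"{gbps:>3} Gb/s sustained -> {seconds / 86400:.1f} days")
```

    Even at an assumed 100 Gb/s sustained, the transfer takes a few days, which is why reliability over long-running transfers matters as much as raw speed.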

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 10:49 am on August 8, 2018 Permalink | Reply
    Tags: ALCC Program Awards 14 Projects a Combined 729.5 Million Core Hours at the OLCF, OLCF-Oak Ridge Leadership Computing Facility

    From Oak Ridge Leadership Computing Facility: “ALCC Program Awards 14 Projects a Combined 729.5 Million Core Hours at the OLCF” 


    Oak Ridge National Laboratory

    From Oak Ridge Leadership Computing Facility

    Research teams receive computing time on the Titan supercomputer.

    Every year, the US Department of Energy’s (DOE’s) Office of Advanced Scientific Computing Research (ASCR) provides scientists with time on world-class computational resources across the country through the ASCR Leadership Computing Challenge (ALCC). The ALCC program grants 1-year awards to energy-related research efforts with an emphasis on high-risk, high-reward simulations in line with DOE’s mission.

    The ALCC program distributes time among multiple DOE Office of Science User Facilities, allocating up to 30 percent of the HPC resources at the Oak Ridge Leadership Computing Facility (OLCF) at DOE’s Oak Ridge National Laboratory (ORNL) and Argonne National Laboratory’s Argonne Leadership Computing Facility (ALCF), as well as up to 10 percent at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory.

    ASCR manages all three user facilities, which each contain powerful supercomputers. Previous ALCC project recipients have leveraged these high-performance computing (HPC) systems to advance scientific and technological research in fields such as nuclear physics, energy efficiency, and materials science.

    In 2018, 14 projects earned a combined 729.5 million core hours on Titan, the OLCF’s 27-petaflop supercomputer, to continue that tradition of innovation and discovery. Using Titan, teams of scientists will conduct experiments, collect data, and analyze results in support of various research topics, from studying biological processes in microbial ecosystems to developing new cosmological simulations for studying the history of the universe.

    Projects given time at the OLCF this year, which received awards ranging from 5 million to 100 million processor hours, are listed below. Some projects have additional computing time at the ALCF and/or NERSC.

    Brian Wirth from ORNL and the University of Tennessee received 60 million core hours on Titan for “Modeling Fusion Plasma Facing Components.”
    Todd Simons from Rolls-Royce Corporation received 10 million core hours on Titan for “Increasing the Scale of Implicit Finite Element Analyses.”
    Robert Edwards from Jefferson Lab received 96 million core hours on Titan for “The Real World of Real Glue.”
    Eric Lancon from Brookhaven National Laboratory received 80 million core hours on Titan for “Scaling LHC Proton–Proton Collision Simulations in the ATLAS Detector.”
    Robert Voigt from Leidos Inc. received 78.5 million core hours on Titan for “Demonstration of the Scalability of Programming Environments by Simulating Multi-Scale Applications.”
    Robert Patton from ORNL received 25 million core hours on Titan for “Advances in Machine Learning to Improve Scientific Discovery.”
    P. Straatsma from ORNL received 30 million core hours on Titan for “Portable Application Development for Next-Generation Supercomputer Architectures.”
    Katrin Heitmann from Argonne National Laboratory received 40 million core hours on Titan for “Emulating the Universe.”
    Chongle Pan from ORNL received 50 million core hours on Titan for “Petascale Analytics of Big Proteogenomics Data on Key Microbial Communities.”
    Peter Nugent from Lawrence Berkeley National Laboratory received 100 million core hours on Titan for “HPC4EnergyInnovation ALCC End-Station.”
    Mark Petersen from Los Alamos National Laboratory received 5 million core hours on Titan for “Investigating the Impact of Improved Southern Ocean Processes in Antarctic-Focused Global Climate Simulations.”
    Gary Grest from Sandia National Laboratories received 8 million core hours on Titan for “Large-Scale Numerical Simulations of Polymer Nanocomposites.”
    Swagato Mukherjee from Brookhaven National Laboratory received 85 million core hours on Titan for “Phase Boundary of Baryon-Rich QCD Matter.”
    Ronald Grover from General Motors received 12 million core hours on Titan for “Steady-State Engine Calibration in CFD Using a GPU-Based Chemistry Solver, Conjugate Heat Transfer, and Large Eddy Simulation (LES).”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to process an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     