Tagged: Supercomputing

  • richardmitnick 7:49 am on May 26, 2020 Permalink | Reply
    Tags: "Sandia to receive Fujitsu ‘green’ processor", , Fujitsu PRIMEHPC FX700, , Supercomputing   

    From Sandia Lab: “Sandia to receive Fujitsu ‘green’ processor” 

    From Sandia Lab

    5.26.20
    Neal Singer
    nsinger@sandia.gov
    505-845-7078

    New system to help break down memory-speed bottleneck.

    This spring, Sandia National Laboratories anticipates being one of the first Department of Energy laboratories to receive the newest A64FX Fujitsu processor, a Japanese Arm-based processor optimized for high-performance computing.

    Fujitsu PRIMEHPC FX700

    Arm-based processors are used widely in small electronic devices like cell phones. More recently, Arm-based processors were installed in Sandia’s Astra supercomputer, where they form the front line of a DOE effort to keep the market of supercomputer chip providers competitive.

    HPE Vanguard Astra supercomputer with ARM technology at Sandia Labs

    “Being early adopters of this technology benefits all parties involved,” said Scott Collis, director of Sandia’s Center for Computing Research.

    Penguin Computing Inc. will deliver the new system — the first Fujitsu PRIMEHPC FX700 with A64FX processors.

    “This Fujitsu-Penguin computer offers the potential to improve algorithms that may not perform well on GPU (graphics processing unit) accelerators,” Collis said. “In these cases, code performance is often limited by memory speed, not the speed of computation. This system is the first that closely couples efficient and powerful Arm processors to really fast memory to help break down this memory-speed bottleneck.”
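    A rough way to see why this matters is the machine balance, the bytes of memory traffic available per floating-point operation. The figures below use Fujitsu’s published A64FX characteristics (roughly 1,024 GB/s of HBM2 bandwidth and about 2.8 TF of double-precision peak at the FX700’s clock speed) alongside representative numbers for a DDR4-based server socket; none of these numbers appears in the article.

```latex
\text{balance} = \frac{\text{memory bandwidth}}{\text{peak flop rate}},
\qquad
\text{A64FX: } \frac{1024~\text{GB/s}}{\approx 2.8~\text{TF/s}} \approx 0.37~\text{bytes/flop},
\qquad
\text{typical DDR4 socket: } \frac{\approx 130~\text{GB/s}}{\approx 2~\text{TF/s}} \approx 0.07~\text{bytes/flop}
```

    Memory-bound codes of the kind Collis describes therefore see several times more usable bandwidth per flop on such a node.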

    Said Ken Gudenrath, Penguin’s director of interactions with DOE, “Our goal is to provide early access to upcoming technologies.”

    Sandia will evaluate Fujitsu’s new processor and compiler using DOE mini- and proxy-applications and share the results with Fujitsu and Penguin. Mini- and proxy-apps are small, manageable versions of applications used for initial testing and collaborations. They are also open source, which means they can be freely modified to fit particular problems.

    Said James Laros, program lead of Sandia’s advanced-architectures technology-prototype program called Vanguard, tasked to explore emerging techniques in supercomputing, “This acquisition furthers the lab’s research and development in Arm-based computing technologies and builds upon the highly successful Astra platform, the world’s first petascale Arm-based supercomputer.”

    Processor maximizes green computational power.

    The 48-core A64FX processor, which incorporates high-bandwidth memory, was designed for Japan’s soon-to-be-deployed Fugaku supercomputer. It is also the first processor to fully utilize the wide vector lanes designed around Arm’s Scalable Vector Extension (SVE). These wide vector lanes make possible a type of data-level parallelism in which a single instruction operates on multiple data elements arranged in parallel.
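    As a concrete illustration of that data-level parallelism, here is a minimal vector-length-agnostic loop written with the Arm C Language Extensions for SVE. The function name and scenario are invented for this sketch, and it assumes a compiler that provides arm_sve.h; it is not code from the article.

```c
#include <arm_sve.h>   /* Arm C Language Extensions (ACLE) for SVE */

/* y[i] = a * x[i] + y[i]. Each intrinsic below maps to one instruction that acts
   on a whole vector of doubles at once; the loop never hard-codes the vector
   width, so the same binary uses whatever lane count the hardware provides. */
void daxpy_sve(double a, const double *x, double *y, long n)
{
    for (long i = 0; i < n; i += svcntd()) {           /* svcntd(): doubles per vector  */
        svbool_t    pg = svwhilelt_b64_s64(i, n);      /* predicate masks the loop tail */
        svfloat64_t vx = svld1_f64(pg, &x[i]);         /* load a vector of x            */
        svfloat64_t vy = svld1_f64(pg, &y[i]);         /* load a vector of y            */
        vy = svmla_f64_x(pg, vy, vx, svdup_n_f64(a));  /* vy += a * vx in every lane    */
        svst1_f64(pg, &y[i], vy);                      /* store the updated lanes       */
    }
}
```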

    “The new processor’s efficiency and increased performance per watt provides researchers with significantly greater fractions of usable peak performance,” said Sandia manager Robert Hoekstra. “The Japanese supercomputing team at the RIKEN Center for Computational Science has partnered with Fujitsu and focused on increasing vectorization and memory bandwidth to maximize the computational power of the system. The result is that an early A64FX-based system sits atop the Green500 list of most efficient supercomputers.”

    In addition to expanding Sandia’s efforts to develop new suppliers by advancing Arm-based technologies for high-performance computing, this acquisition also supports DOE’s collaboration with the Japanese supercomputing community. Cooperation with the RIKEN center is part of a memorandum of understanding signed in 2014 between DOE and the Japanese Ministry of Education, Culture, Sports, Science and Technology. Both organizations have agreed to work together to improve high performance computing, including collaborative development of computing architectures.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    Sandia National Laboratory

    Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.



     
  • richardmitnick 8:41 am on May 6, 2020 Permalink | Reply
    Tags: "Supercomputer time to explore how black holes and jets have changed our Universe", , , , , , Supercomputing   

    From International Centre for Radio Astronomy Research: “Supercomputer time to explore how black holes and jets have changed our Universe” 

    From International Centre for Radio Astronomy Research

    April 23, 2020

    A/Prof. Chris Power (ICRAR / University of Western Australia)
    +61 478 906 421
    chris.power@icrar.org

    Kirsten Gottschalk (Media Contact, ICRAR)
    +61 438 361 876
    kirsten.gottschalk@icrar.org

    Fujitsu Lenovo Gadi supercomputer at the National Computational Infrastructure (NCI) at the Australian National University (ANU)

    Astronomers have been awarded 45 million units of supercomputing time to study the influence of supermassive black holes on their host galaxies.

    The team from WA, Tasmania and the UK were awarded the time on Australia’s largest research supercomputing facility, the National Computational Infrastructure (NCI Australia) in Canberra.

    They will use it to combine computer models of black holes—and the jets that shoot out of them—with large-scale cosmological simulations of the Universe.

    Associate Professor Chris Power, from the University of Western Australia node of the International Centre for Radio Astronomy Research (ICRAR), is leading the research.

    He said black holes can have a profound effect on how galaxies evolve.

    “Black holes produce very powerful jets and winds,” he said.

    “We know they can stop stars forming, and create the different kinds of galaxies we see in the Universe today.

    “But the problem is that we have a very cartoonish understanding of how this process works.”

    The researchers will use the supercomputer time to study how powerful jets from black holes impact their larger galactic and cosmic environments.

    They will combine sophisticated cosmological simulations of galaxy formation, developed at ICRAR, with detailed models of black hole jets, developed by Dr Stanislav Shabala and PhD student Patrick Yates at the University of Tasmania.

    The team also includes researchers from the University of Hertfordshire.

    Associate Professor Power said running the simulations on a laptop computer would take almost 5,000 years.

    “On the supercomputer, we’ll probably get results in a couple of days,” he said.

    “So we want to be able to run hundreds of these kinds of simulations. We’re basically treating them as experiments.”
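    Putting the two round numbers above side by side gives a sense of the scale involved (a back-of-the-envelope figure, not from the article):

```latex
\frac{5000~\text{years} \times 365~\text{days/year}}{2~\text{days}} \approx 9 \times 10^{5}
```

    A speedup of roughly a million is what turns a single impossible run into hundreds of routine numerical experiments.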

    The astronomers will tweak their models with each simulation, improving our understanding of how black holes change their host galaxies.

    “It’s a bit like when we go into a lab and we’re pouring combinations of chemicals into test tubes—we can see what kinds of things happen,” Associate Professor Power said.

    The study will be one of the first to run on NCI’s brand new supercomputer Gadi, and will be undertaken over the next six to nine months.

    It was one of four awarded time through the Australasian Leadership Computing Grants program, which attracts bids from researchers all over the country.

    The other projects will conduct research in global climate modelling, decadal climate forecasts and combustion for low emissions gas turbines. More at NCI Australia.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition
    ICRAR is an equal joint venture between Curtin University and The University of Western Australia with funding support from the State Government of Western Australia. The Centre’s headquarters are located at UWA, with research nodes at both UWA and the Curtin Institute for Radio Astronomy (CIRA).
    ICRAR has strong support from the government of Australia and is working closely with industry and the astronomy community, including CSIRO and the Australia Telescope National Facility.
    ICRAR is:

    Playing a key role in the international Square Kilometre Array (SKA) project, the world's biggest ground-based telescope array.

    Attracting some of the world’s leading researchers in radio astronomy, who will also contribute to national and international scientific and technical programs for SKA and ASKAP.
    Creating a collaborative environment for scientists and engineers to engage and work with industry to produce studies, prototypes and systems linked to the overall scientific success of the SKA, MWA and ASKAP.

    SKA Murchison Widefield Array, Boolardy station in outback Western Australia, at the Murchison Radio-astronomy Observatory (MRO)

    A small part of the Murchison Widefield Array

    Enhancing Australia’s position in the international SKA program by contributing to the development process for the SKA in scientific, technological and operational areas.
    Promoting scientific, technical, commercial and educational opportunities through public outreach, educational material, training students and collaborative developments with national and international educational organisations.
    Establishing and maintaining a pool of emerging and top-level scientists and technologists in the disciplines related to radio astronomy through appointments and training.
    Making world-class contributions to SKA science, with emphasis on the signature science themes associated with surveys for neutral hydrogen and variable (transient) radio sources.
    Making world-class contributions to SKA capability with respect to developments in the areas of Data Intensive Science and support for the Murchison Radio-astronomy Observatory.

     
  • richardmitnick 12:13 pm on April 27, 2020 Permalink | Reply
    Tags: "World's First 3D Simulations of Superluminous Supernovae", , , , , , Supercomputing   

    From NERSC: “World’s First 3D Simulations of Superluminous Supernovae” 

    From NERSC

    April 21, 2020
    Written by Linda Vu
    Contact: CScomms@lbl.gov

    The nebula phase of the magnetar-powered superluminous supernova from our 3D simulation. At this moment, the supernova ejecta has expanded to a size similar to that of the solar system. Large-scale mixing appears in the outer and inner regions of the ejecta. The resulting light curves and spectra are sensitive to this mixing, which depends on the stellar structure and the physical properties of the magnetar. Credit: Ken Chen

    For most of the past century, astronomers have scoured the skies for supernovae—the explosive deaths of massive stars—and their remnants in search of clues about the progenitor stars, the mechanisms that cause them to explode, and the heavy elements created in the process. In fact, these events create most of the cosmic elements that go on to form new stars, galaxies, and life.

    Because no one can actually see a supernova up close, researchers rely on supercomputer simulations to give them insights into the physics that ignites and drives the event. Now for the first time ever, an international team of astrophysicists simulated the three-dimensional (3D) physics of superluminous supernovae—which are about a hundred times more luminous than typical supernovae. They achieved this milestone using Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) CASTRO code and supercomputers at the National Energy Research Scientific Computing Center (NERSC). A paper describing their work was published in The Astrophysical Journal.

    Astronomers have found that these superluminous events occur when a magnetar—the rapidly spinning corpse of a massive star whose magnetic field is trillions of times stronger than Earth’s—is in the center of a young supernova. Radiation released by the magnetar is what amplifies the supernova’s luminosity. But to understand how this happens, researchers need multidimensional simulations.

    “To do 3D simulations of magnetar-powered superluminous supernovae, you need a lot of supercomputing power and the right code, one that captures the relevant microphysics,” said Ken Chen, lead author of the paper and an astrophysicist at the Academia Sinica Institute of Astronomy and Astrophysics (ASIAA), Taiwan.

    He adds that the numerical simulation required to capture the fluid instabilities of these superluminous events in 3D is very complex and requires a lot of computing power, which is why no one has done it before.

    The turbulent core of a magnetar bubble inside a superluminous supernova. Color coding shows densities. The magnetar is located at the center of this image and two bipolar outflows are emitted from it. The physical size of the outflow is about 10,000 km. (Image by Ken Chen)

    Fluid instabilities occur all around us. For instance, if you have a glass of water and put some dye on top, the interface between the two fluids becomes unstable and the heavier dye sinks to the bottom. Because two fluids are moving past each other, the physics of this instability cannot be captured in one dimension. You need a second or third dimension, perpendicular to the height, to see all of the instability. At the cosmic scale, fluid instabilities that lead to turbulence and mixing play a critical role in the formation of cosmic objects like galaxies, stars, and supernovae.

    “You need to capture physics over a range of scales, from very large to really tiny, in extremely high-resolution to accurately model astrophysical objects like superluminous supernovae. This poses a technical challenge for astrophysicists. We were able to overcome this issue with a new numerical scheme and several million supercomputing hours at NERSC,” said Chen.

    For this work, the researchers modeled a supernova remnant approximately 15 billion kilometers wide with a dense 10-kilometer-wide magnetar inside. In this system, the simulations show that hydrodynamic instabilities form on two scales in the remnant material. One instability is in the hot bubble energized by the magnetar and the other occurs when the young supernova’s forward shock plows up against ambient gas.

    “Both of these fluid instabilities cause more mixing than would normally occur in a typical supernova event, which has significant consequences for the light curves and spectra of superluminous supernovae. None of this would have been captured in a one-dimensional model,” said Chen.
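    The two length scales quoted above make the “range of scales” challenge concrete (a simple ratio, not a figure from the paper):

```latex
\frac{\text{remnant diameter}}{\text{magnetar diameter}} \approx \frac{1.5 \times 10^{10}~\text{km}}{10~\text{km}} = 1.5 \times 10^{9}
```

    The simulation therefore has to bridge roughly nine orders of magnitude in length, which is why the new numerical scheme and the several million supercomputing hours mentioned above were needed.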

    They also found that the magnetar can accelerate calcium and silicon ejected from the young supernova to velocities of 12,000 kilometers per second, which accounts for their broadened emission lines in spectral observations. Even the energy from weak magnetars can accelerate iron-group elements, which lie deep in the supernova remnant, to 5,000 to 7,000 kilometers per second, which explains why iron is observed early in core-collapse supernova events like SN 1987A. This has been a long-standing mystery in astrophysics.

    “We were the first ones to accurately model a superluminous supernova system in 3D because we were fortunate to have access to NERSC supercomputers,” said Chen. “This facility is an extremely convenient place to do cutting-edge science.”

    In addition to Chen, other authors on the paper are Stan Woosley (University of California, Santa Cruz) and Daniel Whalen (University of Portsmouth and University of Vienna). The team also received technical support from staff at NERSC and Berkeley Lab’s Center for Computational Sciences and Engineering (CCSE).

    Chen started using NERSC as a graduate student at the University of Minnesota in 2011, then as the IAU-Gruber Fellow in the Department of Astrophysics at UC Santa Cruz before taking positions at the National Astronomical Observatory of Japan, and his current role at ASIAA.

    ______________________________________________________

    NERSC at LBNL

    NERSC Cray Cori II supercomputer, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    NERSC Hopper Cray XE6 supercomputer, named after Grace Hopper, one of the first programmers of the Harvard Mark I computer

    NERSC Cray XC30 Edison supercomputer

    NERSC GPFS for Life Sciences


    The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

    NERSC PDSF computer cluster in 2003.

    PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

    Future:

    Cray Shasta Perlmutter SC18 AMD Epyc Nvidia pre-exascale supercomputer

    NERSC is a DOE Office of Science User Facility.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system, a Cray XE6 with a peak theoretical performance of 1.29 Petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made to be more user-friendly.

    Grace Hopper


    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.

     
  • richardmitnick 3:51 pm on April 24, 2020 Permalink | Reply
    Tags: "Research provides new insights into the evolution of stars", , , , , HEDP-High Energy Density Physics, , Supercomputing,   

    From University of Rochester: “Research provides new insights into the evolution of stars” 

    From University of Rochester

    April 24, 2020

    Lindsey Valich
    lvalich@ur.rochester.edu

    Scientists at the Laboratory for Laser Energetics studied how matter under high-pressure conditions—such as the conditions in the deep interiors of planets and stars—might emit or absorb radiation. The research enhances an understanding of high-energy-density science and could lead to more information about how stars evolve. (NASA photo)

    Atoms and molecules behave very differently at extreme temperatures and pressures. Although such extreme matter doesn’t exist naturally on the earth, it exists in abundance in the universe, especially in the deep interiors of planets and stars. Understanding how atoms react under high-pressure conditions—a field known as high-energy-density physics (HEDP)—gives scientists valuable insights into the fields of planetary science, astrophysics, fusion energy, and national security.

    One important question in the field of HED science is how matter under high-pressure conditions might emit or absorb radiation in ways that are different from our traditional understanding.

    In a paper published in Nature Communications, Suxing Hu, a distinguished scientist and group leader of the HEDP Theory Group at the University of Rochester’s Laboratory for Laser Energetics (LLE), together with colleagues from the LLE and France, has applied theory and calculations to predict the presence of two new phenomena—interspecies radiative transition (IRT) and the breakdown of the dipole selection rule—in the transport of radiation in atoms and molecules under HED conditions. The research enhances an understanding of HED science and could lead to more information about how stars and other astrophysical objects evolve in the universe.

    Suxing Hu is group leader of the High-Energy-Density Physics Theory Group at the Laboratory for Laser Energetics. (University of Rochester photo / Eugene Kowaluk)

    What is interspecies radiative transition (IRT)?

    Radiative transition is a physical process happening inside atoms and molecules, in which their electron or electrons can “jump” between different energy levels by either radiating (emitting) or absorbing a photon. Scientists find that, for matter in our everyday life, such radiative transitions mostly happen within each individual atom or molecule; the electron does its jumping between energy levels belonging to the single atom or molecule, and the jumping does not typically occur between different atoms and molecules. However, Hu and his colleagues predict that when atoms and molecules are placed under HED conditions, and are squeezed so tightly that they become very close to each other, radiative transitions can involve neighboring atoms and molecules.

    “Namely, the electrons can now jump from one atom’s energy levels to those of other neighboring atoms,” Hu says.

    What is the dipole selection rule?

    Electrons inside an atom have specific symmetries. For example, “s-wave electrons” are always spherically symmetric, meaning they look like a ball, with the nucleus located in the atomic center; “p-wave electrons,” on the other hand, look like dumbbells. D-waves and other electron states have more complicated shapes. Radiative transitions will mostly occur when the electron jumping follows the so-called dipole selection rule, in which the jumping electron changes its shape from s-wave to p-wave, from p-wave to d-wave, and so forth.
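    In standard textbook notation, the electric-dipole selection rule Hu describes can be written as follows (this is the general rule for one-electron radiative transitions, not a formula taken from the paper):

```latex
\Delta \ell = \pm 1, \qquad \Delta m_{\ell} = 0,\ \pm 1
```

    A jump that leaves the orbital shape unchanged (s to s or p to p, that is, \(\Delta\ell = 0\)) is therefore dipole-forbidden under ordinary conditions, and it is this rule that the HED calculations described below show breaking down.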

    Under normal, non-extreme conditions, Hu says, “one hardly sees electrons jumping among the same shapes, from s-wave to s-wave and from p-wave to p-wave, by emitting or absorbing photons.”

    However, as Hu and his colleagues found, when materials are squeezed so tightly into the exotic HED state, the dipole selection rule often breaks down.

    “Under such extreme conditions found in the center of stars and classes of laboratory fusion experiments, non-dipole x-ray emissions and absorptions can occur, which was never imagined before,” Hu says.

    Using supercomputers to conduct calculations

    The researchers used supercomputers at both the University of Rochester’s Center for Integrated Research Computing (CIRC) and at the LLE to conduct their calculations.

    University of Rochester’s Center for Integrated Research Computing (CIRC)

    U Rochester Laboratory for Laser Energetics

    “Thanks to the tremendous advances in high-energy laser and pulsed-power technologies, ‘bringing stars to the earth’ has become reality for the past decade or two,” Hu says.

    Hu and his colleagues performed their research using the density-functional theory (DFT) calculation, which offers a quantum mechanical description of the bonds between atoms and molecules in complex systems. The DFT method was first described in the 1960s, and was the subject of the 1998 Nobel Prize in Chemistry. DFT calculations have been continually improved since. One such improvement to enable DFT calculations to involve core electrons was made by Valentin Karasev, a scientist at the LLE and a co-author of the paper.
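    For reference, the equations at the heart of a DFT calculation are the Kohn-Sham equations, shown here in their standard form in atomic units (textbook background rather than the specific formulation used in the paper):

```latex
\left[ -\tfrac{1}{2}\nabla^{2} + v_{\mathrm{ext}}(\mathbf{r})
  + \int \frac{n(\mathbf{r}')}{\lvert \mathbf{r}-\mathbf{r}' \rvert}\, d^{3}r'
  + v_{\mathrm{xc}}[n](\mathbf{r}) \right] \psi_{i}(\mathbf{r})
  = \varepsilon_{i}\, \psi_{i}(\mathbf{r}),
\qquad
n(\mathbf{r}) = \sum_{i} \lvert \psi_{i}(\mathbf{r}) \rvert^{2}
```

    Solving these equations self-consistently yields the electron density and energy levels from which emission and absorption properties can then be computed.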

    The results indicate there are new emission/absorption lines appearing in the x-ray spectra of these extreme matter systems, which arise from the previously unknown channels of IRT and the breakdown of the dipole selection rule.

    Hu and Philip Nilson, a senior scientist at the LLE and coauthor of the paper, are currently planning future experiments that will involve testing these new theoretical predictions at the OMEGA laser facility at the LLE.

    U Rochester Omega Laser facility

    The facility lets users create exotic HED conditions in nanosecond timescales, allowing scientists to probe the unique behaviors of matter at extreme conditions.

    “If proved to be true by experiments, these new discoveries will profoundly change how radiation transport is currently treated in exotic HED materials,” Hu says. “These DFT-predicted new emission and absorption channels have never been considered so far in textbooks.”

    This research is based upon work supported by the United States Department of Energy (DOE) National Nuclear Security Administration and the New York State Energy Research and Development Authority. The work is partially supported by the National Science Foundation.

    The LLE was established at the University in 1970 and is the largest DOE university-based research program in the nation. As a nationally funded facility, supported by the National Nuclear Security Administration as part of its Stockpile Stewardship Program, the LLE conducts implosion and other experiments to explore fusion as a future source of energy, to develop new laser and materials technologies, and to conduct research and develop technology related to HED phenomena.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Rochester

    The University of Rochester is one of the country’s top-tier research universities. Our 158 buildings house more than 200 academic majors, more than 2,000 faculty and instructional staff, and some 10,500 students—approximately half of whom are women.

    Learning at the University of Rochester is also on a very personal scale. Rochester remains one of the smallest and most collegiate among top research universities, with smaller classes, a low 10:1 student to teacher ratio, and increased interactions with faculty.

     
  • richardmitnick 12:38 pm on April 22, 2020 Permalink | Reply
    Tags: Fujitsu PRIMEHPC FX1000 supercomputer, Supercomputing

    From insideHPC: “Fujitsu Supercomputer to Power Aerospace Research at JAXA in Japan” 

    From insideHPC

    April 22, 2020

    Today Fujitsu announced that it has received an order for a supercomputer system from the Japan Aerospace Exploration Agency (JAXA).


    PRIMEHPC FX1000

    The system will contribute to improving the international competitiveness of aerospace research, as it will be widely used as the basis for JAXA’s high-performance computing. It is also expected to be used for various applications, including a large-scale data analysis platform for satellite observation and an AI calculation processing platform for joint research.


    “Scheduled to start operation in October 2020, the new computing system for large-scale numerical simulation, composed of Fujitsu Supercomputer PRIMEHPC FX1000, is expected to have a theoretical computational performance of 19.4 petaflops, which is approximately 5.5 times that of the current system. At the same time, Fujitsu will implement 465 nodes of x86 servers Fujitsu Server PRIMERGY series for general-purpose systems that can handle diverse computing needs.”

    As it conducts research in space development, aviation technology, and related basic technologies, JAXA has used supercomputer systems to develop numerical simulation technologies such as fluid dynamics and structural dynamics for the study of aircraft and rockets. In recent years, in addition to conventional numerical simulations, the system has been expanding its role in the HPC field. For example, it has processed Earth-observation data collected by satellites for use by researchers and the general public, and it has been used for AI calculations, including deep learning.

    JAXA currently operates the supercomputer system JSS2, comprising SORA-MA, which consists of 3,240 nodes of the Fujitsu Supercomputer PRIMEHPC FX100, and J-SPACE, which stores and manages various data using large-capacity storage media.

    Features of the New Supercomputer System



    Fujitsu will implement a computing system for large-scale numerical simulations. The system will consist of 5,760 nodes of the PRIMEHPC FX1000, which utilizes the technology of the supercomputer Fugaku jointly developed by Fujitsu and RIKEN.


    It is expected to deliver 19.4 petaflops in the double-precision (64-bit) arithmetic usually used in simulations, approximately 5.5 times the theoretical computing performance of the current system. In addition, a total of 465 nodes of x86 Fujitsu Server PRIMERGY servers equipped with large memory capacity and GPUs will be deployed to form a general-purpose system capable of handling a variety of computing needs. With a file system capacity of approximately 50 petabytes, including a high-speed-access storage system of approximately 10 petabytes, the new system will offer both high performance and ease of use. The adoption of the PRIMEHPC FX1000, equipped with the highly versatile Arm-architecture A64FX CPU, will enable a wide range of software to run on the system and contribute to the broader use of JAXA’s research results.
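    The quoted 19.4 petaflops is consistent with simple per-node arithmetic. The figures below assume the 2.2 GHz A64FX variant with 48 compute cores and two 512-bit fused multiply-add pipelines per core; these are published processor characteristics, not numbers taken from the article:

```latex
48~\text{cores} \times \underbrace{2 \times \tfrac{512}{64} \times 2}_{32~\text{flops/cycle}} \times 2.2~\text{GHz} \approx 3.38~\text{TF per node},
\qquad
5760~\text{nodes} \times 3.38~\text{TF} \approx 19.5~\text{PF}
```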

    Future Plans

    While enhancing the global advantage of JAXA’s aerospace research in the conventional numerical simulation field, the system, as the foundation of the Agency’s HPC infrastructure, will be used as an AI computational processing platform for joint research and shared use. The system will also be applied to a large-scale data analysis platform for aggregating and analyzing satellite observation data that had previously been stored and managed by different divisions at JAXA. Fujitsu will support JAXA in making its vision a reality, drawing on experience gained through supplying supercomputer systems to the Agency since the 1970s. By offering the PRIMEHPC FX1000 worldwide, the company will contribute to solving social issues, accelerating leading-edge research, and bolstering the competitive edge of corporations.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 12:33 pm on March 24, 2020 Permalink | Reply
    Tags: Covid-19 High Performance Computing Consortium, Supercomputing

    From MIT News: “MIT joins White House supercomputing effort to speed up search for Covid-19 solutions” 

    MIT News

    From MIT News

    March 23, 2020
    Jennifer Chu

    MIT joins a consortium of supercomputing resources to help speed the search for Covid-19 solutions. Image: CDC, MIT News.

    The White House has announced the launch of the Covid-19 High Performance Computing Consortium, a collaboration among various industry, government, and academic institutions which will aim to make their supercomputing resources available to the wider research community, in an effort to speed up the search for solutions to the evolving Covid-19 pandemic.

    MIT has joined the consortium, which is led by the U.S. Department of Energy, the National Science Foundation, and NASA.

    MIT News spoke with Christopher Hill, principal research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, who is serving on the new consortium’s steering committee, about how MIT’s computing power will aid in the fight against Covid-19.

    Q: How did MIT become a part of this consortium?

    A: IBM, which has longstanding computing relationships with both the government and MIT, approached the Institute late last week about joining. The Department of Energy owns IBM’s Summit supercomputer, located at Oak Ridge National Laboratory, which was already working on finding pharmaceutical compounds that might be effective against this coronavirus. In addition to its close working relationship with MIT, IBM also had donated the Satori supercomputer as part of the launch of the MIT Schwarzman College of Computing. We obviously want to do everything we can to help combat this pandemic, so we jumped at the chance to be part of a larger effort.

    Q: What is MIT bringing to the consortium?

    A: We’re primarily bringing two systems to the effort: Satori and Supercloud, which is an unclassified system run by Lincoln Laboratory. Both systems have very large numbers of the computing units — known as GPUs — that enable the machines to process information far more quickly, and they also have extra large memory. That makes the systems slightly different from other machines in the consortium in ways that may be helpful for some types of problems.

    For example, MIT’s two systems seem to be especially helpful at examining images from cryo-electron microscopy, which entails use of an electron microscope on materials at ultralow temperatures. Ultralow temperatures slow the motion of atoms, making the images clearer. In addition to the hardware, MIT faculty and staff have already expressed interest in assisting outside researchers who are using MIT equipment.

    Q: How will MIT operate as part of the consortium?

    A: The consortium will receive proposals through a single portal being run in conjunction with the NSF. A steering committee will decide which proposals are accepted and where to route them. The steering committee will be relying on guidance from a larger technical review committee, which will include the steering committee members and additional experts. Both committees are made of researchers from the participating institutions. I will serve on both committees for MIT, and we’ll be appointing a second person to serve on the technical review committee.

    Four individuals at MIT — Ben Forget, Nick Roy, Jeremy Kepner (Lincoln Lab), and myself — will oversee the work at the Institute. The goal of the consortium is to focus on projects where computing is likely to produce relevant advances in one week to three months, though some projects, like those related to vaccines, may take longer.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 1:11 pm on March 12, 2020 Permalink | Reply
    Tags: "NEC JUSTUS 2 Supercomputer Deployed at University of Ulm", , , , Supercomputing   

    From insideHPC: “NEC JUSTUS 2 Supercomputer Deployed at University of Ulm” 

    From insideHPC

    March 11, 2020

    NEC has deployed a new supercomputer at the University of Ulm in Germany.

    With a peak performance of 2 petaflops, the 4.4 million euro JUSTUS 2 system will enable complex simulations in chemistry and quantum physics.

    “JUSTUS 2 enables highly complex computer simulations at the molecular and atomic level, for example from chemistry and quantum science, as well as complex data analysis. And this with significantly higher energy efficiency than its predecessor,” said Ulrich Steinbach. “The new high-performance computer will be available to researchers from all over Baden-Württemberg and is therefore – particularly with regard to battery research – a very sensible investment in the future of our science and business location.”

    JUSTUS 2 is one of the most powerful supercomputers in the world. With 33,696 CPU cores, the system is expected to deliver a five-fold increase in performance compared to its predecessor.

    “The combination of HPC simulation and data evaluation with methods of artificial intelligence brings a new quality in the use of high-performance computers – and NEC is at the forefront of this development,” added Yuichi Kojima, managing director of NEC Deutschland GmbH.

    Weighing 13 tons in total, JUSTUS 2 has 702 nodes with two processors each. Named after the German chemist Justus von Liebig, JUSTUS 2 was funded by the German Research Foundation (DFG), the state of Baden-Württemberg and the universities of Ulm, Stuttgart and Freiburg.
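    The headline core count squares with the node count given above (simple division; the implied per-socket core count is an inference, not stated in the article):

```latex
702~\text{nodes} \times 2~\text{CPUs} = 1404~\text{CPUs},
\qquad
\frac{33{,}696~\text{cores}}{1404~\text{CPUs}} = 24~\text{cores per CPU}
```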

    “High-performance computing is essential, especially at a science and technology-oriented university like Ulm,” said computer science professor and university president Professor Michael Weber. “Therefore, JUSTUS 2 is a significant investment in the future of our strategic development areas and beyond.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 12:41 pm on February 12, 2020 Permalink | Reply
    Tags: Green AI Hackathon (shrinking the carbon footprint of artificial intelligence models), IBM Satori (MIT’s new supercomputer), Supercomputing

    From MIT News: “Brainstorming energy-saving hacks on Satori, MIT’s new supercomputer” 

    MIT News

    From MIT News

    February 11, 2020
    Kim Martineau | MIT Quest for Intelligence

    IBM Satori Supercomputer

    Three-day hackathon explores methods for making artificial intelligence faster and more sustainable.

    Mohammad Haft-Javaherian planned to spend an hour at the Green AI Hackathon — just long enough to get acquainted with MIT’s new supercomputer, Satori. Three days later, he walked away with $1,000 for his winning strategy to shrink the carbon footprint of artificial intelligence models trained to detect heart disease.

    “I never thought about the kilowatt-hours I was using,” he says. “But this hackathon gave me a chance to look at my carbon footprint and find ways to trade a small amount of model accuracy for big energy savings.”

    Haft-Javaherian was among six teams to earn prizes at a hackathon co-sponsored by the MIT Research Computing Project and MIT-IBM Watson AI Lab Jan. 28-30. The event was meant to familiarize students with Satori, the computing cluster IBM donated to MIT last year, and to inspire new techniques for building energy-efficient AI models that put less planet-warming carbon dioxide into the air.

    The event was also a celebration of Satori’s green-computing credentials. With an architecture designed to minimize the transfer of data, among other energy-saving features, Satori recently earned fourth place on the Green500 list of supercomputers. Its location gives it additional credibility: It sits on a remediated brownfield site in Holyoke, Massachusetts, now the Massachusetts Green High Performance Computing Center, which runs largely on low-carbon hydro, wind and nuclear power.

    A postdoc at MIT and Harvard Medical School, Haft-Javaherian came to the hackathon to learn more about Satori. He stayed for the challenge of trying to cut the energy intensity of his own work, focused on developing AI methods to screen the coronary arteries for disease. A new imaging method, optical coherence tomography, has given cardiologists a new tool for visualizing defects in the artery walls that can slow the flow of oxygenated blood to the heart. But even the experts can miss subtle patterns that computers excel at detecting.

    At the hackathon, Haft-Javaherian ran a test on his model and saw that he could cut its energy use eight-fold by reducing the time Satori’s graphics processors sat idle. He also experimented with adjusting the model’s number of layers and features, trading varying degrees of accuracy for lower energy use.
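    A lightweight way to spot the kind of idle time described above is simply to poll a GPU’s utilization and power draw while a job runs. The sketch below uses NVIDIA’s NVML library; it is an illustrative example assuming a system with the NVML headers and driver available, not the tooling the hackathon teams actually used.

```c
#include <stdio.h>
#include <unistd.h>
#include <nvml.h>   /* NVIDIA Management Library; link with -lnvidia-ml */

/* Sample GPU 0 once a second for a minute and print utilization and power,
   so stretches where the device sits idle during training stand out. */
int main(void)
{
    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "failed to initialize NVML\n");
        return 1;
    }

    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) {
        nvmlShutdown();
        return 1;
    }

    for (int t = 0; t < 60; ++t) {
        nvmlUtilization_t util;
        unsigned int power_mw = 0;
        nvmlDeviceGetUtilizationRates(dev, &util); /* percent of time kernels ran */
        nvmlDeviceGetPowerUsage(dev, &power_mw);   /* current draw in milliwatts  */
        printf("t=%3d s  gpu=%3u%%  mem=%3u%%  power=%.1f W\n",
               t, util.gpu, util.memory, power_mw / 1000.0);
        sleep(1);
    }

    nvmlShutdown();
    return 0;
}
```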

    A second team, Alex Andonian and Camilo Fosco, also won $1,000 by showing they could train a classification model nearly 10 times faster by optimizing their code and losing a small bit of accuracy. Graduate students in the Department of Electrical Engineering and Computer Science (EECS), Andonian and Fosco are currently training a classifier to tell legitimate videos from AI-manipulated fakes, to compete in Facebook’s Deepfake Detection Challenge. Facebook launched the contest last fall to crowdsource ideas for stopping the spread of misinformation on its platform ahead of the 2020 presidential election.

    If a technical solution to deepfakes is found, it will need to run on millions of machines at once, says Andonian. That makes energy efficiency key. “Every optimization we can find to train and run more efficient models will make a huge difference,” he says.

    To speed up the training process, they tried streamlining their code and lowering the resolution of their 100,000-video training set by eliminating some frames. They didn’t expect a solution in three days, but Satori’s size worked in their favor. “We were able to run 10 to 20 experiments at a time, which let us iterate on potential ideas and get results quickly,” says Andonian.

    As AI continues to improve at tasks like reading medical scans and interpreting video, models have grown bigger and more calculation-intensive, and thus, energy intensive. By one estimate, training a large language-processing model produces nearly as much carbon dioxide as the cradle-to-grave emissions from five American cars. The footprint of the typical model is modest by comparison, but as AI applications proliferate its environmental impact is growing.

    One way to green AI, and tame the exponential growth in demand for training AI, is to build smaller models. That’s the approach that a third hackathon competitor, EECS graduate student Jonathan Frankle, took. Frankle is looking for signals early in the training process that point to subnetworks within the larger, fully-trained network that can do the same job. The idea builds on his award-winning Lottery Ticket Hypothesis paper from last year that found a neural network could perform with 90 percent fewer connections if the right subnetwork was found early in training.

    The hackathon competitors were judged by John Cohn, chief scientist at the MIT-IBM Watson AI Lab, Christopher Hill, director of MIT’s Research Computing Project, and Lauren Milechin, a research software engineer at MIT.

    The judges recognized four other teams: Department of Earth, Atmospheric and Planetary Sciences (EAPS) graduate students Ali Ramadhan, Suyash Bire, and James Schloss, for adapting the programming language Julia for Satori; MIT Lincoln Laboratory postdoc Andrew Kirby, for adapting code he wrote as a graduate student to Satori using a library designed for easy programming of computing architectures; and Department of Brain and Cognitive Sciences graduate students Jenelle Feather and Kelsey Allen, for applying a technique that drastically simplifies models by cutting their number of parameters.

    IBM developers were on hand to answer questions and gather feedback. “We pushed the system — in a good way,” says Cohn. “In the end, we improved the machine, the documentation, and the tools around it.”

    Going forward, Satori will be joined in Holyoke by TX-Gaia, Lincoln Laboratory’s new supercomputer. Together, they will provide feedback on the energy use of their workloads. “We want to raise awareness and encourage users to find innovative ways to green-up all of their computing,” says Hill.

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 3:06 pm on February 4, 2020 Permalink | Reply
    Tags: Fujitsu PRIMEHPC FX1000 supercomputer at Nagoya University, Supercomputing

    From insideHPC: “Fujitsu to Deploy Arm-based Supercomputer at Nagoya University” 

    From insideHPC

    February 4, 2020
    Rich Brueckner

    Today Fujitsu announced that it has received an order for an Arm-based supercomputer system from Nagoya University’s Information Technology Center. The system is scheduled to start operation in July 2020.

    Fujitsu PRIMEHPC FX1000

    “For the first time in the world, this system will adopt 2,304 nodes of the Fujitsu Supercomputer PRIMEHPC FX1000, which utilizes the technology of the supercomputer Fugaku jointly developed with RIKEN. In addition, a cluster system connecting 221 nodes of the latest x86 Fujitsu Server PRIMERGY CX2570 M5 servers in parallel, as well as storage systems, is connected by a high-speed interconnect. The sum of the theoretical computational performance of the entire system is 15.88 petaflops, making it one of the highest-performing systems in Japan.”

    As a national joint usage/research center, the Information Technology Center of Nagoya University provides computing resources for academic use to researchers and private companies nationwide. It currently operates a supercomputer system consisting of the Fujitsu Supercomputer PRIMEHPC FX100 and other components. The Center is now planning to renew the system in order to meet the large-scale computing demand from researchers in joint usage nationwide, as well as the new computing requirements for supercomputers represented by data science. Fujitsu won the order for this system in recognition of a proposal that concentrates the technical capabilities of Fujitsu and Fujitsu Laboratories Ltd.

    With the new system, Nagoya University’s Information Technology Center will contribute to various research and development activities. These include conventional numerical simulations to unravel the mechanisms of typhoons and to design new drugs. Moreover, the new system will support the development of medical technology for diagnosis and treatment, while applying AI to the development of autonomous-driving technology.

    Fujitsu will continue to support the activities of the Center with the technology and experience nurtured through the development and delivery of world-class supercomputers. By providing the PRIMEHPC FX1000 worldwide, the company will also contribute to solving social issues, accelerating leading-edge research, and strengthening corporate advantages.

    “In recent years, the digitization of university education and research activities has increased the demand for computing,” said Kensaku Mori, Director, The Information Technology Center of Nagoya University. “In addition to such areas as extreme weather including super typhoons, earthquakes, and tsunamis, which are closely related to the safety and security of people’s lives, chemical fields such as molecular structure and drug discovery, and simulations in basic sciences such as space and elementary particles, there is an ever-increasing demand for computing in the fields of medicine and mobility, including artificial intelligence and machine learning. Also important are the data consumed and generated in computing, the networks that connect them, and the visualization of knowledge discovery from computing and data. Equipped with essential functions for such digital science in universities, the new supercomputer will be offered not only to Nagoya University but also to universities and research institutes nationwide, contributing to the further development of academic research in Japan.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 5:52 pm on January 30, 2020 Permalink | Reply
    Tags: ALCF will deploy the new Cray ClusterStor E1000 as its parallel storage solution., ALCF’s two new storage systems which it has named "Grand" (150 PB of center-wide storage) and "Eagle" (50 PB community file system) are using the Cray ClusterStor E1000 system., Supercomputing, This is in preparation for the pending Aurora exascale supercomputer.

    From insideHPC: “Argonne to Deploy Cray ClusterStor E1000 Storage System for Exascale” 

    From insideHPC

    January 30, 2020
    Rich Brueckner

    Cray ClusterStor E1000

    Today HPE announced that the Argonne Leadership Computing Facility (ALCF) will deploy the new Cray ClusterStor E1000 as its parallel storage solution.

    The new collaboration supports ALCF’s scientific research in areas such as earthquake seismic activity, aerospace turbulence and shock-waves, physical genomics and more.

    The latest deployment will expand storage capacity for ALCF workloads that require converged modeling, simulation, AI, and analytics, in preparation for the pending Aurora exascale supercomputer.

    Depiction of ANL ALCF Cray Intel SC18 Shasta Aurora exascale supercomputer

    Powered by HPE and Intel, Aurora is a Cray Shasta system planned for delivery in 2021.

    The Cray ClusterStor E1000 system utilizes purpose-built software and hardware features to meet high-performance storage requirements of any size with significantly fewer drives. Designed to support the Exascale Era, which is characterized by the explosion of data and converged workloads, the Cray ClusterStor E1000 will power ALCF’s future Aurora supercomputer to target a multitude of data-intensive workloads required to make breakthrough discoveries at unprecedented speed.

    “ALCF is leveraging Exascale Era technologies by deploying infrastructure required for converged workloads in modeling, simulation, AI and analytics,” said Peter Ungaro, senior vice president and general manager, HPC and AI, at HPE. “Our recent introduction of the Cray ClusterStor E1000 is delivering ALCF unmatched scalability and performance to meet next-generation HPC storage needs to support emerging, data-intensive workloads. We look forward to continuing our collaboration with ALCF and empowering its research community to unlock new value.”

    ALCF’s two new storage systems, which it has named “Grand” and “Eagle,” are using the Cray ClusterStor E1000 system to gain a completely new, cost-effective high-performance computing (HPC) storage solution to effectively and efficiently manage growing converged workloads that today’s offerings cannot support.

    “When Grand launches, it will benefit ALCF’s legacy petascale machines, providing increased capacity for the Theta compute system and enabling new levels of performance for not just traditional checkpoint-restart workloads, but also for complex workflows and metadata-intensive work,” said Mark Fahey, director of operations, ALCF.

    “Eagle will help support the ever-increasing importance of data in the day-to-day activities of science,” said Michael E. Papka, director, ALCF. “By leveraging our experience with our current data-sharing system, Petrel, this new storage will help eliminate barriers to productivity and improve collaborations throughout the research community.”

    The two new systems will gain a total of 200 petabytes (PB) of storage capacity, and through the Cray ClusterStor E1000’s intelligent software and hardware designs, will more accurately align data flows with target workloads. ALCF’s Grand and Eagle systems will help researchers accelerate a range of scientific discoveries across disciplines, and are each assigned to address the following:

    Computational capacity – ALCF’s “Grand” provides 150 PB of center-wide storage and new levels of input/output (I/O) performance to support massive computational needs for its users.
    Simplified data-sharing – ALCF’s “Eagle” provides a 50 PB community file system to make data-sharing easier than ever among ALCF users, their collaborators and with third parties.

    ALCF plans to deliver its Grand and Eagle storage systems in early 2020. The systems will initially connect to existing ALCF supercomputers powered by HPE HPC systems: Theta, based on the Cray XC40-AC, and Cooley, based on the Cray CS-300. ALCF’s Grand, which is capable of 1 terabyte per second (TB/s) bandwidth, will be optimized to support converged simulation science and data-intensive workloads once the Aurora exascale supercomputer is operational.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     