Tagged: insideHPC

  • richardmitnick 3:36 pm on January 24, 2020 Permalink | Reply
    Tags: insideHPC, Microway supercomputer being installed, The new cluster from Microway affords the university five times the compute performance its researchers enjoyed previously with over 85% more total memory and over four times the aggregate memory bandwidth, The UMass Dartmouth cluster reflects a hybrid design to appeal to a wide array of the campus’ workloads

    From insideHPC: “UMass Dartmouth Speeds Research with Hybrid Supercomputer from Microway” 

    From insideHPC

    Today Microway announced that research activities at the University of Massachusetts Dartmouth have accelerated since the installation of a new supercomputing cluster.

    “UMass Dartmouth’s powerful new cluster from Microway affords the university five times the compute performance its researchers enjoyed previously, with over 85% more total memory and over four times the aggregate memory bandwidth. It includes a heterogeneous system architecture featuring a wide array of computational engines.”


    Over 50 nodes include Intel Xeon Scalable Processors, DDR4 memory, SSDs and Mellanox ConnectX-5 EDR 100Gb InfiniBand. A subset of systems also feature NVIDIA V100 GPU Accelerators for GPU computing applications.

    Equally important is a second subset of IBM Power Systems AC922 compute nodes, built around POWER9 CPUs with 2nd-generation NVLink. These systems are similar to those used in Summit and Sierra, the world’s #1 and #2 most powerful supercomputers at ORNL and LLNL. The advanced NVIDIA NVLink interfaces built into the POWER9 CPUs and NVIDIA GPUs ensure a broad pipeline between CPU and GPU for data-intensive workloads.


    For more information about the UMass Dartmouth Center for Scientific Computing and Visualization Research please navigate to: http://cscvr1.umassd.edu/

    This new cluster purchase was funded through an Office of Naval Research (ONR) DURIP grant award.

    Serving Users Across a Research Campus

    The deployment has helped the UMass Dartmouth campus serve, attract, and retain faculty, undergraduate students, and those seeking advanced degrees. The Center for Scientific Computing and Visualization Research (CSCVR) administers the new compute resource.

    With its new cluster, the CSCVR is undertaking cutting-edge work. Mathematics researchers are developing new numerical algorithms on the new deployment. A primary focus is astrophysics, in particular the study of black holes and stars.

    “Our engineering researchers,” says Gaurav Khanna, Co-Director of UMass Dartmouth’s Center for Scientific Computing & Visualization Research, “are very actively focused on computational engineering, and there are people in mechanical engineering who look at fluid and solid object interactions.” This type of research is known as two-phase fluid flow. Practical applications can take the form of modelling windmills and coming up with a better design for the materials on the windmill such as the coatings on the blade, as well as improved designs for the blades themselves.

    This team is also looking at wave energy converters in ocean buoys. “As buoys bob up and down,” Khanna explains, “you can use that motion to generate electricity. You can model that into the computation of that environment and then try to optimize the parameters needed to have the most efficient design for that type of buoy.”

    A final area of interest to this team is ocean weather systems. Here, UMass Dartmouth researchers are building large models to predict regional currents in the ocean, weather patterns, and weather changes.


    A Hybrid Architecture for a Broad Array of Workloads

    The UMass Dartmouth cluster reflects a hybrid design to appeal to a wide array of the campus’ workloads.

    The deployment of the hybrid architecture was critical to meeting users’ needs. It also allows researchers on the UMass Dartmouth campus to test workloads locally before applying for time on the larger national laboratory systems at ORNL.

    Microway was one of the few vendors able to deliver a unified cluster with a mix of x86 and POWER9 nodes, complete software integration across both kinds of nodes, and a single point of sale and warranty coverage.

    Microway was selected as the vendor for the new cluster through an open bidding process. “They not only competed well on the price,” says Khanna, “but they were also the only company that could deliver the kind of heterogeneous system we wanted, with a mixture of architectures.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 1:38 pm on January 24, 2020 Permalink | Reply
    Tags: "DOE Announces $625 Million for New Quantum Centers", insideHPC,   

    From insideHPC: “DOE Announces $625 Million for New Quantum Centers” 

    From insideHPC

    The U.S. Department of Energy (DOE) has announced plans to spend up to $625 million over the next five years to establish two to five multidisciplinary Quantum Information Science (QIS) Research Centers in support of the National Quantum Initiative.

    https://enquanted.wordpress.com/2018/07/13/americas-national-quantum-initiative/

    The National Quantum Initiative Act, signed into law by President Trump in December 2018, established a coordinated multiagency program to support research and training in QIS, encompassing activities at the National Institute of Standards and Technology, the Department of Energy, and the National Science Foundation. The Act called for the creation of a total of between four and ten competitively awarded QIS research centers.


    “America continues to lead the world in QIS and emerging technologies because of our incredible innovation ecosystem. The National Quantum Initiative launched by the President, including these new research centers, leverages the combined strengths of academia, industry, and DOE laboratories to drive QIS breakthroughs,” said Chief Technology Officer of the United States Michael Kratsios.

    To further advance that effort, DOE’s Argonne National Laboratory announced that it had launched a new, 52-mile testbed for quantum communications experiments, which will enable scientists to address challenges in operating a quantum network and help lay the foundation for a quantum internet.

    The Department’s planned investment in QIS Centers represents a long-term, large-scale commitment of U.S. scientific and technological resources to a highly competitive and promising new area of investigation with enormous potential to transform science and technology.


    The aim of the Centers, coupled with DOE’s core research portfolio, is to create the ecosystem needed to foster and facilitate advancement of QIS, with major anticipated benefits for national security, economic competitiveness, and America’s continued leadership in science.

    Each QIS Center is expected to incorporate a collaborative research team spanning multiple scientific and engineering disciplines and multiple institutions. Centers will draw on the resources of the full range of research communities stewarded by DOE’s Office of Science and integrate elements from a wide range of technical fields.

    Applications are expected to be in the form of multi-institutional proposals submitted by a single lead institution. Eligible lead and partner institutions include universities, nonprofit research institutions, industry, DOE national laboratories, other U.S. government laboratories, and federal agencies.

    Total planned funding will be up to $625 million for awards beginning in Fiscal Year 2020 and lasting up to five years, with outyear funding contingent on congressional appropriations. Selections will be made based on peer review.

    See the full article here.

     
  • richardmitnick 11:53 am on January 1, 2020 Permalink | Reply
    Tags: "Theta and the Future of Accelerator Programming at Argonne", , , insideHPC,   

    From Argonne ALCF via insideHPC: “Theta and the Future of Accelerator Programming at Argonne”

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    From insideHPC

    January 1, 2020
    Rich Brueckner


    In this video from the Argonne Training Program on Extreme-Scale Computing 2019, Scott Parker from Argonne presents: Theta and the Future of Accelerator Programming.

    ANL ALCF Theta Cray XC40 supercomputer

    Designed in collaboration with Intel and Cray, Theta is a 6.92-petaflops (Linpack) supercomputer based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta enables researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.

    “Theta’s unique architectural features represent a new and exciting era in simulation science capabilities,” said ALCF Director of Science Katherine Riley. “These same capabilities will also support data-driven and machine-learning problems, which are increasingly becoming significant drivers of large-scale scientific computing.”

    Scott Parker is the Lead for Performance Tools and Programming Models at the ALCF. He received his B.S. in Mechanical Engineering from Lehigh University, and a Ph.D. in Mechanical Engineering from the University of Illinois at Urbana-Champaign. Prior to joining Argonne, he worked at the National Center for Supercomputing Applications, where he focused on high-performance computing and scientific applications. At Argonne since 2008, he works on performance tools, performance optimization, and spectral element computational fluid dynamics solvers.

    See the full article here.

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 11:46 am on December 19, 2019 Permalink | Reply
    Tags: insideHPC, Mississippi State University, Orion Dell-EMC supercomputer

    From insideHPC: “Orion Supercomputer comes to Mississippi State University” 

    From insideHPC

    December 18, 2019
    Rich Brueckner

    Orion Dell EMC supercomputer

    Today Mississippi State University and NOAA celebrated one of the country’s most powerful supercomputers with a ribbon-cutting ceremony for the Orion supercomputer, the fourth-fastest computer system in U.S. academia. Funded by NOAA and managed by MSU’s High Performance Computing Collaboratory, the Orion system is powering research and development advancements in weather and climate modeling, autonomous systems, materials, cybersecurity, computational modeling and more.


    With 3.66 petaflops of performance on the Linpack benchmark, Orion is the 60th most powerful supercomputer in the world according to Top500.org, which ranks the world’s most powerful non-distributed computer systems. It is housed in the Malcolm A. Portera High Performance Computing Center, located in MSU’s Thad Cochran Research, Technology and Economic Development Park.

    “Mississippi State has a long history of using advanced computing power to drive innovative research, making an impact in Mississippi and around the world,” said MSU President Mark E. Keenum. “We also have had many successful collaborations with NOAA in support of the agency’s vital work. I am grateful that NOAA has partnered with us to help meet its computing needs, and I look forward to seeing the many scientific advancements that will take place because of this world-class supercomputer.”

    NOAA has provided MSU with $22 million in grants to purchase, install and run Orion. The Dell-EMC system consists of 28 computer cabinets, each cabinet approximately the size of an industrial refrigerator, 72,000 processing cores and 350 terabytes of Random Access Memory.
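
    As a rough back-of-the-envelope figure derived from the numbers above (not an official specification), that works out to about 5 GB of memory per core:

        \[ 350\ \mathrm{TB} \div 72{,}000\ \text{cores} \approx 4.9\ \mathrm{GB\ per\ core} \]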

    “We’re excited to support this powerhouse of computing capacity at Mississippi State,” said Craig McLean, NOAA assistant administrator for Oceanic and Atmospheric Research. “Orion joins NOAA’s network of computer centers around the country, and boosts NOAA’s ability to conduct innovative research to advance weather, climate and ocean forecasting products vital to protecting American lives and property.”

    MSU’s partnerships with NOAA include the university’s leadership of the Northern Gulf Institute, a consortium of six academic institutions that works with NOAA to address national strategic research and education goals in the Gulf of Mexico region. Additionally, MSU’s High Performance Computing Collaboratory provides the computing infrastructure for NOAA’s Exploration Command Center at the NASA Stennis Space Center. The state-of-the-art communications hub enables research scientists at sea and colleagues on shore to communicate in real time and view live video streams of undersea life.

    “NOAA has been an incredible partner in research with MSU, and this is the latest in a clear demonstration of the benefits of this partnership for both the university and the agency,” said MSU Provost and Executive Vice President David Shaw.

    Orion supports research operations for several MSU centers and institutes, such as the Center for Computational Sciences, Center for Cyber Innovation, Geosystems Research Institute, Center for Advanced Vehicular Systems, the Institute for Genomics, Biocomputing and Biotechnology, the Northern Gulf Institute and the FAA Alliance for System Safety of UAS through Research Excellence (ASSURE). These centers use high-performance computing to model and simulate real-world phenomena, generating insights that would be impossible or prohibitively expensive to obtain otherwise.

    “With our faculty expertise and our computing capabilities, MSU is able to remain at the forefront of cutting-edge research areas,” said MSU Interim Vice President for Research and Economic Development Julie Jordan. “The Orion supercomputer is a great asset for the state of Mississippi as we work with state, federal and industry partners to solve complex problems and spur new innovations.”

    See the full article here.

     
  • richardmitnick 5:13 pm on December 6, 2019 Permalink | Reply
    Tags: "IBM Powers AiMos Supercomputer at Rensselaer Polytechnic Institute", , , insideHPC,   

    From insideHPC: “IBM Powers AiMOS Supercomputer at Rensselaer Polytechnic Institute”

    From insideHPC


    AiMOS, the newest supercomputer at Rensselaer Polytechnic Institute
    In this video, Christopher Carothers, director of the Center for Computational Innovations, discusses AiMos, the newest supercomputer at Rensselaer Polytechnic Institute.

    The most powerful supercomputer to debut on the November 2019 Top500 ranking of supercomputers will be unveiled today at the Rensselaer Polytechnic Institute Center for Computational Innovations (CCI). Part of a collaboration between IBM, Empire State Development (ESD), and NY CREATES, the eight petaflop IBM POWER9-equipped AI supercomputer is configured to help enable users to explore new AI applications and accelerate economic development from New York’s smallest startups to its largest enterprises.


    Named AiMOS (short for Artificial Intelligence Multiprocessing Optimized System) in honor of Rensselaer co-founder Amos Eaton, the machine will serve as a test bed for the IBM Research AI Hardware Center, which opened on the SUNY Polytechnic Institute (SUNY Poly) campus in Albany earlier this year. The AI Hardware Center aims to advance the development of computing chips and systems that are designed and optimized for AI workloads to push the boundaries of AI performance. AiMOS will provide the modeling, simulation, and computation necessary to support the development of this hardware.

    “Computer artificial intelligence, or more appropriately, human augmented intelligence (AI), will help solve pressing problems, from healthcare to security to climate change. In order to realize AI’s full potential, special purpose computing hardware is emerging as the next big opportunity,” said Dr. John E. Kelly III, IBM Executive Vice President. “IBM is proud to have built the most powerful and smartest computers in the world today, and to be collaborating with New York State, SUNY, and RPI on the new AiMOS system. Our collective goal is to make AI systems 1,000 times more efficient within the next decade.”

    According to the recently released November 2019 Top500 and Green500 supercomputer rankings, AiMOS is the most powerful supercomputer housed at a private university. Overall, it is the 24th most powerful supercomputer in the world and the third-most energy efficient. Built using the same IBM Power Systems technology as the world’s smartest supercomputers, the US Dept. of Energy’s Summit and Sierra supercomputers, AiMOS uses a heterogeneous system architecture that includes IBM POWER9 CPUs and NVIDIA GPUs. This gives AiMOS a capacity of eight quadrillion calculations per second.


    In this video, Rensselaer Polytechnic Institute President Shirley Ann Jackson discusses AiMOS, the newest supercomputer at Rensselaer Polytechnic Institute.

    “As the home of one of the top high-performance computing systems in the U.S. and in the world, Rensselaer is excited to accelerate our ongoing research in AI, deep learning, and in fields across a broad intellectual front,” said Rensselaer President Shirley Ann Jackson. “The creation of new paradigms requires forward-thinking collaborators, and we look forward to working with IBM and the state of New York to address global challenges in ways that were previously impossible.”

    AiMOS will be available for use by public and private industry partners.

    Dr. Douglas Grose, Future President of NY CREATES said, “The unveiling of AiMOS and its incredible computational capabilities is a testament to New York State’s international high-tech leadership. As a test bed for the AI Hardware Center, AiMOS furthers the powerful potential of the AI Hardware Center and its partners, including New York State, IBM, Rensselaer, and SUNY, and we look forward to the advancement of AI systems as a result of this milestone.”

    SUNY Polytechnic Institute Interim President Grace Wang said, “SUNY Poly is proud to work with New York State, IBM, and Rensselaer Polytechnic Institute and our other partners as part of the AI Hardware Center initiative which will drive exciting artificial intelligence innovations. We look forward to efforts like this providing unique and collaborative research opportunities for our faculty and researchers, as well as leading-edge educational opportunities for our top-tier students. As the exciting benefits of AiMOS and the AI Hardware Center come to fruition, SUNY Poly is thrilled to play a key role in supporting the continued technological leadership of New York State and the United States in this critical research sector.”

    AiMOS will also support the work of Rensselaer faculty, students, and staff who are engaged in a number of ongoing collaborations that employ and advance AI technology, many of which involve IBM Research. These initiatives include the Rensselaer-IBM Artificial Intelligence Research Collaboration (AIRC), which brings researchers at both institutions together to explore new frontiers in AI; the Cognitive and Immersive Systems Lab (CISL); and The Jefferson Project, which combines Internet of Things technology and powerful analytics to help manage and protect one of New York’s largest lakes, while creating a data-based blueprint for preserving bodies of fresh water around the globe.

    “The established expertise in computation and data analytics at Rensselaer, when combined with AiMOS, will enable many of our research projects to make significant strides that simply were not possible on our previous platform,” said Christopher Carothers, director of the CCI and professor of computer science at Rensselaer. “Our message to the campus and beyond is that, if you are doing work on large-scale data analytics, machine learning, AI, and scientific computing then it should be running at the CCI.”

    See the full article here.

     
  • richardmitnick 4:42 pm on December 5, 2019 Permalink | Reply
    Tags: insideHPC, Sawtooth supercomputer at Idaho National Laboratory   

    From insideHPC: “Sawtooth Supercomputer from HPE Comes to Idaho National Lab” 

    From insideHPC

    December 5, 2019
    Rich Brueckner

    A powerful new supercomputer arrived this week at Idaho National Laboratory’s Collaborative Computing Center. The machine has the power to run complex modeling and simulation applications, which are essential to developing next-generation nuclear technologies.

    Named after a central Idaho mountain range, Sawtooth arrives in December and will be available to users early next year. The $19.2 million system ranks #37 on the 2019 Top500 list of the world’s fastest supercomputers. That is the highest ranking reached by an INL supercomputer. Of 102 new systems added to the list in the past six months, only three were faster than Sawtooth.

    Named after a central Idaho mountain range, Idaho National Laboratory’s powerful new supercomputer, Sawtooth, ranks No. 37 on the 2019 top 500 fastest supercomputers in the world. It will be able to crunch much more complex mathematical calculations at approximately six times the speed of Falcon and Lemhi, INL’s current systems.

    Sawtooth HPE supercomputer


    The boost in computing power will enable researchers at INL and elsewhere to simulate new fuels and reactor designs, greatly reducing the time, resources and funding needed to transition advanced nuclear technologies from the concept phase into the marketplace.

    Supercomputing reduces the need to build physical experiments to test every hypothesis, as was the process used to develop the majority of technologies used in currently operating reactors. By using simulations to predict how new fuels and designs will perform in a reactor environment, engineers can select only the most promising technologies for the real-world experiments, saving time and money.

    Workers begin setting up the Sawtooth supercomputer in the data center of INL’s Collaborative Computing Center.

    INL’s ability to model new nuclear technologies has become increasingly important as nations strive to meet growing energy needs while minimizing emissions. Today, there are about 450 nuclear power reactors operating in 30 countries plus Taiwan. These reactors produce approximately 10% of the world’s electricity and 60% of America’s carbon-free electricity. According to the World Nuclear Association, 15 countries are currently building about 50 power reactors.

    John Wagner, the associate laboratory director for INL’s Nuclear Science and Technology directorate, said Sawtooth plays an important role in developing and deploying advanced nuclear technologies and is a key capability for the National Reactor Innovation Center (NRIC).

    In August, the U.S. Department of Energy designated INL to lead NRIC, which was established to provide developers the resources to test, demonstrate and assess performance of new nuclear technologies, critical steps that must be completed before they are available commercially.

    “With advanced modeling and simulation and the computing power now available, we expect to be able to dramatically shorten the time it takes to test, manufacture and commercialize new nuclear technologies,” Wagner said. “Other industries and organizations, such as aerospace, have relied on modeling and simulation to bring new technologies to market much faster without compromising safety and performance.”

    Sawtooth is funded by the DOE’s Office of Nuclear Energy through the Nuclear Science User Facilities program. It will provide computer access to researchers at INL, other national laboratories, industry and universities. Idaho’s three research universities will be able to access Sawtooth and INL’s other supercomputers remotely via the Idaho Regional Optical Network (IRON), an ultra-high-speed fiber optic network.

    Sawtooth, with its nearly 100,000 processors, is being installed in the new 67,000-square-foot Collaborative Computing Center, which opened in October. The new facility was designed to be the heart of modeling and simulation work for INL as well as provide floor space, power and cooling for systems such as Sawtooth. Falcon and Lemhi, the lab’s current supercomputing systems, also are slated to move to this new facility.

    Three tractor trailers line up in the distance to deliver Idaho National Laboratory’s newest supercomputer, Sawtooth, to the newly built Collaborative Computing Center in Idaho Falls, Idaho.

    See the full article here.

     
  • richardmitnick 2:16 pm on November 25, 2019 Permalink | Reply
    Tags: "SDSC Conducts 50000+ GPU Cloudburst Experiment with Wisconsin IceCube Particle Astrophysics Center", , insideHPC, ,   

    From insideHPC: “SDSC Conducts 50,000+ GPU Cloudburst Experiment with Wisconsin IceCube Particle Astrophysics Center” 

    From insideHPC

    November 25, 2019

    SDSC Triton HP supercomputer

    SDSC Gordon-Simons supercomputer

    SDSC Dell Comet supercomputer


    U Wisconsin ICECUBE neutrino detector at the South Pole

    Researchers at SDSC and the Wisconsin IceCube Particle Astrophysics Center have successfully completed a computational experiment as part of a multi-institution collaboration that marshaled all GPUs (graphics processing units) available for sale globally across Amazon Web Services, Microsoft Azure, and the Google Cloud Platform.

    The chart shows the time evolution of the burst over the course of ~200 minutes. The black line is the number of GPUs used for science, peaking at 51,500 GPUs. Each color shows the number of GPUs purchased in a region of a cloud provider. The steep rise indicates the burst capability of the infrastructure to support short but intense computation for science. Credit: Igor Sfiligoi, SDSC/UC San Diego

    In all, some 51,500 GPU processors were used during the approximately two-hour experiment conducted on November 16 and funded under a National Science Foundation EAGER grant.

    The experiment used simulations from the IceCube Neutrino Observatory, an array of some 5,160 optical sensors deep within a cubic kilometer of ice at the South Pole. In 2017, researchers at the NSF-funded observatory found the first evidence of a source of high-energy cosmic neutrinos – subatomic particles that can emerge from their sources and pass through the universe unscathed, traveling for billions of light years to Earth from some of the most extreme environments in the universe.

    The experiment – completed just prior to the opening of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC19) in Denver, CO – was coordinated by Frank Würthwein, SDSC Lead for High-Throughput Computing, and Benedikt Riedel, Computing Manager for the IceCube Neutrino Observatory and Global Computing Coordinator at WIPAC.

    Igor Sfiligoi, SDSC’s lead scientific software developer for high-throughput computing, and David Schultz, a production software manager with IceCube, conducted the actual run.

    “We focused this GPU cloud burst in the area of multi-messenger astrophysics, which is based on the observation and analysis of what we call ‘messenger’ signals, in this case neutrinos,” said Würthwein, also a physics professor at UC San Diego and executive director of the Open Science Grid (OSG), a multi-disciplinary research partnership specializing in high-throughput computational services funded by the NSF and the U.S. Department of Energy.

    “The NSF chose multi-messenger astronomy as one of its 10 Big Ideas to focus on during the next few years,” said Würthwein. “We now have instruments that can measure gravitational waves, neutrinos, and various forms of light to see the most violent events in the universe. We’re only starting to understand the physics behind such energetic celestial phenomena that can reach Earth from deepest space.”

    Exascale Extrapolations

    The net result was a peak of about 51k GPUs of various kinds, with an aggregate peak of about 350 PFLOP32s (based on NVIDIA specifications), according to Sfiligoi.

    “For comparison, the Number 1 TOP500 HPC system, Summit (based at Oak Ridge National Laboratory), has a nominal performance of about 400 PFLOP32s. So, at peak, our cloud-based cluster provided almost 90% of the performance of Summit, at least for the purpose of IceCube simulations.”

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy
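
    A quick check of that comparison, using the figures quoted above:

        \[ 350\ \mathrm{PFLOP32s} \div 400\ \mathrm{PFLOP32s} = 0.875 \approx 90\% \]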

    The relatively short time span of the experiment showed the ability to conduct a massive amount of data processing within a very short period – an advantage for research projects that must meet a tight deadline. Francis Halzen, principal investigator for IceCube, a Distinguished Professor at the University of Wisconsin-Madison, and director of the university’s Institute for Elementary Particle Physics, foresaw this several years ago.

    “We have initiated an effort to improve the calibration of the instrument that will result in sensitivity improved by an estimated factor of four,” wrote Halzen. “We can apply this improvement to 10 years of archived data, thus obtaining the equivalent of 40 years of current IceCube data.”

    “We conducted this experiment with three goals in mind,” said IceCube’s Riedel. “One obvious goal was to produce simulations that will be used to do science with IceCube for multi-messenger astrophysics. But we also wanted to understand the readiness of our cyberinfrastructure for bursting into future Exascale-class facilities such as Argonne’s Aurora or Oak Ridge’s Frontier, when they become available. And more generally, we sought to determine how much GPU capacity can be bought today for an hour or so GPU burst in the commercial cloud.”

    ____________________________________________________

    No one region contributed more than 11% of the total science output, showing the power of dHTC in aggregating resources globally to achieve large-scale computation. Credit: Igor Sfiligoi, SDSC/UC San Diego.

    “This was a social experiment as well,” added Würthwein. “We scavenged up all available GPUs on demand across 28 cloud regions on three continents – North America, Europe, and Asia. The results of this experiment tell us that we can elastically burst to very large scales of GPUs using the cloud, given that exascale computers don’t exist now but may soon be used in the coming years. The demo also shows such bursting of massive data processing is suitable for a wide range of challenges across astronomy and other sciences. To the extent that the elasticity is there, we believe that this can be applied across all of scientific research to get results quickly.”
    ____________________________________________________

    HTCondor was used to integrate all purchased GPUs into a single resource pool to which IceCube submitted its workflows from its home base in Wisconsin. This was accomplished by aggregating resources in each cloud region, and then aggregating those aggregators into a single global pool at SDSC.

    “This is very similar to the production infrastructure that OSG operates for IceCube to aggregate dozens of ‘on-prem’ clusters into a single global resource pool across the U.S., Canada, and Europe,” said Sfiligoi.
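
    For readers unfamiliar with HTCondor, here is a minimal sketch of what submitting GPU jobs to such a pool can look like using the HTCondor Python bindings. The executable name, resource requests, and job count are purely illustrative and are not taken from IceCube’s production workflow:

        import htcondor  # HTCondor Python bindings

        # A minimal, hypothetical submit description for one slice of a GPU simulation.
        # Each job requests a single GPU of any reasonably modern type, mirroring the
        # idea of scavenging whatever GPU models the cloud providers have for sale.
        job = htcondor.Submit({
            "executable": "run_photon_sim.sh",        # hypothetical wrapper script
            "request_gpus": "1",                       # one GPU per job
            "request_cpus": "1",
            "request_memory": "4GB",
            "requirements": "CUDACapability >= 3.5",   # accept a wide range of GPU generations
            "output": "sim_$(Cluster)_$(Process).out",
            "error": "sim_$(Cluster)_$(Process).err",
            "log": "sim.log",
        })

        schedd = htcondor.Schedd()               # the local submit point
        result = schedd.submit(job, count=1000)  # queue 1,000 independent simulation jobs
        print("Submitted job cluster", result.cluster())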

    An additional experiment to reach even higher scales is likely to be made sometime around the Christmas and New Year holidays, when commercial GPU use is traditionally lower, and therefore availability of such GPUs for scientific research is greater.

    See the full article here.

     
  • richardmitnick 1:58 pm on October 24, 2019 Permalink | Reply
    Tags: Cray Storm supercomputer, High-Performance Computing Center of the University of Stuttgart (HLRS) in Germany, insideHPC   

    From insideHPC: “Cray CS-Storm Supercomputer coming to HLRS in Germany” 

    From insideHPC

    Today Cray announced that the High-Performance Computing Center of the University of Stuttgart (HLRS) in Germany has selected a new Cray CS-Storm GPU-accelerated supercomputer to advance its computing infrastructure in response to user demand for processing-intensive applications like machine learning and deep learning.


    The new Cray system is tailored for artificial intelligence (AI) and includes the Cray Urika-CS AI and Analytics suite, enabling HLRS to accelerate AI workloads, arm users to address complex computing problems and process more data with higher accuracy of AI models in engineering, automotive, energy, and environmental industries and academia.

    “As we extend our service portfolio with AI, we require an infrastructure that can support the convergence of traditional high-performance computing applications and AI workloads to better support our users and customers,” said Prof. Dr. Michael Resch, director at HLRS. “We’ve found success working with our current Cray Urika-GX system for data analytics, and we are now at a point where AI and deep learning have become even more important as a set of methods and workflows for the HPC community. Our researchers will use the new CS-Storm system to power AI applications to achieve much faster results and gain new insights into traditional types of simulation results.”

    Supercomputer users at HLRS are increasingly asking for access to systems containing AI acceleration capabilities. With the GPU-accelerated CS-Storm system and Urika-CS AI and Analytics suite, which leverages popular machine intelligence frameworks like TensorFlow and PyTorch, HLRS can provide machine learning and deep learning services to its leading teaching and training programs, global partners and R&D. The Urika-CS AI and Analytics suite includes Cray’s Hyperparameter Optimization (HPO) and Cray Programming Environment Deep Learning Plugin, arming system users with the full potential of deep learning and advancing the services HLRS offers to its users interested in data analytics, machine learning and related fields.
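
    As a generic illustration of what hyperparameter optimization does, here is a plain random-search sketch with a made-up objective function; it is not Cray’s Urika-CS HPO implementation, just the underlying idea of trying many configurations and keeping the best:

        import random

        def train_and_evaluate(learning_rate, batch_size):
            """Hypothetical stand-in for a full training run returning validation accuracy."""
            # A real workflow would launch a TensorFlow or PyTorch training job here.
            return 1.0 - abs(learning_rate - 0.01) - abs(batch_size - 64) / 1000.0

        best_score, best_params = None, None
        for trial in range(20):                                    # evaluate 20 random configurations
            params = {
                "learning_rate": 10 ** random.uniform(-4, -1),     # sample on a log scale
                "batch_size": random.choice([32, 64, 128, 256]),
            }
            score = train_and_evaluate(**params)
            if best_score is None or score > best_score:
                best_score, best_params = score, params

        print("Best configuration found:", best_params, "score:", best_score)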

    “The future will be driven by the convergence of modeling and simulation with AI and analytics and we’re honored to be working with HLRS to further their AI initiatives by providing advanced computing technology for the Center’s engineering and HPC training and research endeavors,” said Peter Ungaro, president and CEO at Cray, a Hewlett Packard Enterprise company. “HLRS has the opportunity to apply AI to improve and scale data analysis for the benefit of its core research areas, such as looking at trends in industrial HPC usage, creating models of car collisions, and visualizing black holes. The Cray CS-Storm combined with the unique Cray-CS AI and Analytics suite will allow HLRS to better tackle converged AI and simulation workloads in the exascale era.”

    In addition to the Cray CS-Storm architecture and Cray-CS AI and Analytics suite, the system will feature NVIDIA V100 Tensor Core GPUs and Intel Xeon Scalable processors.

    “The convergence of AI and scientific computing has accelerated the pace of scientific progress and is helping solve the world’s most challenging problems,” said Paresh Kharya, Director of Product Management and Marketing at NVIDIA. “Our work with Cray and HLRS on their new GPU-accelerated system will result in a modern HPC infrastructure that addresses the demands of the Center’s research community to combine simulation with the power of AI to advance science, find cures for disease, and develop new forms of energy.”

    See the full article here.

     
  • richardmitnick 12:24 pm on October 23, 2019 Permalink | Reply
    Tags: Cray ARCHER2, insideHPC

    From insideHPC: “ARCHER2 to be first Cray Shasta System in Europe” 

    From insideHPC

    October 22, 2019

    Today Cray, a Hewlett Packard Enterprise company, announced a £48 million contract award in the UK to expand its high-performance computing capabilities with Cray’s next-generation Shasta supercomputer. The new ARCHER2 supercomputer will be the first Shasta system announced in EMEA and the second system worldwide used for academic research. ARCHER2 will be the UK’s most powerful supercomputer and will be equipped with the revolutionary Slingshot interconnect, Cray ClusterStor high-performance storage, the Cray Shasta Software platform, and 2nd Gen AMD EPYC processors. The new supercomputer will deliver 11x higher performance than its predecessor, ARCHER.


    UK Research and Innovation (UKRI) has once again contracted Cray to build the follow-up to the ARCHER supercomputer, which entered service in late 2013. ARCHER2 is reported to offer up to 11x the throughput of its predecessor. It will be powered by roughly 12,000 64-core AMD EPYC Rome CPUs spread across 5,848 compute nodes, each node carrying two of the 64-core processors, for a total of 748,544 cores (1,497,088 threads) and 1.57 PB of memory across the system. The listed CPU speed of 2.2 GHz appears to be the base clock, which points to the EPYC 7742, a 225 W TDP part. Specifications like these also produce significant heat: ARCHER2 will be housed in 23 direct liquid-cooled Shasta Mountain cabinets plus associated cooling cabinets. Connectivity between compute groups is provided by Cray’s next-generation 100 Gbps Slingshot network. AMD GPUs are also part of the system, although it has not yet been announced which GPU models will be used. Estimated peak performance is 28 PFLOP/s, and the transition from ARCHER to ARCHER2 is expected to begin in Q1 2020 and finish in late H1 2020, assuming all goes to plan.
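
    The headline core count follows directly from the reported node figures (a consistency check, not an additional specification):

        \[ 5{,}848\ \text{nodes} \times 2\ \text{CPUs/node} \times 64\ \text{cores/CPU} = 748{,}544\ \text{cores}, \qquad 748{,}544 \times 2\ \text{threads/core} = 1{,}497{,}088\ \text{threads} \]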

    “ARCHER2 will be an important resource for the UK’s research community, providing them with the capability to pursue investigations which are not possible using current resources,” said Professor Lynn Gladden, executive chair of the Engineering and Physical Sciences Research Council (EPSRC). “The new system delivered by Cray will greatly increase the potential for researchers to make discoveries across fields such as physics, chemistry, healthcare and technology development.”

    The new Cray Shasta-based ARCHER2 system will replace the existing ARCHER Cray XC30 in 2020 and be an even greater capability resource for academic researchers and industrial users from the UK, Europe and the rest of the world. The new supercomputer will achieve 11x higher performance with only a 27% increase in grid power, reaching rates previously unattainable. The ARCHER2 project provides resources for exploration in research disciplines including oil and gas, sustainability and natural resources, mental and physical health, oceanography, atomistic structures, and technology advancement.

    “We’re pleased to continue supporting UKRI’s mission and provide the most advanced high-end computing resources for the UK’s science and research endeavors,” said Peter Ungaro, president and CEO at Cray, a Hewlett Packard Enterprise company. “As traditional modeling and simulation applications and workflows converge with AI and analytics, a new Exascale Era architecture is required. Shasta will uniquely provide this new capability and ARCHER2 will be the first of its kind in Europe, as its next-gen architecture will provide UK and neighboring scientists and researchers the ability to meet their research requirements across a broad range of disciplines, faster.”

    The new Shasta system will be the third Cray supercomputer delivered to UKRI, with the previous systems being HECToR and ARCHER. ARCHER2 will be supported by 2nd Gen AMD EPYC processors.


    “AMD is incredibly proud to continue our collaboration with Cray to deliver what will be the most powerful supercomputer in the UK, helping to process data faster and reduce the time it takes to reach critical scientific conclusions,” said Forrest Norrod, senior vice president and general manager, AMD Datacenter and Embedded Systems Group. “Investments in high-performance computing technology are imperative to keep up with today’s increasingly complex problems and explosive data growth. The 2nd Gen AMD EPYC processors paired with Cray Shasta will provide a powerful resource for the next generation of research in the UK when ARCHER2 is delivered next year.”

    See the full article here.

     
  • richardmitnick 10:17 am on October 14, 2019 Permalink | Reply
    Tags: "Supercomputing the Building Blocks of the Universe", , insideHPC, , ,   

    From insideHPC: “Supercomputing the Building Blocks of the Universe” 

    From insideHPC

    October 13, 2019

    In this special guest feature, ORNL profiles researcher Gaute Hagen, who uses the Summit supercomputer to model scientifically interesting atomic nuclei.

    Gaute Hagen uses ORNL’s Summit supercomputer to model scientifically interesting atomic nuclei. To validate models, he and other physicists compare computations with experimental observations. Credit: Carlos Jones/ORNL

    At the nexus of theory and computation, physicist Gaute Hagen of the Department of Energy’s Oak Ridge National Laboratory runs advanced models on powerful supercomputers to explore how protons and neutrons interact to “build” an atomic nucleus from scratch. His fundamental research improves predictions about nuclear energy, nuclear security and astrophysics.

    “How did matter that forms our universe come to be?” asked Hagen. “How does matter organize itself based on what we know about elementary particles and their interactions? Do we fully understand how these particles interact?”

    The lightest nuclei, hydrogen and helium, formed during the Big Bang. Heavier elements, up to iron, are made in stars by progressively fusing those lighter nuclei. The heaviest nuclei form in extreme environments when lighter nuclei rapidly capture neutrons and undergo beta decays.

    For example, building nickel-78, a neutron-rich nucleus that is especially strongly bound, or “doubly magic,” requires 28 protons and 50 neutrons interacting through the strong force. “To solve the Schrödinger equation for such a huge system is a tremendous challenge,” Hagen said. “It is only possible using advanced quantum mechanical models and serious computing power.”
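
    Schematically, the task is the quantum many-body eigenvalue problem for all of the nucleus’s protons and neutrons at once (written here in generic form; in practice Hagen’s team attacks it with coupled-cluster methods rather than by brute force):

        \[ \hat{H}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_A) = E\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_A), \qquad \hat{H} = \sum_{i=1}^{A}\frac{\hat{\mathbf{p}}_i^{2}}{2m} + \sum_{i<j}V_{ij} + \sum_{i<j<k}V_{ijk}, \]

    with A = 28 protons + 50 neutrons = 78 particles for nickel-78.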

    Through DOE’s Scientific Discovery Through Advanced Computing program, Hagen participates in the NUCLEI project to calculate nuclear structure and reactions from first principles; its collaborators represent 7 universities and 5 national labs. Moreover, he is the lead principal investigator of a DOE Innovative and Novel Computational Impact on Theory and Experiment award of time on supercomputers at Argonne and Oak Ridge National Laboratories for computations that complement part of the physics addressed under NUCLEI.

    Theoretical physicists build models and run them on supercomputers to simulate the formation of atomic nuclei and study their structures and interactions. Theoretical predictions can then be compared with data from experiments at new facilities producing increasingly neutron-rich nuclei. If the observations are close to the predictions, the models are validated.

    ‘Random walk’

    “I never planned to become a physicist or end up at Oak Ridge,” said Hagen, who hails from Norway. “That was a random walk.”

    Graduating from high school in 1994, he planned to follow in the footsteps of his father, an economics professor, but his grades were not good enough to get into the top-ranked Norwegian School of Economics in Bergen. A year of mandatory military service in the King’s Guard gave Hagen fresh perspective on his life. At 20, he entered the University of Bergen and earned a bachelor’s degree in the philosophy of science. Wanting to continue for a doctorate, but realizing he lacked math and science backgrounds that would aid his dissertation, he signed up for classes in those fields—and a scientist was born. He went on to earn a master’s degree in nuclear physics.

    Entering a PhD program, he used pen and paper or simple computer codes for calculations of the Schrödinger equation pertaining to two or three particles. One day his advisor introduced him to University of Oslo professor Morten Hjorth-Jensen, who used advanced computing to solve physics problems.

    “The fact that you could use large clusters of computers in parallel to solve for several tens of particles was intriguing to me,” Hagen said. “That changed my whole perspective on what you can do if you have the right resources and employ the right methods.”

    Hagen finished his graduate studies in Oslo, working with Hjorth-Jensen and taking his computing class. In 2005, collaborators of his new mentor—ORNL’s David Dean and the University of Tennessee’s Thomas Papenbrock—sought a postdoctoral fellow. A week after receiving his doctorate, Hagen found himself on a plane to Tennessee.

    For his work at ORNL, Hagen used a numerical technique to describe systems of many interacting particles, such as atomic nuclei containing protons and neutrons. He collaborated with experts worldwide who were specializing in different aspects of the challenge and ran his calculations on some of the world’s most powerful supercomputers.

    “Computing had taken such an important role in the work I did that having that available made a big difference,” he said. In 2008, he accepted a staff job at ORNL.

    That year Hagen found another reason to stay in Tennessee—he met the woman who became his wife. She works in TV production and manages a vintage boutique in downtown Knoxville.

    Hagen, his wife and stepson spend some vacations at his father’s farm by the sea in northern Norway. There the physicist enjoys snowboarding, fishing and backpacking, “getting lost in remote areas, away from people, where it’s quiet and peaceful. Back to the basics.”

    Summiting

    Hagen won a DOE early career award in 2013. Today, his research employs applied mathematics, computer science and physics, and the resulting descriptions of atomic nuclei enable predictions that guide earthly experiments and improve understanding of astronomical phenomena.

    A central question he is trying to answer is: what is the size of a nucleus? The difference between the radii of neutron and proton distributions—called the “neutron skin”— has implications for the equation-of-state of neutron matter and neutron stars.

    In 2015, a team led by Hagen predicted properties of the neutron skin of the calcium-48 nucleus; the results were published in Nature Physics. In progress or planned are experiments by others to measure various neutron skins. The COHERENT experiment at ORNL’s Spallation Neutron Source did so for argon-40 by measuring how neutrinos—particles that interact only weakly with nuclei—scatter off of this nucleus. Studies of parity-violating electron scattering on lead-208 and calcium-48—topics of the PREX2 and CREX experiments, respectively—are planned at Thomas Jefferson National Accelerator Facility.

    One recent calculation in a study Hagen led solved a 50-year-old puzzle about why beta decays of atomic nuclei are slower than expected based on the beta decays of free neutrons. Other calculations explore isotopes to be made and measured at DOE’s Facility for Rare Isotope Beams, under construction at Michigan State University, when it opens in 2022.

    Hagen’s team has made several predictions about neutron-rich nuclei observed at experimental facilities worldwide. For example, 2016 predictions for the magicity of nickel-78 were confirmed at RIKEN in Japan and published in Nature this year. Now the team is developing methods to predict behavior of neutron-rich isotopes beyond nickel-78 to find out how many neutrons can be added before a nucleus falls apart.

    “Progress has exploded in recent years because we have methods that scale more favorably with the complexity of the system, and we have ever-increasing computing power,” Hagen said. At the Oak Ridge Leadership Computing Facility, he has worked on Jaguar (1.75 peak petaflops), Titan (27 peak petaflops) and Summit [above] (200 peak petaflops) supercomputers. “That’s changed the way that we solve problems.”

    ORNL OCLF Jaguar Cray Linux supercomputer

    ORNL Cray XK7 Titan Supercomputer, once the fastest in the world, to be decommissioned

    His team currently calculates the probability of a process called neutrino-less double-beta decay in calcium-48 and germanium-76. This process has yet to be observed but if seen would imply the neutrino is its own anti-particle and open a path to physics beyond the Standard Model of Particle Physics.

    Looking to the future, Hagen eyes “superheavy” elements—lead-208 and beyond. Superheavies have never been simulated from first principles.

    “Lead-208 pushes everything to the limits—computing power and methods,” he said. “With this next generation computer, I think simulating it will be possible.”

    See the full article here.

     