Tagged: insideHPC

  • richardmitnick 8:39 pm on May 27, 2022 Permalink | Reply
    Tags: "HPE and Cerebras to Install AI Supercomputer at Leibniz Supercomputing Centre", AI compute demand is doubling every three to four months for the system users., insideHPC, LRZ provides researchers with advanced and reliable IT services for their science., Powered by the largest processor ever built the Cerebras Wafer-Scale Engine 2 (WSE-2) the CS-2 delivers greater AI-optimization than any other deep learning processor in existence., The new system is an additional resource to Germany’s national supercomputing center.

    From insideHPC: “HPE and Cerebras to Install AI Supercomputer at Leibniz Supercomputing Centre”

    From insideHPC

    May 25, 2022

    The Leibniz Supercomputing Centre (LRZ), Cerebras Systems, and Hewlett Packard Enterprise (HPE) today announced the joint development of a system designed to accelerate scientific research and innovation in AI at LRZ, an institute of the Bavarian Academy of Sciences and Humanities (BAdW).

    The system is purpose-built for scientific research and comprises the HPE Superdome Flex server and the Cerebras CS-2 system, which makes it the first solution in Europe to leverage the Cerebras CS-2 system, Cerebras said. The HPE Superdome Flex server delivers a modular, scale-out solution to meet computing demands and features specialized capabilities for the in-memory processing required for high volumes of data.

    Additionally, the HPE Superdome Flex server’s specific pre- and post-data processing capabilities for AI model training and inference are, Cerebras said, “ideal to support the Cerebras CS-2 system, which delivers the deep learning performance of 100s of graphics processing units (GPUs), with the programming ease of a single node.” The company added: “Powered by the largest processor ever built – the Cerebras Wafer-Scale Engine 2 (WSE-2), which is 56 times larger than the nearest competitor – the CS-2 delivers more AI-optimized compute cores, faster memory, and more fabric bandwidth than any other deep learning processor in existence.”

    The system will be used by local scientists and engineers for research use cases. Applications include natural language processing (NLP); medical image processing, with innovative algorithms to analyze medical images and computer-aided capabilities to accelerate diagnosis and prognosis; and computational fluid dynamics (CFD) to advance understanding in areas such as aerospace engineering and manufacturing.

    “Currently, we observe that AI compute demand is doubling every three to four months with our users. With the high integration of processors, memory and on-board networks on a single chip, Cerebras enables high performance and speed. This promises significantly more efficiency in data processing and thus faster breakthrough of scientific findings,” said Prof. Dr. Dieter Kranzlmüller, Director of the LRZ. “As an academic computing and national supercomputing centre, we provide researchers with advanced and reliable IT services for their science. To ensure optimal use of the system, we will work closely with our users and our partners Cerebras and HPE to identify ideal use cases in the community and to help achieve groundbreaking results.”
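
    A quick aside on what that doubling rate implies (the extrapolation is mine, not the article’s): demand that doubles every three to four months grows roughly eight- to sixteen-fold in a year.

        # Annual growth implied by a 3-4 month doubling time.
        for months in (3, 4):
            print(f"doubling every {months} months -> {2 ** (12 / months):.0f}x per year")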

    The new system is funded by the Free State of Bavaria through the Hightech Agenda, a program dedicated to strengthening the tech ecosystem in Bavaria to fuel the region’s mission to become an international AI hotspot. The new system is also an additional resource for Germany’s national supercomputing center, and part of LRZ’s Future Computing Program, which represents a portfolio of heterogeneous computing architectures across CPUs, GPUs, FPGAs and ASICs.

    Cerebras CS2-HPE Superdome Flex

    Cerebras said the WSE-2 comprises 46,225 square millimeters of silicon, housing 2.6 trillion transistors and 850,000 AI-optimized computational cores, as well as evenly distributed memory that holds up to 40 gigabytes of data and fast interconnects that transport it across the chip at 220 petabytes per second. This allows the WSE-2 to keep all the parameters of multi-layered neural networks on one chip during execution, which in turn reduces computation time and data movement. To date, the CS-2 system is being used in a number of U.S. research facilities and enterprises and is proving particularly effective in image and pattern recognition and natural language processing (NLP). Additional efficiency is provided by water cooling, which reduces power consumption.
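
    Dividing those headline figures by the core count gives a feel for the per-core design. A quick back-of-envelope sketch (the inputs are the numbers quoted above; the per-core values are my derivation, not Cerebras specifications):

        # Per-core implications of the quoted WSE-2 figures.
        cores = 850_000            # AI-optimized cores
        sram_bytes = 40e9          # 40 GB of on-chip memory
        fabric_bps = 220e15        # 220 PB/s of fabric bandwidth
        area_mm2 = 46_225          # total silicon area

        print(sram_bytes / cores / 1e3)    # ~47 KB of memory per core
        print(fabric_bps / cores / 1e9)    # ~259 GB/s of fabric bandwidth per core
        print(area_mm2 ** 0.5)             # a square die ~215 mm on a side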

    To support the Cerebras CS-2 system, the HPE Superdome Flex server provides large-memory capabilities and scalability to process the massive, data-intensive machine learning projects that the Cerebras CS-2 system targets, Cerebras said. The HPE Superdome Flex server also manages and schedules jobs according to AI application needs, enables cloud access, and stages larger research datasets. In addition, the HPE Superdome Flex server includes a software stack with programs to build AI procedures and models.

    In addition to AI workloads, the combined technologies from HPE and Cerebras will also be considered for more traditional HPC workloads in support of larger, memory-intensive modeling and simulation needs, the companies said.

    “The future of computing is becoming more complex, with systems becoming more heterogeneous and tuned to specific applications. We should stop thinking in terms of HPC or AI systems,” said Laura Schulz, Head of Strategy at LRZ. “AI methods work on CPU-based systems like SuperMUC-NG, and conversely, high-performance computing algorithms can achieve performance gains on systems like Cerebras. We’re working towards a future where the underlying compute is complex but doesn’t impact the user; where the technology – whether HPC, AI or quantum – is available and approachable for our researchers in pursuit of their scientific discovery.”

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239
    Phone: (503) 877-5048

     
  • richardmitnick 4:03 pm on August 25, 2021 Permalink | Reply
    Tags: insideHPC, "NVIDIA and HPE to Deliver 2240-GPU Polaris Supercomputer for DOE's Argonne National Laboratory", The era of exascale AI will enable scientific breakthroughs with massive scale to bring incredible benefits for society.

    From insideHPC: “NVIDIA and HPE to Deliver 2240-GPU Polaris Supercomputer for DOE’s Argonne National Laboratory”

    From insideHPC

    August 25, 2021

    NVIDIA and DOE’s Argonne National Laboratory (US) this morning announced Polaris, a GPU-based supercomputer with 2,240 NVIDIA A100 Tensor Core GPUs delivering 1.4 exaflops of theoretical AI performance and about 44 petaflops of peak double-precision performance.

    The Polaris system will be hosted at the laboratory’s Argonne Leadership Computing Facility (ALCF) in support of R&D with extreme scale for users’ algorithms and science. Polaris, to be built by Hewlett Packard Enterprise, will combine simulation and machine learning by tackling data-intensive and AI high performance computing workloads, powered by 560 total nodes, each with four A100 GPUs, the organizations said.
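
    Those headline figures reconcile cleanly, assuming NVIDIA’s published per-GPU A100 peaks (19.5 teraflops FP64 via tensor cores and 624 teraflops FP16/BF16 with sparsity; these per-GPU values are my assumption, as the article doesn’t break the totals down):

        # Back-of-envelope check of Polaris's quoted peaks.
        nodes, gpus_per_node = 560, 4
        gpus = nodes * gpus_per_node     # 2,240 A100 GPUs
        print(gpus * 19.5e12 / 1e15)     # ~43.7 PF FP64, i.e. "about 44 petaflops"
        print(gpus * 624e12 / 1e18)      # ~1.4 EF of theoretical AI performance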

    ANL ALCF HPE NVIDIA Polaris supercomputer depiction.

    “The era of exascale AI will enable scientific breakthroughs with massive scale to bring incredible benefits for society,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “NVIDIA’s GPU-accelerated computing platform provides pioneers like the ALCF breakthrough performance for next-generation supercomputers such as Polaris that let researchers push the boundaries of scientific exploration.”

    “Polaris is a powerful platform that will allow our users to enter the era of exascale AI,” said ALCF Director Michael E. Papka. “Harnessing the huge number of NVIDIA A100 GPUs will have an immediate impact on our data-intensive and AI HPC workloads, allowing Polaris to tackle some of the world’s most complex scientific problems.”

    The system will accelerate transformative scientific exploration, such as advancing cancer treatments, exploring clean energy and propelling particle collision research to discover new approaches to physics. And it will transport the ALCF into the era of exascale AI by enabling researchers to update their scientific workloads for Aurora, Argonne’s forthcoming exascale system.

    Polaris will also be available to researchers from academia, government agencies and industry through the ALCF’s peer-reviewed allocation and application programs. These programs provide the scientific community with access to the nation’s fastest supercomputers to address “grand challenges” in science and engineering.

    See the full article here.


     
  • richardmitnick 1:56 pm on March 23, 2021 Permalink | Reply
    Tags: "Berzelius" is now Sweden’s fastest supercomputer for AI and machine learning., "Sweden’s Fastest Supercomputer for AI Now Online", insideHPC

    From insideHPC: “Sweden’s Fastest Supercomputer for AI Now Online” 

    From insideHPC

    March 23, 2021

    Berzelius is now Sweden’s fastest supercomputer for AI and machine learning, and has been installed in the National Supercomputer Centre at Linköping University [Linköpings universitet](SE). A donation of EUR 29.5 million from the Knut and Alice Wallenberg Foundation has made the construction of the new supercomputer possible.


    “It’s very gratifying, but also a major challenge, that Linköping University is taking a national responsibility to connect all initiatives within high-performance computing and data processing. Our new supercomputer is a powerful addition to the important research carried out into such fields as the life sciences, machine learning and artificial intelligence”, says Jan-Ingvar Jönsson, vice-chancellor of Linköping University.

    The new supercomputer – Berzelius – takes its name from the renowned scientist Jacob Berzelius, who came from Östergötland, the region of Sweden in which Linköping is located. The supercomputer is based on the Nvidia DGX SuperPOD computing architecture and delivers 300 petaflops of AI performance. This makes Berzelius by far the fastest supercomputer in Sweden, and important for the development of Swedish AI research carried out in collaboration between the academic world and industry.

    Marcus Wallenberg, vice-chair of the Knut and Alice Wallenberg Foundation, took part in the digital inauguration of Berzelius. “We are extremely happy for research in Sweden that the Wallenberg Foundations have been able to contribute to the acquisition of world-class computer infrastructure in a location that supplements and reinforces the major research initiatives we have made in recent years in such fields as AI, mathematics and the data-driven life sciences,” said Wallenberg.

    The researchers who will primarily work with the supercomputer are associated with the research programmes funded by the Knut and Alice Wallenberg Foundation, such as the Wallenberg AI Autonomous Systems and Software Program, Wasp. Anders Ynnerman, professor of scientific visualisation at Linköping University and programme director for Wasp, is happy to welcome the new machine.

    “Research in machine learning requires enormous quantities of data that must be stored, transported and processed during the training phase. Berzelius is a resource of a completely new order of magnitude in Sweden for this purpose, and it will make it possible for Swedish researchers to compete among the global vanguard in AI,” said Ynnerman.

    Berzelius will initially be equipped with 60 of the latest and fastest AI systems from Nvidia, each with eight graphics processing units and Nvidia networking. Jensen Huang is Nvidia’s CEO and founder.

    “In every phase of science, there has been an instrument that was essential to its advancement, and today, the most important instrument of science is the supercomputer. With Berzelius, Marcus and the Wallenberg Foundation have created the conditions so that Sweden can be at the forefront of discovery and science. The researchers that will be attracted to this system will enable the nation to transform itself from an industrial technology leader to a global technology leader,” said Huang.

    The facility has networks from Nvidia, application tools from Atos, and storage capacity from DDN. The machine has been delivered and installed by Atos. Pierre Barnabé is Senior Executive Vice-President and Head of the Big Data and Cybersecurity Division at Atos.

    “We are really delighted to have been working with Linköping University on the delivery and installation of this new high-performance supercomputer. With Berzelius, researchers will now have powerful computing capacity that is able to harness the power of deep learning and analytics in order to speed up data processing times and provide researchers with insights faster, thereby helping Sweden to address some of the key challenges in AI and machine learning today,” said Barnabé.

    Berzelius comprises 60 Nvidia DGX A100 systems interconnected with Nvidia Mellanox HDR 200 Gb/s InfiniBand networking and four DDN AI400X with NVMe. The Atos Codex AI Suite will support researchers in using the system efficiently.
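
    The quoted 300 petaflops of “AI performance” follows from that configuration, assuming NVIDIA’s published sparse FP16 peak of 624 teraflops per A100 (my assumption; the article doesn’t define the metric):

        # Reconciling Berzelius's "300 petaflops" of AI performance.
        gpus = 60 * 8                  # 60 DGX A100 systems, 8 GPUs each
        print(gpus * 624e12 / 1e15)    # ~300 PF sparse FP16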

    See the full article here.


     
  • richardmitnick 9:06 am on February 27, 2021 Permalink | Reply
    Tags: "HPE to Build Research Supercomputer for Sweden’s KTH Royal Institute of Technology", insideHPC

    From insideHPC: “HPE to Build Research Supercomputer for Sweden’s KTH Royal Institute of Technology” 

    From insideHPC

    February 26, 2021

    HPE Dardel Cray EX system

    HPE’s string of HPC contract wins has continued with the company’s announcement today that it’s building a supercomputer for KTH Royal Institute of Technology [Kungliga Tekniska högskolan] (KTH) in Stockholm. Funded by Swedish National Infrastructure for Computing (SNIC), the HPE Cray EX system will target modeling and simulation in academic pursuits and industrial areas, including drug design, renewable energy and advanced automotive and fleet vehicles, HPE said.

    The new supercomputer (named “Dardel” in honor of the Swedish novelist Thora Dardel and her first husband, Nils Dardel, a post-impressionist painter) will replace KTH’s current flagship system, Beskow, and will be housed on KTH’s main campus at the PDC Center for High Performance Computing.

    The supercomputer will include HPE Slingshot HPC networking with congestion control, will feature AMD EPYC CPUs and AMD Instinct GPU accelerators, and will have a theoretical peak performance of 13.5 petaflops. HPE will install the first phase of the supercomputer this summer; it will include more than 65,000 CPU cores and is scheduled to be ready for use in July. The second phase will add GPUs, to be installed later this year and ready for use in January 2022.
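
    For a rough sense of how the two phases split the 13.5-petaflop total, here is a hypothetical peak estimate for the CPU phase; the clock and FLOPs-per-cycle values below are assumed EPYC figures, not from the article:

        # Hypothetical CPU-phase peak: cores x clock x FLOPs per cycle.
        cores = 65_000             # "more than 65,000 CPU cores"
        clock_hz = 2.25e9          # assumed AMD EPYC base clock
        fp64_per_cycle = 16        # assumed: two 256-bit FMA units per core
        print(cores * clock_hz * fp64_per_cycle / 1e15)   # ~2.3 PF

    On those assumptions the CPU partition contributes only a couple of petaflops, implying the GPU phase supplies most of the 13.5-petaflop peak.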

    See the full article here.


     
  • richardmitnick 4:58 pm on February 16, 2021 Permalink | Reply
    Tags: "PSC’s Big Data and AI Supercomputer Replaced by New Bridges-2 Platform", insideHPC

    From insideHPC: “PSC’s Big Data and AI Supercomputer Replaced by New Bridges-2 Platform” 

    From insideHPC

    February 16, 2021

    From the vastness of neutron-star collisions to the raw power of incoming tsunamis to the tiny, life-and-death details of how COVID-19 progresses, the Bridges platform at the Pittsburgh Supercomputing Center (PSC) has seen it all.

    Pittsburgh Supercomputing Center

    Bridges supercomputer at PSC

    ANTON 2 D. E. Shaw Research (DESRES) supercomputer at PSC

    Olympus supercomputing Cluster at PSC

    Funded in late 2014 by the National Science Foundation (NSF) as a component in the NSF’s XSEDE cyberinfrastructure “ecosystem,” the $10-million Bridges—also supported by the NSF with an initial $10 million in operational funding—has, among numerous other advances:

    powered artificial intelligence (AI) that beat the world’s best human poker players
    identified the anonymous printers who put on paper fundamental 17th Century works on individual liberty that underlie the U.S. Constitution
    improved predictions of severe weather to lengthen warning times
    shed light on the genetic resilience of species to climate change
    offered gene researchers an easy-to-use tool to assemble the largest DNA and RNA sequences

    Bridges was so successful that, in 2018, the NSF extended its life with an additional $1.9 million for an extra year of operations.

    “Bridges was our first and highly successful step in creating a new computational platform for accelerating new and novel forms of computational research,” said Shawn Brown, Bridges’ principal investigator and director of PSC. “Bridges-2 will take the next step in accelerating rapidly evolving data and AI research with greater capabilities and the same excellent team supporting the research.”

    Recently Bridges answered a call to service, enrolling in the international COVID-19 HPC Consortium. This made the system available to scientists doing COVID research via a unique overnight approval process. As part of the Consortium, Bridges (along with the D.E. Shaw Research Anton2 supercomputer at PSC) supplied over $880,000 worth of computing time and storage to scientists searching for public health, drug and vaccine remedies to prevent and treat disease caused by the SARS-CoV-2 virus.

    Like all of PSC’s systems, Bridges was available for open research or educational purposes at no cost to users, as well as at cost-recovery for private companies. In total, Bridges powered 2,100 projects by 16,000 users at 800 institutions of science and learning across the U.S. and the globe.

    The Predominance of Memory—and of Helping the Newbies

    Bridges first became a twinkle in the eye of PSC’s technical staff and leadership when the center’s NSF-funded Blacklight (2010-2015) encountered intense demand while meeting scientists’ needs in two key HPC areas.

    The first was memory. Memory—the same as RAM in a personal computer—was not traditionally a priority in HPC design. Raw processing speed (in flops, or floating-point operations per second) stemming from many small-memory but fast processors had been the need in earlier HPC problems such as aircraft design or weather prediction. But the advent of new applications for HPC systems, including but not limited to assembly of massive DNA sequences such as in the Human Genome Project, created a need for a computer that could store “Big Data” ready for its processors without necessarily having the fastest or largest array of processors. Blacklight offered RAM in two massive chunks that proved popular with scientists in these “new research communities.”

    These new communities posed another challenge for PSC’s staff. The scientists in these fields had never before used HPC and didn’t need or want to learn how to program supercomputers. So Blacklight also featured an ease of entry that few HPC systems of its generation could match.

    In designing Bridges in cooperation with Hewlett Packard Enterprise, PSC doubled down on Blacklight’s strengths and added more. Bridges would be an even bigger Big Data resource, with regular, large, and extreme memory nodes tailored for different types and sizes of projects, and an even larger data storage system, called Pylon. The 276-TB-RAM Bridges (276,000 gigabytes, compared with the 64 GB of a “hot” personal computer today) was no slouch in speed either, offering 1.3 petaflops (roughly 12,000 times as fast as the hot PC).
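
    The two comparisons in that paragraph are internally consistent; checking them takes two lines (the figures are the article’s, the arithmetic is mine):

        # Bridges vs. the "hot" PC in the article's own numbers.
        print(276e12 / 64e9)            # ~4,300x the PC's memory
        print(1.3e15 / 12_000 / 1e9)    # implied PC speed: ~108 gigaflops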

    “Big machines are like old friends, it’s always sad to see when they go,” said Olexandr Isayev, assistant professor of chemistry at the Mellon College of Science. “We used PSC Bridges very productively for a number of years … It was a fun experience, and we are looking forward to Bridges-2 for even more fun—but most importantly, to solve even more challenging problems in chemical sciences!”

    Bridges also offered even more user-friendly operating modes, ranging from direct programming for the HPC veterans to “virtual machine” and “interactive” modes for the newbies. The system proved so easy to use that it helped power PSC’s STEM education efforts, and was used by students from the middle school to graduate levels. Bridges’ use included 130 such educational allocations.

    Bridges went one step further by containing graphics processing unit nodes (GPUs, the same as graphics cards in personal computers). GPU technology greatly sped up many types of computation and powered the explosion of AI successes beginning in 2012. The 21,056-core “heterogeneous computing” Bridges gave researchers the ability to route portions of their computations to whatever type of processor would speed their work the most. PSC staff wrote most of the software that made that vast and speedy data-traffic routing possible.
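
    As a toy illustration of that kind of routing (a sketch of the idea only, not PSC’s actual scheduling software; all thresholds here are invented):

        # Toy dispatcher: send a job to the pool matching its dominant need.
        def route(job):
            if job.get("gpu_friendly"):          # e.g. deep learning training
                return "GPU nodes"
            if job.get("ram_gb", 0) > 3_000:     # e.g. huge genome assemblies
                return "extreme-memory nodes"
            return "regular-memory nodes"

        print(route({"gpu_friendly": True}))     # GPU nodes
        print(route({"ram_gb": 12_000}))         # extreme-memory nodes
        print(route({"ram_gb": 128}))            # regular-memory nodes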

    In 2019, the NSF funded an expansion of Bridges’ AI capabilities with the latest GPU technology. Bridges-AI, which features the world’s first NVIDIA DGX-2 system for open research, fueled even more, and more sophisticated, AI work on Bridges. Bridges-AI will continue this mission as a part of the new Bridges-2.

    Now Bridges has taken its final bow, ceding the title of PSC’s flagship high-performance computing (HPC) system to the larger, more advanced Bridges-2.

    Pittsburgh Supercomputing Center HPE Bridges-2 supercomputer.

    Looking Forward

    While Bridges’ final disposition is still being worked out, its components will not go to waste. Some of them will be repurposed into Bridges-2. Others will be used for research and philanthropic purposes by other organizations.

    The $10 million (plus $10 million in initial operational funding) Bridges-2, funded in 2019, will in turn build on Bridges’ successes, providing even more massive AI and Big Data capacity to serve both scientists for whom HPC has long been a standard tool and those in new communities. About three times larger than Bridges, with 64,512 cores, Bridges-2 has completed its early user period and begun regular operations.

    Later this year, Bridges-2 will be federated with PSC’s new Neocortex, an AI-specialized HPC system also funded by the NSF last year.

    See the full article here.


     
  • richardmitnick 4:38 pm on February 12, 2021 Permalink | Reply
    Tags: "Czech Hydrometeorological Institute (CHMI) [Český hydrometeorologický ústav (ČHMÚ)] (CZ) Puts NEC SX-Aurora TSUBASA Supercomputer into Operation", insideHPC

    From insideHPC: “Czech Hydrometeorological Institute (CHMI) [Český hydrometeorologický ústav (ČHMÚ)] (CZ) Puts NEC SX-Aurora TSUBASA Supercomputer into Operation”

    From insideHPC

    February 12, 2021

    NEC Corporation (NEC; TSE: 6701) today announced that the Czech Hydrometeorological Institute (CHMI) [Český hydrometeorologický ústav (ČHMÚ)] (CZ) has put an NEC SX-Aurora TSUBASA supercomputer into service.

    NEC SX-Aurora TSUBASA A500-64

    The newly deployed HPC solution is used for high-resolution regional climate modelling.

    The SX-Aurora TSUBASA supercomputer was delivered by NEC Deutschland GmbH in September 2020, and operational readiness was declared in December 2020. At the heart of the solution are 48 vector hosts containing 384 vector engine cards of type VE 20B in a directly liquid-cooled (DLC) environment, together with a fully non-blocking high-speed interconnect based on Mellanox HDR InfiniBand network technology, a total of 18 terabytes of HBM2 high-speed memory, and 24 terabytes of DDR4 main memory. In addition, an HPC parallel storage solution based on the NEC LxFS-z Storage Appliance, with a usable capacity of more than 2 petabytes, was deployed.
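
    Dividing those totals by the unit counts shows a balanced layout (the per-unit values are my derivation from the figures above):

        # Per-host and per-card breakdown of the CHMI system.
        hosts, ve_cards = 48, 384
        print(ve_cards / hosts)          # 8 vector engines per host
        print(18e12 / ve_cards / 1e9)    # ~47 GB of HBM2 per VE card
        print(24e12 / hosts / 1e12)      # 0.5 TB of DDR4 per host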

    NEC has realized a highly efficient DLC concept with cold water by combining leading-edge DLC and side-cooler technology to avoid any leakage of waste heat into the computer room, which allows the complete system and its environment to operate without any additional air conditioning in place. In total, the complete solution shows much better power efficiency than the tender requirements originally demanded.

    The new system will be used to simulate the future climate, and how its changes will manifest themselves. For example, it will help to predict the future frequency and intensity of drought periods, and the change in extremity of weather phenomena such as flash floods and strong winds. The ultimate goal is therefore to help prepare adaptation measures, mitigating the impacts of the changing climate. In addition, it acts as a development system for the adaptation and optimization of certain meteorological codes and climate applications that benefit greatly from vectorization on SX-Aurora TSUBASA.

    “We are very happy to bring the new NEC SX-Aurora TSUBASA into operation. For us, the vector technology that SX-Aurora TSUBASA provides represents a highly attractive alternative to competing HPC technologies, especially since we do not need to rewrite the majority of our production codes. Another great advantage is the excellent ratio between the applicative performance gain factor and power consumption,” said Dr. Radmila Brozkova, Head of the CHMI Numerical Weather Prediction department.

    “It is an honor for us that CHMI has selected NEC for the delivery of its latest HPC solution, which clearly points the way to the future of climate modelling and weather forecasting. CHMI is a very important customer for us, and we are happy to provide our strongest support, not only for smooth operations but also by optimizing the performance of the climate applications in use,” said Yuichi Kojima, Managing Director of NEC Deutschland GmbH and Vice President HPC.

    See the full article here.


     
  • richardmitnick 3:19 pm on February 10, 2021 Permalink | Reply
    Tags: "HPE Supercomputers Installed at Oak Ridge for U.S. Air Force", Called "Fawbush" and "Miller" after meteorologists Major Ernest Fawbush and Captain Robert Miller who issued the first tornado forecast at Tinker Air Force Base in Oklahoma., insideHPC, The systems are two HPE Cray EX supercomputers.

    From insideHPC: “HPE Supercomputers Installed at Oak Ridge for U.S. Air Force”

    From insideHPC

    February 10, 2021

    HPE has delivered a two-systems-in-one supercomputer to support weather modeling and forecasting for the U.S. Air Force and Army. The HPC system, powered by two HPE Cray EX supercomputers, is now operational at Oak Ridge National Laboratory (ORNL), which manages it.

    The HPC system, powered by two HPE Cray EX supercomputers called “Fawbush” and “Miller”, is now operational at Oak Ridge National Laboratory.

    HPE said that at 7.2 petaflops peak performance, the combined systems are 6.5 times faster than Air Force Weather’s existing system, allowing larger computations at higher resolution and better accuracy in global weather simulations, and reducing the spacing between model grid points from 17 kilometers to 10 kilometers. The systems, called “Fawbush” and “Miller”, are named after meteorologists Major Ernest Fawbush and Captain Robert Miller, who issued the first tornado forecast at Tinker Air Force Base in Oklahoma in 1948.
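
    The grid refinement and the speedup fit together under the standard scaling argument (my reasoning, not the article’s): finer horizontal spacing increases the number of grid points quadratically, and the shorter time step a finer grid requires adds roughly another factor of the same ratio:

        # Rough cost of refining a global weather grid from 17 km to 10 km.
        ratio = 17 / 10
        print(ratio ** 2)    # ~2.9x more horizontal grid points
        print(ratio ** 3)    # ~4.9x more work once the time step shrinks too

    A 6.5x faster machine comfortably covers that increase.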

    “Air Force Weather uses the weather intelligence across atmospheric and solar data, when delivering ongoing alerts, analyses and forecasts to U.S. defense missions worldwide to help military aircraft mitigate weather conditions and achieve readiness,” HPE said in its announcement.

    HPE said it is one of the first operational systems to be powered by the HPE Cray EX supercomputer architecture, formerly known as “Cray Shasta,” which will also power the three upcoming U.S. exascale systems, including Frontier, expected to be installed this year at Oak Ridge.

    The Air Force systems, delivered via a partnership between Oak Ridge and Air Force Weather, feature 2nd Gen AMD EPYC CPUs.

    HPE said the system will enable the Air Force, in collaboration with ORNL’s Computational Earth Sciences Division, to focus on the following areas:

    Forecast stream flow, flooding, or inundation to predict how much of a given area will be submerged in water and to what depth. Researchers plan to achieve this by creating a global hydrology model that involves simulating hundreds of watersheds and drainage basins to eventually increase accuracy in predicting future events.
    Remote sensing of cloud-covered areas to address how to navigate impacted missions by forecasting the formation, growth and precipitation of atmospheric clouds. Researchers plan to achieve this by using comprehensive cloud physics that is not possible with existing statistical regression models.

    See the full article here.


     
  • richardmitnick 5:06 pm on February 9, 2021 Permalink | Reply
    Tags: "Lenovo to Power SURF Dutch National Supercomputer", insideHPC

    From insideHPC: “Lenovo to Power SURF Dutch National Supercomputer” 

    From insideHPC

    February 9, 2021
    Doug Black

    Lenovo Data Center Group (DCG) announced it will deliver high-performance computing (HPC) infrastructure for SURF, the ICT cooperative for education and research in the Netherlands. The €20 million project, which begins in early 2021, will result in the creation of the largest and most powerful supercomputer in the country.


    Supporting scientists from over 100 education and research institutions throughout the Netherlands, the supercomputer will power highly complex calculations in life-enhancing work across all fields of science including meteorology, astrophysics, medical and social sciences, and materials and earth sciences, such as climate change research.

    Explaining the significance of this collaboration, and the benefits to be passed on to research projects, Walter Lioen, Research Services Manager at SURF, said: “The need of researchers for computing power, data storage and processing is growing exponentially.”

    SURF’s current Cartesius system. Cartesius is a bullx system extended with one Bull sequana cell. It is a clustered SMP (Symmetric Multiprocessing) system built by Atos/Bull.

    Lenovo’s state-of-the-art HPC technology will include Lenovo ThinkSystem servers powered by 2nd Gen AMD EPYC™ processors and ThinkSystem servers powered by future-generation AMD EPYC processors, all cooled by Lenovo Neptune water-cooling technology. 12.4 pebibytes (PiB) of Lenovo Distributed Storage Solution (DSS-G) and servers with NVIDIA HGX A100 4-GPU will also assist with the artificial intelligence and machine learning capabilities required for SURF’s innovative research. NVIDIA Mellanox HDR 200Gb/s InfiniBand, with smart in-network computing acceleration engines, provides the extremely low-latency, high-data-throughput networking. Running at ten times the capacity of the previous system and achieving an overall peak performance of almost 13 petaflops, Lenovo’s smarter infrastructure will deliver a powerful, highly efficient and sustainable tool for scientists and researchers in the future.
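
    One unit note: the storage figure is quoted in binary pebibytes, which works out to slightly more in decimal petabytes (conversion mine):

        # Pebibytes (binary) to petabytes (decimal) for the 12.4 PiB DSS-G figure.
        print(12.4 * 2**50 / 1e15)    # ~14.0 PB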

    “AMD is proud to be working with leading global institutions to provide access to advanced technologies and capabilities that are critical for supporting modern HPC workloads and research that addresses some of the world’s greatest challenges,” said Roger Benson, senior director, Commercial EMEA at AMD. “We are thrilled to be collaborating with Lenovo Data Center Group on such an innovative HPC project, bringing the performance of AMD EPYC processors to scientists and research institutions in The Netherlands, allowing them to excel in their work.”

    While optimal performance is a necessity for the SURF Dutch National Supercomputer, it is also vitally important to ensure the system is energy efficient. Lenovo’s water-cooling technology will remove approximately 90% of the heat from the system, reducing overall energy consumption, significantly increasing overall efficiency and ultimately allowing the processors to perform at their peak.
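
    To make the 90% figure concrete with a purely hypothetical example (the 1 MW load below is invented for illustration, not a SURF specification):

        # Hypothetical 1 MW IT load with 90% of heat captured by the water loop.
        it_load_kw = 1_000
        to_water = 0.90 * it_load_kw     # 900 kW carried away by warm water
        print(it_load_kw - to_water)     # only ~100 kW left for room air handling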

    Tina Borgbjerg, General Manager for Benelux and Nordics, Lenovo DCG, says, “We’re so pleased to contribute to a project that will not only enrich scientific research in the Netherlands but deliver a smarter and more energy-efficient system, thanks to our incredible water-cooling technology. The sheer power that will be delivered by this national supercomputer showcases our strength in HPC, and the scale of this deal further demonstrates our commitment to the Benelux region and the Netherlands.”

    “Our partnership with SURF shows our continued commitment to delivering innovative HPC technology to empower those who help solve humanity’s greatest challenges,” said Noam Rosen, EMEA Director, HPC & AI at Lenovo DCG. “Harnessing the capabilities of the dawning exascale era of computing and putting them in the hands of organizations like SURF for groundbreaking research is what Lenovo’s ‘From Exascale to Everyscale™’ initiative is all about.”

    “Our A100 Tensor Core GPUs featured in SURF are based on NVIDIA Ampere architecture, an engineering milestone that boosts performance for Artificial Intelligence (AI) training and inference, as well as easily meeting the access and power demands of the modern AI and HPC workloads,” said Ian Buck, General Manager and Vice President of Accelerated Computing at NVIDIA. “Incorporating the world’s most advanced AI technology, all connected by high-bandwidth, low-latency NVIDIA Mellanox HDR InfiniBand networking, into the SURF supercomputer gives researchers what they need to quickly and effectively take on the workloads of the exascale AI era.”

    The modernization of the infrastructure will begin in February 2021 and phase 1 of the new supercomputer is expected to be operational by mid-2021.

    In the design of the new supercomputer, usability for scientific research was paramount. SURF chose Lenovo for its quality, performance and future flexibility, as well as its considerations for sustainability.

    About SURF

    SURF is the collaborative organization for ICT in Dutch education and research. Within the SURF cooperative, universities, universities of applied sciences, vocational schools, research institutions and university medical centers work together on ICT facilities and ICT innovations. This provides students, lecturers, researchers and staff with the best ICT facilities for top research and talent development under favorable conditions. For more information, visit https://www.surf.nl/en

    See the full article here.


     
  • richardmitnick 5:48 pm on July 26, 2020 Permalink | Reply
    Tags: "DOE Unveils Blueprint for ‘Unhackable’ Quantum Internet: Central Roles for Argonne; Univ. of Chicago; Fermilab", DOE said its 17 national laboratories will serve as the backbone of the intended quantum internet., insideHPC, Reliance on the laws of quantum mechanics.

    From insideHPC brought forward by Fermi National Accelerator Lab: “DOE Unveils Blueprint for ‘Unhackable’ Quantum Internet: Central Roles for Argonne, Univ. of Chicago, Fermilab” 

    From insideHPC

    brought forward by

    FNAL Art Image by Angela Gonzales

    Fermi National Accelerator Lab, an enduring source of strength for the US contribution to scientific research worldwide.

    July 23, 2020

    Photons entangled in a pair of loops on a 52-mile quantum network testbed.
    Credit: Argonne National Laboratory

    At a press conference held today at the University of Chicago, the U.S. Department of Energy (DOE) unveiled a strategy [OSTI] for the development of a national quantum internet intended to bring “the United States to the forefront of the global quantum race and usher in a new era of communications.”

    An outgrowth of the National Quantum Initiative Act of 2018, an initial goal of the strategy is to build a prototype of a communications system using quantum mechanics over the next decade. DOE said its 17 national laboratories will serve as the backbone of the intended quantum internet, which will rely on the laws of quantum mechanics to control and transmit information more securely. “Currently in its initial stages of development, the quantum internet could become a secure communications network and have a profound impact on areas critical to science, industry, and national security,” DOE said in its announcement today.

    The announcement follows a meeting in February in New York between members of the national labs, universities and industry to identify the essential research to be accomplished, describe the engineering and design barriers, and set near-term goals for the project. Steps toward building the quantum internet are underway in the Chicago area, which DOE said has become a hub for quantum research. Last February, scientists from Argonne National Laboratory in Lemont, Illinois, and the University of Chicago entangled photons across a 52-mile “quantum loop” in the Chicago suburbs, “successfully establishing one of the longest land-based quantum networks in the nation,” according to DOE.

    That network will be connected to DOE’s Fermilab, specializing in particle physics, in Batavia, IL, establishing a three-node, 80-mile testbed.

    “The combined intellectual and technological leadership of the University of Chicago, Argonne, and Fermilab has given Chicago a central role in the global competition to develop quantum information technologies,” said Robert J. Zimmer, president of the University of Chicago. “This work entails defining and building entirely new fields of study, and with them, new frontiers for technological applications that can improve the quality of life for many around the world and support the long-term competitiveness of our city, state, and nation.”

    In today’s announcement, DOE stated that quantum transmissions are “exceedingly difficult to eavesdrop on as information passes between locations. Scientists plan to use that trait to make virtually unhackable networks.” Early adopters could include banks and health providers, with applications for national security and aircraft communications – even, eventually, mobile phone users.
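
    Why eavesdropping is detectable in principle comes down to measurement disturbing quantum states. A textbook BB84-style simulation illustrates the effect (a standard classroom model, not DOE’s actual design; every number here is illustrative): an interceptor who measures photons in randomly chosen bases corrupts about a quarter of the positions where sender and receiver happened to use the same basis, so comparing a sample of the sifted key exposes the intrusion.

        # Textbook BB84-style sketch: an eavesdropper raises the error rate to ~25%.
        import random

        n = 10_000
        bits = [random.randint(0, 1) for _ in range(n)]     # Alice's raw bits
        a_basis = [random.choice("+x") for _ in range(n)]   # Alice's bases
        e_basis = [random.choice("+x") for _ in range(n)]   # Eve's bases
        b_basis = [random.choice("+x") for _ in range(n)]   # Bob's bases

        def measure(bit, prep, meas):
            # A matching basis reads the bit faithfully; a mismatch randomizes it.
            return bit if prep == meas else random.randint(0, 1)

        eve = [measure(b, p, e) for b, p, e in zip(bits, a_basis, e_basis)]
        bob = [measure(b, e, m) for b, e, m in zip(eve, e_basis, b_basis)]

        sifted = [(s, r) for s, r, p, m in zip(bits, bob, a_basis, b_basis) if p == m]
        print(sum(s != r for s, r in sifted) / len(sifted))  # ~0.25 with Eve present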

    DOE said scientists are also exploring how the quantum internet could support exchange of vast amounts of data and how networks of ultra-sensitive quantum sensors could allow engineers to better monitor and predict earthquakes—a longtime and elusive goal—or to search for deposits of oil, gas, or minerals.

    The report released today includes such research objectives as building and integrating quantum networking devices, routing quantum information and correcting errors.

    See the full article here.


     
  • richardmitnick 12:38 pm on April 22, 2020 Permalink | Reply
    Tags: Fujitsu PRIMEHPC FX1000 supercomputer., insideHPC

    From insideHPC: “Fujitsu Supercomputer to Power Aerospace Research at JAXA in Japan” 

    From insideHPC

    April 22, 2020

    Today Fujitsu announced that it has received an order for a supercomputer system from the Japan Aerospace Exploration Agency (JAXA).


    PRIMEHPC FX1000

    The system will contribute to improving the international competitiveness of aerospace research, as it will be widely used as the basis for JAXA’s high-performance computing. It is also expected to be used for various applications, including a large-scale data analysis platform for satellite observation and an AI calculation processing platform for joint research.


    “Scheduled to start operation in October 2020, the new computing system for large-scale numerical simulation, composed of Fujitsu Supercomputer PRIMEHPC FX1000, is expected to have a theoretical computational performance of 19.4 petaflops, which is approximately 5.5 times that of the current system. At the same time, Fujitsu will implement 465 nodes of x86 servers Fujitsu Server PRIMERGY series for general-purpose systems that can handle diverse computing needs.”

    As it conducts research in space development, aviation technology, and related basic technology, JAXA has used supercomputer systems to develop numerical simulation technologies such as fluid dynamics and structural dynamics in the study of aircraft and rockets. In recent years, in addition to conventional numerical simulations, the system has been expanding its role in the HPC field. For example, it has processed Earth-observation data collected by satellites for use by researchers and the general public, and it has been used in AI calculations, including deep learning.

    JAXA currently operates a supercomputer system, JSS2, comprising SORA-MA, which consists of 3,240 nodes of the Fujitsu Supercomputer PRIMEHPC FX100, and J-SPACE, which stores and manages various data using large-capacity storage media.

    Features of the New Supercomputer System


    Fujitsu will implement a computing system for large-scale numerical simulations. The system will consist of 5,760 nodes of PRIMEHPC FX1000, which utilizes the technology of the supercomputer Fugaku, jointly developed by Fujitsu and RIKEN.


    It is expected to deliver 19.4 petaflops of theoretical double-precision (64-bit) performance, the precision usually used in simulations, approximately 5.5 times that of the current system. In addition, a total of 465 x86 nodes from the Fujitsu Server PRIMERGY series, equipped with high memory capacity and GPUs, will be deployed to compose a general-purpose system capable of handling a variety of computing needs. With a large file system capacity of approximately 50 petabytes, including a high-speed access storage system of approximately 10 petabytes, the new system will offer high performance and ease of use. The implementation of PRIMEHPC FX1000, equipped with the highly versatile Arm-architecture CPU A64FX, will enable the application of various software and contribute to the widespread use of JAXA’s research results.
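
    The quoted 19.4 petaflops is consistent with the node count and the A64FX chip’s commonly cited specifications (48 compute cores at 2.2 GHz with 32 double-precision FLOPs per core per cycle; these per-chip figures are my assumption, not stated in the article):

        # Reconciling 5,760 FX1000 nodes with "19.4 petaflops".
        node_peak_flops = 48 * 2.2e9 * 32       # ~3.38 TF per A64FX node
        print(5_760 * node_peak_flops / 1e15)   # ~19.5 PF theoretical peak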

    Future Plans

    While enhancing the global advantage of JAXA’s aerospace research in the conventional numerical simulation field, the system, as the foundation of the Agency’s HPC infrastructure, will be used for an AI computational processing platform for joint research and shared use. The system will also be applied to a large-scale data analysis platform for aggregating and analyzing satellite observation data that had previously been stored and managed by different divisions at JAXA. Fujitsu will support JAXA in making its philosophy a reality by solving its issues, drawing on experience gained through supplying supercomputer systems to the Agency since the 1970s. Offering PRIMEHPC FX1000 worldwide, the company will contribute to solving social issues, accelerating leading-edge research, and bolstering the competitive edge of corporations.

    See the full article here.


     