Tagged: Cray supercomputing company

  • richardmitnick 1:02 pm on August 8, 2019
    Tags: Army Research Lab (ARL), Cray supercomputing company, ERDC-U.S. Army Engineering and Research Development Center

    From insideHPC: “AMD to Power Two Cray CS500 Systems at Army Research Centers” 

    From insideHPC

    August 8, 2019

    Today Cray announced that the U.S. Department of Defense (DOD) has selected two Cray CS500 systems for its High Performance Computing Modernization Program (HPCMP) annual technology procurement known as TI-18.

    The Army Research Lab (ARL) and the U.S. Army Engineer Research and Development Center (ERDC) will each deploy a Cray CS500 to help serve the U.S. through accelerated research in science and technology.

    Cray CS500

    The two contracts are valued at more than $46M and the CS500 systems are expected to be delivered to ARL and ERDC in the fourth quarter of 2019.

    “We’re proud to continue to support the DOD and its advanced use of high-performance computing in providing ARL and ERDC new systems for their research programs,” said Peter Ungaro, CEO at Cray. “We’re looking forward to continued collaboration with the DOD in leveraging the capabilities of these new systems to achieve their important mission objectives.”

    Cray has a long history of delivering high-performance computing technologies to ARL and ERDC and continues to play a vital role in helping the organizations deliver on their missions to ensure the U.S. remains a leader in science. Both organizations’ CS500 systems will be equipped with 2nd Gen AMD EPYC processors and NVIDIA Tensor Core GPUs, and will provide access to high-performance capabilities and resources that make it possible for researchers, scientists and engineers across the Department of Defense to gain insights and enable new discoveries across diverse research disciplines, addressing the Department’s most challenging problems.

    “We are truly proud to partner with Cray to create the world’s most powerful supercomputing platforms. To be selected to help accelerate scientific research and discovery is a testament to our commitment to datacenter innovation,” said Forrest Norrod, senior vice president and general manager, Datacenter and Embedded Solutions Business Group, AMD. “By leveraging breakthrough CPU performance and robust feature set of the 2nd Gen AMD EPYC processors with Cray CS500 supercomputers, the DOD has a tremendous opportunity to grow its computing capabilities and deliver on its missions.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 9:51 am on June 26, 2019
    Tags: Cray supercomputing company, Frontier Shasta based Exascale supercomputer

    From insideHPC: “Cray to Deliver First Exabyte HPC Storage System for Frontier Supercomputer” 

    From insideHPC

    June 25, 2019

    At ISC 2019, Cray announced plans to deliver the world’s first exabyte HPC storage system to Oak Ridge National Lab. As part of the Frontier CORAL-2 contract with DOE and ORNL, the next-generation Cray ClusterStor storage file system will be integrated into ORNL’s Frontier exascale supercomputer, built on Cray’s Shasta architecture.

    ORNL Cray Frontier Shasta based Exascale supercomputer with Slingshot interconnect featuring high-performance AMD EPYC CPU and AMD Radeon Instinct GPU technology

    “We are excited to continue our partnership with ORNL to collaborate in developing a next generation storage solution that will deliver the capacity and throughput needed to support the dynamic new research that will be done on the Frontier exascale system for years to come,” said John Dinning, chief product officer at Cray. “By delivering a new hybrid storage solution that is directly connected to the Slingshot network, users will be able to drive data of any size, access pattern or scale to feed their converged modeling, simulation and AI workflows.”

    The storage solution is a new design for the data-intensive workloads of the exascale era and will be based on next generation Cray ClusterStor storage and the Cray Slingshot high-speed interconnect. The storage system portion of the previously-announced Frontier contract is valued at more than $50 million, which is the largest single Cray ClusterStor win to date. The Frontier system is expected to be delivered in 2021.

    The new storage solution will be based on the next generation of Cray’s ClusterStor storage line and will be comprised of over one exabyte (EB) of hybrid flash and high capacity storage running the Lustre® parallel file system. One exabyte of storage is 1,000 petabytes (or one quintillion bytes), which is enough capacity to store more than 200 million high definition movies. The storage solution will be directly connected to ORNL’s Frontier system via the Slingshot system interconnect to enable seamless scaling of diverse modeling, simulation, analytics and AI workloads running simultaneously on the system. The Frontier system is anticipated to debut in 2021 as the world’s most powerful computer with a performance of greater than 1.5 exaflops.

    Compared to the storage for ORNL’s current Summit supercomputer, this next-generation solution offers more than four times the capacity (more than 1 EB, or 1,000 PB, versus 250 PB) and up to four times the throughput (10 TB/s versus 2.5 TB/s) of Summit’s existing Spectrum Scale-based storage system.
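
    As a rough check on the figures quoted above, treating the units as decimal (1 EB = 1,000 PB = 10^18 bytes) and assuming an average HD movie of about 5 GB reproduces the article’s numbers; the movie size is an assumption for illustration only, not a figure from Cray or ORNL.

    // Back-of-the-envelope check of the storage figures quoted above.
    // Assumption: ~5 GB per HD movie (illustrative only).
    #include <cstdio>

    int main() {
        const double exabyte  = 1e18;   // 1 EB = 1,000 PB = 10^18 bytes
        const double hd_movie = 5e9;    // assumed HD movie size in bytes
        printf("HD movies per exabyte: ~%.0f million\n", exabyte / hd_movie / 1e6);  // ~200

        // Frontier's storage vs. Summit's existing Spectrum Scale-based storage
        printf("Capacity ratio:   %.1fx\n", 1000.0 / 250.0);  // 1 EB vs. 250 PB -> 4.0x
        printf("Throughput ratio: %.1fx\n", 10.0 / 2.5);      // 10 TB/s vs. 2.5 TB/s -> 4.0x
        return 0;
    }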

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    The new Cray ClusterStor storage solution for ORNL will be comprised of over 40 cabinets of storage and provide more than 1 EB of total capacity across two tiers of storage to support random and streaming access of data. The primary tier is a flash tier for high-performance scratch storage and the secondary tier is a hard disk tier for high capacity storage. The new storage system will be a center-wide system at ORNL in support of the Frontier exascale system and will be accessed by the Lustre global parallel file system with ZFS local volumes all in a single global POSIX namespace, which will make it the largest single high-performance file system in the world.
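
    Because the center-wide system is presented as a single global POSIX namespace, applications reach either tier through ordinary file I/O. The sketch below illustrates that idea with standard C++ streams; the mount point is a hypothetical path used for illustration, not ORNL’s actual file system layout.

    // Sketch: with one global POSIX namespace, applications use ordinary file I/O
    // regardless of which tier (flash or disk) ultimately holds the data.
    // The path below is hypothetical, not ORNL's actual layout.
    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <string>

    int main() {
        const std::string path = "/lustre/frontier/scratch/example.dat";  // hypothetical mount point

        std::ofstream out(path, std::ios::binary);
        out << "checkpoint data";                    // standard POSIX-style write
        out.close();

        std::ifstream in(path, std::ios::binary);
        std::string contents((std::istreambuf_iterator<char>(in)),
                             std::istreambuf_iterator<char>());
        std::cout << "read back " << contents.size() << " bytes\n";
        return 0;
    }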

    “HPC storage systems have traditionally utilized large arrays of hard disks accessed via large and predictable reads and writes of data. This is in stark contrast to AI and machine learning workloads, which typically have a mix of random and sequential access of small and large data sizes. As a result, traditional storage systems are not well suited for the combined usage of these workloads given the mix of data access and the need for an intelligent high-speed system interconnect to quickly move massive amounts of data on and off the supercomputer to enable these diverse workloads to run simultaneously on exascale systems like Frontier.”

    The next-generation ClusterStor-based storage solution addresses these challenges head on by providing a blend of flash and capacity storage to support complex access patterns, a powerful new software stack for improved manageability and tiering of data, and seamless scaling across both compute and storage through direct connection to the Slingshot high-speed network. In addition to enabling that scaling, the direct connection of storage to the Slingshot network eliminates the storage routers required in most traditional HPC networks, resulting in lower cost, lower complexity and lower latency in the system overall and delivering unprecedented application performance and ROI. Additionally, since Slingshot is Ethernet-compatible, it can also enable seamless interoperability with existing third-party network storage as well as with other data and compute sources.

    Cray’s Shasta supercomputers, ClusterStor storage and the Slingshot interconnect are quickly becoming the leading technology choices for the exascale era by combining the performance and scale of supercomputing with the productivity of cloud computing and full datacenter interoperability. The new compute, software, storage and interconnect capabilities being pioneered for leading research labs like ORNL are being productized as standard offerings from Cray for research and enterprise customers alike, with expected availability starting at the end of 2019.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 10:08 am on May 7, 2019
    Tags: AMD Radeon, Cray supercomputing company, DOE’s Exascale Computing Project, ORNL Cray Frontier Shasta based Exascale supercomputer

    From Oak Ridge National Laboratory: “U.S. Department of Energy and Cray to Deliver Record-Setting Frontier Supercomputer at ORNL” 

    From Oak Ridge National Laboratory

    May 7, 2019
    Morgan L McCorkle
    mccorkleml@ornl.gov
    865-574-7308

    Exascale system expected to be world’s most powerful computer for science and innovation.

    The U.S. Department of Energy today announced a contract with Cray Inc. to build the Frontier supercomputer at Oak Ridge National Laboratory, which is anticipated to debut in 2021 as the world’s most powerful computer with a performance of greater than 1.5 exaflops.

    ORNL Cray Frontier Shasta based Exascale supercomputer with Slingshot interconnect featuring high-performance AMD EPYC CPU and AMD Radeon Instinct GPU technology

    Scheduled for delivery in 2021, Frontier will accelerate innovation in science and technology and maintain U.S. leadership in high-performance computing and artificial intelligence. The total contract award is valued at more than $600 million for the system and technology development. The system will be based on Cray’s new Shasta architecture and Slingshot interconnect and will feature high-performance AMD EPYC CPU and AMD Radeon Instinct GPU technology.

    By solving calculations up to 50 times faster than today’s top supercomputers—exceeding a quintillion, or 10^18, calculations per second—Frontier will enable researchers to deliver breakthroughs in scientific discovery, energy assurance, economic competitiveness, and national security. As a second-generation AI system—following the world-leading Summit system deployed at ORNL in 2018—Frontier will provide new capabilities for deep learning, machine learning and data analytics for applications ranging from manufacturing to human health.
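
    For scale, a quintillion is 10^18, and the 200-petaflop Summit figure cited later in this article gives one point of comparison for the 1.5-exaflop target (the “up to 50 times” figure is stated relative to today’s top supercomputers generally, not Summit alone). A minimal sketch of the unit arithmetic:

    // Scale of the quoted figures (decimal units throughout).
    #include <cstdio>

    int main() {
        const double petaflop = 1e15;              // 10^15 calculations per second
        const double exaflop  = 1e18;              // a quintillion calculations per second
        const double frontier = 1.5 * exaflop;     // >1.5 exaflops, per this article
        const double summit   = 200.0 * petaflop;  // Summit's ~200 petaflops, per this article

        printf("1 exaflop = %.0e calculations/s\n", exaflop);
        printf("Frontier vs. Summit: ~%.1fx\n", frontier / summit);  // ~7.5x
        return 0;
    }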

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    “Frontier’s record-breaking performance will ensure our country’s ability to lead the world in science that improves the lives and economic prosperity of all Americans and the entire world,” said U.S. Secretary of Energy Rick Perry. “Frontier will accelerate innovation in AI by giving American researchers world-class data and computing resources to ensure the next great inventions are made in the United States.”

    Since 2005, Oak Ridge National Laboratory has deployed Jaguar, Titan, and Summit [above], each the world’s fastest computer in its time.

    ORNL OLCF Jaguar Cray Linux supercomputer

    ORNL Cray XK7 Titan Supercomputer, once the fastest in the world, now No.9 on the TOP500

    The combination of traditional processors with graphics processing units to accelerate the performance of leadership-class scientific supercomputers is an approach pioneered by ORNL and its partners and successfully demonstrated through ORNL’s No.1 ranked Titan and Summit supercomputers.

    “ORNL’s vision is to sustain the nation’s preeminence in science and technology by developing and deploying leadership computing for research and innovation at an unprecedented scale,” said ORNL Director Thomas Zacharia. “Frontier follows the well-established computing path charted by ORNL and its partners that will provide the research community with an exascale system ready for science on day one.”

    Researchers with DOE’s Exascale Computing Project are developing exascale scientific applications today on ORNL’s 200-petaflop Summit system and will seamlessly transition their scientific applications to Frontier in 2021. In addition, the lab’s Center for Accelerated Application Readiness is now accepting proposals from scientists to prepare their codes to run on Frontier.

    Researchers will harness Frontier’s powerful architecture to advance science in such applications as systems biology, materials science, energy production, additive manufacturing and health data science. Visit the Frontier website to learn more about what researchers plan to accomplish in these and other scientific fields.

    Frontier will offer best-in-class traditional scientific modeling and simulation capabilities while also leading the world in artificial intelligence and data analytics. Closely integrating artificial intelligence with data analytics and modeling and simulation will drastically reduce the time to discovery by automatically recognizing patterns in data and guiding simulations beyond the limits of traditional approaches.

    “We are honored to be part of this historic moment as we embark on supporting extreme-scale scientific endeavors to deliver the next U.S. exascale supercomputer to the Department of Energy and ORNL,” said Peter Ungaro, president and CEO of Cray. “Frontier will incorporate foundational new technologies from Cray and AMD that will enable the new exascale era—characterized by data-intensive workloads and the convergence of modeling, simulation, analytics, and AI for scientific discovery, engineering and digital transformation.”

    Frontier will incorporate several novel technologies co-designed specifically to deliver a balanced scientific capability for the user community. The system will be composed of more than 100 Cray Shasta cabinets with high-density compute blades powered by HPC- and AI-optimized AMD EPYC processors and Radeon Instinct GPU accelerators purpose-built for the needs of exascale computing. The new accelerator-centric compute blades will support a 4:1 GPU-to-CPU ratio, with high-speed AMD Infinity Fabric links and coherent memory between them within the node. Each node will have one Cray Slingshot interconnect network port for every GPU, with streamlined communication between the GPUs and the network to enable optimal performance for high-performance computing and AI workloads at exascale.
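
    The ratios in the paragraph above can be summarized as a per-node sketch. The counts follow the stated 4:1 GPU-to-CPU ratio and one network port per GPU; the struct and field names are illustrative placeholders, not published Frontier specifications.

    // Illustrative per-node sketch of the balance described above.
    // Ratios come from the article; the struct and names are placeholders.
    #include <cstdio>

    struct FrontierNodeSketch {
        int  epyc_cpus            = 1;     // HPC/AI-optimized AMD EPYC processor
        int  radeon_instinct_gpus = 4;     // 4:1 GPU-to-CPU ratio
        int  slingshot_ports      = 4;     // one Slingshot network port per GPU
        bool coherent_memory      = true;  // CPU and GPUs linked by AMD Infinity Fabric
    };

    int main() {
        FrontierNodeSketch node;
        printf("GPUs per CPU: %d, network ports per node: %d\n",
               node.radeon_instinct_gpus / node.epyc_cpus, node.slingshot_ports);
        return 0;
    }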

    To make this performance seamless for developers to consume, Cray and AMD are co-designing and developing enhanced GPU programming tools optimized for performance, productivity and portability. This will include new capabilities in the Cray Programming Environment and AMD’s ROCm open compute platform that will be integrated together into the Cray Shasta software stack for Frontier.
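
    For a flavor of what ROCm-targeted GPU code looks like, here is a minimal HIP kernel (HIP is ROCm’s C++ dialect, compiled with hipcc). It is a generic sketch, not code from the Frontier programming environment or the Cray Programming Environment.

    // Minimal HIP example: scaled vector addition (saxpy) on an AMD GPU via ROCm.
    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

        float *dx, *dy;
        hipMalloc(&dx, n * sizeof(float));
        hipMalloc(&dy, n * sizeof(float));
        hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

        // 256 threads per block, enough blocks to cover all n elements.
        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
        hipDeviceSynchronize();

        hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);  // expect 5.0
        hipFree(dx);
        hipFree(dy);
        return 0;
    }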

    “AMD is proud to be working with Cray, Oak Ridge National Laboratory and the Department of Energy to push the boundaries of high performance computing with Frontier,” said Lisa Su, AMD president and CEO. “Today’s announcement represents the power of collaboration between private industry and public research institutions to deliver groundbreaking innovations that scientists can use to solve some of the world’s biggest problems.”

    Frontier leverages a decade of exascale technology investments by DOE. The contract award includes technology development funding, a center of excellence, several early-delivery systems, the main Frontier system, and multi-year systems support. The Frontier system is expected to be delivered in 2021, and acceptance is anticipated in 2022.

    Frontier will be part of the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility. ORNL is managed by UT–Battelle for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit https://science.energy.gov/.

    See the full article here.


    Please help promote STEM in your local schools.

    STEM Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.
