Tagged: insideHPC

  • richardmitnick 12:38 pm on April 22, 2020 Permalink | Reply
    Tags: Fujitsu PRIMEHPC FX1000 supercomputer, insideHPC

    From insideHPC: “Fujitsu Supercomputer to Power Aerospace Research at JAXA in Japan” 

    From insideHPC

    April 22, 2020

    Today Fujitsu announced that it has received an order for a supercomputer system from the Japan Aerospace Exploration Agency (JAXA).


    PRIMEHPC FX1000

    The system will contribute to improving the international competitiveness of aerospace research, as it will be widely used as the basis for JAXA’s high performance computing. It is also expected to be used for various applications, including a large-scale data analysis platform for satellite observation and an AI calculation processing platform for joint research.


    “Scheduled to start operation in October 2020, the new computing system for large-scale numerical simulation, composed of Fujitsu Supercomputer PRIMEHPC FX1000, is expected to have a theoretical computational performance of 19.4 petaflops, which is approximately 5.5 times that of the current system. At the same time, Fujitsu will implement 465 nodes of Fujitsu Server PRIMERGY x86 servers for general-purpose systems that can handle diverse computing needs.”

    As it conducts research on space development, aviation technology, and related basic technologies, JAXA has used supercomputer systems to develop numerical simulation techniques, such as fluid dynamics and structural dynamics, for the study of aircraft and rockets. In recent years the system’s role in the HPC field has expanded beyond conventional numerical simulation. For example, it has processed earth observation data collected by satellites for use by researchers and the general public, and it has been used for AI calculations, including deep learning.

    JAXA currently operates the supercomputer system JSS2, comprising SORA-MA, which consists of 3,240 nodes of Fujitsu Supercomputer PRIMEHPC FX100, and J-SPACE, which stores and manages various data on large-capacity storage media.

    Features of the New Supercomputer System


    Fujitsu will implement a computing system for large-scale numerical simulations. The system will consist of 5,760 nodes of PRIMEHPC FX1000, which utilizes the technology of the supercomputer Fugaku jointly developed by Fujitsu and RIKEN.


    It is expected to deliver 19.4 petaflops in the double-precision (64-bit) arithmetic typically used in simulations, approximately 5.5 times the theoretical computing performance of the current system. In addition, a total of 465 nodes of Fujitsu Server PRIMERGY x86 servers, equipped with large memory capacities and GPUs, will be deployed as a general-purpose system capable of handling a variety of computing needs. With a file system capacity of approximately 50 petabytes, including a high-speed-access storage system of approximately 10 petabytes, the new system will offer both high performance and ease of use. Because the PRIMEHPC FX1000 is built around the highly versatile Arm-architecture A64FX CPU, a wide range of software can run on it, which should contribute to the broader use of JAXA’s research results.
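
    As a back-of-the-envelope check on those figures (the per-core and clock numbers below are assumptions about the A64FX, not details given in the article), 5,760 nodes at roughly 3.38 double-precision teraflops each reproduces the quoted 19.4 petaflops:

    ```python
    # Rough sanity check of the JAXA PRIMEHPC FX1000 figures.
    # Assumptions (not from the article): each A64FX node has 48 compute cores
    # at 2.2 GHz, each core doing 32 FP64 FLOP/cycle (two 512-bit SVE FMA pipes).
    cores_per_node = 48
    clock_ghz = 2.2
    flop_per_cycle_per_core = 32

    node_peak_tflops = cores_per_node * clock_ghz * flop_per_cycle_per_core / 1000
    system_peak_pflops = 5760 * node_peak_tflops / 1000

    print(f"per-node peak: {node_peak_tflops:.2f} TFLOPS")    # ~3.38
    print(f"system peak:   {system_peak_pflops:.1f} PFLOPS")  # ~19.5, vs. 19.4 quoted
    ```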

    Future Plans

    While enhancing the global advantage of JAXA’s aerospace research in the conventional numerical simulation field, the system, as the foundation of the Agency’s HPC infrastructure, will be used for an AI computational processing platform for joint research and shared use. The system will also be applied to a large-scale data analysis platform for aggregating and analyzing satellite observation data that had been previously stored and managed by different divisions at JAXA. Fujitsu will support JAXA in making its philosophy a reality by solving its issues with experience gained through supplying supercomputer systems to the Agency since the 1970s. Offering PRIMEHPC FX1000 worldwide, the company will contribute in solving social issues, accelerating leading-edge research, and bolstering the competitive edge of corporations.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 1:11 pm on March 12, 2020 Permalink | Reply
    Tags: “NEC JUSTUS 2 Supercomputer Deployed at University of Ulm”, insideHPC

    From insideHPC: “NEC JUSTUS 2 Supercomputer Deployed at University of Ulm” 

    From insideHPC

    March 11, 2020

    NEC has deployed a new supercomputer at the University of Ulm in Germany.

    With a peak performance of 2 petaflops, the 4.4 million euro JUSTUS 2 system will enable complex simulations in chemistry and quantum physics.

    “JUSTUS 2 enables highly complex computer simulations at the molecular and atomic level, for example from chemistry and quantum science, as well as complex data analysis, and it does so with significantly higher energy efficiency than its predecessor,” said Ulrich Steinbach. “The new high-performance computer will be available to researchers from all over Baden-Württemberg and is therefore – particularly with regard to battery research – a very sensible investment in the future of our science and business location.”

    JUSTUS 2 is one of the most powerful supercomputers in the world. With 33,696 CPU cores, the system is expected to deliver a five-fold increase in performance compared to its predecessor.

    “The combination of HPC simulation and data evaluation with methods of artificial intelligence brings a new quality in the use of high-performance computers – and NEC is at the forefront of this development,” added Yuichi Kojima, managing director of NEC Deutschland GmbH.

    Weighing 13 tons in total, JUSTUS 2 has 702 nodes with two processors each. Named after the German chemist Justus von Liebig, JUSTUS 2 was funded by the German Research Foundation (DFG), the state of Baden-Württemberg and the universities of Ulm, Stuttgart and Freiburg.
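
    The published numbers are internally consistent; assuming 24-core processors (my assumption, consistent with the totals above), 702 nodes with two CPUs each gives exactly the quoted 33,696 cores, or roughly 59 peak gigaflops per core against the 2-petaflop system peak:

    ```python
    # Consistency check of the JUSTUS 2 figures (24-core CPUs are assumed).
    nodes = 702
    cpus_per_node = 2
    cores_per_cpu = 24  # assumption; yields the quoted core count

    total_cores = nodes * cpus_per_node * cores_per_cpu
    print(total_cores)                                              # 33696, as quoted
    print(f"{2e15 / total_cores / 1e9:.1f} GFLOPS peak per core")   # ~59.4
    ```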

    “High-performance computing is essential, especially at a science and technology-oriented university like Ulm,” said computer science professor and university president Professor Michael Weber. “Therefore, JUSTUS 2 is a significant investment in the future of our strategic development areas and beyond.”

    See the full article here.

     
  • richardmitnick 12:22 pm on February 29, 2020 Permalink | Reply
    Tags: “HPE to Build Supercomputer for MWA Telescope in Australia”, insideHPC, Pawsey Supercomputing Centre Perth AU

    From insideHPC: “HPE to Build Supercomputer for MWA Telescope in Australia” 

    From insideHPC

    February 29, 2020

    HPE has been selected by the Pawsey Supercomputing Centre, Perth, AU to deliver a new $2 million compute cluster that will support one of the Square Kilometre Array precursor projects in Australia, the Murchison Widefield Array (MWA) radio telescope.

    SKA Square Kilometre Array

    SKA Murchison Widefield Array, Boolardy station in outback Western Australia, at the Murchison Radio-astronomy Observatory (MRO)

    Pawsey Supercomputing Centre, Perth, Australia

    Magnus Cray XC40 supercomputer

    Galaxy Cray XC30 series supercomputer

    Fujitsu Raijin supercomputer

    “The new 78-node cluster will provide a dedicated system for astronomers to process in excess of 30 PB – equal to 399 years of high definition video – of MWA telescope data using Pawsey infrastructure. The new cluster will provide users with enhanced GPU capabilities to power AI, computational work, machine learning workflows and data analytics.”
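
    The “399 years of high definition video” comparison checks out under an assumed HD stream of about 19 megabits per second (the bitrate is my assumption; the article does not state one):

    ```python
    # Reproducing "30 PB is roughly 399 years of HD video" (bitrate assumed).
    data_bytes = 30e15          # 30 PB
    hd_bitrate_bps = 19e6       # assumed ~19 Mbit/s HD stream

    seconds = data_bytes * 8 / hd_bitrate_bps
    years = seconds / (365.25 * 24 * 3600)
    print(f"~{years:.0f} years of HD video")  # ~400
    ```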

    The MWA and another SKA precursor telescope – ASKAP – are located at the Murchison Radio-astronomy Observatory in remote Western Australia, which is owned and operated by Australia’s national science agency, CSIRO.

    Australian Square Kilometre Array Pathfinder (ASKAP) is a radio telescope array located at Murchison Radio-astronomy Observatory (MRO) in the Australian Mid West. ASKAP consists of 36 identical parabolic antennas, each 12 metres in diameter, working together as a single instrument with a total collecting area of approximately 4,000 square metres.

    Until now, processing of data collected by both the MWA and ASKAP telescopes has been done on Galaxy, Pawsey’s real-time supercomputing system dedicated to radio astronomy.

    However, the data processing needs of both instruments have been growing: the MWA has doubled the number of antennas available, and ASKAP will soon be ready to undertake full surveys of the sky.

    To meet this growing demand, the new MWA cluster has been procured ahead of the main supercomputing system, as part of a $70 million Pawsey capital refresh project funded by the Australian Government.

    Mark Stickells, Pawsey Executive Director, said the upgrade will allow Pawsey to deliver a service that is tailored to the Australian scientific landscape, and to keep pace with global advances in supercomputing technology.

    “Procurement of the new MWA cluster was the result of a thorough consultation process with key stakeholders and will provide the best system possible to respond to the specific needs of MWA telescope users,” he said. “The new MWA cluster at Pawsey will feature 156 of the latest generation of Intel CPUs and 78 cutting-edge GPUs, more high-bandwidth memory, internal high-speed storage and more memory per node.”

    About the importance of this process and its results, Professor Melanie Johnston-Hollitt, MWA Director, said: “As the MWA Director, I am delighted to see the conclusion of the procurement process; it was an outstanding example of collaboration between Pawsey and MWA, and I am glad we had the opportunity to provide input into Australia’s HPC future.”

    “As a researcher, I am excited that this new infrastructure will give us the chance to accelerate our workflows, leading to faster scientific discoveries, and that it provides the opportunity to continue to use the MWA as a scientific, technical, and operational testbed for the future Square Kilometre Array,” she concluded.

    The Pawsey Supercomputing Centre’s 546 TeraFlops MWA cluster will comprise 78 nodes, each with two Intel Xeon Gold 6230 processors operating at 2.1 GHz and providing forty compute cores in total, a single NVIDIA V100 with 32 GB of high-bandwidth memory, 960 GB of local NVMe storage and 384 GB of main memory.
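
    The 546-teraflop figure is essentially the sum of the GPUs: 78 V100s at roughly 7 double-precision teraflops each (the per-GPU peak is an assumption for the PCIe variant) gives the quoted total, with the Xeon CPUs adding a few teraflops per node on top. A minimal sketch:

    ```python
    # Where the MWA cluster's 546 TFLOPS comes from (device peaks are assumptions).
    nodes = 78
    v100_fp64_tflops = 7.0   # assumed FP64 peak per PCIe V100
    print(f"GPU total: {nodes * v100_fp64_tflops:.0f} TFLOPS")  # 546

    # Assumed CPU peak per node: 40 cores x 2.1 GHz x 32 FP64 FLOP/cycle (AVX-512 FMA)
    cpu_tflops_per_node = 40 * 2.1 * 32 / 1000
    print(f"CPU per node: ~{cpu_tflops_per_node:.1f} TFLOPS")   # ~2.7
    ```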

    HPE was chosen not only because it successfully met MWA’s technical requirements, but also for its ability to leverage resources around the world to provide the highest level of support for the lifetime of the system, in addition to its local support.

    HPE also provided the most space-efficient solution, requiring only two racks, which saves floor space as well as power and cooling connections.

    Commissioning of the new MWA cluster system is expected to be finalized by Q2 2020.

    The Pawsey Supercomputing Centre is an unincorporated joint venture of CSIRO – Australia’s national science agency, Curtin University, Edith Cowan University, Murdoch University and the University of Western Australia. The procurement of this system was conducted by CSIRO as the centre agent for Pawsey.

    See the full article here.

     
  • richardmitnick 5:55 pm on February 6, 2020 Permalink | Reply
    Tags: Dell EMC C4140 HPC-5 supercomputer, insideHPC

    From insideHPC: “Eni unveils HPC-5 Supercomputer from Dell Technologies” 

    From insideHPC

    Today Eni dedicated its new HPC-5 system, the most powerful industrial supercomputer in the world.

    Dell EMC C4140 HPC-5 supercomputer

    “HPC-5 by Dell Technologies is made up of 1,820 Dell EMC PowerEdge C4140 servers, each with two Intel Gold 6252 24-core processors and four NVIDIA V100 GPU accelerators. The servers are connected through an InfiniBand Mellanox HDR ultra-high-performance network with a speed of 200 Gbit/s and a full non-blocking topology that ensures efficient and direct connection among every server. HPC-5 also comes with a high-performance 15-petabyte storage system (200 GB/s aggregate read/write speeds).”

    The new supercomputer complements the previous system (HPC-4), tripling Eni’s computing power from 18 to 52 petaflops, equivalent to 52 million billion mathematical operations per second, and bringing Eni’s supercomputing ecosystem to a total peak of 70 petaflops. HPC-5 is in fact the world’s most powerful supercomputer infrastructure in the industrial sector, and it allows the company to achieve another milestone in its digitalization process.
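
    Essentially all of that 52-petaflop figure comes from the accelerators: 1,820 servers with four V100s each is 7,280 GPUs, and at roughly 7 double-precision teraflops per V100 (an assumed per-GPU peak, not stated in the article) the GPUs alone account for about 51 petaflops:

    ```python
    # Rough check of HPC-5's quoted ~52 PFLOPS peak (per-GPU peak is assumed).
    servers = 1820
    gpus_per_server = 4
    v100_fp64_tflops = 7.0   # assumed double-precision peak per V100

    total_gpus = servers * gpus_per_server
    gpu_pflops = total_gpus * v100_fp64_tflops / 1000
    print(f"{total_gpus} GPUs -> ~{gpu_pflops:.0f} PFLOPS")  # 7280 GPUs -> ~51 PFLOPS
    ```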

    The remarkable increase in computing power, obtained through the use of hybrid architectures, assists Eni in achieving multiple strategic targets: further accelerating the company’s transformation and developing new energy sources and related processes, such as energy generation from the sea and magnetic confinement fusion, as well as other climate and environmental technologies developed in collaboration with the company’s many prestigious research-center partners.

    “Today Eni unveils a supercomputing system with key features which are unique in the industrial world,” said Eni CEO Claudio Descalzi. “This system is able to boost and even further refine the highly complex processes that support Eni’s people in their activities and therefore accelerate our digital transformation. This is an important time in the path toward the energy transition. It’s another step forward to the global goal that we share with our research and technology partners: making tomorrow’s energy an even closer reality.”

    In addition, HPC-5’s capacity to process big data and run artificial intelligence systems will lead to further improvements in work processes, with increased process safety, better performance, better planning of exploration activities, and enhanced precision in reservoir simulations, supporting the company’s professionals in their daily activities while speeding up decision-making.

    Eni’s Green Data Center, which houses all of the company’s supercomputing systems and data, is the ideal location for the HPC-5 presentation: it has been developed as a cutting-edge technology hub and achieves world-leading energy efficiency, thanks in part to a nearby photovoltaic plant that partially powers HPC-5.

    See the full article here.

     
    • Get Qoral Health 9:50 am on February 8, 2020 Permalink | Reply

      Filed Under: Compute, Featured, HPC Hardware, Industry Segments, Main Feature, Manufacturing, New Installations, News Tagged With: Dell Technologies, ENI, HPC5 supercomputer, Mellanox, Weekly Featured Newsletter Post


      • richardmitnick 12:37 pm on February 8, 2020 Permalink | Reply

        Thanks for reading and commenting. Did you see the full blog post or the Facebook Fan page?


  • richardmitnick 3:06 pm on February 4, 2020 Permalink | Reply
    Tags: Fujitsu PRIMEHPC FX1000 supercomputer at Nagoya University, insideHPC

    From insideHPC: “Fujitsu to Deploy Arm-based Supercomputer at Nagoya University” 

    From insideHPC

    February 4, 2020
    Rich Brueckner

    Today Fujitsu announced that it has received an order for an Arm-based supercomputer system from Nagoya University’s Information Technology Center. The system is scheduled to start operation in July 2020.

    Fujitsu PRIMEHPC FX1000

    “For the first time in the world, this system will adopt 2,304 nodes of the Fujitsu Supercomputer PRIMEHPC FX1000, which utilizes the technology of the supercomputer Fugaku developed jointly with RIKEN. In addition, a cluster system connecting 221 nodes of the latest x86 Fujitsu Server PRIMERGY CX2570 M5 servers in parallel, along with storage systems, is connected by a high-speed interconnect. The sum of the theoretical computational performance of the entire system is 15.88 petaflops, making it one of the highest-performing systems in Japan.”
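
    A rough decomposition of the 15.88-petaflop total, under assumptions not stated in the article: at the ~3.38 teraflops per FX1000 node implied by the JAXA system above, the 2,304 FX1000 nodes account for just under 8 petaflops, leaving roughly 8 petaflops to the 221-node PRIMERGY CX2570 M5 cluster (which would imply accelerator-equipped nodes; that is an inference, not something the article states):

    ```python
    # Rough decomposition of Nagoya's 15.88 PFLOPS (per-node peak is assumed).
    fx1000_nodes = 2304
    fx1000_node_tflops = 3.38   # assumed A64FX node peak, as in the JAXA post

    fx1000_pflops = fx1000_nodes * fx1000_node_tflops / 1000
    remainder_pflops = 15.88 - fx1000_pflops
    print(f"FX1000 portion: ~{fx1000_pflops:.1f} PFLOPS")                      # ~7.8
    print(f"CX2570 portion: ~{remainder_pflops:.1f} PFLOPS over 221 nodes")    # ~8.1
    ```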

    As a national joint usage/research center, the Information Technology Center of Nagoya University provides computing resources for academic use to researchers and private companies nationwide. It currently operates a supercomputer system consisting of the Fujitsu Supercomputer PRIMEHPC FX100 and other components. The Center now plans to renew the system in order to meet the large-scale computational demand from researchers in joint usage nationwide, as well as the new computational requirements for supercomputers represented by data science. Fujitsu won the order for this system in recognition of a proposal that concentrates the technical capabilities of Fujitsu and Fujitsu Laboratories Ltd.

    With the new system, Nagoya University’s Information Technology Center will contribute to a range of research and development activities. These include conventional numerical simulation, such as unraveling the mechanisms of typhoons and designing new drugs. Moreover, the new system will support the development of medical technologies for diagnosis and treatment, and the application of AI to the development of automated driving technology.

    Fujitsu will continue to support the activities of the Center with its technology and experience nurtured through the development and offering of world-class supercomputers. By providing PRIMEHPC FX1000 worldwide, the company will also contribute to solving social issues, accelerating leading-edge research, and strengthening corporate advantages.

    “In recent years, the digitization of university education and research activities has increased the demand for computing,” said Kensaku Mori, Director, The Information Technology Center of Nagoya University. “In addition to such areas as extreme weather including super typhoons, earthquakes, and tsunamis, which are closely related to the safety and security of people’s lives, chemical fields such as molecular structure and drug discovery, and simulations in basic sciences such as space and elementary particles, there is an ever-increasing demand for computing in the fields of medicine and mobility, including artificial intelligence and machine learning. Also important are the data consumed and generated in computing, the networks that connect them, and the visualization of knowledge discovery from computing and data. Equipped with essential functions for such digital science in universities, the new supercomputer will be offered not only to Nagoya University but also to universities and research institutes nationwide, contributing to the further development of academic research in Japan.”

    See the full article here.

     
  • richardmitnick 3:36 pm on January 24, 2020 Permalink | Reply
    Tags: insideHPC, Microway supercomputer being installed, The new cluster from Microway affords the university five times the compute performance its researchers enjoyed previously with over 85% more total memory and over four times the aggregate memory bandwidth, The UMass Dartmouth cluster reflects a hybrid design to appeal to a wide array of the campus’ workloads.

    From insideHPC: “UMass Dartmouth Speeds Research with Hybrid Supercomputer from Microway” 

    From insideHPC

    Today Microway announced that research activities are accelerating at the University of Massachusetts Dartmouth since the installation of a new supercomputing cluster.

    “UMass Dartmouth’s powerful new cluster from Microway affords the university five times the compute performance its researchers enjoyed previously, with over 85% more total memory and over four times the aggregate memory bandwidth. It includes a heterogeneous system architecture featuring a wide array of computational engines.”


    The UMass Dartmouth cluster reflects a hybrid design to appeal to a wide array of the campus’ workloads.

    Over 50 nodes include Intel Xeon Scalable Processors, DDR4 memory, SSDs and Mellanox ConnectX-5 EDR 100Gb InfiniBand. A subset of systems also feature NVIDIA V100 GPU Accelerators for GPU computing applications.

    Equally important is a second subset of nodes: IBM Power Systems AC922 compute nodes based on POWER9 CPUs with 2nd-generation NVLink. These systems are similar to those used in Summit and Sierra, the world’s #1 and #2 most powerful supercomputers at ORNL and LLNL. The advanced NVIDIA NVLink interfaces built into the POWER9 CPUs and NVIDIA GPUs ensure a broad pipeline between CPU and GPU for data-intensive workloads.

    The deployment of the hybrid architecture system was critical to meeting users’ needs. It also allows researchers on the UMass Dartmouth campus to apply to test workloads on the larger national laboratory systems at ORNL.

    Microway was one of the few vendors able to deliver a unified system with a mix of x86 and POWER9 nodes, complete software integration across both kinds of nodes in the cluster, and a single point of sale and warranty coverage.

    Microway was selected as the vendor for the new cluster through an open bidding process. “They not only competed well on the price,” says Khanna, “but they were also the only company that could deliver the kind of heterogeneous system we wanted, with a mixture of architectures.”

    For more information about the UMass Dartmouth Center for Scientific Computing and Visualization Research please navigate to: http://cscvr1.umassd.edu/

    This new cluster purchase was funded through an Office of Naval Research (ONR) DURIP grant award.

    Serving Users Across a Research Campus

    The deployment has helped the university continue to serve, attract and retain faculty, undergraduate students, and those seeking advanced degrees at the UMass Dartmouth campus. The Center for Scientific Computing and Visualization Research (CSCVR) administers the new compute resource.

    With its new cluster, the CSCVR is undertaking cutting-edge work. Mathematics researchers are developing new numerical algorithms on the new deployment, with a primary focus on astrophysics: in particular, the study of black holes and stars.

    “Our engineering researchers,” says Gaurav Khanna, Co-Director of UMass Dartmouth’s Center for Scientific Computing & Visualization Research, “are very actively focused on computational engineering, and there are people in mechanical engineering who look at fluid and solid object interactions.” This type of research is known as two-phase fluid flow. Practical applications include modelling windmills to come up with better designs for the materials on the windmill, such as the blade coatings, as well as improved designs for the blades themselves.

    This team is also looking at wave energy converters in ocean buoys. “As buoys bob up and down,” Khanna explains, “you can use that motion to generate electricity. You can model that into the computation of that environment and then try to optimize the parameters needed to have the most efficient design for that type of buoy.”

    A final area of interest to this team is ocean weather systems. Here, UMass Dartmouth researchers are building large models to predict regional currents in the ocean, weather patterns, and weather changes.


    See the full article here.

     
  • richardmitnick 1:38 pm on January 24, 2020 Permalink | Reply
    Tags: “DOE Announces $625 Million for New Quantum Centers”, insideHPC

    From insideHPC: “DOE Announces $625 Million for New Quantum Centers” 

    From insideHPC

    The U.S. Department of Energy (DOE) has announced plans to spend up to $625 million over the next five years to establish two to five multidisciplinary Quantum Information Science (QIS) Research Centers in support of the National Quantum Initiative.

    https://enquanted.wordpress.com/2018/07/13/americas-national-quantum-initiative/

    The National Quantum Initiative Act, signed into law by President Trump in December 2018, established a coordinated multiagency program to support research and training in QIS, encompassing activities at the National Institute of Standards and Technology, the Department of Energy, and the National Science Foundation. The Act called for the creation of a total of between four and ten competitively awarded QIS research centers.


    “America continues to lead the world in QIS and emerging technologies because of our incredible innovation ecosystem. The National Quantum Initiative launched by the President, including these new research centers, leverages the combined strengths of academia, industry, and DOE laboratories to drive QIS breakthroughs,” said Chief Technology Officer of the United States Michael Kratsios.

    To further advance that effort, DOE’s Argonne National Laboratory announced that it had launched a new, 52-mile testbed for quantum communications experiments, which will enable scientists to address challenges in operating a quantum network and help lay the foundation for a quantum internet.

    The Department’s planned investment in QIS Centers represents a long-term, large-scale commitment of U.S. scientific and technological resources to a highly competitive and promising new area of investigation with enormous potential to transform science and technology.


    The aim of the Centers, coupled with DOE’s core research portfolio, is to create the ecosystem needed to foster and facilitate advancement of QIS, with major anticipated benefits for national security, economic competitiveness, and America’s continued leadership in science.

    Each QIS Center is expected to incorporate a collaborative research team spanning multiple scientific and engineering disciplines and multiple institutions. Centers will draw on the resources of the full range of research communities stewarded by DOE’s Office of Science and integrate elements from a wide range of technical fields.

    Applications are expected to be in the form of multi-institutional proposals submitted by a single lead institution. Eligible lead and partner institutions include universities, nonprofit research institutions, industry, DOE national laboratories, other U.S. government laboratories, and federal agencies.

    Total planned funding will be up to $625 million for awards beginning in Fiscal Year 2020 and lasting up to five years, with outyear funding contingent on congressional appropriations. Selections will be made based on peer review.

    See the full article here.

     
  • richardmitnick 11:53 am on January 1, 2020 Permalink | Reply
    Tags: “Theta and the Future of Accelerator Programming at Argonne”, insideHPC

    From Argonne ALCF via insideHPC: “Theta and the Future of Accelerator Programming at Argonne” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    From insideHPC

    January 1, 2020
    Rich Brueckner


    In this video from the Argonne Training Program on Extreme-Scale Computing 2019, Scott Parker from Argonne presents: Theta and the Future of Accelerator Programming.

    ANL ALCF Theta Cray XC40 supercomputer

    Designed in collaboration with Intel and Cray, Theta is a 6.92-petaflops (Linpack) supercomputer based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta enables researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.

    “Theta’s unique architectural features represent a new and exciting era in simulation science capabilities,” said ALCF Director of Science Katherine Riley. “These same capabilities will also support data-driven and machine-learning problems, which are increasingly becoming significant drivers of large-scale scientific computing.”

    Scott Parker is the Lead for Performance Tools and Programming Models at the ALCF. He received his B.S. in Mechanical Engineering from Lehigh University, and a Ph.D. in Mechanical Engineering from the University of Illinois at Urbana-Champaign. Prior to joining Argonne, he worked at the National Center for Supercomputing Applications, where he focused on high-performance computing and scientific applications. At Argonne since 2008, he works on performance tools, performance optimization, and spectral element computational fluid dynamics solvers.

    See the full article here.

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 11:46 am on December 19, 2019 Permalink | Reply
    Tags: insideHPC, Mississippi State University, Orion Dell-EMC supercomputer

    From insideHPC: “Orion Supercomputer comes to Mississippi State University” 

    From insideHPC

    December 18, 2019
    Rich Brueckner

    Orion Dell EMC supercomputer

    Today Mississippi State University and NOAA celebrated one of the country’s most powerful supercomputers with a ribbon-cutting ceremony for the Orion supercomputer, the fourth-fastest computer system in U.S. academia. Funded by NOAA and managed by MSU’s High Performance Computing Collaboratory, the Orion system is powering research and development advancements in weather and climate modeling, autonomous systems, materials, cybersecurity, computational modeling and more.


    With 3.66 petaflops of performance on the Linpack benchmark, Orion is the 60th most powerful supercomputer in the world according to Top500.org, which ranks the world’s most powerful non-distributed computer systems. It is housed in the Malcolm A. Portera High Performance Computing Center, located in MSU’s Thad Cochran Research, Technology and Economic Development Park.

    “Mississippi State has a long history of using advanced computing power to drive innovative research, making an impact in Mississippi and around the world,” said MSU President Mark E. Keenum. “We also have had many successful collaborations with NOAA in support of the agency’s vital work. I am grateful that NOAA has partnered with us to help meet its computing needs, and I look forward to seeing the many scientific advancements that will take place because of this world-class supercomputer.”

    NOAA has provided MSU with $22 million in grants to purchase, install and run Orion. The Dell EMC system consists of 28 computer cabinets, each approximately the size of an industrial refrigerator, with 72,000 processing cores and 350 terabytes of random access memory.
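
    A few simple ratios follow from those figures (this is just arithmetic on the numbers above, not additional specifications): roughly 2,600 cores and 12.5 terabytes of memory per cabinet, about 5 gigabytes of memory per core, and around 51 sustained Linpack gigaflops per core.

    ```python
    # Ratios derived from the quoted Orion figures.
    cabinets, cores, ram_tb, linpack_pflops = 28, 72_000, 350, 3.66

    print(f"cores per cabinet: {cores / cabinets:.0f}")                              # ~2571
    print(f"RAM per cabinet:   {ram_tb / cabinets:.1f} TB")                          # 12.5
    print(f"RAM per core:      {ram_tb * 1e12 / cores / 1e9:.1f} GB")                # ~4.9
    print(f"Linpack per core:  {linpack_pflops * 1e15 / cores / 1e9:.0f} GFLOPS")    # ~51
    ```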

    “We’re excited to support this powerhouse of computing capacity at Mississippi State,” said Craig McLean, NOAA assistant administrator for Oceanic and Atmospheric Research. “Orion joins NOAA’s network of computer centers around the country, and boosts NOAA’s ability to conduct innovative research to advance weather, climate and ocean forecasting products vital to protecting American lives and property.”

    MSU’s partnerships with NOAA include the university’s leadership of the Northern Gulf Institute, a consortium of six academic institutions that works with NOAA to address national strategic research and education goals in the Gulf of Mexico region. Additionally, MSU’s High Performance Computing Collaboratory provides the computing infrastructure for NOAA’s Exploration Command Center at the NASA Stennis Space Center. The state-of-the-art communications hub enables research scientists at sea and colleagues on shore to communicate in real time and view live video streams of undersea life.

    “NOAA has been an incredible partner in research with MSU, and this is the latest in a clear demonstration of the benefits of this partnership for both the university and the agency,” said MSU Provost and Executive Vice President David Shaw.

    Orion supports research operations for several MSU centers and institutes, such as the Center for Computational Sciences, Center for Cyber Innovation, Geosystems Research Institute, Center for Advanced Vehicular Systems, Institute for Genomics, Biocomputing and Biogeotechnology, the Northern Gulf Institute and the FAA Alliance for System Safety of UAS through Research Excellence (ASSURE). These centers use high-performance computing to model and simulate real-world phenomena, generating insights that would be impossible or prohibitively expensive to obtain otherwise.

    “With our faculty expertise and our computing capabilities, MSU is able to remain at the forefront of cutting-edge research areas,” said MSU Interim Vice President for Research and Economic Development Julie Jordan. “The Orion supercomputer is a great asset for the state of Mississippi as we work with state, federal and industry partners to solve complex problems and spur new innovations.”

    See the full article here.

     
  • richardmitnick 5:13 pm on December 6, 2019 Permalink | Reply
    Tags: “IBM Powers AiMOS Supercomputer at Rensselaer Polytechnic Institute”, insideHPC

    From insideHPC: “IBM Powers AiMOS Supercomputer at Rensselaer Polytechnic Institute” 

    From insideHPC


    AiMOS, the newest supercomputer at Rensselaer Polytechnic Institute
    In this video, Christopher Carothers, director of the Center for Computational Innovations, discusses AiMOS, the newest supercomputer at Rensselaer Polytechnic Institute.

    The most powerful supercomputer to debut on the November 2019 Top500 ranking of supercomputers will be unveiled today at the Rensselaer Polytechnic Institute Center for Computational Innovations (CCI). Part of a collaboration between IBM, Empire State Development (ESD), and NY CREATES, the eight petaflop IBM POWER9-equipped AI supercomputer is configured to help enable users to explore new AI applications and accelerate economic development from New York’s smallest startups to its largest enterprises.


    Named AiMOS (short for Artificial Intelligence Multiprocessing Optimized System) in honor of Rensselaer co-founder Amos Eaton, the machine will serve as a test bed for the IBM Research AI Hardware Center, which opened on the SUNY Polytechnic Institute (SUNY Poly) campus in Albany earlier this year. The AI Hardware Center aims to advance the development of computing chips and systems that are designed and optimized for AI workloads to push the boundaries of AI performance. AiMOS will provide the modeling, simulation, and computation necessary to support the development of this hardware.

    “Computer artificial intelligence, or more appropriately, human augmented intelligence (AI), will help solve pressing problems, from healthcare to security to climate change. In order to realize AI’s full potential, special purpose computing hardware is emerging as the next big opportunity,” said Dr. John E. Kelly III, IBM Executive Vice President. “IBM is proud to have built the most powerful and smartest computers in the world today, and to be collaborating with New York State, SUNY, and RPI on the new AiMOS system. Our collective goal is to make AI systems 1,000 times more efficient within the next decade.”

    According to the recently released November 2019 Top500 and Green500 supercomputer rankings, AiMOS is the most powerful supercomputer housed at a private university. Overall, it is the 24th most powerful supercomputer in the world and the third most energy efficient. Built using the same IBM Power Systems technology as the world’s smartest supercomputers, the US Department of Energy’s Summit and Sierra, AiMOS uses a heterogeneous system architecture that includes IBM POWER9 CPUs and NVIDIA GPUs. This gives AiMOS a capacity of eight quadrillion calculations per second.


    In this video, Rensselaer Polytechnic Institute President Shirley Ann Jackson discusses AiMOS, the newest supercomputer at Rensselaer Polytechnic Institute.

    “As the home of one of the top high-performance computing systems in the U.S. and in the world, Rensselaer is excited to accelerate our ongoing research in AI, deep learning, and in fields across a broad intellectual front,” said Rensselaer President Shirley Ann Jackson. “The creation of new paradigms requires forward-thinking collaborators, and we look forward to working with IBM and the state of New York to address global challenges in ways that were previously impossible.”

    AiMOS will be available for use by public and private industry partners.

    Dr. Douglas Grose, Future President of NY CREATES said, “The unveiling of AiMOS and its incredible computational capabilities is a testament to New York State’s international high-tech leadership. As a test bed for the AI Hardware Center, AiMOS furthers the powerful potential of the AI Hardware Center and its partners, including New York State, IBM, Rensselaer, and SUNY, and we look forward to the advancement of AI systems as a result of this milestone.”

    SUNY Polytechnic Institute Interim President Grace Wang said, “SUNY Poly is proud to work with New York State, IBM, and Rensselaer Polytechnic Institute and our other partners as part of the AI Hardware Center initiative which will drive exciting artificial intelligence innovations. We look forward to efforts like this providing unique and collaborative research opportunities for our faculty and researchers, as well as leading-edge educational opportunities for our top-tier students. As the exciting benefits of AiMOS and the AI Hardware Center come to fruition, SUNY Poly is thrilled to play a key role in supporting the continued technological leadership of New York State and the United States in this critical research sector.”

    AiMOS will also support the work of Rensselaer faculty, students, and staff who are engaged in a number of ongoing collaborations that employ and advance AI technology, many of which involve IBM Research. These initiatives include the Rensselaer-IBM Artificial Intelligence Research Collaboration (AIRC), which brings researchers at both institutions together to explore new frontiers in AI, Cognitive and Immersive Systems Lab (CISL), and The Jefferson Project, which combines Internet of Things technology and powerful analytics to help manage and protect one of New York’s largest lakes, while creating a data-based blueprint for preserving bodies of fresh water around the globe.

    “The established expertise in computation and data analytics at Rensselaer, when combined with AiMOS, will enable many of our research projects to make significant strides that simply were not possible on our previous platform,” said Christopher Carothers, director of the CCI and professor of computer science at Rensselaer. “Our message to the campus and beyond is that, if you are doing work on large-scale data analytics, machine learning, AI, and scientific computing then it should be running at the CCI.”

    See the full article here.

     