Tagged: Supercomputing

  • richardmitnick 11:36 am on September 15, 2019 Permalink | Reply
    Tags: "Rosie" Supercomputer, Milwaukee School of Engineering, Supercomputing

    From insideHPC: “NVIDIA Powers Rosie Supercomputer at MSOE” 

    From insideHPC

    “Rosie” supercomputer at the Milwaukee School of Engineering.

    An NVIDIA GPU-powered supercomputer named “Rosie” is at the heart of a new computational science facility at the Milwaukee School of Engineering.

    Diercks Hall at MSOE

    Unique to MSOE, the supercomputer will be available to undergraduate students, offering them the ability to apply their learning in a hands-on environment to prepare for their careers. Thanks to the university’s corporate partnerships, students will use this high-performance computing resource to solve real-world problems in their coursework.

    Three decades and hundreds of millions of lines of computer code after graduating from the Milwaukee School of Engineering, NVIDIA’s Dwight Diercks returned today to celebrate a donation that will put his alma mater at the forefront of AI undergraduate education. Diercks’ $34 million gift, the largest from an alum in MSOE’s 116-year history, is the keystone in the school’s efforts to infuse its engineering program with artificial intelligence. Two years ago, MSOE became one of the very few programs, together with Carnegie Mellon, to offer a computer science degree focused on AI.

    Housed in a glass-walled area within the newly constructed four-story Diercks Hall, the new NVIDIA-powered AI supercomputer includes three NVIDIA DGX-1 pods, each with eight NVIDIA V100 Tensor Core GPUs, and 20 servers each with four NVIDIA T4 GPUs. The nodes are joined together by Mellanox networking fabric and share 200TB of network-attached storage. Rare among supercomputers in higher education, the system —which provides 8.2 petaflops of deep learning performance — will be used for teaching undergrad classes.
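
    As a back-of-the-envelope check on that 8.2-petaflops figure, the quoted hardware mix accounts for it almost exactly if you use NVIDIA’s published peak Tensor Core (mixed-precision) throughput for each GPU; those per-GPU numbers are assumptions for this sketch rather than figures from the article.

    ```python
    # Back-of-the-envelope check of Rosie's quoted 8.2 petaflops of
    # "deep learning performance" (peak Tensor Core throughput).
    # Per-GPU peaks are NVIDIA's published figures, assumed here.

    V100_TFLOPS = 125   # NVIDIA V100 Tensor Core peak (mixed precision)
    T4_TFLOPS = 65      # NVIDIA T4 Tensor Core peak (mixed precision)

    dgx1_nodes, gpus_per_dgx1 = 3, 8      # three DGX-1 pods, 8 x V100 each
    t4_servers, gpus_per_server = 20, 4   # twenty servers, 4 x T4 each

    total_tflops = (dgx1_nodes * gpus_per_dgx1 * V100_TFLOPS
                    + t4_servers * gpus_per_server * T4_TFLOPS)

    print(f"Aggregate peak: {total_tflops / 1000:.1f} petaflops")  # -> 8.2 petaflops
    ```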

    “We knew MSOE needed a supercomputer and one that can expand to scale out for students and scale up for local industries and professors,” Diercks said. In an emotional speech, he thanked a high school teacher, MSOE professor and NVIDIA founder and CEO Jensen Huang for reinforcing what his parents taught him about the importance of hard work and continuous learning.

    NVIDIA CEO Jensen Huang, who delivered a keynote after the ceremony, called AI the fourth industrial revolution, one that will sweep across virtually every industry. MSOE’s new AI push and supercomputer will help it prepare generations of computer scientists for tomorrow’s challenges.

    “MSOE now has the single most important instrument of knowledge today,” Huang said, delivering the first address in the NVIDIA auditorium. “Without access to the correct instrument, you can’t access knowledge.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

    If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

    insideHPC
    2825 NW Upshur
    Suite G
    Portland, OR 97239

    Phone: (503) 877-5048

     
  • richardmitnick 12:28 pm on August 27, 2019 Permalink | Reply
    Tags: "Satori" cluster, Supercomputing

    From MIT News: “IBM gives artificial intelligence computing at MIT a lift” 

    MIT News

    From MIT News

    August 26, 2019
    Kim Martineau | MIT Quest for Intelligence

    Nearly $12 million machine will let MIT researchers run more ambitious AI models.

    An $11.6 million artificial intelligence computing cluster donated by IBM to MIT will come online this fall at the Massachusetts Green High Performance Computing Center (MGHPCC) in Holyoke, Massachusetts. Photo: Helen Hill/MGHPCC

    IBM designed Summit, the fastest supercomputer on Earth, to run the calculation-intensive models that power modern artificial intelligence (AI).

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Now MIT is about to get a slice.

    IBM pledged earlier this year to donate an $11.6 million computer cluster to MIT modeled after the architecture of Summit, the supercomputer it built at Oak Ridge National Laboratory for the U.S. Department of Energy. The donated cluster is expected to come online this fall when the MIT Stephen A. Schwarzman College of Computing opens its doors, allowing researchers to run more elaborate AI models to tackle a range of problems, from developing a better hearing aid to designing a longer-lived lithium-ion battery.

    “We’re excited to see a range of AI projects at MIT get a computing boost, and we can’t wait to see what magic awaits,” says John E. Kelly III, executive vice president of IBM, who announced the gift in February at MIT’s launch celebration of the MIT Schwarzman College of Computing.

    IBM has named the cluster “Satori”, a Zen Buddhist term for “sudden enlightenment.” Physically the size of a shipping container, Satori is intellectually closer to a Ferrari, capable of zipping through 2 quadrillion calculations per second. That’s the equivalent of each person on Earth performing more than 10 million multiplication problems each second for an entire year, making Satori nimble enough to join the middle ranks of the world’s 500 fastest computers.

    Rapid progress in AI has fueled a relentless demand for computing power to train more elaborate models on ever-larger datasets. At the same time, federal funding for academic computing facilities has been on a three-decade decline. Christopher Hill, director of MIT’s Research Computing Project, puts the current demand at MIT at five times what the Institute can offer.

    “IBM’s gift couldn’t come at a better time,” says Maria Zuber, a geophysics professor and MIT’s vice president of research. “The opening of the new college will only increase demand for computing power. Satori will go a long way in helping to ease the crunch.”

    The computing gap was immediately apparent to John Cohn, chief scientist at the MIT-IBM Watson AI Lab, when the lab opened last year. “The cloud alone wasn’t giving us all that we needed for challenging AI training tasks,” he says. “The expense and long run times made us ask, could we bring more compute power here, to MIT?”

    It’s a mission Satori was built to fill, with IBM Power9 processors, a fast internal network, a large memory, and 256 graphics processing units (GPUs). Designed to rapidly process video-game images, graphics processors have become the workhorse for modern AI applications. Satori, like Summit, has been configured to wring as much power from each GPU as possible.
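
    For a sense of what “wringing power from each GPU” looks like from a user’s seat, here is a minimal sketch, assuming a PyTorch installation with CUDA support on a Satori-style node, that lists the GPUs a job can see and exercises each one with a half-precision matrix multiply. It is purely illustrative and not part of Satori’s actual software stack.

    ```python
    # Minimal sketch: confirm the GPUs a job landed on and exercise their
    # Tensor Cores with a small half-precision matrix multiply.
    # Assumes PyTorch with CUDA support is installed on the node.
    import torch

    print(f"Visible GPUs: {torch.cuda.device_count()}")
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"  GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")

        a = torch.randn(4096, 4096, device=f"cuda:{i}", dtype=torch.float16)
        b = torch.randn(4096, 4096, device=f"cuda:{i}", dtype=torch.float16)
        c = a @ b                      # runs on the GPU's Tensor Cores
        torch.cuda.synchronize(i)
        print(f"  matmul OK, result norm {c.float().norm().item():.3e}")
    ```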

    IBM’s gift follows a history of collaborations with MIT that have paved the way for computing breakthroughs. In 1956, IBM helped launch the MIT Computation Center with the donation of an IBM 704, the first mass-produced computer to handle complex math.

    IBM 704. Wikipedia

    Nearly three decades later, IBM helped fund Project Athena, an initiative that brought networked computing to campus.

    Project Athena. MIT.

    Together, these initiatives spawned time-share operating systems, foundational programming languages, instant messaging, and the network-security protocol, Kerberos, among other technologies.

    More recently, IBM agreed to invest $240 million over 10 years to establish the MIT-IBM Watson AI Lab, a founding sponsor of MIT’s Quest for Intelligence. In addition to filling the computing gap at MIT, Satori will be configured to allow researchers to exchange data with all major commercial cloud providers, as well as prepare their code to run on IBM’s Summit supercomputer.

    Josh McDermott, an associate professor at MIT’s Department of Brain and Cognitive Sciences, is currently using Summit to develop a better hearing aid, but before he and his students could run their models, they spent countless hours getting the code ready. In the future, Satori will expedite the process, he says, and in the longer term, make more ambitious projects possible.

    “We’re currently building computer systems to model one sensory system but we’d like to be able to build models that can see, hear and touch,” he says. “That requires a much bigger scale.”

    Richard Braatz, the Edwin R. Gilliland Professor at MIT’s Department of Chemical Engineering, is using AI to improve lithium-ion battery technologies. He and his colleagues recently developed a machine learning algorithm to predict a battery’s lifespan from past charging cycles, and now, they’re developing multiscale simulations to test new materials and designs for extending battery life. With a boost from a computer like Satori, the simulations could capture key physical and chemical processes that accelerate discovery. “With better predictions, we can bring new ideas to market faster,” he says.
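
    To make the battery example concrete, here is a heavily simplified sketch of the general approach of predicting cycle life from features of early charging cycles. The features, synthetic data, and gradient-boosting model are assumptions chosen for illustration; they are not Braatz’s published algorithm or dataset.

    ```python
    # Illustrative sketch only: predicting battery cycle life from features of
    # early charging cycles, in the spirit of the approach described above.
    # The features, synthetic data, and model are assumptions for the example.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_cells = 200

    # Hypothetical early-cycle features: capacity-fade slope over cycles 10-100,
    # variance of the discharge-voltage curve, and average cell temperature.
    fade_slope = rng.uniform(1e-5, 1e-3, n_cells)
    dQ_variance = rng.uniform(1e-4, 1e-2, n_cells)
    avg_temp_C = rng.uniform(25, 40, n_cells)

    # Synthetic "true" cycle life loosely tied to the features, plus noise.
    cycle_life = (2000 - 8e5 * fade_slope - 3e4 * dQ_variance
                  - 10 * (avg_temp_C - 25) + rng.normal(0, 50, n_cells))

    X = np.column_stack([fade_slope, dQ_variance, avg_temp_C])
    X_train, X_test, y_train, y_test = train_test_split(X, cycle_life, random_state=0)

    model = GradientBoostingRegressor().fit(X_train, y_train)
    print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
    ```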

    Satori will be housed at a silk mill-turned data center, the Massachusetts Green High Performance Computing Center (MGHPCC) in Holyoke, Massachusetts, and connect to MIT via dedicated, high-speed fiber optic cables. At 150 kilowatts, Satori will consume as much energy as a mid-sized building at MIT, but its carbon footprint will be nearly fully offset by the use of hydro and nuclear power at the Holyoke facility. Equipped with energy-efficient cooling, lighting, and power distribution, the MGHPCC was the first academic data center to receive LEED-platinum status, the highest green-building award, in 2011.

    “Siting Satori at Holyoke minimizes its carbon emissions and environmental impact without compromising its scientific impact,” says John Goodhue, executive director of the MGHPCC.

    Visit the Satori website for more information.

    See the full article here .


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 3:24 pm on August 18, 2019 Permalink | Reply
    Tags: "Supercomputing Galactic Winds with Cholla", Cholla, Supercomputing

    From insideHPC: “Supercomputing Galactic Winds with Cholla” 

    From insideHPC

    August 18, 2019
    Elizabeth Rosenthal at ORNL


    In this video, a galactic wind simulation depicts interstellar gas and stars (red) and the outflows (blue) captured using the Cholla astrophysics code.

    Using the Titan supercomputer at Oak Ridge National Laboratory, a team of astrophysicists created a set of galactic wind simulations of the highest resolution ever performed. The simulations will allow researchers to gather and interpret more accurate, detailed data that elucidates how galactic winds affect the formation and evolution of galaxies.

    ORNL Cray XK7 Titan Supercomputer, once the fastest in the world, to be decommissioned

    Brant Robertson of the University of California, Santa Cruz, and Evan Schneider of Princeton University developed the simulation suite to better understand galactic winds—outflows of gas released by supernova explosions—which could help explain variations in their density and temperature distributions.

    The improved set of galactic wind simulations will be incorporated into larger cosmological simulations.

    “We now have a much clearer idea of how the high speed, high temperature gas produced by clusters of supernovae is ejected after mixing with the cooler, denser gas in the disk of the galaxy,” Schneider said.

    As Schneider describes the code on its project page: “Cholla is a GPU-based hydrodynamics code I developed as part of my thesis work at the University of Arizona. It was designed to be massively parallel and extremely efficient, and has been run on some of the largest supercomputers in the world. I am committed to keeping Cholla free and open source. The most recent public release of the code can be found on GitHub.”
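
    For readers curious what a finite-volume hydrodynamics code actually does, the sketch below is a deliberately tiny, CPU-only illustration of the method family: a first-order Rusanov solver for the one-dimensional Euler equations on the classic Sod shock-tube test. It is not Cholla and borrows nothing from its source; Cholla solves the same kind of equations in three dimensions, at enormous scale, on GPUs.

    ```python
    # A deliberately tiny illustration of the finite-volume hydrodynamics that
    # codes like Cholla implement (at vastly larger scale, in 3D, on GPUs).
    # This is NOT Cholla: just a first-order Rusanov solver for the 1D Euler
    # equations on the classic Sod shock-tube problem.
    import numpy as np

    gamma = 1.4
    N = 400
    x = np.linspace(0.0, 1.0, N)
    dx = x[1] - x[0]

    # Sod initial condition: (rho, v, p) differ on the two halves of the tube.
    rho = np.where(x < 0.5, 1.0, 0.125)
    v = np.zeros(N)
    p = np.where(x < 0.5, 1.0, 0.1)

    def primitives_to_conserved(rho, v, p):
        E = p / (gamma - 1.0) + 0.5 * rho * v**2
        return np.stack([rho, rho * v, E])

    def flux(U):
        rho, mom, E = U
        v = mom / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * v**2)
        return np.stack([mom, mom * v + p, (E + p) * v])

    U = primitives_to_conserved(rho, v, p)
    t, t_end = 0.0, 0.2
    while t < t_end:
        rho, mom, E = U
        v = mom / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * v**2)
        c = np.sqrt(gamma * p / rho)                # sound speed
        dt = 0.4 * dx / np.max(np.abs(v) + c)       # CFL-limited time step

        F = flux(U)
        a = np.maximum(np.abs(v[:-1]) + c[:-1], np.abs(v[1:]) + c[1:])
        # Rusanov (local Lax-Friedrichs) interface flux between cells i and i+1.
        F_iface = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])

        U[:, 1:-1] -= dt / dx * (F_iface[:, 1:] - F_iface[:, :-1])
        t += dt

    print(f"t = {t:.3f}, density range: {U[0].min():.3f} .. {U[0].max():.3f}")
    ```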

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 12:07 pm on August 13, 2019 Permalink | Reply
    Tags: "IBM Deploys Triton AI Supercomputer at University of Miami", Supercomputing

    From insideHPC: “IBM Deploys Triton AI Supercomputer at University of Miami” 

    From insideHPC

    August 13, 2019

    Today the University of Miami (UM) announced that their new Triton supercomputer is installed and helping their researchers explore new frontiers of science. The new supercomputer will be UM’s first GPU-accelerated HPC system, representing a completely new approach to computational and data science for the university’s campuses. Built using IBM Power Systems AC922 servers, the new HPC system was designed to maximize data movement between the IBM POWER9 CPU and attached accelerators like GPUs.
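
    Because the AC922’s selling point is CPU-to-GPU data movement, a natural first test on such a node is a simple host-to-device bandwidth measurement. The sketch below is one hedged way to do that with PyTorch and pinned host memory; it assumes a CUDA-capable node with PyTorch installed and is not a benchmark used by UM or IBM.

    ```python
    # Minimal sketch of the kind of CPU-to-GPU data-movement measurement the
    # AC922's accelerator-attached design is meant to speed up. Illustrative
    # only; assumes a CUDA-capable node with PyTorch installed.
    import time
    import torch

    size_mb = 1024
    host = torch.empty(size_mb * 2**20, dtype=torch.uint8).pin_memory()
    dev_buf = torch.empty_like(host, device="cuda:0")

    # Warm up, then time several host -> device copies.
    dev_buf.copy_(host, non_blocking=True)
    torch.cuda.synchronize()

    reps = 10
    start = time.perf_counter()
    for _ in range(reps):
        dev_buf.copy_(host, non_blocking=True)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    print(f"Host->device bandwidth: {size_mb * reps / elapsed / 1024:.1f} GiB/s")
    ```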

    “With advances in artificial intelligence and science, we wanted to advance our research with this new generation of supercomputer and enable more discoveries,” said Nicholas Tsinoremas, director of the University of Miami Center for Computational Science and vice provost for data and research computing. “Advances in data science and big data drove us to this new technology.”


    The new high-performance system uses the same AI-optimized architecture as the most powerful supercomputers in the world, the U.S. Department of Energy’s Summit and Sierra supercomputers.

    ORNL IBM AC922 SUMMIT supercomputer, No.1 on the TOP500. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    LLNL IBM NVIDIA Mellanox ATS-2 Sierra Supercomputer, NO.2 on the TOP500

    The $3.7 million system was assembled and validated remotely by IBM and the University’s Center for Computational Science (CCS) personnel. CCS personnel, along with UM investigators, have been installing and testing software since its arrival at UM’s downtown facility last month.

    “Modern computational science requires a system that can handle the demands of Big Data, classic modeling and simulation, as well as the analytical techniques of artificial intelligence,” said David Turek, Vice President of Exascale Systems for IBM Cognitive Systems. “From the purpose-built hardware to the integrated machine learning and deep learning software stack, the IBM technology in Triton represents a new chapter in the way researchers approach data and computation.”

    “Some thought cloud-based computing would eliminate the need for supercomputers. However, many research projects with massive multi-dimensional datasets run much faster on these specially designed high-performance supercomputers like Triton,” said Ernie Fernandez, the University’s vice president for information technology and chief information officer. “Providing a hybrid environment at the University of Miami which offers both cloud options and a dedicated supercomputer is the best way to equip students and faculty to solve some of the world’s biggest problems.”

    The new supercomputer is designed to process data more efficiently, and students will be able to access the supercomputer from their laptops, log in and start processing data independently. Currently, about 1,500 people on UM’s three campuses utilize the supercomputer.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 10:15 am on August 9, 2019 Permalink | Reply
    Tags: Cray Shasta for US Air Force, Supercomputing

    From insideHPC: “Cray Shasta Supercomputer to power weather forecasting for U.S. Air Force” 

    From insideHPC

    August 9, 2019

    Today Cray announced that their first Shasta supercomputing system for operational weather forecasting and meteorology will be acquired by the Air Force Life Cycle Management Center in partnership with Oak Ridge National Laboratory. The powerful high-performance computing capabilities of the new system, named HPC11, will enable higher fidelity weather forecasts for U.S. Air Force and Army operations worldwide. The contract is valued at $25 million.


    “We’re excited with our Oak Ridge National Laboratory strategic partner’s selection of Cray to provide Air Force Weather’s next high performance computing system,” said Steven Wert, Program Executive Officer Digital, Air Force Life Cycle Management Center at Hanscom Air Force Base in Massachusetts, and a member of the Senior Executive Service. “The system’s performance will be a significant increase over the existing HPC capability and will provide Air Force Weather operators with the ability to run the next generation of high-resolution, global and regional models, and satisfy existing and emerging warfighter needs for environmental impacts to operations planning.”

    Oak Ridge National Laboratory (ORNL) has a history of deploying the world’s most powerful supercomputers and through this partnership, will provide supercomputing-as-a-service on the HPC11 Shasta system to the Air Force 557th Weather Wing. The 557th Weather Wing develops and provides comprehensive terrestrial and space weather information to the U.S. Air Force and Army. The new system will feature the revolutionary Cray Slingshot interconnect, with features to better support time-critical numerical weather prediction workloads, and will enhance the Air Force’s capabilities to create improved weather forecasts and weather threat assessments so that Air Force missions can be carried out more effectively.
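
    Low, predictable message latency is what makes an interconnect suitable for time-critical numerical weather prediction. The sketch below shows the standard way such latency is probed on any MPI fabric: a two-rank ping-pong with small messages, written here with mpi4py. Nothing in it is specific to Slingshot or HPC11.

    ```python
    # Illustrative two-rank MPI "ping-pong" latency probe of the sort used to
    # characterize interconnects for time-critical weather workloads. Works on
    # any MPI fabric; nothing here is specific to Slingshot or HPC11.
    # Run with, e.g.:  mpirun -n 2 python pingpong.py
    from mpi4py import MPI
    import numpy as np
    import time

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    buf = np.zeros(8, dtype=np.uint8)   # small 8-byte message
    reps = 10000

    comm.Barrier()
    start = time.perf_counter()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=1)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=1)
    elapsed = time.perf_counter() - start

    if rank == 0:
        # Each iteration is one round trip; one-way latency is half of that.
        print(f"one-way latency: {elapsed / reps / 2 * 1e6:.2f} microseconds")
    ```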

    “The HPC11 system will be the first Shasta delivery to the production weather segment, and we’re proud to share this milestone with ORNL and the Air Force,” said Peter Ungaro, president and CEO at Cray. “The years of innovation behind Shasta and Slingshot and the success of prior generations of Cray systems continue to demonstrate Cray’s ability to support demanding 24/7 operations like weather forecasting. This is a great example of the upcoming Exascale Era bringing a new set of technologies to bear on challenging problems and empowering the Air Force to more effectively execute on its important mission.”


    In this video, Cray CTO Steve Scott announces Slingshot, the company’s new high-speed, purpose-built supercomputing interconnect, and introduces its many ground-breaking features.

    HPC11 will be ORNL’s first Cray Shasta system, as well as the first supercomputing system with 2nd Gen AMD EPYC™ processors for use in operational weather forecasting. With HPC11, the Air Force joins the 85 percent of weather centers worldwide that rely on Cray systems; the machine will feature eight Shasta cabinets in a dual-hall configuration.

    “We are incredibly excited to continue our strategic collaboration with Cray to deliver the first Shasta supercomputer to the U.S. Air Force, helping to improve the fidelity of weather forecasts for U.S. military operations around the globe,” said Forrest Norrod, senior vice president and general manager, Datacenter and Embedded Systems Group, AMD. “The 2nd Gen AMD EPYC processors provide exceptional performance in highly complex workloads, a necessary component to power critical weather prediction workloads and deliver more accurate forecasts.”

    The system is expected to be delivered in Q4 2019 and accepted in early 2020.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 1:31 pm on August 8, 2019 Permalink | Reply
    Tags: ALLOT-See.Control.Secure, Lenovo’s ThinkSystem SR635 and SR655, Supercomputing

    From insideHPC: “Lenovo Launches Single Socket Servers with AMD EPYC 7002 Series” 

    From insideHPC


    Today Lenovo introduced the Lenovo ThinkSystem SR635 and SR655 server platforms, two of the industry’s most powerful single-socket servers.

    Lenovo ThinkSystem SR635 & SR655 Servers with AMD’s EPYC ‘Rome’ CPUs


    As businesses are tasked with doing more with less, the new Lenovo solutions provide the performance of a dual-socket server at the cost of a single-socket. These new additions to Lenovo’s expansive server portfolio are powered by next-generation AMD EPYC 7002 Series processors and were designed specifically to handle customers’ evolving, data-intensive workloads such as video security, software-defined storage and network intelligence, as well as support for virtualized and edge environments. The result is a solution that packs power along with efficiency for customers who place a premium on balancing throughput and security with easy scalability.


    Organizations are juggling business priorities with tight budgets. Lenovo’s new ThinkSystem SR635 and SR655 server platforms not only allow customers to run more workloads on fewer servers, but also offer up to 73 percent savings on potential software licensing, empowering users to accelerate emerging workloads more efficiently. Additionally, customers can realize a reduction in total cost of ownership (TCO) by up to 46 percent. Further supporting these advances in workload efficiency and TCO savings are the ThinkSystem SR635 and SR655 servers’ world records for energy efficiency. The net result of all these enhancements is better price for performance.
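
    The licensing and TCO percentages above come from consolidation: fewer sockets and fewer servers for the same core count. The toy calculation below shows the shape of that arithmetic with entirely hypothetical prices and configurations; it is not Lenovo’s methodology and will not reproduce the 73 percent or 46 percent figures exactly.

    ```python
    # Toy arithmetic only: how per-socket software licensing savings arise when
    # dual-socket servers are consolidated onto high-core-count single-socket
    # ones. Every price and count below is hypothetical, not Lenovo's data;
    # the point is the shape of the calculation, not the exact percentages.
    cores_needed = 1024

    old_cores_per_server, old_sockets = 32, 2    # hypothetical dual-socket box
    new_cores_per_server, new_sockets = 64, 1    # hypothetical 1-socket EPYC box

    license_per_socket = 5000                    # hypothetical $/socket/year

    old_servers = -(-cores_needed // old_cores_per_server)   # ceiling division
    new_servers = -(-cores_needed // new_cores_per_server)

    old_license = old_servers * old_sockets * license_per_socket
    new_license = new_servers * new_sockets * license_per_socket

    saving = 1 - new_license / old_license
    print(f"servers: {old_servers} -> {new_servers}, "
          f"license cost: ${old_license:,} -> ${new_license:,} "
          f"({saving:.0%} saving)")
    ```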

    The Lenovo ThinkSystem SR635 and SR655 provide more throughput, lower latency and higher core density, as well as the largest NVMe drive capacity of any single-socket server on the market. Beyond that, the new 2nd Gen AMD EPYC processor-based systems also provide a solid opportunity to enable additional hyperconverged infrastructure solutions. This gives Lenovo the ability to offer customers VX and other certified nodes and appliances for simple deployment, management, and scalability.

    Unleashing Smarter Networks with Data Intelligence

    Many customers have been eagerly anticipating these systems due to their ability to handle data-intensive workloads, including Allot, a leading global provider of innovative network intelligence and security solutions for communications service providers (CSPs) and enterprises. They require solutions that can turn network, application, usage and security data into actionable intelligence that make their customers’ networks smarter and their users more secure. The market dictates that those solutions be able to match their ever-evolving needs and help them to address new pain points that surface as IT demands continue to change.

    “Lenovo’s integration of next generation I/O and processing technology gives Allot the ability to manage more network bandwidth at higher speeds, allowing us to pull actionable insights from increasingly heavy traffic without any degradation in performance,” said Mark Shteiman, AVP of Product Management with Allot. “The communication service providers and enterprises we support need to see and react to their network needs in real time. We evaluated Lenovo’s ThinkSystem SR635 and SR655 server platform prototypes and were immediately impressed.”

    The new Lenovo ThinkSystem SR635 and SR655 solutions are now available through Lenovo sales representatives and channel partners across the globe.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 1:02 pm on August 8, 2019 Permalink | Reply
    Tags: Army Research Lab (ARL), ERDC-U.S. Army Engineering and Research Development Center, Supercomputing

    From insideHPC: “AMD to Power Two Cray CS500 Systems at Army Research Centers” 

    From insideHPC

    August 8, 2019

    Today Cray announced that the U.S. Department of Defense (DOD) has selected two Cray CS500 systems for its High Performance Computing Modernization Program (HPCMP) annual technology procurement known as TI-18.


    The Army Research Lab (ARL) and the U.S. Army Engineering and Research Development Center (ERDC) will each deploy a Cray CS500 to help serve the U.S. through accelerated research in science and technology.

    Cray CS500

    The two contracts are valued at more than $46M and the CS500 systems are expected to be delivered to ARL and ERDC in the fourth quarter of 2019.

    “We’re proud to continue to support the DOD and its advanced use of high-performance computing in providing ARL and ERDC new systems for their research programs,” said Peter Ungaro, CEO at Cray. “We’re looking forward to continued collaboration with the DOD in leveraging the capabilities of these new systems to achieve their important mission objectives.”

    Cray has a long history of delivering high-performance computing technologies to ARL and ERDC and continues to play a vital role in helping the organizations deliver on their missions to ensure the U.S. remains a leader in science. Both organizations’ CS500 systems will be equipped with 2nd Gen AMD EPYC processors and NVIDIA Tensor Core GPUs, and will provide access to high-performance capabilities and resources that make it possible for researchers, scientists and engineers across the Department of Defense to gain deeper insights and enable new discoveries across diverse research disciplines, addressing the Department’s most challenging problems.

    “We are truly proud to partner with Cray to create the world’s most powerful supercomputing platforms. To be selected to help accelerate scientific research and discovery is a testament to our commitment to datacenter innovation,” said Forrest Norrod, senior vice president and general manager, Datacenter and Embedded Solutions Business Group, AMD. “By leveraging breakthrough CPU performance and robust feature set of the 2nd Gen AMD EPYC processors with Cray CS500 supercomputers, the DOD has a tremendous opportunity to grow its computing capabilities and deliver on its missions.”

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition


     
  • richardmitnick 1:18 pm on August 5, 2019 Permalink | Reply
    Tags: "Fermilab’s HEPCloud goes live", Supercomputing

    From Fermi National Accelerator Lab: “Fermilab’s HEPCloud goes live” 

    FNAL Art Image
    FNAL Art Image by Angela Gonzales

    From Fermi National Accelerator Lab, an enduring source of strength for the US contribution to scientific research worldwide.

    August 5, 2019
    Marcia Teckenbrock

    To meet the evolving needs of high-energy physics experiments, the underlying computing infrastructure must also evolve. Say hi to HEPCloud, the new, flexible way of meeting the peak computing demands of high-energy physics experiments using supercomputers, commercial services and other resources.

    Five years ago, Fermilab scientific computing experts began addressing the computing resource requirements for research occurring today and in the next decade. Back then, in 2014, some of Fermilab’s neutrino programs were just starting up. Looking further into the future, plans were under way for two big projects. One was Fermilab’s participation in the future High-Luminosity Large Hadron Collider at the European laboratory CERN.

    The other was the expansion of the Fermilab-hosted neutrino program, including the international Deep Underground Neutrino Experiment. All of these programs would be accompanied by unprecedented data demands.

    To meet these demands, the experts had to change the way they did business.

    HEPCloud, the flagship project pioneered by Fermilab, changes the computing landscape because it employs an elastic computing model. Tested successfully over the last couple of years, it officially went into production as a service for Fermilab researchers this spring.

    Scientists on Fermilab’s NOvA experiment were able to execute around 2 million hardware threads on the Cori II supercomputer at the Office of Science’s National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory, a machine named after Gerty Cori, the first American woman to win a Nobel Prize in science. Scientists on the CMS experiment have also been running workflows using HEPCloud at NERSC as a pilot project. Photo: Roy Kaltschmidt, Lawrence Berkeley National Laboratory

    Experiments currently have some fixed computing capacity that meets, but doesn’t overshoot, their everyday needs. For times of peak demand, HEPCloud enables elasticity, allowing experiments to rent computing resources from other sources, such as supercomputers and commercial clouds, and manages them to satisfy peak demand. The prior approach was to purchase enough local resources that, on a day-to-day basis, overshoot those needs. In this new way, HEPCloud reduces the cost of providing computing capacity.
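
    The economics of that elastic model are easy to sketch. The toy comparison below contrasts buying local capacity for the annual peak with owning only the everyday baseline and renting the difference during peak weeks; every number in it is hypothetical and chosen only to illustrate why elasticity can be cheaper.

    ```python
    # Toy comparison of the two provisioning strategies described above:
    # (a) buy enough local capacity to cover the annual peak, vs.
    # (b) HEPCloud-style elasticity: own the everyday baseline and rent the
    #     rest during peaks. All costs and the demand profile are hypothetical.
    baseline_slots = 10_000          # everyday demand (cores)
    peak_slots = 50_000              # demand during peak weeks
    peak_weeks = 6                   # weeks per year at peak

    owned_cost_per_slot_year = 150   # hypothetical $/core/year (hardware + ops)
    rented_cost_per_slot_week = 6    # hypothetical $/core/week (cloud/HPC rental)

    # Strategy (a): provision local capacity for the peak, idle most of the year.
    buy_for_peak = peak_slots * owned_cost_per_slot_year

    # Strategy (b): own the baseline, burst the difference only when needed.
    elastic = (baseline_slots * owned_cost_per_slot_year
               + (peak_slots - baseline_slots) * rented_cost_per_slot_week * peak_weeks)

    print(f"buy-for-peak: ${buy_for_peak:,}")
    print(f"elastic:      ${elastic:,}  "
          f"({1 - elastic / buy_for_peak:.0%} cheaper under these assumptions)")
    ```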

    “Traditionally, we would buy enough computers for peak capacity and put them in our local data center to cover our needs,” said Fermilab scientist Panagiotis Spentzouris, former HEPCloud project sponsor and a driving force behind HEPCloud. “However, the needs of experiments are not steady. They have peaks and valleys, so you want an elastic facility.”

    In addition, HEPCloud optimizes resource usage across all types, whether these resources are on site at Fermilab, on a grid such as Open Science Grid, in a cloud such as Amazon or Google, or at supercomputing centers like those run by the DOE Office of Science Advanced Scientific Computing Research program (ASCR). And it provides a uniform interface for scientists to easily access these resources without needing expert knowledge about where and how best to run their jobs.

    The idea to create a virtual facility to extend Fermilab’s computing resources began in 2014, when Spentzouris and Fermilab scientist Lothar Bauerdick began exploring ways to best provide resources for experiments at CERN’s Large Hadron Collider. The idea was to provide those resources based on the overall experiment needs rather than a certain amount of horsepower. After many planning sessions with computing experts from the CMS experiment at the LHC and beyond, and after a long period of hammering out the idea, a scientific facility called “One Facility” was born. DOE Associate Director of Science for High Energy Physics Jim Siegrist coined the name “HEPCloud” — a computing cloud for high-energy physics — during a general discussion about a solution for LHC computing demands. But interest beyond high-energy physics was also significant. DOE Associate Director of Science for Advanced Scientific Computing Research Barbara Helland was interested in HEPCloud for its relevancy to other Office of Science computing needs.

    The CMS detector at CERN collects data from particle collisions at the Large Hadron Collider. Now that HEPCloud is in production, CMS scientists will be able to run all of their physics workflows on the expanded resources made available through HEPCloud. Photo: CERN

    The project was a collaborative one. In addition to many individuals at Fermilab, Miron Livny at the University of Wisconsin-Madison contributed to the design, enabling HEPCloud to use the workload management system known as Condor (now HTCondor), which is used for all of the lab’s current grid activities.

    Since its inception, HEPCloud has achieved several milestones as it moved through several development phases leading up to production. The project team first demonstrated the use of cloud computing on a significant scale in February 2016, when the CMS experiment used HEPCloud to achieve about 60,000 cores on the Amazon cloud, AWS. In November 2016, CMS again used HEPCloud to run 160,000 cores using Google Cloud Services, doubling the total size of the LHC’s computing worldwide. Most recently, in May 2018, NOvA scientists were able to execute around 2 million hardware threads at the Office of Science’s National Energy Research Scientific Computing Center (NERSC) supercomputer, increasing both the scale and the amount of resources provided. During these activities, the experiments were executing and benefiting from real physics workflows. NOvA was even able to report significant scientific results at the Neutrino 2018 conference in Germany, one of the most attended conferences in neutrino physics.

    CMS has been running workflows using HEPCloud at NERSC as a pilot project. Now that HEPCloud is in production, CMS scientists will be able to run all of their physics workflows on the expanded resources made available through HEPCloud.

    Next, HEPCloud project members will work to expand the reach of HEPCloud even further, enabling experiments to use the leadership-class supercomputing facilities run by ASCR at Argonne National Laboratory and Oak Ridge National Laboratory.

    Fermilab experts are working to ensure that, eventually, all Fermilab experiments are configured to use these extended computing resources.

    This work is supported by the DOE Office of Science.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    FNAL Icon

    Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world
    collaborate at Fermilab on experiments at the frontiers of discovery.

     
  • richardmitnick 12:01 pm on August 5, 2019 Permalink | Reply
    Tags: "Large cosmological simulation to run on Mira", Supercomputing

    From Argonne Leadership Computing Facility: “Large cosmological simulation to run on Mira” 

    Argonne Lab
    News from Argonne National Laboratory

    From Argonne Leadership Computing Facility

    An extremely large cosmological simulation—among the five most extensive ever conducted—is set to run on Mira this fall and exemplifies the scope of problems addressed on the leadership-class supercomputer at the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Argonne physicist and computational scientist Katrin Heitmann leads the project. Heitmann was among the first to leverage Mira’s capabilities when, in 2013, the IBM Blue Gene/Q system went online at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. Among the largest cosmological simulations ever performed at the time, the Outer Rim Simulation she and her colleagues carried out enabled further scientific research for many years.

    For the new effort, Heitmann has been allocated approximately 800 million core-hours to perform a simulation that reflects cutting-edge observational advances from satellites and telescopes and will form the basis for sky maps used by numerous surveys. Evolving a massive number of particles, the simulation is designed to help resolve mysteries of dark energy and dark matter.

    “By transforming this simulation into a synthetic sky that closely mimics observational data at different wavelengths, this work can enable a large number of science projects throughout the research community,” Heitmann said. “But it presents us with a big challenge.” That is, in order to generate synthetic skies across different wavelengths, the team must extract relevant information and perform analysis either on the fly or after the fact in post-processing. Post-processing requires the storage of massive amounts of data—so much, in fact, that merely reading the data becomes extremely computationally expensive.
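
    A quick sizing estimate shows why even reading such a dataset is painful. The particle count, snapshot count, and file-system bandwidth below are assumptions chosen for illustration; the article does not state them.

    ```python
    # Why "merely reading the data becomes extremely computationally expensive":
    # a quick sizing estimate for snapshots of a large N-body run. The particle
    # count, snapshot count, and read bandwidth are assumptions for illustration.
    n_particles = 1.0e12          # assumed: roughly a trillion tracer particles
    bytes_per_particle = 9 * 4    # position, velocity, ID etc. as ~9 single-precision words
    n_snapshots = 100             # assumed number of stored time slices

    snapshot_TB = n_particles * bytes_per_particle / 1e12
    total_PB = snapshot_TB * n_snapshots / 1000

    read_rate_GBps = 100          # assumed aggregate file-system read bandwidth
    hours_to_read_once = snapshot_TB * n_snapshots * 1e3 / read_rate_GBps / 3600

    print(f"one snapshot: ~{snapshot_TB:.0f} TB, full set: ~{total_PB:.1f} PB")
    print(f"time just to read it all once at {read_rate_GBps} GB/s: "
          f"~{hours_to_read_once:.0f} hours")
    ```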

    Since Mira was launched, Heitmann and her team have implemented in their Hardware/Hybrid Accelerated Cosmology Code (HACC) more sophisticated analysis tools for on-the-fly processing. “Moreover, compared to the Outer Rim Simulation, we’ve effected three major improvements,” she said. “First, our cosmological model has been updated so that we can run a simulation with the best possible observational inputs. Second, as we’re aiming for a full-machine run, volume will be increased, leading to better statistics. Most importantly, we set up several new analysis routines that will allow us to generate synthetic skies for a wide range of surveys, in turn allowing us to study a wide range of science problems.”

    The team’s simulation will address numerous fundamental questions in cosmology and is essential for enabling the refinement of existing predictive tools and aiding the development of new models, impacting both ongoing and upcoming cosmological surveys, including the Dark Energy Spectroscopic Instrument (DESI), the Large Synoptic Survey Telescope (LSST), SPHEREx, and the “Stage-4” ground-based cosmic microwave background experiment (CMB-S4).

    LBNL/DESI spectroscopic instrument on the Mayall 4-meter telescope at Kitt Peak National Observatory, starting in 2018

    NOAO/Mayall 4-meter telescope at Kitt Peak, Arizona, USA, altitude 2,120 m (6,960 ft)

    LSST Camera, built at SLAC

    LSST telescope, currently under construction on the El Peñón peak at Cerro Pachón, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes

    LSST Data Journey. Illustration by Sandbox Studio, Chicago with Ana Kova

    NASA’s SPHEREx (Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer) depiction

    The value of the simulation derives from its tremendous volume (which is necessary to cover substantial portions of survey areas) and from attaining levels of mass and force resolution sufficient to capture the small structures that host faint galaxies.

    The volume and resolution pose steep computational requirements, and because they are not easily met, few large-scale cosmological simulations are carried out. Contributing to the difficulty of their execution is the fact that the memory footprints of supercomputers have not advanced proportionally with processing speed in the years since Mira’s introduction. This makes that system, despite its relative age, rather optimal for a large-scale campaign when harnessed in full.

    “A calculation of this scale is just a glimpse at what the exascale resources in development now will be capable of in 2021/22,” said Katherine Riley, ALCF Director of Science. “The research community will be taking advantage of this work for a very long time.”

    Funding for the simulation is provided by DOE’s High Energy Physics program. Use of ALCF computing resources is supported by DOE’s Advanced Scientific Computing Research program.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF
    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 1:31 pm on August 1, 2019 Permalink | Reply
    Tags: Supercomputing

    From Lawrence Berkeley National Lab: “Is your Supercomputer Stumped? There May Be a Quantum Solution” 

    Berkeley Logo

    From Lawrence Berkeley National Lab

    August 1, 2019
    Glenn Roberts Jr.
    geroberts@lbl.gov
    (510) 486-5582

    Berkeley Lab-led team solves a tough math problem with quantum computing.

    (Credit: iStock/metamorworks)

    Some math problems are so complicated that they can bog down even the world’s most powerful supercomputers. But a wild new frontier in computing that applies the rules of the quantum realm offers a different approach.

    A new study led by a physicist at Lawrence Berkeley National Laboratory (Berkeley Lab), published in the journal Scientific Reports, details how a quantum computing technique called “quantum annealing” can be used to solve problems relevant to fundamental questions in nuclear physics about the subatomic building blocks of all matter. It could also help answer other vexing questions in science and industry, too.

    Seeking a quantum solution to really big problems

    “No quantum annealing algorithm exists for the problems that we are trying to solve,” said Chia Cheng “Jason” Chang, a RIKEN iTHEMS fellow in Berkeley Lab’s Nuclear Science Division and a research scientist at RIKEN, a scientific institute in Japan.

    “The problems we are looking at are really, really big,” said Chang, who led the international team behind the study, published in the Scientific Reports journal. “The idea here is that the quantum annealer can evaluate a large number of variables at the same time and return the right solution in the end.”

    The same problem-solving algorithm that Chang devised for the latest study, and that is available to the public via open-source code, could potentially be adapted and scaled for use in systems engineering and operations research, for example, or in other industry applications.

    Classical algebra with a quantum computer

    “We are cooking up small ‘toy’ examples just to develop how an algorithm works,” Chang said. “The simplicity of current quantum annealers is that the solution is classical – akin to doing algebra with a quantum computer. You can check and understand what you are doing with a quantum annealer in a straightforward manner, without the massive overhead of verifying the solution classically.”

    Chang’s team used a commercial quantum annealer located in Burnaby, Canada, called the D-Wave 2000Q that features superconducting electronic elements chilled to extreme temperatures to carry out its calculations.

    Access to the D-Wave annealer was provided via the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory (ORNL).

    “These methods will help us test the promise of quantum computers to solve problems in applied mathematics that are important to the U.S. Department of Energy’s scientific computing mission,” said Travis Humble, director of ORNL’s Quantum Computing Institute.

    Quantum data: A one, a zero, or both at the same time

    There are currently two of these machines in operation that are available to the public. They work by applying a common rule in physics: Systems in physics tend to seek out their lowest-energy state. For example, in a series of steep hills and deep valleys, a person traversing this terrain would tend to end up in the deepest valley, as it takes a lot of energy to climb out of it and the least amount of energy to settle in this valley.

    The annealer applies this rule to calculations. In a typical computer, memory is stored in a series of bits that are occupied by either one or a zero. But quantum computing introduces a new paradigm in calculations: quantum bits, or qubits. With qubits, information can exist as either a one, a zero, or both at the same time. This trait makes quantum computers better suited to solving some problems with a very large number of possible variables that must be considered for a solution.

    Each of the qubits used in the latest study ultimately produces a result of either a one or a zero by applying the lowest-energy-state rule, and researchers tested the algorithm using up to 30 logical qubits.

    The algorithm that Chang developed to run on the quantum annealer can solve polynomial equations, which are equations that can have both numbers and variables and are set to add up to zero. A variable can represent any number in a large range of numbers.
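
    The essence of that mapping can be shown classically in a few lines: write the equations as a quadratic penalty, or “energy,” over binary variables, so that the lowest-energy configuration is exactly the solution. The toy example below finds that configuration by brute-force enumeration; it is not Chang’s open-source code or the D-Wave API, and the annealer’s job is to find the same minimum physically rather than by enumeration.

    ```python
    # A tiny classical illustration of the idea behind annealing-based equation
    # solving: write the equations as a quadratic "energy" over binary variables
    # and look for the lowest-energy configuration. (Not Chang's published
    # algorithm or the D-Wave API -- the hardware explores this landscape
    # physically instead of by enumeration.)
    from itertools import product

    def energy(bits):
        x0, x1, x2 = bits
        # Penalty is zero exactly when both equations are satisfied:
        #   x0 + x1 = 1   and   x1 + x2 = 2
        return (x0 + x1 - 1) ** 2 + (x1 + x2 - 2) ** 2

    best = min(product([0, 1], repeat=3), key=energy)
    print("lowest-energy configuration:", best, "energy:", energy(best))
    # -> (0, 1, 1), energy 0: the unique assignment solving both equations.
    ```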

    When there are ‘fewer but very dense calculations’

    Berkeley Lab and neighboring UC Berkeley have become a hotbed for R&D in the emerging field of quantum information science, and last year announced the formation of a partnership called Berkeley Quantum to advance this field.

    Berkeley Quantum

    Chang said that the quantum annealing approach used in the study, also known as adiabatic quantum computing, “works well for fewer but very dense calculations,” and that the technique appealed to him because the rules of quantum mechanics are familiar to him as a physicist.

    The data output from the annealer was a series of solutions for the equations sorted into columns and rows. This data was then mapped into a representation of the annealer’s qubits, Chang explained, and the bulk of the algorithm was designed to properly account for the strength of the interaction between the annealer’s qubits. “We repeated the process thousands of times” to help validate the results, he said.

    “Solving the system classically using this approach would take an exponentially long time to complete, but verifying the solution was very quick” with the annealer, he said, because it was solving a classical problem with a single solution. If the problem was quantum in nature, the solution would be expected to be different every time you measure it.


    Real-world applications for a quantum algorithm

    As quantum computers are equipped with more qubits that allow them to solve more complex problems more quickly, they can also potentially lead to energy savings by reducing the use of far larger supercomputers that could take far longer to solve the same problems.

    The quantum approach brings within reach direct and verifiable solutions to problems involving “nonlinear” systems – in which the outcome of an equation does not match up proportionately to the input values. Nonlinear equations are problematic because they may appear more unpredictable or chaotic than other “linear” problems that are far more straightforward and solvable.

    Chang sought the help of quantum-computing experts both in the U.S. and in Japan to develop the successfully tested algorithm. He said he is hopeful the algorithm will ultimately prove useful to calculations that can test how subatomic quarks behave and interact with other subatomic particles in the nuclei of atoms.

    While it will be an exciting next step to work to apply the algorithm to solve nuclear physics problems, “This algorithm is much more general than just for nuclear science,” Chang noted. “It would be exciting to find new ways to use these new computers.”

    The Oak Ridge Leadership Computing Facility is a DOE Office of Science User Facility.

    Researchers from Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, and the RIKEN Computational Materials Science Research Team also participated in the study.

    The study was supported by the U.S. Department of Energy Office of Science; and by Oak Ridge National Laboratory and its Laboratory Directed Research and Development funds. The Oak Ridge Leadership Computing Facility is supported by the DOE Office of Science’s Advanced Scientific Computing Research program.

    See the full article here .


    Please help promote STEM in your local schools.

    Stem Education Coalition

    LBNL campus

    Bringing Science Solutions to the World
    In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

    Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

    Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

    A U.S. Department of Energy National Laboratory Operated by the University of California.

    University of California Seal

     