November 26, 2014
This month, US energy secretary Ernest Moniz announced that the Department of Energy will spend $325m to research extreme-scale computing and build two new GPU-accelerated supercomputers. The goal: to put the nation on a fast track to exascale computing, and thereby lead scientific research that addresses challenging issues in government, academia, and industry.
Horst Simon, deputy director of Lawrence Berkeley National Lab in California, US. Image courtesy Amber Harmon.
Moniz also announced funding awards, totaling $100m, for partnerships with HPC companies developing exascale technologies under the FastForward 2 program managed by Lawrence Livermore National Laboratory in California, US.
The combined spending comes at a critical juncture, as just last week the Organization for Economic Co-operation and Development (OECD) released its 2014 Science, Technology and Industry Outlook report. With research and development budgets in advanced economies not yet fully recovered from the 2008 economic crisis, China is on track to lead the world in R&D spending by 2019.
The DOE-sponsored Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (CORAL) national labs will ensure that each lab can deploy a supercomputer expected to deliver about five times the performance of today’s top systems.
The Summit supercomputer will outperform Titan, the Oak Ridge Leadership Computing Facility’s (OLCF) current flagship system. Research pursuits include combustion and climate science, as well as energy storage and nuclear power. “Summit builds on the hybrid multi-core architecture that the OLCF pioneered with Titan,” says Buddy Bland, director of the Summit project.
The other system, Sierra, will serve the National Nuclear Security Administration’s Advanced Simulation and Computing (ASC) program. “Sierra will allow us to begin laying the groundwork for exascale systems,” says Bob Meisner, ASC program head, “as the heterogeneous accelerated node architecture represents one of the most promising architectural paths.” Argonne is expected to finalize a contract for a system at a later date.
The announcements came just ahead of the 2014 International Conference for High Performance Computing, Networking, Storage and Analysis (SC14). Also ahead of SC14, organizers launched the HPC Matters campaign and announced the first HPC Matters plenary, aimed at sharing real stories about how HPC makes an everyday difference.
When asked why the US was pushing the HPC Matters initiative, conference advisor Wilfred Pinfold, director of research and advanced technology development at Intel Federal, focused on informing and educating a broader audience. “To a large extent, growth in the use of HPC — and the benefits that come from it — will develop as more people understand in detail those benefits.” Pinfold also noted the effort the US must make to continue to lead in HPC technology. “I think other countries are catching up and there is real competition ahead — all of which is good.”
The HPC domain is in many ways defined by two sometimes opposing drives: the push of international collaborations to solve fundamental societal issues, and the pull of national security, innovation, and economic competitiveness — a point that Horst Simon, deputy director of Lawrence Berkeley National Lab in California, US, says we shouldn’t shy away from. Simon participated in an SC14 panel discussion of international funding strategies for HPC software, noting issues the discipline needs to overcome.
“In principle all supercomputers are easily accessible worldwide. But while our openness as an international community in principle makes it easier, it is less of a necessity that we work out how to actually work together.” This results in very soft collaboration agreements, says Simon, that go nowhere without grassroots efforts by researchers who already have relationships and are interested in working together.
According to Irene Qualters, division director of advanced cyberinfrastructure at the US National Science Foundation, expectations are increasing. “The community we support is not only multidisciplinary and highly internationally collaborative, but researchers expect their work to have broad societal impact.” Collective motivation is so strong, Qualters notes, that we’re moving away from a history of bilateral agreements. “The ability to do multilateral and broader umbrella agreements is an important efficiency that we’re poised for.”
iSGTW is an international weekly online publication that covers distributed computing and the research it enables.
iSGTW reports on all aspects of distributed computing technology, such as grids and clouds, and regularly features articles on distributed computing-enabled research in a wide variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art.