Tagged: Supercomputing

  • richardmitnick 12:31 pm on March 21, 2017 Permalink | Reply
    Tags: Breaking the supermassive black hole speed limit, Supercomputing

    From LANL: “Breaking the supermassive black hole speed limit” 

    LANL bloc

    Los Alamos National Laboratory

    March 21, 2017
    Kevin Roark
    Communications Office
    (505) 665-9202
    knroark@lanl.gov

    Quasar growing under intense accretion streams. No image credit

    A new computer simulation helps explain the existence of puzzling supermassive black holes observed in the early universe. The simulation is based on a computer code used to understand the coupling of radiation and certain materials.

    “Supermassive black holes have a speed limit that governs how fast and how large they can grow,” said Joseph Smidt of the Theoretical Design Division at Los Alamos National Laboratory. “The relatively recent discovery of supermassive black holes in the early development of the universe raised a fundamental question: how did they get so big so fast?”

    Using computer codes developed at Los Alamos for modeling the interaction of matter and radiation related to the Lab’s stockpile stewardship mission, Smidt and colleagues created a simulation of collapsing stars that resulted in supermassive black holes forming in less time than expected, cosmologically speaking, in the first billion years of the universe.

    “It turns out that while supermassive black holes have a growth speed limit, certain types of massive stars do not,” said Smidt. “We asked, what if we could find a place where stars could grow much faster, perhaps to the size of many thousands of suns; could they form supermassive black holes in less time?”

    It turns out the Los Alamos computer model not only confirms the possibility of speedy supermassive black hole formation, but also fits many other phenomena of black holes that are routinely observed by astrophysicists. The research shows that the simulated supermassive black holes also interact with galaxies in the same way observed in nature, including star formation rates, galaxy density profiles, and thermal and ionization rates in gases.

    “This was largely unexpected,” said Smidt. “I thought this idea of growing a massive star in a special configuration and forming a black hole with the right kind of masses was something we could approximate, but to see the black hole inducing star formation and driving the dynamics in ways that we’ve observed in nature was really icing on the cake.”

    A key mission area at Los Alamos National Laboratory is understanding how radiation interacts with certain materials. Because supermassive black holes produce huge quantities of hot radiation, their behavior helps test computer codes designed to model the coupling of radiation and matter. The codes are used, along with large- and small-scale experiments, to assure the safety, security, and effectiveness of the U.S. nuclear deterrent.

    “We’ve gotten to a point at Los Alamos,” said Smidt, “with the computer codes we’re using, the physics understanding, and the supercomputing facilities, that we can do detailed calculations that replicate some of the forces driving the evolution of the Universe.”

    Research paper available at https://arxiv.org/pdf/1703.00449.pdf

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Los Alamos National Laboratory’s mission is to solve national security challenges through scientific excellence.

    LANL campus
    Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, The Babcock & Wilcox Company, and URS for the Department of Energy’s National Nuclear Security Administration.

    Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.

    Operated by Los Alamos National Security, LLC for the U.S. Dept. of Energy’s NNSA

    DOE Main

    NNSA

     
  • richardmitnick 11:02 am on March 14, 2017 Permalink | Reply
    Tags: Supercomputing, Vector boson plus jet event

    From ALCF: “High-precision calculations help reveal the physics of the universe” 

    Argonne Lab
    News from Argonne National Laboratory

    ANL Cray Aurora supercomputer
    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    March 9, 2017
    Joan Koka

    With the theoretical framework developed at Argonne, researchers can more precisely predict particle interactions such as this simulation of a vector boson plus jet event. Credit: Taylor Childers, Argonne National Laboratory

    On their quest to uncover what the universe is made of, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are harnessing the power of supercomputers to make predictions about particle interactions that are more precise than ever before.

    Argonne researchers have developed a new theoretical approach, ideally suited for high-performance computing systems, that is capable of making predictive calculations about particle interactions that conform almost exactly to experimental data. This new approach could give scientists a valuable tool for describing new physics and particles beyond those currently identified.

    The framework makes predictions based on the Standard Model, the theory that describes the physics of the universe to the best of our knowledge. Researchers are now able to compare experimental data with predictions generated through this framework, to potentially uncover discrepancies that could indicate the existence of new physics beyond the Standard Model. Such a discovery would revolutionize our understanding of nature at the smallest measurable length scales.

    “So far, the Standard Model of particle physics has been very successful in describing the particle interactions we have seen experimentally, but we know that there are things that this model doesn’t describe completely.


    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    We don’t know the full theory,” said Argonne theorist Radja Boughezal, who developed the framework with her team.

    “The first step in discovering the full theory and new models involves looking for deviations with respect to the physics we know right now. Our hope is that there is deviation, because it would mean that there is something that we don’t understand out there,” she said.

    The theoretical method developed by the Argonne team is currently being deployed on Mira, one of the fastest supercomputers in the world, which is housed at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility.

    Using Mira, researchers are applying the new framework to analyze the production of missing energy in association with a jet, a particle interaction of particular interest to researchers at the Large Hadron Collider (LHC) in Switzerland.




    LHC at CERN

    Physicists at the LHC are attempting to produce new particles that are known to exist in the universe but have yet to be seen in the laboratory, such as the dark matter that comprises a quarter of the mass and energy of the universe.


    Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al.

    Although scientists have no way today of observing dark matter directly — hence its name — they believe that dark matter could leave a “missing energy footprint” in the wake of a collision that could indicate the presence of new particles not included in the Standard Model. These particles would interact very weakly and therefore escape detection at the LHC. The presence of a “jet”, a spray of Standard Model particles arising from the break-up of the protons colliding at the LHC, would tag the presence of the otherwise invisible dark matter.

    In the LHC detectors, however, the production of a particular kind of interaction — called the Z-boson plus jet process — can mimic the same signature as the potential signal that would arise from as-yet-unknown dark matter particles. Boughezal and her colleagues are using their new framework to help LHC physicists distinguish the Z-boson plus jet signature predicted by the Standard Model from other potential signals.

    Previous attempts using less precise calculations to distinguish the two processes carried so much uncertainty that they were simply not useful for drawing the fine mathematical distinctions that could potentially identify a new dark matter signal.

    “It is only by calculating the Z-boson plus jet process very precisely that we can determine whether the signature is indeed what the Standard Model predicts, or whether the data indicates the presence of something new,” said Frank Petriello, another Argonne theorist who helped develop the framework. “This new framework opens the door to using Z-boson plus jet production as a tool to discover new particles beyond the Standard Model.”

    Applications for this method go well beyond studies of the Z-boson plus jet. The framework will impact not only research at the LHC, but also studies at future colliders which will have increasingly precise, high-quality data, Boughezal and Petriello said.

    “These experiments have gotten so precise, and experimentalists are now able to measure things so well, that it’s become necessary to have these types of high-precision tools in order to understand what’s going on in these collisions,” Boughezal said.

    “We’re also so lucky to have supercomputers like Mira because now is the moment when we need these powerful machines to achieve the level of precision we’re looking for; without them, this work would not be possible.”

    Funding and resources for this work were previously allocated through the Argonne Leadership Computing Facility’s (ALCF’s) Director’s Discretionary program; the ALCF is supported by the DOE Office of Science’s Advanced Scientific Computing Research program. Support for this work will continue through allocations from the Innovation and Novel Computational Impact on Theory and Experiment (INCITE) program.

    The INCITE program promotes transformational advances in science and technology through large allocations of time on state-of-the-art supercomputers.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 1:12 pm on February 26, 2017 Permalink | Reply
    Tags: Supercomputing

    From Nature: “[International] supercomputer, BOINC, needs more people power” 

    Nature Mag
    Nature

    22 February 2017
    Ivy Shih

    Xinhua / Alamy Stock Photo

    A citizen science initiative that encourages public donations of idle computer processing power to run complex calculations is struggling to increase participation.

    Berkeley Open Infrastructure for Network Computing (BOINC), a large grid that harnesses volunteered power for scientific computing, has been running for 15 years to support research projects in medicine, mathematics, climate change, linguistics and astrophysics.



    But, despite strong demand from scientists for supercomputers or computer networks that can rapidly analyse high volumes of data, the volunteer-run BOINC has struggled to maintain and grow its network of users donating their spare computer power. Of its 4 million-plus registered users, only 6% are active, a number that has been falling since 2014.

    “I’m constantly looking for ways to expose sectors of the general population to BOINC and it’s a struggle,” says David Anderson, a co-founder and computer scientist at the University of California Berkeley.

    How many people use BOINC?

    Many more people have registered with BOINC than actually donate their computer power (active users).
    Anderson says BOINC, which is [no longer] funded by the National Science Foundation, currently hosts 56 scientific projects that span an international network of more than 760,000 computers. [Current: 24-hour average of 17.367 petaflops; 267,932 active volunteers; 680,893 computers.] The platform’s combined processing power simulates a supercomputer whose performance is among the world’s top 10.

    Access to such supercomputers can be expensive and require lengthy waits, so BOINC offers research groups access to processing power at a fraction of the prohibitive cost.

    “A typical BOINC project uses a petaflop of computing — which typically costs maybe USD $100,000 a year. If you were to go to buy the same amount of computing power on the Amazon cloud, it would cost around $40 million,” says Anderson.
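
    As a quick back-of-envelope check of the cost gap Anderson describes, the figures from his quote (roughly USD $100,000 per year for a petaflop-scale BOINC project versus roughly $40 million for comparable cloud capacity) can be compared directly. The sketch below simply computes the ratio from those quoted numbers and makes no independent claim about actual cloud pricing.

    ```c
    /* Back-of-envelope comparison of Anderson's figures: ~USD 100,000 per year
     * to run a ~1 petaflop BOINC project versus his ~USD 40 million estimate
     * for equivalent capacity on a commercial cloud. Inputs come from the
     * quote above, not from measured pricing data. */
    #include <stdio.h>

    int main(void) {
        const double boinc_cost_per_pflop_year = 100000.0;   /* USD, per the quote */
        const double cloud_cost_per_pflop_year = 40000000.0; /* USD, per the quote */

        double ratio = cloud_cost_per_pflop_year / boinc_cost_per_pflop_year;
        printf("Cloud / BOINC cost ratio: about %.0fx\n", ratio); /* ~400x */
        return 0;
    }
    ```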

    Kevin Vinsen, a scientist at the International Centre for Radio Astronomy Research, Australia, leads a project that analyses photos of galaxies. BOINC helps analyse the SkyNet’s huge dataset, which is especially valuable given the project’s shoestring budget.

    “In BOINC I can have 20,000 people working on it at the same time. Each one is doing a small portion of the galaxy,” he says.

    Anderson wants to connect BOINC to major supercomputer facilities in the United States, to reduce the lengthy wait researchers have to process their data. He is working to add the network to the Texas Advanced Computing Center as an additional resource for researchers.



    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

     
  • richardmitnick 1:48 pm on February 22, 2017 Permalink | Reply
    Tags: Supercomputing, Tokyo Tech supercomputer TSUBAME3.0

    From Tokyo Tech: “Tokyo Tech supercomputer TSUBAME3.0 scheduled to start operating in summer 2017” 

    tokyo-tech-bloc

    Tokyo Institute of Technology

    February 22, 2017

    Rendering of TSUBAME3.0

    Tokyo Tech supercomputer TSUBAME3.0 scheduled to start operating in summer 2017
    With 47.2 petaflops of performance at half precision, it is set to meet the surge in demand for artificial intelligence applications.

    The Tokyo Institute of Technology (Tokyo Tech) Global Scientific Information and Computing Center (GSIC) has started development and construction of TSUBAME3.0—the next-generation supercomputer that is scheduled to start operating in the summer of 2017.

    The theoretical performance of the TSUBAME3.0 is 47.2 petaflops in 16-bit half precision mode or above, and once the new TSUBAME3.0 is operating alongside the current TSUBAME2.5, Tokyo Tech GSIC will be able to provide a total computation performance of 64.3 petaflops in half precision mode or above, making it the largest supercomputer center in Japan.

    The majority of scientific calculation requires 64-bit double precision; however, artificial intelligence (AI) and Big Data processing can be performed at 16-bit half precision, and TSUBAME3.0 is expected to be widely used in these fields, where demand continues to increase.

    Background and details

    Since TSUBAME2.0 and 2.5 started operations in November 2010 as the fastest supercomputers in Japan, these computers have become “supercomputers for everyone,” contributing significantly to industry-academia-government research and development both in Japan and overseas over the past six years. As a result, much attention has also been drawn to Tokyo Tech GSIC as the most advanced supercomputer center in the world. Furthermore, Tokyo Tech GSIC continues to partner with related companies in research into not only high performance computing (HPC), but also Big Data and AI—areas with increasing demand in recent years. These research results and the experience gained through operating TSUBAME2.0 and 2.5, and the energy-saving supercomputer TSUBAME-KFC, were all applied in the design of TSUBAME3.0.

    As a result of Japanese government procurement for the development of TSUBAME3.0, SGI Japan, Ltd. (SGI) was awarded the contract to work on the project. Tokyo Tech is developing TSUBAME3.0 in partnership with SGI and NVIDIA as well as other companies.

    The TSUBAME series has featured the most recent NVIDIA GPUs available at the time, namely Tesla for TSUBAME1.2, Fermi for TSUBAME2.0, and Kepler for TSUBAME2.5. The upcoming TSUBAME3.0 will feature the Pascal GPU, the fourth generation used in the series, to ensure high compatibility. TSUBAME3.0 will contain 2,160 GPUs, making a total of 6,720 GPUs in operation at GSIC once it is running alongside TSUBAME2.5 and TSUBAME-KFC.

    “Artificial intelligence is rapidly becoming a key application for supercomputing,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “NVIDIA’s GPU computing platform merges AI with HPC, accelerating computation so that scientists and researchers can tackle once unsolvable problems. Tokyo Tech’s TSUBAME3.0 supercomputer, powered by more than 2,000 NVIDIA Pascal GPUs, will enable life-changing advances in such fields as healthcare, energy, and transportation.”

    TSUBAME3.0 has a theoretical performance of 12.15 petaflops in double precision mode (12,150 trillion floating-point operations per second), performance that is set to exceed that of the K supercomputer. In single precision mode, TSUBAME3.0 performs at 24.3 petaflops, and in half precision mode this increases to 47.2 petaflops. Using the latest GPUs enables improved performance and energy efficiency as well as higher speed and larger capacity storage. The overall computation speed and capacity have also been improved through the NVMe-compatible, high-speed 1.08 PB SSDs on the computation nodes, resulting in significant advances in high-speed processing for Big Data applications. TSUBAME3.0 also incorporates a variety of cloud technology, including virtualization, and is expected to become the most advanced Science Cloud in Japan.

    System cooling efficiency has also been optimized in TSUBAME3.0. The processor cooling system uses an outdoor cooling tower, enabling cold water to be supplied at a temperature close to ambient for minimal energy use. Its PUE (Power Usage Effectiveness), a value that indicates cooling efficiency, is 1.033, indicating extremely high efficiency and leaving more electricity available for computation.
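
    For readers unfamiliar with the metric, PUE is defined as total facility power divided by the power delivered to the IT equipment, so a PUE of 1.033 means only about 3.3% of additional power goes to cooling and other infrastructure. The sketch below illustrates that arithmetic; the 1 MW IT load is a hypothetical figure chosen for illustration, not a TSUBAME3.0 specification.

    ```c
    /* PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
     * A PUE of 1.033 means cooling and other overhead add only ~3.3% on top of
     * the power delivered to the computers themselves. The 1 MW IT load below is
     * a hypothetical value for illustration, not a TSUBAME3.0 specification. */
    #include <stdio.h>

    int main(void) {
        const double pue = 1.033;        /* value quoted for TSUBAME3.0 */
        const double it_power_mw = 1.0;  /* hypothetical IT load, megawatts */

        double total_mw    = pue * it_power_mw;
        double overhead_mw = total_mw - it_power_mw;
        printf("Total facility power: %.3f MW (%.1f%% overhead for cooling etc.)\n",
               total_mw, 100.0 * overhead_mw / it_power_mw);
        return 0;
    }
    ```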

    The TSUBAME3.0 system uses a total of 540 computation nodes, all of which are ICE® XA computation nodes manufactured by SGI. Each computation node contains two Intel® Xeon® E5-2680 v4 processors, four NVIDIA Tesla P100 GPUs (the NVLink-optimized server version), 256 GiB of main memory, and four Intel® Omni-Path network interface ports. Its storage system consists of a 15.9 PB DataDirect Networks Lustre file system as well as 2 TB of NVMe-compatible high-speed SSD storage on each computation node. The computation nodes and the storage system are connected to a high-speed network through Omni-Path, and to the Internet at a speed of 100 Gbps through SINET5.
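
    As a rough sanity check on the headline performance figures, the published per-device peaks (about 5.3/10.6/21.2 teraflops in double/single/half precision for the NVLink version of the Tesla P100, and roughly 0.54 teraflops double precision per Xeon E5-2680 v4) can be multiplied out over 540 nodes, as in the sketch below. These are approximate vendor peaks, so the estimate only lands near the official 12.15/24.3/47.2-petaflop figures, which may be computed with slightly different per-device numbers.

    ```c
    /* Rough sanity check of the quoted peak-performance figures using public
     * per-device peaks: Tesla P100 (NVLink/SXM2 version) ~5.3 / 10.6 / 21.2 TFLOPS
     * in double / single / half precision, and ~0.54 TFLOPS double precision per
     * Xeon E5-2680 v4 (14 cores x 2.4 GHz x 16 FLOP/cycle). Approximate peaks only. */
    #include <stdio.h>

    int main(void) {
        const int nodes = 540;
        const int gpus_per_node = 4;
        const int cpus_per_node = 2;

        const double gpu_dp = 5.3, gpu_sp = 10.6, gpu_hp = 21.2; /* TFLOPS per P100 */
        const double cpu_dp = 0.5376;                            /* TFLOPS per Xeon, double */

        double dp = nodes * (gpus_per_node * gpu_dp + cpus_per_node * cpu_dp) / 1000.0;
        double sp = nodes * (gpus_per_node * gpu_sp + cpus_per_node * 2.0 * cpu_dp) / 1000.0;
        double hp = nodes * (gpus_per_node * gpu_hp) / 1000.0;   /* GPUs only */

        printf("Estimated peak: %.2f PF double, %.2f PF single, %.2f PF half\n", dp, sp, hp);
        /* prints roughly 12.03 / 24.06 / 45.79 petaflops */
        return 0;
    }
    ```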

    The computation power of TSUBAME3.0 will not be used only for education and cutting-edge research within the university. Importantly, the supercomputer will continue to serve as “supercomputing for everyone,” contributing to the development of cutting-edge science and technology and to increased international competitiveness by providing its services to researchers and companies within and outside the university. Access will be offered through the Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) and the High Performance Computing Infrastructure (HPCI), two leading information bases for Japan’s top universities, as well as through GSIC’s own TSUBAME Joint Usage Service.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    tokyo-tech-campus

    Tokyo Tech is the top national university for science and technology in Japan with a history spanning more than 130 years. Of the approximately 10,000 students at the Ookayama, Suzukakedai, and Tamachi Campuses, half are in their bachelor’s degree program while the other half are in master’s and doctoral degree programs. International students number 1,200. There are 1,200 faculty and 600 administrative and technical staff members.

    In the 21st century, the role of science and technology universities has become increasingly important. Tokyo Tech continues to develop global leaders in the fields of science and technology, and contributes to the betterment of society through its research, focusing on solutions to global issues. The Institute’s long-term goal is to become the world’s leading science and technology university.

     
  • richardmitnick 6:50 pm on December 11, 2016 Permalink | Reply
    Tags: Supercomputing, The right way to simulate the Milky Way

    From Science Node: “The right way to simulate the Milky Way” 

    Science Node bloc
    Science Node

    13 Sep, 2016 [Where oh where has this been?]
    Whitney Clavin

    Astronomers have created the most detailed computer simulation to date of our Milky Way galaxy’s formation, from its inception billions of years ago as a loose assemblage of matter to its present-day state as a massive, spiral disk of stars.

    The simulation solves a decades-old mystery surrounding the tiny galaxies that swarm around the outside of our much larger Milky Way. Previous simulations predicted that thousands of these satellite, or dwarf, galaxies should exist. However, only about 30 of the small galaxies have ever been observed. Astronomers have been tinkering with the simulations, trying to understand this ‘missing satellites’ problem to no avail.


    Access mp4 video here.
    Supercomputers and superstars. Caltech associate professor of theoretical astrophysics Phil Hopkins and Carnegie-Caltech research fellow Andrew Wetzel use XSEDE supercomputers to build the most detailed and realistic simulation of galaxy formation ever created. The results solve a decades-long mystery regarding dwarf galaxies around our Milky Way. Courtesy Caltech.

    Now, with the new simulation — which used resources from the Extreme Science and Engineering Discovery Environment (XSEDE) running in parallel for 700,000 central processing unit (CPU) hours — astronomers at the California Institute of Technology (Caltech) have created a galaxy that looks like the one we live in today, with the correct, smaller number of dwarf galaxies.

    “That was the aha moment, when I saw that the simulation can finally produce a population of dwarf galaxies like the ones we observe around the Milky Way,” says Andrew Wetzel, postdoctoral fellow at Caltech and Carnegie Observatories in Pasadena, and lead author of a paper about the new research, published August 20 in Astrophysical Journal Letters.

    One of the main updates to the new simulation relates to how supernovae, explosions of massive stars, affect their surrounding environments. In particular, the simulation incorporated detailed formulas that describe the dramatic effects that winds from these explosions can have on star-forming material and dwarf galaxies. These winds, which reach speeds up to thousands of kilometers per second, “can blow gas and stars out of a small galaxy,” says Wetzel.

    Indeed, the new simulation showed the winds can blow apart young dwarf galaxies, preventing them from reaching maturity. Previous simulations that were producing thousands of dwarf galaxies weren’t taking the full effects of supernovae into account.

    “We had thought before that perhaps our understanding of dark matter was incorrect in these simulations, but these new results show we don’t have to tinker with dark matter,” says Wetzel. “When we more precisely model supernovae, we get the right answer.”

    Astronomers simulate our galaxy to understand how the Milky Way, and our solar system within it, came to be. To do this, the researchers tell a computer what our universe was like in the early cosmos. They write complex codes for the basic laws of physics and describe the ingredients of the universe, including everyday matter like hydrogen gas as well as dark matter, which, while invisible, exerts gravitational tugs on other matter. The computers then go to work, playing out all the possible interactions between particles, gas, and stars over billions of years.

    “In a galaxy, you have 100 billion stars, all pulling on each other, not to mention other components we don’t see, like dark matter,” says Caltech’s Phil Hopkins, associate professor of theoretical astrophysics and principal scientist for the new research. “To simulate this, we give a supercomputer equations describing those interactions and then let it crank through those equations repeatedly and see what comes out at the end.”
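
    The “crank through those equations repeatedly” step can be illustrated with a toy direct-summation gravity update. This is emphatically not the researchers’ simulation code, which uses far more sophisticated algorithms and also models gas, star formation, and supernova feedback; it is only a minimal sketch of the O(N²) pairwise force loop that makes galaxy simulations so computationally expensive.

    ```c
    /* Toy direct-summation gravity step: every body pulls on every other body,
     * so one update costs O(N^2) force evaluations, and a simulation repeats
     * such updates over billions of years of model time. Sketch only; real
     * galaxy simulations use far more particles, better integrators, and
     * additional physics. Compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    #define N 4                      /* real galaxy models use millions of particles */

    typedef struct { double x[3], v[3], m; } Body;

    static void step(Body b[], double dt) {
        const double G = 6.674e-11;  /* gravitational constant, SI units */
        const double eps = 1.0e7;    /* softening length (metres) to avoid divide-by-zero */
        double acc[N][3] = {{0.0}};

        /* accumulate pairwise gravitational accelerations */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                if (j == i) continue;
                double d[3], r2 = eps * eps;
                for (int k = 0; k < 3; k++) { d[k] = b[j].x[k] - b[i].x[k]; r2 += d[k] * d[k]; }
                double f = G * b[j].m / (r2 * sqrt(r2));
                for (int k = 0; k < 3; k++) acc[i][k] += f * d[k];
            }

        /* advance every body with a simple Euler update */
        for (int i = 0; i < N; i++)
            for (int k = 0; k < 3; k++) {
                b[i].v[k] += acc[i][k] * dt;
                b[i].x[k] += b[i].v[k] * dt;
            }
    }

    int main(void) {
        /* one heavy central mass and three light bodies (illustrative values) */
        Body b[N] = {
            {{0, 0, 0},       {0, 0, 0},      1.0e30},
            {{1.5e11, 0, 0},  {0, 3.0e4, 0},  1.0e24},
            {{0, 2.0e11, 0},  {-2.0e4, 0, 0}, 1.0e24},
            {{-3.0e11, 0, 0}, {0, -1.5e4, 0}, 1.0e24},
        };
        for (int s = 0; s < 1000; s++)
            step(b, 3600.0);         /* 1000 one-hour steps, about 42 days */
        printf("body 1 after ~42 days: (%.3e, %.3e, %.3e) m\n",
               b[1].x[0], b[1].x[1], b[1].x[2]);
        return 0;
    }
    ```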

    The researchers are not done simulating our Milky Way. They plan to use even more computing time, up to 20 million CPU hours, in their next rounds. This should lead to predictions about the very faintest and smallest of dwarf galaxies yet to be discovered. Not a lot of these faint galaxies are expected to exist, but the more advanced simulations should be able to predict how many are left to find.

    The study was funded by Caltech, a Sloan Research Fellowship, the US National Science Foundation (NSF), NASA, an Einstein Postdoctoral Fellowship, the Space Telescope Science Institute, UC San Diego, and the Simons Foundation.

    Other coauthors on the study are: Ji-Hoon Kim of Stanford University, Claude-André Faucher-Giguére of Northwestern University, Dušan Kereš of UC San Diego, and Eliot Quataert of UC Berkeley.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:06 am on November 29, 2016 Permalink | Reply
    Tags: Supercomputing

    From INVERSE: “Japan Reveals Plan to Build the World’s Fastest Supercomputer” 

    INVERSE


    November 25, 2016
    Mike Brown

    Japan is about to try to build the fastest computer the world has ever known. The Japanese Ministry of Economy, Trade and Industry has decided to spend 19.5 billion yen ($173 million) on creating the fastest supercomputer known to the public. The machine will be used to propel Japan into a new era of technological advancement, aiding research into autonomous cars, renewable energy, robots and artificial intelligence (A.I.).

    “As far as we know, there is nothing out there that is as fast,” Satoshi Sekiguchi, director general at Japan’s ‎National Institute of Advanced Industrial Science and Technology, said in a report published Friday.

    The computer is currently called ABCI, which stands for A.I. Bridging Cloud Infrastructure. Companies have already begun bidding for the project, with bidding set to close December 8. The machine is targeted at achieving 130 petaflops, or 130 quadrillion calculations per second.

    Private companies will be able to tap into ABCI’s power for a fee. The machine is aimed at helping develop deep learning applications, which will be vital for future A.I. advancements. One area where deep learning will be crucial is autonomous vehicles, as systems will be able to analyze the real-world data collected from a car’s sensors to improve its ability to avoid collisions.

    The move follows plans revealed in September for Japan to lead the way in self-driving map technology. The country is aiming to secure its position as a world leader in technological innovation, and the map project aims to set the global standard for autonomous vehicle road maps by getting a head start on data collection. Self-driving cars will need 3D maps to accurately interpret sensor input data and understand their current position.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 3:28 pm on November 23, 2016 Permalink | Reply
    Tags: Computerworld, Supercomputing

    From ALCF via Computerworld: “U.S. sets plan to build two exascale supercomputers” 

    Argonne Lab
    News from Argonne National Laboratory

    ANL Cray Aurora supercomputer
    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF


    COMPUTERWORLD

    Nov 21, 2016
    Patrick Thibodeau

    ARM

    The U.S. believes it will be ready to seek vendor proposals to build two exascale supercomputers — costing roughly $200 million to $300 million each — by 2019.

    The two systems will be built at the same time and will be ready for use by 2023, although it’s possible one of the systems could be ready a year earlier, according to U.S. Department of Energy officials.

    But the scientists and vendors developing exascale systems do not yet know whether President-Elect Donald Trump’s administration will change directions. The incoming administration is a wild card. Supercomputing wasn’t a topic during the campaign, and Trump’s dismissal of climate change as a hoax, in particular, has researchers nervous that science funding may suffer.

    At the annual supercomputing conference SC16 last week in Salt Lake City, a panel of government scientists outlined the exascale strategy developed by President Barack Obama’s administration. When the session was opened to questions, the first two were about Trump. One attendee quipped that “pointed-head geeks are not going to be well appreciated.”

    Another person in the audience, John Sopka, a high-performance computing software consultant, asked how the science community will defend itself from claims that “you are taking the money from the people and spending it on dreams,” referring to exascale systems.

    Paul Messina, a computer scientist and distinguished fellow at Argonne National Laboratory who heads the Exascale Computing Project, appeared sanguine. “We believe that an important goal of the exascale computing project is to help economic competitiveness and economic security,” said Messina. “I could imagine that the administration would think that those are important things.”

    Politically, there ought to be a lot in HPC’s favor. A broad array of industries rely on government supercomputers to conduct scientific research, improve products, attack disease, create new energy systems and understand climate, among many other fields. Defense and intelligence agencies also rely on large systems.

    The ongoing exascale research funding (the U.S. budget is $150 million this year) will help with advances in software, memory, processors and other technologies that ultimately filter out to the broader commercial market.

    This is very much a global race, which is something the Trump administration will have to be mindful of. China, Europe and Japan are all developing exascale systems.

    China plans to have an exascale system ready by 2020. These nations see exascale — and the computing advances required to achieve it — as a pathway to challenging America’s tech dominance.

    “I’m not losing sleep over it yet,” said Messina, of the possibility that the incoming Trump administration may have different supercomputing priorities. “Maybe I will in January.”

    The U.S. will award the exascale contracts to vendors with two different architectures. This is not a new approach and is intended to help keep competition at the highest end of the market. Recent supercomputer procurements include systems built on the IBM Power architecture, Nvidia’s Volta GPU and Cray-built systems using Intel chips.

    The timing of these exascale systems — ready for 2023 — is also designed to take advantage of the upgrade cycles at the national labs. The large systems that will be installed in the next several years will be ready for replacement by the time exascale systems arrive.

    The last big performance milestone in supercomputing occurred in 2008 with the development of a petaflop system. An exaflop is 1,000 petaflops, and building an exascale system is challenging because of the limits of Moore’s Law, a 1960s-era observation that the number of transistors on a chip doubles about every two years.

    “Now we’re at the point where Moore’s Law is just about to end,” said Messina in an interview. That means the key to building something faster “is by having much more parallelism, and many more pieces. That’s how you get the extra speed.”

    An exascale system will solve a problem 50 times faster than the 20-petaflop systems in use in government labs today.
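
    Both the factor of 50 and the scale of parallelism follow directly from the definitions (1 exaflop = 1,000 petaflops). The sketch below works through that arithmetic; the assumed per-core rate is purely illustrative, not a DOE design figure.

    ```c
    /* The speed-up factor and the scale of parallelism follow from the
     * definitions: 1 exaflop = 1,000 petaflops. The per-core rate below is an
     * illustrative assumption (tens of gigaflops per core), not a DOE figure. */
    #include <stdio.h>

    int main(void) {
        const double exaflop_pf  = 1000.0;  /* one exaflop, in petaflops */
        const double current_pf  = 20.0;    /* today's ~20 PF lab systems */
        const double core_gflops = 20.0;    /* assumed sustained rate per core */

        printf("Speed-up over a 20 PF system: %.0fx\n", exaflop_pf / current_pf);

        double cores_millions = (exaflop_pf * 1.0e6 / core_gflops) / 1.0e6;
        printf("Cores needed at %.0f GF/core: about %.0f million\n",
               core_gflops, cores_millions);   /* ~50 million */
        return 0;
    }
    ```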

    Development work has begun on the systems and applications that can utilize hundreds of millions of simultaneous parallel events. “How do you manage it — how do you get it all to work smoothly?” said Messina.

    Another major problem is energy consumption. An exascale machine can be built today using current technology, but such a system would likely need its own power plant. The U.S. wants an exascale system that can operate on 20 megawatts and certainly no more than 30 megawatts.

    Scientists will have to come up with a way “to vastly reduce the amount of energy it takes to do a calculation,” said Messina. The applications and software development are critical because most of the energy is used to move data. And new algorithms will be needed.

    About 500 people are working at universities and national labs on the DOE’s coordinated effort to develop the software and other technologies exascale will need.

    Aside from the cost of building the systems, the U.S. will spend millions funding the preliminary work. Vendors want to maintain the intellectual property of what they develop. If it cost, for instance, $50 million to develop a certain aspect of a system, the U.S. may ask the vendor to pay 40% of that cost if they want to keep the intellectual property.

    A key goal of the U.S. research funding is to avoid creation of one-off technologies that can only be used in these particular exascale systems.

    “We have to be careful,” Terri Quinn, a deputy associate director for HPC at Lawrence Livermore National Laboratory, said at the SC16 panel session. “We don’t want them (vendors) to give us capabilities that are not sustainable in a business market.”

    The work under way will help ensure that the technology research is far enough along to enable the vendors to respond to the 2019 request for proposals.

    Supercomputers can deliver advances in modeling and simulation. Instead of building physical prototypes of something, a supercomputer can allow modeling virtually. This can speed the time it takes something to get to market, whether a new drug or car engine. Increasingly, HPC is used in big data and is helping improve cybersecurity through rapid analysis; artificial intelligence and robotics are other fields with strong HPC demand.

    China will likely beat the U.S. in developing an exascale system, but the real test will be their usefulness.

    Messina said the U.S. approach is to develop an exascale ecosystem involving vendors, universities and the government. The hope is that the exascale systems will not only have a wide range of applications ready for them, but applications that are relatively easy to program. Messina wants to see these systems put to immediate and broad use.

    “Economic competitiveness does matter to a lot of people,” said Messina.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 7:17 am on November 19, 2016 Permalink | Reply
    Tags: HPE Annie, Supercomputing

    From BNL: “Brookhaven Lab Advances its Computational Science and Data Analysis Capabilities” 

    Brookhaven Lab

    November 18, 2016
    Ariana Tantillo
    atantillo@bnl.gov

    Using leading-edge computer systems and participating in computing standardization groups, Brookhaven will enhance its ability to support data-driven scientific discoveries

    Members of the commissioning team—(from left to right) Imran Latif, David Free, Mark Lukasczyk, Shigeki Misawa, Tejas Rao, Frank Burstein, and Costin Caramarcu—in front of the newly installed institutional computing cluster at Brookhaven Lab’s Scientific Data and Computing Center.

    At the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, scientists are producing vast amounts of scientific data. To rapidly process and interpret these data, scientists require advanced computing capabilities—programming tools, numerical models, data-mining algorithms—as well as a state-of-the-art data, computing, and networking infrastructure.

    History of scientific computing at Brookhaven

    Brookhaven Lab has a long-standing history of providing computing resources for large-scale scientific programs. For more than a decade, scientists have been using data analytics capabilities to interpret results from the STAR and PHENIX experiments at the Relativistic Heavy Ion Collider (RHIC), a DOE Office of Science User Facility at Brookhaven, and the ATLAS experiment at the Large Hadron Collider (LHC) in Europe.

    BNL RHIC Campus

    Brookhaven STAR

    Brookhaven Phenix

    CERN/ATLAS detector

    Millions of particle collisions each second at RHIC, and billions at the LHC, have produced hundreds of petabytes of data—one petabyte is equivalent to approximately 13 years of HDTV video, or nearly 60,000 movies—about the collision events and the emergent particles. More than 50,000 computing cores, 250 computer racks, and 85,000 magnetic storage tapes store, process, and distribute these data, which help scientists understand the basic forces that shaped the early universe. Brookhaven’s tape archive for storing data files is the largest in the United States and the fourth largest worldwide. As the U.S. ATLAS Tier 1 computing center—the largest one worldwide—Brookhaven provides about 25 percent of the total computing and storage capacity for LHC’s ATLAS experiment, receiving and delivering approximately two hundred terabytes of data (picture 62 million photos) to more than 100 data centers around the world each day.
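
    Converting the quoted two hundred terabytes per day into an average line rate gives a feel for the sustained network load. The sketch below assumes decimal units (1 TB = 10^12 bytes) and averages over a full day, so instantaneous peaks would be higher.

    ```c
    /* Converts the quoted ~200 TB/day of ATLAS Tier 1 traffic into an average
     * line rate, assuming decimal units (1 TB = 10^12 bytes). This is an
     * average over the day; instantaneous peaks will be higher. */
    #include <stdio.h>

    int main(void) {
        const double tb_per_day = 200.0;
        double bits_per_day = tb_per_day * 1.0e12 * 8.0;      /* terabytes -> bits */
        double gbps = bits_per_day / (24.0 * 3600.0) / 1.0e9; /* average gigabits/s */
        printf("~%.1f Gb/s sustained around the clock\n", gbps);  /* ~18.5 Gb/s */
        return 0;
    }
    ```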

    Physicist Srinivasan Rajagopalan with hardware located at Brookhaven Lab that is used to support the ATLAS particle physics experiment at the Large Hadron Collider of CERN, the European Organization for Nuclear Research.

    “This capability to deliver large amounts of computational power makes Brookhaven one of the largest high-throughput computing resources in the country,” said Kerstin Kleese van Dam, director of Brookhaven’s Computational Science Initiative (CSI), which was launched in 2014 to consolidate the lab’s data-centric activities under one umbrella.

    Brookhaven Lab has a more recent history in operating high-performance computing clusters specifically designed for applications involving heavy numerical calculations. In particular, Brookhaven has been home to some of the most powerful supercomputers, including three generations of supercomputers from the New York State–funded IBM Blue Gene series. One generation, New York Blue/L, debuted at number five on the June 2007 top 500 list of the world’s fastest computers. With these high-performance computers, scientists have made calculations critical to research in biology, medicine, materials science, nanoscience, and climate science.

    New York Blue/L is a massively parallel supercomputer that Brookhaven Lab acquired in 2007. At the time, it was the fifth most powerful supercomputer in the world. It was decommissioned in 2014.

    In addition to having supported high-throughput and high-performance computing, Brookhaven has hosted cloud-based computing services for smaller applications, such as analyzing data from cryo-electron microscopy studies of proteins.

    Advanced tools for solving large, complex scientific problems

    Brookhaven is now revitalizing its capabilities for computational science and data analysis so that scientists can more effectively and efficiently solve scientific problems.

    “All of the experiments going on at Brookhaven’s facilities have undergone a technological revolution to some extent,” explained Kleese van Dam. This is especially true with the state-of-the-art National Synchrotron Light Source II (NSLS-II) and the Center for Functional Nanomaterials (CFN)—both DOE Office of Science User Facilities at Brookhaven—each of which continues to attract more users and experiments.

    BNL NSLS-II Interior
    BNL NSLS-II

    BNL Center for Functional Nanomaterials interior
    BNL Center for Functional Nanomaterials

    “The scientists are detecting more things at faster rates and in greater detail. As a result, we have data rates that are so big that no human could possibly make sense of all the generated data—unless they had something like 500 years to do so!” Kleese van Dam continued.

    In addition to analyzing data from experimental user facilities such as NSLS-II, CFN, RHIC, and ATLAS, scientists run numerical models and computer simulations of the behavior of complex systems. For example, they use models based on quantum chromodynamics theory to predict how elementary particles called quarks and gluons interact. They can then compare these predictions with the interactions experimentally studied at RHIC, when the particles are released after ions are smashed together at nearly the speed of light. Other models include those for materials science—to study materials with unique properties such as strong electron interactions that lead to superconductivity, magnetic ordering, and other phenomena—and for chemistry—to calculate the structures and properties of catalysts and other molecules involved in chemical processes.

    To support these computationally intensive tasks, Brookhaven recently installed at its Scientific Data and Computing Center a new institutional computing system from Hewlett Packard Enterprise (HPE). This institutional cluster—a set of computers that work together as a single integrated computing resource—initially consists of more than 100 compute nodes with processing, storage, and networking capabilities.

    BNL Annie HPE supercomputer

    Each node includes both central processing units (CPUs)—the general-purpose processors commonly referred to as the “brains” of the computer—and graphics processing units (GPUs)—processors that are optimized to perform specific calculations. The nodes have error-correcting code memory, a type of data storage that detects and corrects memory corruption caused, for example, by voltage fluctuations on the computer’s motherboard. This error-correcting capability is critical to ensuring the reliability of Brookhaven’s scientific data, which are stored in different places on multiple hard disks so that reading and writing of the data can be done more efficiently and securely. “File system” software separates the data into groups called “files” that are named so they can be easily found, similar to how paper documents are sorted and put into labeled file folders. Communication between nodes is enabled by a network that can send and receive data at 100 gigabytes per second—a data rate fast enough to copy a Blu-ray disc in mere seconds—with latencies of less than a microsecond between transferred data units.

    The institutional computing cluster will support a range of high-profile projects, including near-real-time data analysis at the CFN and NSLS-II. This analysis will help scientists understand the structures of biological proteins, the real-time operation of batteries, and other complex problems.

    This cluster will also be used for exascale numerical model development efforts, such as for the new Center for Computational Design of Strongly Correlated Materials and Theoretical Spectroscopy. Led by Brookhaven Lab and Rutgers University with partners from the University of Tennessee and DOE’s Ames Laboratory, this center is developing next-generation methods and software to accurately describe electronic correlations in high-temperature superconductors and other complex materials and a companion database to predict targeted properties with energy-related application to thermoelectric materials. Brookhaven scientists collaborating on two exascale computing application projects that were recently awarded full funding by DOE—“NWChemEx: Tackling Chemical, Materials and Biomolecular Challenges in the Exascale Era” and “Exascale Lattice Gauge Theory Opportunities and Requirements for Nuclear and High Energy Physics”—will also access the institutional cluster.

    “As the complexity of a material increases, more computer resources are required to efficiently perform quantum mechanical calculations that help us understand the material’s properties. For example, we may want to sort through all the different ways that lithium ions can enter a battery electrode, determining the capacity of the resulting battery and the voltage it can support,” explained Mark Hybertsen, leader of the CFN’s Theory and Computation Group. “With the computing capacity of the new cluster, we’ll be able to conduct in-depth research using complicated models of structures involving more than 100 atoms to understand how catalyzed reactions and battery electrodes work.”

    This figure shows the computer-assisted catalyst design for the oxygen reduction reaction (ORR), which is one of the key challenges to advancing the application of fuel cells in clean transportation. Theoretical calculations based on nanoparticle models provide a way to not only speed up this reaction on conventional platinum (Pt) catalysts and enhance their durability, but also to lower the cost of fuel cell production by alloying (combining) Pt catalysts with the less expensive elements nickel (Ni) and gold (Au).

    In the coming year, Brookhaven plans to upgrade the institutional cluster to 200 nodes, which will subsequently be expanded to 300 nodes in the long term.

    “With these additional nodes, we’ll be able to serve a wider user community. Users don’t get just a share of the system; they get to use the whole system at a given time so that they can address very large scientific problems,” Kleese van Dam said.

    Data-driven scientific discovery

    Brookhaven is also building a novel computer architecture test bed for the data analytics community. Using this test bed, CSI scientists will explore different hardware and software, determining which are most important to enabling data-driven scientific discovery.

    Scientists are initially exploring a newly installed Koi Computers system that comprises more than 100 Intel parallel processors for high-performance computing. This system makes use of solid-state drives—storage devices that function like a flash drive or memory stick but reside inside the computer. Unlike traditional hard drives, solid-state drives do not consecutively read and write information by moving a tiny magnet with a motor; instead, the data are directly stored on electronic memory chips. As a result, solid-state drives consume far less power and thus would enable Brookhaven scientists to run computations much more efficiently and cost-effectively.

    “We are enthusiastic to operate these ultimate hardware technologies at the SDCC [Scientific Data and Computing Center] for the benefit of Brookhaven research programs,” said SDCC Director Eric Lançon.

    Next, the team plans to explore architectures and accelerators that could be of particular value to data-intensive applications.

    For data-intensive applications, such as analyzing experimental results using machine learning, the system’s memory input/output (I/O) rate and compute power are important. “A lot of systems have very slow I/O rates, so it’s very time consuming to get the data, only a small portion of which can be worked on at a time,” Kleese van Dam explained. “Right now, scientists collect data during their experiment and take the data home on a hard drive to analyze. In the future, we’d like to provide near-real-time data analysis, which would enable the scientists to optimize their experiments as they are doing them.”

    Participation in computing standards groups

    In conjunction with updating Brookhaven’s high-performance computing infrastructure, CSI scientists are becoming more involved in standardization groups for leading parallel programming models.

    “By participating in these groups, we’re making sure the high-performance computing standards are highly supportive of our scientists’ requirements for their experiments,” said Kleese van Dam.

    Currently, CSI scientists are applying to join the OpenMP (Open Multi-Processing) Architecture Review Board. This nonprofit technology consortium manages the OpenMP application programming interface (API) specification for parallel programming on shared-memory systems—those in which individual processes can communicate and share data by using a common memory.
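
    For readers unfamiliar with OpenMP, the sketch below shows the kind of shared-memory parallel loop the specification governs: threads within one process share the arrays and divide the loop iterations among themselves. It is a generic illustration, not Brookhaven code.

    ```c
    /* Minimal illustration of the shared-memory model the OpenMP specification
     * covers: threads in one process share the arrays and split the loop among
     * themselves. Generic example, not Brookhaven code. Compile with OpenMP
     * enabled, e.g. "gcc -fopenmp saxpy_omp.c". */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const int n = 1 << 20;
        double *x = malloc(n * sizeof *x), *y = malloc(n * sizeof *y);
        for (int i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }

        /* the directive asks the runtime to divide the iterations across threads */
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            y[i] = 3.0 * x[i] + y[i];

        printf("y[0] = %.1f\n", y[0]);   /* 5.0 */
        free(x); free(y);
        return 0;
    }
    ```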

    In June 2016, Brookhaven became a member of the OpenACC (short for Open Accelerators) consortium. As part of this community of more than 20 research institutions, supercomputing centers, and technology developers, Brookhaven will help determine the future direction of the OpenACC programming standard for parallel computing.

    (From left to right) Robert Riccobono, Nicholas D’Imperio, and Rafael Perez with a NVIDIA Tesla graphics processing unit (GPU) and a Hewlett Packard compute node, where the GPU resides. No image credit.

    The OpenACC API simplifies the programming of computer systems that combine traditional core processors with accelerator devices, such as GPUs and co-processors. The standard describes a set of instructions that identify computationally intensive parts of the code in common programming languages to be offloaded from a primary host processor to an accelerator. Distributing the processing load enables more efficient computations—like the many that Brookhaven scientists require to analyze the data they generate at RHIC, NSLS-II, and CFN.
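
    The same kind of loop can be written with an OpenACC directive, which is the “identify the computationally intensive part and offload it” pattern described above. Again, this is a generic illustration rather than code from the new cluster, and the compiler invocation mentioned in the comment is just one possibility.

    ```c
    /* The same loop expressed with OpenACC: the directive marks the
     * computationally intensive region so the compiler can offload it to an
     * accelerator such as a GPU. Generic illustration, not code from the
     * Brookhaven cluster. Build with an OpenACC-capable compiler, e.g.
     * "nvc -acc saxpy_acc.c" (it falls back to the host if no GPU is present). */
    #include <stdio.h>

    #define N (1 << 20)

    int main(void) {
        static double x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

        /* copy x in, copy y in and back out, and run the loop on the accelerator */
        #pragma acc parallel loop copyin(x) copy(y)
        for (int i = 0; i < N; i++)
            y[i] = 3.0 * x[i] + y[i];

        printf("y[0] = %.1f\n", y[0]);   /* 5.0 */
        return 0;
    }
    ```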

    “From imaging the complex structures of biological proteins at NSLS-II, to capturing the real-time operation of batteries at CFN, to recreating exotic states of matter at RHIC, scientists are rapidly producing very large and varied datasets,” said Kleese van Dam. “The scientists need sufficient computational resources for interpreting these data and extracting key information to make scientific discoveries that could lead to the next pharmaceutical drug, longer-lasting battery, or discovery in physics.”

    As an OpenACC member, Brookhaven Lab will help implement features of the latest C++ programming language standard in OpenACC software. This effort will directly support the new institutional cluster that Brookhaven purchased from HPE.

    “When programming advanced computer systems such as the institutional cluster, scientific software developers face several challenges, one of which is transferring data resident in main system memory to local memory resident on an accelerator such as a GPU,” said computer scientist Nicholas D’Imperio, chair of Brookhaven’s Computational Science Laboratory for advanced algorithm development and optimization. “By contributing to the capabilities of OpenACC, we hope to reduce the complexity inherent in such challenges and enable programming at a higher level of abstraction.”
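    As a rough illustration of the data-movement problem D’Imperio describes, the sketch below uses an OpenACC structured data region so the arrays are transferred to the accelerator once and reused by two kernels, avoiding a round trip through main system memory in between. The arrays and kernels are hypothetical, chosen only to show the directive-level abstraction.

    // Illustrative OpenACC data-region sketch (not Brookhaven code).
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1'000'000;
        std::vector<float> u(n, 1.0f), v(n, 0.0f);
        float* up = u.data();
        float* vp = v.data();

        // The data region copies u to the accelerator once and keeps v resident
        // across both kernels; results come back to the host at the closing brace.
        #pragma acc data copyin(up[0:n]) copy(vp[0:n])
        {
            #pragma acc parallel loop present(up[0:n], vp[0:n])
            for (int i = 0; i < n; ++i) vp[i] += 2.0f * up[i];

            #pragma acc parallel loop present(up[0:n], vp[0:n])
            for (int i = 0; i < n; ++i) vp[i] *= 0.5f;
        }

        std::printf("v[0] = %.1f\n", vp[0]);
    }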

    A centralized Computational Science Initiative

    These standards development efforts and technology upgrades come at a time when CSI is bringing all of its computer science and applied mathematics research under one roof. In October 2016, CSI staff moved into a building that will accommodate a rapidly growing team and will include collaborative spaces where Brookhaven Lab scientists and facility users can work with CSI experts. The building comprises approximately 60,000 square feet of open space that Brookhaven Lab, with the support of DOE, will develop into a new data center to house its growing data, computing, and networking infrastructure.

    “We look forward to the data-driven scientific discoveries that will come from these collaborations and the use of the new computing technology,” said Kleese van Dam.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 11:48 pm on November 18, 2016 Permalink | Reply
    Tags: , , , Supercomputing   

    From CSIRO: “CSIRO seeks new petaflop computer to power research innovation” 

    CSIRO bloc

    Commonwealth Scientific and Industrial Research Organisation

    14 November 2016
    Andrew Warren

    CSIRO has commenced the search for its next-generation high-performance computer to replace the current Bragg accelerator cluster.

    1
    Bragg accelerator cluster, built by Xenon Systems of Melbourne and located at a data centre in Canberra, Australia.

    In addition to being a key partner and investor in Australia’s national peak computing facilities, the science agency is a global leader in scientific computing and an early pioneer of graphics processing unit-accelerated computing in Australia.

    Bragg debuted on the Top500 list of the world’s supercomputers in 2012 at number 156, and in 2014 it reached number 7 on the Green500, a ranking of the world’s supercomputers by energy efficiency (performance per watt).

    Bragg’s replacement will be capable of ‘petaflop’ speeds, significantly exceeding the existing computer’s performance.

    It will boost CSIRO’s already impressive high-performance computing (HPC) capability and is expected to rank highly on the Green500.

    CSIRO’s acting Deputy Chief Information Officer for Scientific Computing, Angus Macoustra, said this replacement computer would be essential to maintaining CSIRO’s ability to solve many of the most important emerging science problems.

    “It’s an integral part of our strategy working alongside national peak computing facilities to build Australian HPC capacity to accelerate great science and innovation,” Mr Macoustra said.

    The cluster will power a new generation of ground-breaking scientific research, including data analysis, modelling, and simulation in a variety of science domains, such as biophysics, material science, molecular modelling, marine science, geochemical modelling, computational fluid dynamics, and more recently, artificial intelligence and data analytics using deep learning.

    The tender for the new machine calls for a ‘heterogeneous’ system that combines traditional central processing units with coprocessors to boost both the machine’s performance and its energy efficiency.

    The successful bidder will be asked to deliver and support the system for three years within a $4m proposed budget.

    The tender process is currently open for submission at AusTender, and will close on Monday 19 December 2016.

    The winning system is expected to be up and running during the first half of 2017.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    CSIRO campus

    CSIRO, the Commonwealth Scientific and Industrial Research Organisation, is Australia’s national science agency and one of the largest and most diverse research agencies in the world.

     
  • richardmitnick 11:58 am on October 12, 2016 Permalink | Reply
    Tags: "Oak Ridge Scientists Are Writing Code That Not Even The World's Fastest Computers Can Run (Yet)", Department of Energy’s Exascale Computing Project, , , Summit supercomputer, Supercomputing   

    From ORNL via Nashville Public Radio: “Oak Ridge Scientists Are Writing Code That Not Even The World’s Fastest Computers Can Run (Yet)” 

    i1

    Oak Ridge National Laboratory

    1

    Nashville Public Radio

    Oct 10, 2016
    Emily Siner

    2
    The current supercomputer at Oak Ridge National Lab, Titan, will be replaced by what could be the fastest computer in the world, Summit — and even that won’t be fast enough for some of the programs being written at the lab. Oak Ridge National Laboratory, U.S. Dept. of Energy

    ORNL IBM Summit supercomputer depiction

    Scientists at Oak Ridge National Laboratory are starting to build applications for a supercomputer that might not go live for another seven years.

    The lab recently received more than $5 million from the Department of Energy to start developing several long-term projects.

    Thomas Evans’s research is among those funded, and it’s a daunting task: his team is trying to predict how small sections of particles inside a nuclear reactor will behave over a long period of time.

    The more precisely they can simulate nuclear reactors on a computer, the better engineers can build them in real life.

    “Analysts can use that [data] to design facilities, experiments and working engineering platforms,” Evans says.

    But these very elaborate simulations that Evans is creating take so much computing power that they cannot run on Oak Ridge’s current supercomputer, Titan — nor will they be able to run on the lab’s new supercomputer, Summit, which could be the fastest in the world when it goes live in two years.

    So Evans is thinking ahead, he says, “to ultimately harness the power of the next generation — technically two generations from now — of supercomputing.

    “And of course, the challenge is, that machine doesn’t exist yet.”

    The current estimate is that this exascale computer, as it’s called, will be several times faster than Summit and go live around 2023. And it could very well take that long for Evans’s team to write code for it.

    The machine won’t just be faster, Evans says. It’s also going to work in a totally new way, which changes how applications are written.

    “In other words, I can’t take a simulation code that we’ve been using now and just drop it in the new machine and expect it to work,” he says.
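    One widely used way to prepare code for hardware that does not yet exist is to write kernels against a performance-portability layer and let the backend chosen at compile time decide how the work maps onto whatever processors the future machine provides. The sketch below uses the Kokkos C++ library as an example of that style; the article does not say which approach Evans’s team has adopted, so this illustrates the general idea rather than their code.

    // Illustrative performance-portability sketch using Kokkos.
    #include <Kokkos_Core.hpp>
    #include <cstdio>

    int main(int argc, char* argv[]) {
        Kokkos::initialize(argc, argv);
        {
            const int n = 1'000'000;
            Kokkos::View<double*> x("x", n);     // allocated in the default memory space

            // The lambda body is the architecture-independent part; the backend
            // selected at build time (CPU threads, GPU, etc.) runs the iterations.
            Kokkos::parallel_for("fill", n, KOKKOS_LAMBDA(const int i) {
                x(i) = 0.5 * i;
            });

            double sum = 0.0;
            Kokkos::parallel_reduce("sum", n,
                KOKKOS_LAMBDA(const int i, double& partial) { partial += x(i); },
                sum);

            std::printf("sum = %.1f\n", sum);
        }
        Kokkos::finalize();
    }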

    The computer will not necessarily be housed at Oak Ridge, but Tennessee researchers are playing a major role in the Department of Energy’s Exascale Computing Project. In addition to Evans’s nuclear reactor project, scientists at Oak Ridge will be leading the development of two other applications, including one that will simulate complex 3D printing. They’ll also assist in developing nine other projects.

    Doug Kothe, who leads the lab’s exascale application development, says the goal is not just to think ahead to 2023. The code that the researchers write should be able to run on any supercomputer built in the next several decades, he says.

    Despite the difficulty, working on incredibly fast computers is also an exciting prospect, Kothe says.

    “For a lot of very inquisitive scientists who love challenges, it’s just a way cool toy that you can’t resist.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     