Tagged: Supercomputing

  • richardmitnick 9:54 am on April 23, 2017 Permalink | Reply
    Tags: Computer modelling, Modified Newtonian Dynamics (MOND), Simulating galaxies, Supercomputing

    From Durham: “Simulated galaxies provide fresh evidence of dark matter” 

    Durham U bloc

    Durham University

    21 April 2017
    No writer credit

    A simulated galaxy is pictured, showing the main ingredients that make up a galaxy: the stars (blue), the gas from which the stars are born (red), and the dark matter halo that surrounds the galaxy (light grey). No image credit.

    Further evidence of the existence of dark matter – the mysterious substance that is believed to hold the Universe together – has been produced by cosmologists at Durham University.

    Using sophisticated computer modelling techniques, the research team simulated the formation of galaxies in the presence of dark matter and were able to demonstrate that their size and rotation speed were linked to their brightness in a similar way to observations made by astronomers.
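
    The link the team recovered between rotation speed and brightness is the kind of relation astronomers measure as the Tully-Fisher relation (luminosity scaling roughly as the fourth power of the rotation speed). A minimal sketch of such a check, using invented galaxy data rather than the Durham simulation outputs, might look like this:

        # Minimal sketch (not the Durham pipeline): fit a Tully-Fisher-like power law,
        # L ~ v^alpha, to a set of simulated galaxies. The galaxy data below are
        # invented for illustration only.
        import numpy as np

        rng = np.random.default_rng(0)
        v_rot = rng.uniform(50.0, 300.0, 200)                      # rotation speeds in km/s (hypothetical)
        lum = 1e8 * (v_rot / 100.0) ** 4 * rng.lognormal(0.0, 0.2, v_rot.size)  # luminosities with scatter

        # Fit log10(L) = alpha * log10(v) + const and report the slope
        alpha, const = np.polyfit(np.log10(v_rot), np.log10(lum), 1)
        print(f"fitted slope alpha = {alpha:.2f} (observed Tully-Fisher slopes are roughly 3-4)")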

    One of the simulations is pictured, showing the main ingredients that make up a galaxy: the stars (blue), the gas from which the stars are born (red), and the dark matter halo that surrounds the galaxy (light grey).

    Alternative theories

    Until now, theories of dark matter have predicted a much more complex relationship between the size, mass and brightness (or luminosity) of galaxies than is actually observed, which has led dark matter sceptics to propose alternative theories that seemingly fit better with what we see.

    The research, led by Dr Aaron Ludlow of the Institute for Computational Cosmology, is published in the journal Physical Review Letters.

    Most cosmologists believe that more than 80 per cent of the total mass of the Universe is made up of dark matter – a mysterious particle that has so far not been detected but explains many of the properties of the Universe such as the microwave background measured by the Planck satellite.

    CMB per ESA/Planck

    ESA/Planck

    Convincing explanations

    Alternative theories include Modified Newtonian Dynamics, or MOND. While MOND does not explain some observations of the Universe as convincingly as dark matter theory, it had, until now, provided a simpler description of the coupling between brightness and rotation velocity observed in galaxies of all shapes and sizes.

    The Durham team used powerful supercomputers to model the formation of galaxies of various sizes, compressing billions of years of evolution into a few weeks, in order to demonstrate that the existence of dark matter is consistent with the observed relationship between mass, size and luminosity of galaxies.

    Long-standing problem resolved

    Dr Ludlow said: “This solves a long-standing problem that has troubled the dark matter model for over a decade. The dark matter hypothesis remains the main explanation for the source of the gravity that binds galaxies. Although the particles are difficult to detect, physicists must persevere.”

    Durham University collaborated on the project with Leiden University, Netherlands; Liverpool John Moores University, England; and the University of Victoria, Canada. The research was funded by the European Research Council, the Science and Technology Facilities Council, the Netherlands Organisation for Scientific Research, COFUND and The Royal Society.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Durham U campus

    Durham University is distinctive – a residential collegiate university with long traditions and modern values. We seek the highest distinction in research and scholarship and are committed to excellence in all aspects of education and transmission of knowledge. Our research and scholarship affect every continent. We are proud to be an international scholarly community which reflects the ambitions of cultures from around the world. We promote individual participation, providing a rounded education in which students, staff and alumni gain both the academic and the personal skills required to flourish.

     
  • richardmitnick 6:01 am on March 30, 2017 Permalink | Reply
    Tags: CARC - Center for Advanced Research Computing, Supercomputing, U New Mexico

    From LANL: “LANL donation adding to UNM supercomputing power” 

    LANL bloc

    Los Alamos National Laboratory


    University of New Mexico

    A new computing system, Wheeler, to be donated to The University of New Mexico Center for Advanced Research Computing (CARC) by Los Alamos National Laboratory (LANL), will put the “super” in supercomputing.

    The new Cray system is nine times more powerful than the combined computing power of the four machines it is replacing, said CARC interim director Patrick Bridges. At the time of the announcement, the system had yet to be named.

    The machine was acquired from LANL through the National Science Foundation-sponsored PRObE project, which is run by the New Mexico Consortium. The NMC, comprising UNM, New Mexico State, and New Mexico Tech universities, engages universities and industry in scientific research in the nation’s interest and works to increase the role of LANL in science, education, and economic development.

    The new system includes:

    Over 500 nodes, each featuring two quad-core 2.66 GHz Intel Xeon 5550 CPUs and 24 GB of memory
    Over 4,000 cores and 12 terabytes of RAM
    45-50 trillion floating-point operations per second (45-50 teraflops)

    Additional memory, storage, and specialized compute facilities to augment this system are also being planned.
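
    As a rough sanity check, the headline figures in the list above hang together arithmetically. The sketch below assumes 4 double-precision floating-point operations per core per cycle for these Nehalem-era Xeons, a figure not stated in the article:

        # Rough sanity check of the listed specs (node count, cores and clock from the
        # article; the 4 flops/core/cycle figure is an assumption for these Xeons).
        nodes = 500
        cores_per_node = 2 * 4            # two quad-core CPUs per node
        clock_ghz = 2.66
        mem_per_node_gb = 24
        flops_per_core_per_cycle = 4      # assumed double-precision SSE throughput

        total_cores = nodes * cores_per_node
        total_ram_tb = nodes * mem_per_node_gb / 1024
        peak_teraflops = total_cores * clock_ghz * flops_per_core_per_cycle / 1000

        print(f"{total_cores} cores, {total_ram_tb:.1f} TB RAM, ~{peak_teraflops:.0f} TF peak")
        # -> 4000 cores, 11.7 TB RAM, ~43 TF; consistent with 'over 4,000 cores, 12 TB,
        #    45-50 TF' once the nodes beyond the first 500 are counted.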

    “This is roughly 20 percent more powerful than any other remaining system at UNM,” Bridges said. “Not only will the new machine be easier to administer and maintain, but also easier for students, faculty, and staff to use. The machine will provide cutting-edge computation for users and will be the fastest of all the machines.”

    Andree Jacobson, chief information officer of the NMC, says he is pleased that the donation will benefit educational efforts.

    “Through a very successful collaboration between the National Science Foundation, New Mexico Consortium, and the Los Alamos National Laboratory called PRObE, we’ve been able to repurpose this retired machine to significantly improve the research computing environment in New Mexico,” he said. “It is truly wonderful to see old computers get a new life, and also an outstanding opportunity to assist the New Mexico universities.”

    To make space for the new machine, the Metropolis, Pequeña, and Ulam systems at UNM will be phased out over the next couple of months. As they are taken offline, the new machine will be installed and brought online. Users of existing systems and their research will be transitioned to the new machine as part of this process.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Los Alamos National Laboratory’s mission is to solve national security challenges through scientific excellence.

    LANL campus

    Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, The Babcock & Wilcox Company, and URS for the Department of Energy’s National Nuclear Security Administration.

    Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.

    Operated by Los Alamos National Security, LLC for the U.S. Dept. of Energy’s NNSA

    DOE Main

    NNSA

     
  • richardmitnick 1:13 pm on March 29, 2017 Permalink | Reply
    Tags: Supercomputing, What's next for Titan?

    From OLCF via TheNextPlatform: “Scaling Deep Learning on an 18,000 GPU Supercomputer” 


    Oak Ridge National Laboratory

    OLCF


    TheNextPlatform

    March 28, 2017
    Nicole Hemsoth


    ORNL Cray XK7 Titan Supercomputer

    It is one thing to scale a neural network on a single GPU or even a single system with four or eight GPUs. But it is another thing entirely to push it across thousands of nodes. Most centers doing deep learning have relatively small GPU clusters for training and certainly nothing on the order of the Titan supercomputer at Oak Ridge National Laboratory.

    In the past, the emphasis on machine learning scalability has often focused on node counts for single-model runs. This is useful for some applications, but as neural networks become more integrated into existing workflows, including those in HPC, there is another way to consider scalability. Interestingly, the lesson comes from an HPC application area like weather modeling where, instead of one monolithic model to predict climate, an ensemble of forecasts run in parallel on a massive supercomputer is meshed together for the best result. Using this ensemble method on deep neural networks allows for scalability across thousands of nodes, with the end result derived from an average of the ensemble, something that is acceptable in an area that does not require the kind of precision (in more ways than one) that some HPC calculations do.
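
    The ensemble idea described above, many independently trained networks each scoring the same input with the final answer taken as their average, can be sketched in a few lines. The toy “members” below are random stand-ins, not the Caffe networks ORNL actually trains:

        # Toy sketch of the ensemble idea: many independently trained "networks" each
        # score the same input, and the final answer is the average of their outputs.
        import numpy as np

        def make_member(seed):
            rng = np.random.default_rng(seed)
            weights = rng.normal(size=10)
            return lambda x: 1.0 / (1.0 + np.exp(-x @ weights))   # toy classifier score

        ensemble = [make_member(seed) for seed in range(32)]       # e.g. one member per node

        x = np.random.default_rng(123).normal(size=10)             # a single input sample
        scores = [member(x) for member in ensemble]
        print("ensemble-averaged score:", float(np.mean(scores)))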

    This approach has been used on the Titan supercomputer at Oak Ridge, which is a powerhouse for deep learning training given its high GPU counts. Titan’s 18,688 Tesla K20X GPUs have proven useful for a large number of scientific simulations and are now pulling double-duty on deep learning frameworks, including Caffe, to boost the capabilities of HPC simulations (classification, filtering of noise, etc.). The lab’s next-generation supercomputer, the future “Summit” machine (expected to be operational at the end of 2017), will provide even more GPU power with the “Volta” generation Tesla graphics coprocessors from Nvidia, high-bandwidth memory, NVLink for faster data movement, and IBM Power9 CPUs.


    ORNL IBM Summit supercomputer depiction

    ORNL researchers used this ensemble approach to neural networks and were able to stretch them across all of the GPUs in the machine. This is a notable feat, even for the types of large simulations that are built to run on big supercomputers. What is interesting is that while the frameworks might come from the deep learning world (Caffe in ORNL’s case), the node-to-node communication is rooted in HPC. As we have described before, MPI is still the best method out there for fast communication across InfiniBand-connected nodes, and, like researchers elsewhere, ORNL has adapted it to deep learning at scale.

    Right now, the team is using each individual node to train an individual deep learning network, but all of those different networks need to have the same data if training from the same set. The question is how to feed that same data to over 18,000 different GPUs at almost the same time, on a system that wasn’t designed with that in mind. The answer is a custom MPI-based layer that can divvy up the data and distribute it. With the coming Summit supercomputer (the successor to Titan, which will sport six Volta GPUs per node), the other problem is multi-GPU scaling, something application teams across HPC are tackling as well.
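
    The article does not say how ORNL’s custom MPI layer is written, but the general pattern of holding a dataset on one rank, scattering distinct shards to every other rank, and broadcasting shared settings looks roughly like the following sketch (mpi4py and the variable names here are assumptions, not ORNL’s code):

        # Generic MPI data-distribution sketch (mpi4py assumed; not ORNL's actual layer).
        # Rank 0 holds the full dataset, scatters one shard to every rank, and broadcasts
        # the shared training settings. Run with e.g.: mpirun -n 4 python distribute.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:
            dataset = np.arange(size * 1000, dtype=np.float64).reshape(size, 1000)
            settings = {"learning_rate": 0.01, "batch_size": 64}
        else:
            dataset, settings = None, None

        shard = np.empty(1000, dtype=np.float64)
        comm.Scatter(dataset, shard, root=0)           # each rank receives its own data shard
        settings = comm.bcast(settings, root=0)        # every rank gets the same settings

        print(f"rank {rank}: shard mean = {shard.mean():.1f}, lr = {settings['learning_rate']}")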

    Ultimately, the success of MPI for deep learning at such scale will depend on how many messages the system and MPI can handle, since results must be passed between nodes in addition to the thousands of synchronous updates for training iterations. Each iteration will cause a number of neurons within the network to be updated, so if the network is spread across multiple nodes, all of that will have to be communicated. That is a large enough task on its own, but there is also the delay of the data that needs to be transferred to and from disk (although a burst buffer can be of use here). “There are also new ways of looking at MPI’s guarantees for robustness, which limits certain communication patterns. HPC needs this, but neural networks are more fault-tolerant than many HPC applications,” Patton says. “Going forward, the same I/O is being used to communicate between the nodes and from disk, so when the datasets are large enough the bandwidth could quickly dwindle.”

    In addition to their work scaling deep neural networks across Titan, the team has also developed a method of automatically designing neural networks for use across multiple datasets. Before, a network designed for image recognition could not be reused for speech, but their own auto-designing code has scaled beyond 5,000 (single GPU) nodes on Titan with up to 80 percent accuracy.

    “The algorithm is evolutionary, so it can take design parameters of a deep learning network and evolve those automatically,” Robert Patton, a computational analytics scientist at Oak Ridge, tells The Next Platform. “We can take a dataset that no one has looked at before and automatically generate a network that works well on that dataset.”
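
    The article does not publish the evolutionary algorithm itself, but the general shape of such a search, randomly sampling network design parameters, keeping the fittest candidates and mutating them, can be sketched as follows; the fitness function is a toy stand-in for “train the candidate and report its accuracy”:

        # Toy sketch of evolving network design parameters; not ORNL's code. In a real
        # run, fitness() would train the candidate network and return validation accuracy.
        import random

        def random_design():
            return {"layers": random.randint(2, 10),
                    "filters": random.choice([16, 32, 64, 128]),
                    "lr": 10 ** random.uniform(-4, -1)}

        def mutate(design):
            child = dict(design)
            key = random.choice(list(child))
            child[key] = random_design()[key]      # re-sample one design parameter
            return child

        def fitness(design):
            # Stand-in score that happens to peak at 6 layers, 64 filters, lr ~ 0.01
            return -abs(design["layers"] - 6) - abs(design["filters"] - 64) / 64 - abs(design["lr"] - 0.01) * 10

        population = [random_design() for _ in range(20)]
        for generation in range(30):
            population.sort(key=fitness, reverse=True)
            parents = population[:5]                                  # keep the fittest designs
            population = parents + [mutate(random.choice(parents)) for _ in range(15)]

        print("best design found:", max(population, key=fitness))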

    Since developing the auto-generating neural networks, Oak Ridge researchers have been working with key application groups that can benefit from the noise filtering and data classification that large-scale neural nets can provide. These include high-energy particle physics, where they are working with Fermi National Lab to classify neutrinos and subatomic particles. “Simulations produce so much data and it’s too hard to go through it all or even keep it all on disk,” says Patton. “We want to identify things that are interesting in data in real time in a simulation so we can snapshot parts of the data in high resolution and go back later.”

    It is with an eye on “Summit” and the challenges to programming the system that teams at Oak Ridge are swiftly figuring out where deep learning fits into existing HPC workflows and how to maximize the hardware they’ll have on hand.

    “We started taking notice of deep learning in 2012 and things really took off then, in large part because of the move of those algorithms to the GPU, which allowed researchers to speed the development process,” Patton explains. “There has since been a lot of progress made toward tackling some of the hardest problems and by 2014, we started seeing that if one GPU is good for deep learning, what could we do with 18,000 of them on the Titan supercomputer.”

    While large supercomputers like Titan have the hybrid GPU/CPU horsepower for deep learning at scale, they are not built for these kinds of workloads. Some hardware changes in Summit will go a long way toward speeding through some bottlenecks, but the right combination of hardware might include some non-standard accelerators like neuromorphic devices and other chips to bolster training or inference. “Right now, if we were to use machine learning in real-time for HPC applications, we still have the problem of training. We are loading the data from disk and the processing can’t continue until the data comes off disk, so we are excited for Summit, which will give us the ability to get the data off disk faster in the nodes, which will be thicker, denser and have more memory and storage,” Patton says.

    “It takes a lot of computation on expensive HPC systems to find the distinguishing features in all the noise,” says Patton. “The problem is, we are throwing away a lot of good data. For a field like materials science, for instance, it’s not unlikely for them to pitch more than 90 percent of their data because it’s so noisy and they lack the tools to deal with it.” He says this is also why his teams are looking at integrating novel architectures to offload to, including neuromorphic and quantum computers—something we will talk about more later this week in an interview with ORNL collaborator, Thomas Potok.

    [I WANT SOMEONE TO TELL ME WHAT HAPPENS TO TITAN NEXT.]

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid-architecture systems, combining graphics processing units (GPUs) with the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to processing an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 12:31 pm on March 21, 2017 Permalink | Reply
    Tags: Breaking the supermassive black hole speed limit, Supercomputing

    From LANL: “Breaking the supermassive black hole speed limit” 

    LANL bloc

    Los Alamos National Laboratory

    March 21, 2017
    Kevin Roark
    Communications Office
    (505) 665-9202
    knroark@lanl.gov

    Quasar growing under intense accretion streams. No image credit

    A new computer simulation helps explain the existence of puzzling supermassive black holes observed in the early universe. The simulation is based on a computer code used to understand the coupling of radiation and certain materials.

    “Supermassive black holes have a speed limit that governs how fast and how large they can grow,” said Joseph Smidt of the Theoretical Design Division at Los Alamos National Laboratory. “The relatively recent discovery of supermassive black holes in the early development of the universe raised a fundamental question: how did they get so big so fast?”

    Using computer codes developed at Los Alamos for modeling the interaction of matter and radiation related to the Lab’s stockpile stewardship mission, Smidt and colleagues created a simulation of collapsing stars that resulted in supermassive black holes forming in less time than expected, cosmologically speaking, in the first billion years of the universe.

    “It turns out that while supermassive black holes have a growth speed limit, certain types of massive stars do not,” said Smidt. “We asked, what if we could find a place where stars could grow much faster, perhaps to the size of many thousands of suns; could they form supermassive black holes in less time?”

    It turns out the Los Alamos computer model not only confirms the possibility of speedy supermassive black hole formation, but also reproduces many other black hole phenomena that are routinely observed by astrophysicists. The research shows that the simulated supermassive black holes also interact with galaxies in the same way that is observed in nature, including star formation rates, galaxy density profiles, and thermal and ionization rates in gases.

    “This was largely unexpected,” said Smidt. “I thought this idea of growing a massive star in a special configuration and forming a black hole with the right kind of masses was something we could approximate, but to see the black hole inducing star formation and driving the dynamics in ways that we’ve observed in nature was really icing on the cake.”

    A key mission area at Los Alamos National Laboratory is understanding how radiation interacts with certain materials. Because supermassive black holes produce huge quantities of hot radiation, their behavior helps test computer codes designed to model the coupling of radiation and matter. The codes are used, along with large- and small-scale experiments, to assure the safety, security, and effectiveness of the U.S. nuclear deterrent.

    “We’ve gotten to a point at Los Alamos,” said Smidt, “with the computer codes we’re using, the physics understanding, and the supercomputing facilities, that we can do detailed calculations that replicate some of the forces driving the evolution of the Universe.”

    Research paper available at https://arxiv.org/pdf/1703.00449.pdf

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Los Alamos National Laboratory’s mission is to solve national security challenges through scientific excellence.

    LANL campus
    Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, The Babcock & Wilcox Company, and URS for the Department of Energy’s National Nuclear Security Administration.

    Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.

    Operated by Los Alamos National Security, LLC for the U.S. Dept. of Energy’s NNSA

    DOE Main

    NNSA

     
  • richardmitnick 11:02 am on March 14, 2017 Permalink | Reply
    Tags: Supercomputing, Vector boson plus jet event

    From ALCF: “High-precision calculations help reveal the physics of the universe” 

    Argonne Lab
    News from Argonne National Laboratory

    ANL Cray Aurora supercomputer
    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF

    March 9, 2017
    Joan Koka

    With the theoretical framework developed at Argonne, researchers can more precisely predict particle interactions such as this simulation of a vector boson plus jet event. Credit: Taylor Childers, Argonne National Laboratory

    On their quest to uncover what the universe is made of, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are harnessing the power of supercomputers to make predictions about particle interactions that are more precise than ever before.

    Argonne researchers have developed a new theoretical approach, ideally suited for high-performance computing systems, that is capable of making predictive calculations about particle interactions that conform almost exactly to experimental data. This new approach could give scientists a valuable tool for describing new physics and particles beyond those currently identified.

    The framework makes predictions based on the Standard Model, the theory that describes the physics of the universe to the best of our knowledge. Researchers are now able to compare experimental data with predictions generated through this framework, to potentially uncover discrepancies that could indicate the existence of new physics beyond the Standard Model. Such a discovery would revolutionize our understanding of nature at the smallest measurable length scales.

    “So far, the Standard Model of particle physics has been very successful in describing the particle interactions we have seen experimentally, but we know that there are things that this model doesn’t describe completely.


    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    We don’t know the full theory,” said Argonne theorist Radja Boughezal, who developed the framework with her team.

    “The first step in discovering the full theory and new models involves looking for deviations with respect to the physics we know right now. Our hope is that there is deviation, because it would mean that there is something that we don’t understand out there,” she said.

    The theoretical method developed by the Argonne team is currently being deployed on Mira, one of the fastest supercomputers in the world, which is housed at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility.

    Using Mira, researchers are applying the new framework to analyze the production of missing energy in association with a jet, a particle interaction of particular interest to researchers at the Large Hadron Collider (LHC) in Switzerland.




    LHC at CERN

    Physicists at the LHC are attempting to produce new particles that are known to exist in the universe but have yet to be seen in the laboratory, such as the dark matter that comprises a quarter of the mass and energy of the universe.


    Dark matter cosmic web and the large-scale structure it forms. The Millennium Simulation, V. Springel et al.

    Although scientists have no way today of observing dark matter directly — hence its name — they believe that dark matter could leave a “missing energy footprint” in the wake of a collision that could indicate the presence of new particles not included in the Standard Model. These particles would interact very weakly and therefore escape detection at the LHC. The presence of a “jet”, a spray of Standard Model particles arising from the break-up of the protons colliding at the LHC, would tag the presence of the otherwise invisible dark matter.
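
    The “missing energy footprint” mentioned above is measured in practice as missing transverse momentum: the magnitude of the negative vector sum of the transverse momenta of everything the detector does see. A minimal illustration with invented particle momenta:

        # Minimal illustration of "missing energy": the missing transverse momentum is the
        # magnitude of the negative vector sum of the visible particles' transverse momenta.
        # The particle list is invented for illustration.
        import math

        visible = [(45.2, -12.1), (-30.7, 22.5), (10.3, 5.8)]   # (px, py) in GeV, hypothetical

        sum_px = sum(px for px, _ in visible)
        sum_py = sum(py for _, py in visible)
        met = math.hypot(-sum_px, -sum_py)                      # missing transverse energy (MET)

        print(f"MET = {met:.1f} GeV")   # a large value recoiling against a jet could hint at invisible particles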

    In the LHC detectors, however, the production of a particular kind of interaction — called the Z-boson plus jet process — can mimic the same signature as the potential signal that would arise from as-yet-unknown dark matter particles. Boughezal and her colleagues are using their new framework to help LHC physicists distinguish the Z-boson plus jet signature predicted by the Standard Model from other potential signals.

    Previous attempts using less precise calculations to distinguish the two processes had so much uncertainty that they were simply not useful for drawing the fine mathematical distinctions that could potentially identify a new dark matter signal.

    “It is only by calculating the Z-boson plus jet process very precisely that we can determine whether the signature is indeed what the Standard Model predicts, or whether the data indicates the presence of something new,” said Frank Petriello, another Argonne theorist who helped develop the framework. “This new framework opens the door to using Z-boson plus jet production as a tool to discover new particles beyond the Standard Model.”

    Applications for this method go well beyond studies of the Z-boson plus jet. The framework will impact not only research at the LHC, but also studies at future colliders which will have increasingly precise, high-quality data, Boughezal and Petriello said.

    “These experiments have gotten so precise, and experimentalists are now able to measure things so well, that it’s become necessary to have these types of high-precision tools in order to understand what’s going on in these collisions,” Boughezal said.

    “We’re also so lucky to have supercomputers like Mira because now is the moment when we need these powerful machines to achieve the level of precision we’re looking for; without them, this work would not be possible.”

    Funding and resources for this work were previously allocated through the Argonne Leadership Computing Facility’s (ALCF’s) Director’s Discretionary program; the ALCF is supported by the DOE Office of Science’s Advanced Scientific Computing Research program. Support for this work will continue through allocations coming from the Innovation and Novel Computational Impact on Theory and Experiment (INCITE) program.

    The INCITE program promotes transformational advances in science and technology through large allocations of time on state-of-the-art supercomputers.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 1:12 pm on February 26, 2017 Permalink | Reply
    Tags: Supercomputing

    From Nature: “[International] supercomputer, BOINC, needs more people power” 

    Nature Mag
    Nature

    22 February 2017
    Ivy Shih

    Xinhua / Alamy Stock Photo

    A citizen science initiative that encourages public donations of idle computer processing power to run complex calculations is struggling to increase participation.

    Berkeley Open Infrastructure for Network Computing (BOINC), a large grid that harnesses volunteered power for scientific computing, has been running for 15 years to support research projects in medicine, mathematics, climate change, linguistics and astrophysics.


    But, despite strong demand by scientists for supercomputers or computer networks that can rapidly analyse high volumes of data, the volunteer-run BOINC has struggled to maintain and grow its network of users donating their spare computer power. Of its 4 million-plus registered users, only 6% are active, a number that has been falling since 2014.

    “I’m constantly looking for ways to expose sectors of the general population to BOINC and it’s a struggle,” says David Anderson, a co-founder and computer scientist at the University of California Berkeley.

    How many people use BOINC?

    Many more people have registered with BOINC than actually donate their computer power (active users).
    Anderson says BOINC, which is [no longer] funded by the National Science Foundation, currently hosts 56 scientific projects that span an international network of more than 760,000 computers. [Current figures: a 24-hour average of 17.367 petaflops, with 267,932 active volunteers on 680,893 computers.] The platform’s combined processing power simulates a supercomputer whose performance is among the world’s top 10.

    Access to such supercomputers can be expensive and require lengthy waits, so BOINC offers research groups access to processing power at a fraction of the prohibitive cost.

    “A typical BOINC project uses a petaflop of computing — which typically costs maybe USD $100,000 a year. If you were to go to buy the same amount of computing power on the Amazon cloud, it would cost around $40 million,” says Anderson.
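
    Anderson’s cost comparison is easy to check; the figures below are his, only the division is added:

        # Cost figures quoted above (Anderson's estimates); only the arithmetic is new.
        boinc_cost_per_year = 100_000        # USD per year for roughly a petaflop via BOINC
        cloud_cost_per_year = 40_000_000     # USD per year for the same capacity on a commercial cloud
        print(f"cloud / BOINC cost ratio: {cloud_cost_per_year / boinc_cost_per_year:.0f}x")   # -> 400x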

    Kevin Vinsen, a scientist at the International Centre for Radio Astronomy Research, Australia, leads a project that analyses photos of galaxies. BOINC helps analyse theSkyNet’s huge dataset, which is especially valuable given the project’s shoestring budget.

    “In BOINC I can have 20,000 people working on it at the same time. Each one is doing a small portion of the galaxy,” he says.

    Anderson wants to connect BOINC to major supercomputer facilities in the United States, to reduce the lengthy wait researchers have to process their data. He is working to add the network to the Texas Advanced Computing Center as an additional resource for researchers.


    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology on the basis of its originality, importance, interdisciplinary interest, timeliness, accessibility, elegance and surprising conclusions. Nature also provides rapid, authoritative, insightful and arresting news and interpretation of topical and coming trends affecting science, scientists and the wider public.

     
  • richardmitnick 1:48 pm on February 22, 2017 Permalink | Reply
    Tags: Supercomputing, Tokyo Tech supercomputer TSUBAME3.0

    From Tokyo Tech: “Tokyo Tech supercomputer TSUBAME3.0 scheduled to start operating in summer 2017” 

    tokyo-tech-bloc

    Tokyo Institute of Technology

    February 22, 2017

    Rendering of TSUBAME3.0

    Tokyo Tech supercomputer TSUBAME3.0 scheduled to start operating in summer 2017
    With 47.2 petaflops performance at half precision set to meet the surge in demand for artificial intelligence applications.

    The Tokyo Institute of Technology (Tokyo Tech) Global Scientific Information and Computing Center (GSIC) has started development and construction of TSUBAME3.0, the next-generation supercomputer that is scheduled to start operating in the summer of 2017.

    The theoretical performance of the TSUBAME3.0 is 47.2 petaflops in 16-bit half precision mode or above, and once the new TSUBAME3.0 is operating alongside the current TSUBAME2.5, Tokyo Tech GSIC will be able to provide a total computation performance of 64.3 petaflops in half precision mode or above, making it the largest supercomputer center in Japan.

    The majority of scientific calculation requires 64-bit double precision; however, artificial intelligence (AI) and Big Data processing can be performed at 16-bit half precision, and TSUBAME3.0 is expected to be widely used in these fields, where demand continues to increase.

    Background and details

    Since TSUBAME2.0 and 2.5 started operations in November 2010 as the fastest supercomputers in Japan, these computers have become “supercomputers for everyone,” contributing significantly to industry-academia-government research and development both in Japan and overseas over the past six years. As a result, much attention has also been drawn to Tokyo Tech GSIC as the most advanced supercomputer center in the world. Furthermore, Tokyo Tech GSIC is continuing to partner with related companies on research into not only high performance computing (HPC), but also Big Data and AI, areas of increasing demand in recent years. These research results and the experience gained through operating TSUBAME2.0 and 2.5 and the energy-saving supercomputer TSUBAME-KFC were all applied in the design process for TSUBAME3.0.

    As a result of Japanese government procurement for the development of TSUBAME3.0, SGI Japan, Ltd. (SGI) was awarded the contract to work on the project. Tokyo Tech is developing TSUBAME3.0 in partnership with SGI and NVIDIA as well as other companies.

    The TSUBAME series features the most recent NVIDIA GPUs available at the time: Tesla for TSUBAME1.2, Fermi for TSUBAME2.0, and Kepler for TSUBAME2.5. The upcoming TSUBAME3.0 will feature the fourth-generation Pascal GPU to ensure high compatibility. TSUBAME3.0 will contain 2,160 GPUs, making a total of 6,720 GPUs in operation at GSIC once it is running alongside TSUBAME2.5 and TSUBAME-KFC.

    “Artificial intelligence is rapidly becoming a key application for supercomputing,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “NVIDIA’s GPU computing platform merges AI with HPC, accelerating computation so that scientists and researchers can tackle once unsolvable problems. Tokyo Tech’s TSUBAME3.0 supercomputer, powered by more than 2,000 NVIDIA Pascal GPUs, will enable life-changing advances in such fields as healthcare, energy, and transportation.”

    TSUBAME3.0 has a theoretical performance of 12.15 petaflops in double precision mode (12,150 trillion floating-point operations per second), which is set to exceed the K supercomputer. In single precision mode, TSUBAME3.0 performs at 24.3 petaflops, and in half precision mode this increases to 47.2 petaflops. Using the latest GPUs enables improved performance and energy efficiency as well as higher-speed, larger-capacity storage. The overall computation speed and capacity have also been improved through NVMe-compatible, high-speed 1.08 PB SSDs on the computation nodes, resulting in significant advances in high-speed processing for Big Data applications. TSUBAME3.0 also incorporates a variety of cloud technology, including virtualization, and is expected to become the most advanced Science Cloud in Japan.
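
    The quoted double-, single- and half-precision figures follow the factor-of-two scaling per precision step on Pascal GPUs. A back-of-envelope check, assuming per-GPU peaks of roughly 5.3 / 10.6 / 21.2 teraflops (FP64 / FP32 / FP16) for the NVLink-optimized P100 plus a small CPU contribution, neither of which is stated in the article:

        # Back-of-envelope check of the quoted peaks. The GPU count is from the article;
        # the per-GPU figures and the small CPU contribution are assumptions.
        gpus = 2160
        gpu_tf = {"double": 5.3, "single": 10.6, "half": 21.2}   # teraflops per GPU (assumed)
        cpu_pf = {"double": 0.6, "single": 1.2, "half": 1.2}     # total CPU petaflops (assumed)

        for precision, per_gpu in gpu_tf.items():
            total_pf = gpus * per_gpu / 1000 + cpu_pf[precision]
            print(f"{precision:6s} precision: ~{total_pf:.1f} PF")
        # Lands near the quoted 12.15 / 24.3 / 47.2 petaflop figures.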

    System cooling efficiency has also been optimized in TSUBAME3.0. The processor cooling system uses an outdoor cooling tower, enabling cooling water to be supplied at a temperature close to ambient for minimal energy use. Its PUE (Power Usage Effectiveness) value, which indicates cooling efficiency, is 1.033, indicating extremely high efficiency and making more electricity available for computation.
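
    PUE is simply total facility power divided by the power delivered to the computing equipment, so a value of 1.033 means only about 3.3 per cent of the power draw goes to cooling and distribution overhead:

        # PUE = total facility power / IT equipment power, so 1.033 means roughly a
        # 3.3 per cent overhead for cooling and power delivery.
        pue = 1.033
        print(f"non-compute overhead: {pue - 1.0:.1%} of the IT power draw")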

    The TSUBAME3.0 system uses a total of 540 computation nodes, all of which are ICE® XA nodes manufactured by SGI. Each computation node contains two Intel® Xeon® E5-2680 v4 processors, four NVIDIA Tesla P100 GPUs (the NVLink-optimized variant), 256 GiB of main memory, and four Intel® Omni-Path network interface ports. Its storage system consists of a 15.9 PB DataDirect Networks Lustre file system as well as 2 TB of NVMe-compatible high-speed SSD storage on each computation node. The computation nodes and the storage system are connected to a high-speed network through Omni-Path, and to the Internet at 100 Gbps through SINET5.

    The computation power of TSUBAME3.0 will not only be used for education and cutting-edge research within the university. Importantly, the supercomputer will continue to serve as “supercomputing for everyone,” contributing to the development of cutting-edge science and technology and to Japan’s international competitiveness by providing its services to researchers and companies within and outside the university through the Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) and the High Performance Computing Infrastructure (HPCI), two leading information bases for Japan’s top universities, as well as through GSIC’s own TSUBAME Joint Usage Service.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    tokyo-tech-campus

    Tokyo Tech is the top national university for science and technology in Japan with a history spanning more than 130 years. Of the approximately 10,000 students at the Ookayama, Suzukakedai, and Tamachi Campuses, half are in their bachelor’s degree program while the other half are in master’s and doctoral degree programs. International students number 1,200. There are 1,200 faculty and 600 administrative and technical staff members.

    In the 21st century, the role of science and technology universities has become increasingly important. Tokyo Tech continues to develop global leaders in the fields of science and technology, and contributes to the betterment of society through its research, focusing on solutions to global issues. The Institute’s long-term goal is to become the world’s leading science and technology university.

     
  • richardmitnick 6:50 pm on December 11, 2016 Permalink | Reply
    Tags: Supercomputing, The right way to simulate the Milky Way

    From Science Node: “The right way to simulate the Milky Way” 

    Science Node bloc
    Science Node

    13 Sep, 2016 [Where oh where has this been?]
    Whitney Clavin

    Astronomers have created the most detailed computer simulation to date of our Milky Way galaxy’s formation, from its inception billions of years ago as a loose assemblage of matter to its present-day state as a massive, spiral disk of stars.

    The simulation solves a decades-old mystery surrounding the tiny galaxies that swarm around the outside of our much larger Milky Way. Previous simulations predicted that thousands of these satellite, or dwarf, galaxies should exist. However, only about 30 of the small galaxies have ever been observed. Astronomers have been tinkering with the simulations, trying to understand this ‘missing satellites’ problem to no avail.


    Access the mp4 video here.
    Supercomputers and superstars. Caltech associate professor of theoretical astrophysics Phil Hopkins and Carnegie-Caltech research fellow Andrew Wetzel use XSEDE supercomputers to build the most detailed and realistic simulation of galaxy formation ever created. The results solve a decades-long mystery regarding dwarf galaxies around our Milky Way. Courtesy Caltech.

    Now, with the new simulation — which used resources from the Extreme Science and Engineering Discovery Environment (XSEDE) running in parallel for 700,000 central processing unit (CPU) hours — astronomers at the California Institute of Technology (Caltech) have created a galaxy that looks like the one we live in today, with the correct, smaller number of dwarf galaxies.

    “That was the aha moment, when I saw that the simulation can finally produce a population of dwarf galaxies like the ones we observe around the Milky Way,” says Andrew Wetzel, postdoctoral fellow at Caltech and Carnegie Observatories in Pasadena, and lead author of a paper about the new research, published August 20 in Astrophysical Journal Letters.

    One of the main updates to the new simulation relates to how supernovae, explosions of massive stars, affect their surrounding environments. In particular, the simulation incorporated detailed formulas that describe the dramatic effects that winds from these explosions can have on star-forming material and dwarf galaxies. These winds, which reach speeds up to thousands of kilometers per second, “can blow gas and stars out of a small galaxy,” says Wetzel.

    Indeed, the new simulation showed the winds can blow apart young dwarf galaxies, preventing them from reaching maturity. Previous simulations that were producing thousands of dwarf galaxies weren’t taking the full effects of supernovae into account.

    “We had thought before that perhaps our understanding of dark matter was incorrect in these simulations, but these new results show we don’t have to tinker with dark matter,” says Wetzel. “When we more precisely model supernovae, we get the right answer.”

    Astronomers simulate our galaxy to understand how the Milky Way, and our solar system within it, came to be. To do this, the researchers tell a computer what our universe was like in the early cosmos. They write complex codes for the basic laws of physics and describe the ingredients of the universe, including everyday matter like hydrogen gas as well as dark matter, which, while invisible, exerts gravitational tugs on other matter. The computers then go to work, playing out all the possible interactions between particles, gas, and stars over billions of years.

    “In a galaxy, you have 100 billion stars, all pulling on each other, not to mention other components we don’t see, like dark matter,” says Caltech’s Phil Hopkins, associate professor of theoretical astrophysics and principal scientist for the new research. “To simulate this, we give a supercomputer equations describing those interactions and then let it crank through those equations repeatedly and see what comes out at the end.”
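
    At their core, the “equations describing those interactions” are pairwise Newtonian gravity integrated forward in time. The deliberately tiny direct-summation sketch below shows only that conceptual core; real galaxy-formation codes use tree or particle-mesh gravity plus hydrodynamics, star formation and feedback:

        # Deliberately tiny direct-summation N-body sketch: pairwise Newtonian gravity
        # advanced with a kick-drift-kick (leapfrog) step.
        import numpy as np

        G = 1.0                    # gravitational constant in code units
        softening = 0.05           # avoids infinite forces at tiny separations

        def accelerations(pos, mass):
            diff = pos[None, :, :] - pos[:, None, :]                        # r_j - r_i for every pair
            dist3 = (np.sum(diff**2, axis=-1) + softening**2) ** 1.5
            np.fill_diagonal(dist3, np.inf)                                 # no self-force
            return G * np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)

        rng = np.random.default_rng(1)
        pos = rng.normal(size=(100, 3))
        vel = np.zeros((100, 3))
        mass = np.full(100, 1.0 / 100)

        com_start = np.average(pos, axis=0, weights=mass)
        dt = 0.01
        for _ in range(100):
            acc = accelerations(pos, mass)
            vel += 0.5 * dt * acc                            # half kick
            pos += dt * vel                                  # drift
            vel += 0.5 * dt * accelerations(pos, mass)       # half kick

        drift = np.linalg.norm(np.average(pos, axis=0, weights=mass) - com_start)
        print(f"centre-of-mass drift after 100 steps: {drift:.2e} (should stay near zero)")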

    The researchers are not done simulating our Milky Way. They plan to use even more computing time, up to 20 million CPU hours, in their next rounds. This should lead to predictions about the very faintest and smallest of dwarf galaxies yet to be discovered. Not a lot of these faint galaxies are expected to exist, but the more advanced simulations should be able to predict how many are left to find.

    The study was funded by Caltech, a Sloan Research Fellowship, the US National Science Foundation (NSF), NASA, an Einstein Postdoctoral Fellowship, the Space Telescope Science Institute, UC San Diego, and the Simons Foundation.

    Other coauthors on the study are: Ji-Hoon Kim of Stanford University, Claude-André Faucher-Giguère of Northwestern University, Dušan Kereš of UC San Diego, and Eliot Quataert of UC Berkeley.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:06 am on November 29, 2016 Permalink | Reply
    Tags: Supercomputing

    From INVERSE: “Japan Reveals Plan to Build the World’s Fastest Supercomputer” 

    INVERSE

    November 25, 2016
    Mike Brown

    Japan is about to try to build the fastest computer the world has ever known. The Japanese Ministry of Economy, Trade and Industry has decided to spend 19.5 billion yen ($173 million) on creating the fastest supercomputer known to the public. The machine will be used to propel Japan into a new era of technological advancement, aiding research into autonomous cars, renewable energy, robots and artificial intelligence (A.I.).

    “As far as we know, there is nothing out there that is as fast,” Satoshi Sekiguchi, director general at Japan’s ‎National Institute of Advanced Industrial Science and Technology, said in a report published Friday.

    The computer is currently called ABCI, which stands for AI Bridging Cloud Infrastructure. Companies have already begun bidding for the project, with bidding set to close December 8. The machine is targeted at achieving 130 petaflops, or 130 quadrillion calculations per second.

    Private companies will be able to tap into ABCI’s power for a fee. The machine is aimed at helping develop deep learning applications, which will be vital for future A.I. advancements. One area where deep learning will be crucial is in autonomous vehicles, as systems will be able to analyze the real-world data collected from a car’s sensors to improve its ability to avoid collisions.

    The move follows plans revealed in September for Japan to lead the way in self-driving map technology. The country is aiming to secure its position as a world leader in technological innovation, and the map project aims to set the global standard for autonomous vehicle road maps by getting a head start on data collection. Self-driving cars will need 3D maps to accurately interpret sensor input data and understand their current position.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 3:28 pm on November 23, 2016 Permalink | Reply
    Tags: Computerworld, Supercomputing

    From ALCF via Computerworld: “U.S. sets plan to build two exascale supercomputers” 

    Argonne Lab
    News from Argonne National Laboratory

    ANL Cray Aurora supercomputer
    Cray Aurora supercomputer at the Argonne Leadership Computing Facility

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ALCF


    COMPUTERWORLD

    Nov 21, 2016
    Patrick Thibodeau


    The U.S. believes it will be ready to seek vendor proposals to build two exascale supercomputers — costing roughly $200 million to $300 million each — by 2019.

    The two systems will be built at the same time and will be ready for use by 2023, although it’s possible one of the systems could be ready a year earlier, according to U.S. Department of Energy officials.

    But the scientists and vendors developing exascale systems do not yet know whether President-Elect Donald Trump’s administration will change directions. The incoming administration is a wild card. Supercomputing wasn’t a topic during the campaign, and Trump’s dismissal of climate change as a hoax, in particular, has researchers nervous that science funding may suffer.

    At the annual supercomputing conference SC16 last week in Salt Lake City, a panel of government scientists outlined the exascale strategy developed by President Barack Obama’s administration. When the session was opened to questions, the first two were about Trump. One attendee quipped that “pointed-head geeks are not going to be well appreciated.”

    Another person in the audience, John Sopka, a high-performance computing software consultant, asked how the science community will defend itself from claims that “you are taking the money from the people and spending it on dreams,” referring to exascale systems.

    Paul Messina, a computer scientist and distinguished fellow at Argonne National Laboratory who heads the Exascale Computing Project, appeared sanguine. “We believe that an important goal of the exascale computing project is to help economic competitiveness and economic security,” said Messina. “I could imagine that the administration would think that those are important things.”

    Politically, there ought to be a lot in HPC’s favor. A broad array of industries rely on government supercomputers to conduct scientific research, improve products, attack disease, create new energy systems and understand climate, among many other fields. Defense and intelligence agencies also rely on large systems.

    The ongoing exascale research funding (the U.S. budget is $150 million this year) will help with advances in software, memory, processors and other technologies that ultimately filter out to the broader commercial market.

    This is very much a global race, which is something the Trump administration will have to be mindful of. China, Europe and Japan are all developing exascale systems.

    China plans to have an exascale system ready by 2020. These nations see exascale — and the computing advances required to achieve it — as a pathway to challenging America’s tech dominance.

    “I’m not losing sleep over it yet,” said Messina, of the possibility that the incoming Trump administration may have different supercomputing priorities. “Maybe I will in January.”

    The U.S. will award the exascale contracts to vendors with two different architectures. This is not a new approach and is intended to help keep competition at the highest end of the market. Recent supercomputer procurements include systems built on the IBM Power architecture, Nvidia’s Volta GPU and Cray-built systems using Intel chips.

    The timing of these exascale systems — ready for 2023 — is also designed to take advantage of the upgrade cycles at the national labs. The large systems that will be installed in the next several years will be ready for replacement by the time exascale systems arrive.

    The last big performance milestone in supercomputing occurred in 2008 with the development of a petaflop system. An exascale system is a 1,000-petaflop system, and building one is challenging because of the limits of Moore’s Law, a 1960s-era observation that the number of transistors on a chip doubles about every two years.

    “Now we’re at the point where Moore’s Law is just about to end,” said Messina in an interview. That means the key to building something faster “is by having much more parallelism, and many more pieces. That’s how you get the extra speed.”

    An exascale system will solve a problem 50 times faster than the 20-petaflop systems in use in government labs today.

    Development work has begun on the systems and applications that can utilize hundreds of millions of simultaneous parallel events. “How do you manage it — how do you get it all to work smoothly?” said Messina.

    Another major problem is energy consumption. An exascale machine can be built today using current technology, but such a system would likely need its own power plant. The U.S. wants an exascale system that can operate on 20 megawatts and certainly no more than 30 megawatts.
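
    Those targets pin down the required energy efficiency with simple arithmetic: an exaflop delivered within a 20-megawatt budget works out to 50 gigaflops per watt:

        # Arithmetic on the figures quoted above.
        exaflop = 1e18            # floating-point operations per second
        power_budget_w = 20e6     # 20 megawatts
        print(f"required efficiency: {exaflop / power_budget_w / 1e9:.0f} gigaflops per watt")   # -> 50
        print(f"speed-up over a 20-petaflop system: {1000 / 20:.0f}x")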

    Scientists will have to come up with a way “to vastly reduce the amount of energy it takes to do a calculation,” said Messina. The applications and software development are critical because most of the energy is used to move data. And new algorithms will be needed.

    About 500 people are working at universities and national labs on the DOE’s coordinated effort to develop the software and other technologies exascale will need.

    Aside from the cost of building the systems, the U.S. will spend millions funding the preliminary work. Vendors want to maintain the intellectual property of what they develop. If it cost, for instance, $50 million to develop a certain aspect of a system, the U.S. may ask the vendor to pay 40% of that cost if they want to keep the intellectual property.

    A key goal of the U.S. research funding is to avoid creation of one-off technologies that can only be used in these particular exascale systems.

    “We have to be careful,” Terri Quinn, a deputy associate director for HPC at Lawrence Livermore National Laboratory, said at the SC16 panel session. “We don’t want them (vendors) to give us capabilities that are not sustainable in a business market.”

    The work under way will help ensure that the technology research is far enough along to enable the vendors to respond to the 2019 request for proposals.

    Supercomputers can deliver advances in modeling and simulation. Instead of building physical prototypes of something, a supercomputer allows it to be modeled virtually. This can speed the time it takes something to get to market, whether a new drug or a car engine. Increasingly, HPC is used in big data and is helping improve cybersecurity through rapid analysis; artificial intelligence and robotics are other fields with strong HPC demand.

    China will likely beat the U.S. in developing an exascale system, but the real test will be how useful these systems are.

    Messina said the U.S. approach is to develop an exascale ecosystem involving vendors, universities and the government. The hope is that the exascale systems will not only have a wide range of applications ready for them, but also applications that are relatively easy to program. Messina wants to see these systems quickly put to immediate and broad use.

    “Economic competitiveness does matter to a lot of people,” said Messina.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     