Tagged: Supercomputing

  • richardmitnick 1:12 pm on May 27, 2017
    Tags: 1 millionº and breezy: Your solar forecast, Supercomputing

    From Science Node: “1 millionº and breezy: Your solar forecast” 

    Science Node bloc
    Science Node

    24 May, 2017
    Alisa Alering

    Space is a big place, so modeling activity out there calls for supercomputers to match. PRACE provided scientists with the resources to run the Vlasiator code and simulate the solar wind around the Earth.

    Courtesy Minna Palmroth; Finnish Meteorological Institute.

    Outer space is a tough place to be a lonely blue planet.

    With only a thin atmosphere standing between a punishing solar wind and the 1.5 million species living on its surface, any indication of the solar mood is appreciated.

    The sun emits a continuous flow of plasma traveling at speeds up to 900 km/s and temperatures as high as 1 millionº Celsius. The earth’s magnetosphere blocks this wind and allows it to flow harmlessly around the planet like water around a stone in the middle of a stream.

    Magnetosphere of Earth, original bitmap from NASA. SVG rendering by Aaron Kaase

    But under the force of the solar bombardment, the earth’s magnetic field responds dramatically, changing size and shape. The highly dynamic conditions this creates in near-Earth space are known as space weather.

    Vlasiator, a new simulation developed by Minna Palmroth, professor in computational space physics at the University of Helsinki, models the entire magnetosphere. It helps scientists to better understand interesting and hard-to-predict phenomena that occur in near-Earth space weather.

    Unlike previous models that could only simulate a small segment of the magnetosphere, Vlasiator allows scientists to study causal relationships between plasma phenomena for the first time and to consider smaller scale phenomena in a larger context.

    “With Vlasiator, we are simulating near-Earth space with better accuracy than has ever been possible before,” says Palmroth.

    Navigating near-Earth

    Over 1,000 satellites and other near-Earth spacecraft are currently in operation around the earth, including the International Space Station and the Hubble Telescope.

    Nearly all communications on Earth — including television and radio, telephone, internet, and military — rely on links to these spacecraft.

    Still other satellites support navigation and global positioning and meteorological observation.

    New spacecraft are launched every day, and the future promises even greater dependence on their signals. But we are launching these craft into a sea of plasma that we barely understand.

    “Consider a shipping company that would send its vessel into an ocean without knowing what the environment was,” says Palmroth. “That wouldn’t be very smart.”

    Space weather has an enormous impact on spacecraft, capable of deteriorating signals to the navigation map on your phone and disrupting aviation. Solar storms even have the potential to overwhelm transformers and black out the power grid.

    Through better comprehension and prediction of space weather, Vlasiator’s comprehensive model will help scientists protect vital communications and other satellite functions.

    Three-level parallelization

    Vlasiator’s simulations are so detailed that they can model the most important physical phenomena in near-Earth space at the ion-kinetic scale. This amounts to a volume of 1 million km³ — a massive computational challenge that has not previously been possible.

    After being awarded several highly competitive grants from the European Research Council, Palmroth secured computation time on HPC resources managed by the Partnership for Advanced Computing in Europe (PRACE).

    Hazel Hen

    She began with the Hornet supercomputer and then its successor Hazel Hen, both at the High-Performance Computing Center Stuttgart. Most recently she has been using the Marconi supercomputer at CINECA in Italy.

    Marconi supercomputer at CINECA in Italy

    Palmroth’s success is due to a three-level parallelization of the simulation code. Her team uses domain decomposition to split near-Earth space into grid cells, distributing the cells of the simulated region across compute nodes.

    They use load balancing to keep that division of work even, and they parallelize across the cores within each node using OpenMP. Finally, they vectorize the code so that each core operates on several cells’ worth of data at once.
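    To make the three levels concrete, here is a deliberately simplified Python sketch. It is not Vlasiator’s actual code, which is compiled and uses MPI ranks, OpenMP threads, and hardware vector units; here worker processes stand in for the distributed domains, NumPy array arithmetic stands in for per-core vectorization, and the grid size, update rule, and domain count are invented for illustration.

```python
# Toy illustration of three-level parallelism in a grid-based solver.
# Level 1: split the grid into subdomains (domain decomposition).
# Level 2: update subdomains concurrently in a pool of workers.
# Level 3: inside each worker, update all cells of a subdomain at once
#          (NumPy here stands in for SIMD vectorization on a real core).

import numpy as np
from multiprocessing import Pool

N_CELLS = 1_000_000      # total grid cells (illustrative number)
N_DOMAINS = 4            # one subdomain per worker process

def advance_subdomain(cells: np.ndarray) -> np.ndarray:
    """Vectorized stand-in for a physics update applied to every cell."""
    return cells + 0.1 * np.sin(cells)

def run_step(grid: np.ndarray) -> np.ndarray:
    # Domain decomposition; real codes balance the load so each domain
    # costs roughly the same amount of work.
    subdomains = np.array_split(grid, N_DOMAINS)
    with Pool(processes=N_DOMAINS) as pool:          # concurrent workers
        updated = pool.map(advance_subdomain, subdomains)
    return np.concatenate(updated)

if __name__ == "__main__":
    grid = np.random.rand(N_CELLS)
    grid = run_step(grid)
    print("updated", grid.size, "cells")
```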

    Even so, simulation datasets range from 1 to 100 terabytes, depending on how often the simulation state is saved, and runs require anywhere from 500 to 100,000 cores, possibly more, on Hazel Hen.

    “We are continuously making algorithmic improvements in the code, making new optimizations, and utilizing the latest advances in HPC to improve the efficiency of the calculations all the time,” says Palmroth.

    Taking off into the future

    In addition to advancing our knowledge of space weather, Vlasiator also helps scientists to better understand plasma physics. Until now, most fundamental plasma physical phenomena have been discovered from space because it’s the best available laboratory.

    But plasma, the fourth state of matter, makes up some 99.9 percent of the ordinary matter in the universe. In order to understand the universe, you need to understand plasma physics. For scientists undertaking any kind of matter research, Vlasiator’s capacity to simulate near-Earth space is significant.

    “As a scientist, I’m curious about what happens in the world,” says Palmroth. “I can’t really draw a line beyond which I don’t want to know what happens.”

    Significantly, Vlasiator has recently helped to explain some features of ultra-low frequency waves in the earth’s foreshock that have perplexed scientists for decades.

    A collaboration with NASA in the US helped validate those results with the THEMIS spacecraft, a constellation of five identical probes designed to gather information about large-scale space physics.

    Exchanging information with her colleagues at NASA allows Palmroth to get input from THEMIS’s direct observation of space phenomena and to exchange modeling results with the observational community.

    “The work we are doing now is important for the next generation,” says Palmroth. “We’re learning all the time. If future generations build upon our advances, their understanding of the universe will be on much more certain ground.”

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 2:53 pm on May 6, 2017
    Tags: Apolo, Purdue University, Supercomputing, Universidad EAFIT in Medellin

    From Science Node: “Supercomputing sister sites” 

    Science Node bloc
    Science Node

    03 May, 2017
    Kirsten Gibson

    Juan Carlos Vergara used to have to go two weeks at a time without his personal computer while it was busy modeling earthquakes. Then he found Apolo.

    Long-distance relationship. Purdue University and EAFIT have teamed up to bring supercomputing to Colombia. Here, Gerry McCartney and Donna Cumberland from Purdue University discuss Apolo with Juan David Pineda from EAFIT. Courtesy Purdue University, EAFIT.

    Apolo, the first research supercomputer at the Universidad EAFIT in Medellin, is the fruit of a partnership between Purdue University’s research computing unit and the Apolo Scientific Computing Center.

    With Apolo, Vergara finishes his work in days instead of months, and can expand the scale of his simulations five million times.

    With Apolo comes a staff to run it, including Juan David Pineda, Apolo’s technology coordinator, and Mateo Gomez, a high-performance computing (HPC) analyst.

    “Sometimes they would be up until 1 a.m. helping me solve problems,” says Vergara, a doctoral student of applied mechanics at EAFIT. “I saw them as part of my team, fundamental to what I do every day.”

    For their part of the partnership, Purdue brought a lot of experience accelerating discoveries in science and engineering. Purdue’s central information technology organization has built and operated nine HPC systems for faculty researchers in as many years, most rated among the world’s top 500 supercomputers.

    Hardware from one of those machines, the retired Steele cluster, became the foundation of Apolo.

    Steele cluster

    People powered

    While the hardware is important, the partnership is more about people. Purdue research computing staff have traveled to Colombia to help train and work with EAFIT colleagues. EAFIT students have participated in Purdue’s Summer Undergraduate Research Fellowship (SURF) program, working with many supercomputing experts.

    EAFIT and Purdue have also sent joint teams to student supercomputing competitions in New Orleans and Frankfurt, Germany. Some of the Colombian students on the teams have become key staff members at the Apolo center, which, in turn, trains the next generation of Colombia’s high-performance computing experts.

    Juan Luis Mejía, rector at Universidad EAFIT, says EAFIT had been searching for an international partner to help reverse decades of isolation. What it found in Purdue was unexpected.

    “Finding an alliance with a true interest in sharing knowledge of technology and without a hidden agenda allows us to progress,” Mejía says. “I believe that the relationship between our university and Purdue is one of the most valuable.”

    Quantum leap

    Because of the partnership with Purdue, Apolo has enabled research ranging from earthquake science, to a groundbreaking examination of the tropical disease leishmaniasis, to the most ‘green’ way to process cement, to quantum mechanics – in all cases, Apolo accelerates EAFIT researchers’ time to science.

    And since EAFIT is one of the few Colombian universities with a supercomputer and a strong partnership with a major American research university, it is poised to receive big money from the Colombia Científica program.

    EAFIT has already attracted the attention of Grupo Nutresa, a Latin American food processing company headquartered in Medellín, and researchers like Pilar Cossio, a Colombian HIV researcher working for the Max Planck Institute in Germany.

    When Cossio came home to Colombia after studying and working in Italy, the US, and Germany, the biophysicist figured that one big task she was going to face would be building her own supercomputer and finding someone to run it.

    But thanks to the partnership with Purdue, she conducts her research at the Universidad de Antioquia in Medellín with help from the Apolo Scientific Computing Center at EAFIT.

    Cossio’s research combines physics, computational biology, and chemistry. She’s studying protein changes at the atomic level, work that can help in designing drugs to treat HIV. That endeavor requires examining around two million different compounds to see which ones bind best with particular proteins.
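    As a rough sketch of what such a screen involves computationally (nothing here is Cossio’s actual pipeline; the compound count, scoring function, and selection criterion are placeholders), one can picture scoring every compound against the target and keeping the best binders:

```python
# Toy virtual-screening loop: score a compound library against a protein
# target and keep the top candidates. The scoring function is a random
# placeholder; real screens use docking or free-energy calculations that
# justify running on a cluster like Apolo.

import heapq
import random

N_COMPOUNDS = 200_000   # a small stand-in for the ~2 million compounds mentioned above
TOP_K = 100

def toy_binding_score(compound_id: int) -> float:
    """Placeholder for a docking/affinity calculation (lower = binds better)."""
    random.seed(compound_id)           # deterministic stand-in for a real simulation
    return random.uniform(-12.0, 0.0)  # pretend binding energies in kcal/mol

best = heapq.nsmallest(TOP_K, range(N_COMPOUNDS), key=toy_binding_score)
print("best-scoring compound IDs:", best[:5])
```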

    “There are only two supercomputers in Colombia for bioinformatics,” Cossio says. “Apolo is the only one that focuses on satisfying scientific needs. It’s important for us in the developing countries to have partnerships with universities that can help us access these crucial scientific tools.”

    As it is for many scientists, high-performance and parallel computing power are vital for her research — she just didn’t anticipate finding a ready-made solution in her home country.

    Then she found Apolo.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 2:38 pm on May 6, 2017
    Tags: Simulating the universe, Supercomputing

    From physicsworld.com: “Simulating the universe” 

    physicsworld
    physicsworld.com

    May 4, 2017
    Tom Giblin
    James Mertens
    Glenn Starkman

    Powerful computers are now allowing cosmologists to solve Einstein’s frighteningly complex equations of general relativity in a cosmological setting for the first time. Tom Giblin, James Mertens and Glenn Starkman describe how this new era of simulations could transform our understanding of the universe.

    A visualization of the curved space–time “sea” No image credit.

    From the Genesis story in the Old Testament to the Greek tale of Gaia (Mother Earth) emerging from chaos and giving birth to Uranus (the god of the sky), people have always wondered about the universe and woven creation myths to explain why it looks the way it does. One hundred years ago, however, Albert Einstein gave us a different way to ask that question. Newton’s law of universal gravitation, which was until then our best theory of gravity, describes how objects in the universe interact. But in Einstein’s general theory of relativity, spacetime (the marriage of space and time) itself evolves together with its contents. And so cosmology, which studies the universe and its evolution, became at least in principle a modern science – amenable to precise description by mathematical equations, able to make firm predictions, and open to observational tests that could falsify those predictions.

    Our understanding of the mathematics of the universe has advanced alongside observations of ever-increasing precision, leading us to an astonishing contemporary picture. We live in an expanding universe in which the ordinary material of our everyday lives – protons, neutrons and electrons – makes up only about 5% of the contents of the universe. Roughly 25% is in the form of “dark matter” – material that behaves like ordinary matter as far as gravity is concerned, but is so far invisible except through its gravitational pull. The other 70% of the universe is something completely different, whose gravity pushes things apart rather than pulling them together, causing the expansion of the universe to accelerate over the last few billion years. Naming this unknown substance “dark energy” teaches us nothing about its true nature.

    Universe map Sloan Digital Sky Survey (SDSS) 2dF Galaxy Redshift Survey

    Now, a century into its work, cosmology is brimming with existential questions. If there is dark matter, what is it and how can we find it? Is dark energy the energy of empty space, also known as vacuum energy, or is it the cosmological constant, Λ, as first suggested by Einstein in 1917? He introduced the constant after mistakenly thinking it would stop the universe from expanding or contracting, and so – in what he later called his “greatest blunder” – failed to predict the expansion of the universe, which was discovered a dozen years later. Or is one or both of these invisible substances a figment of the cosmologist’s imagination and it is general relativity that must be changed?

    At the same time as being faced with these fundamental questions, cosmologists are testing their currently accepted model of the universe – dubbed ΛCDM – to greater and greater precision observationally.

    Lambda Cold Dark Matter. No image credit.

    (CDM indicates the dark-matter particles are cold because they must move slowly, like the molecules in a cold drink, so as not to evaporate from the galaxies they help bind together.) And yet, while we can use general relativity to describe how the universe expanded throughout its history, we are only just starting to use the full theory to model specific details and observations of how galaxies, clusters of galaxies and superclusters are formed and created. How this happens is simple – the equations of general relativity aren’t.

    Horribly complex

    While they fit neatly onto a T-shirt or a coffee mug, Einstein’s field equations are horrible to solve even using a computer. The equations involve 10 separate functions of the four dimensions of space and time, which characterize the curvature of space–time in each location, along with 40 functions describing how those 10 functions change, as well as 100 further functions describing how those 40 changes change, all multiplied and added together in complicated ways. Exact solutions exist only in highly simplified approximations to the real universe. So for decades cosmologists have used those idealized solutions and taken the departures from them to be small perturbations – reckoning, in particular, that any departures from homogeneity can be treated independently from the homogeneous part and from one another.
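    For readers who want to see what is being counted, the field equations and the bookkeeping behind the numbers above can be written compactly (standard notation; this simply restates the paragraph rather than adding anything from the article):

```latex
% Einstein's field equations with cosmological constant \Lambda:
\[
  G_{\mu\nu} + \Lambda\, g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
  \qquad
  G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}.
\]
% The "10 functions" are the independent components of the symmetric metric
% g_{\mu\nu}(t,x,y,z); the "40" are its first derivatives
% \partial_\alpha g_{\mu\nu} (10 components times 4 coordinates); the "100" are
% the second derivatives \partial_\alpha\partial_\beta g_{\mu\nu}
% (10 components times 10 symmetric coordinate pairs).
```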

    Not at your leisure. No image credit.

    This “first-order perturbation theory” has taught us a lot about the early development of cosmic structures – galaxies, clusters of galaxies and superclusters – from barely perceptible concentrations of matter and dark matter in the early universe. The theory also has the advantage that we can do much of the analysis by hand, and follow the rest on computer. But to track the development of galaxies and other structures from after they were formed to the present day, we’ve mostly reverted to Newton’s theory of gravity, which is probably a good approximation.

    To make progress, we will need to improve on first-order perturbation theory, which treats cosmic structures as independent entities that are affected by the average expansion of the universe, but neither alter the average expansion themselves, nor influence one another. Unfortunately, higher-order perturbation theory is much more complicated – everything affects everything else. Indeed, it’s not clear there is anything to gain from using these higher-order approximations rather than “just solving” the full equations of general relativity instead.
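    Schematically, the perturbative approach described here expands the metric about a smooth, homogeneous background and keeps only the lowest-order terms (standard cosmology notation, included only to make the preceding paragraphs concrete):

```latex
% First-order cosmological perturbation theory: a small inhomogeneous piece
% h_{\mu\nu} on top of a homogeneous, expanding (FLRW) background.
\[
  g_{\mu\nu}(t,\mathbf{x}) \;=\; \bar{g}_{\mu\nu}(t) + h_{\mu\nu}(t,\mathbf{x}),
  \qquad |h_{\mu\nu}| \ll 1,
\]
\[
  \bar{g}_{\mu\nu}\,dx^{\mu}dx^{\nu} \;=\; -c^{2}dt^{2} + a^{2}(t)\,\delta_{ij}\,dx^{i}dx^{j}.
\]
% At first order each perturbation mode evolves independently; at higher orders
% (and in the full theory) the modes couple, which is exactly what the full
% general-relativistic simulations discussed below are built to capture.
```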

    Improving the precision of our calculations – how well we think we know the answer – is one thing, as discussed above. But the complexity of Einstein’s equations has made us wonder just how accurate the perturbative description really is. In other words, it might give us answers, but are they the right ones? Nonlinear equations, after all, can have surprising features that appear unexpectedly when you solve them in their full glory, and it is hard to predict surprises. Some leading cosmologists, for example, claim that the accelerating expansion of the universe, which dark energy was invented to explain, is caused instead by the collective effects of cosmic structures in the universe acting through the magic of general relativity. Other cosmologists argue this is nonsense.

    The only way to be sure is to use the full equations of general relativity. And the good news is that computers are finally becoming fast enough that modelling the universe using the full power of general relativity – without the traditional approximations – is not such a crazy prospect. With some hard work, it may finally be feasible over the next decade.

    Computers to the rescue

    Numerical general relativity itself is not new. As far back as the late 1950s, Richard Arnowitt, Stanley Deser and Charles Misner – together known as ADM – laid out a basic framework in which space–time could be carefully separated into space and time – a vital first step in solving general relativity with a computer. Other researchers also got in on the act, including Thomas Baumgarte, Stuart Shapiro, Masaru Shibata and Takashi Nakamura, who made important improvements to the numerical properties of the ADM system in the 1980s and 1990s so that the dynamics of systems could be followed accurately over long enough times to be interesting.
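    The separation of space and time in the ADM formulation is usually summarized by the 3+1 form of the line element (standard notation, shown here only as a pointer to what a numerical code actually evolves):

```latex
% ADM (3+1) decomposition: spacetime is sliced into spatial hypersurfaces of
% constant t. The lapse N and shift N^i say how successive slices are stacked,
% and the spatial metric \gamma_{ij} is the field evolved on the computer.
\[
  ds^{2} \;=\; -N^{2}\,dt^{2}
  \;+\; \gamma_{ij}\,\bigl(dx^{i} + N^{i}\,dt\bigr)\bigl(dx^{j} + N^{j}\,dt\bigr).
\]
```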

    Beam on. No image credit.

    Other techniques for obtaining such long-time stability were also developed, including one imported from fluid mechanics. Known as adaptive mesh refinement, it allowed scarce computer memory resources to be focused only on those parts of problems where they were needed most. Such advances have allowed numerical relativists to simulate with great precision what happens when two black holes merge and create gravitational waves – ripples in space–time. The resulting images are more than eye candy; they were essential in allowing members of the US-based Laser Interferometer Gravitational-Wave Observatory (LIGO) collaboration to announce last year that they had directly detected gravitational waves for the first time.
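    The idea behind adaptive mesh refinement can be shown with a toy one-dimensional example (a sketch only; production relativity codes refine nested three-dimensional grids and sub-cycle in time):

```python
# Minimal 1-D adaptive-mesh-refinement sketch: add resolution only where the
# solution changes rapidly, leaving the rest of the grid coarse.

import numpy as np

def refine(x: np.ndarray, f, threshold: float) -> np.ndarray:
    """Insert midpoints into intervals where the jump in f between neighbours is large."""
    values = f(x)
    steep = np.abs(np.diff(values)) > threshold       # intervals that need refining
    midpoints = 0.5 * (x[:-1] + x[1:])[steep]
    return np.sort(np.concatenate([x, midpoints]))

sharp_feature = lambda x: np.tanh(20.0 * (x - 0.5))   # steep transition near x = 0.5
grid = np.linspace(0.0, 1.0, 17)                      # coarse base grid
for _ in range(3):                                    # a few refinement passes
    grid = refine(grid, sharp_feature, threshold=0.3)
print(len(grid), "grid points, clustered around the sharp feature")
```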


    Caltech/MIT Advanced aLigo Hanford, WA, USA installation


    Caltech/MIT Advanced aLigo detector installation Livingston, LA, USA


    Gravitational waves. Credit: MPI for Gravitational Physics/W.Benger-Zib

    By modelling many different possible configurations of pairs of black holes – different masses, different spins and different orbits – LIGO’s numerical relativists produced a template of the gravitational-wave signal that would result in each case. Other researchers then compared those simulations over and over again to what the experiment had been measuring, until the moment came when a signal was found that matched one of the templates. The signal in question was coming to us from a pair of black holes a billion light-years away spiralling into one another and merging to form a single larger black hole.
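    In outline, that comparison is a matched-filter search: slide each template across the data and keep the best normalized overlap. The sketch below conveys only the idea; the waveform, noise model, and template bank are invented, and real LIGO searches whiten the data and work in the frequency domain.

```python
# Toy template matching: find which simulated waveform best matches noisy data.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4096)

def chirp(f0: float) -> np.ndarray:
    """Hypothetical stand-in for a numerical-relativity waveform template."""
    return np.sin(2.0 * np.pi * f0 * t * (1.0 + 3.0 * t))  # frequency sweeps upward

signal = 0.4 * chirp(30.0)                              # the "true" event
data = signal + rng.normal(scale=1.0, size=t.size)      # buried in detector-like noise

def overlap(template: np.ndarray) -> float:
    """Normalized correlation of the data with one template."""
    return abs(float(np.dot(data, template))) / float(np.linalg.norm(template))

best_f0 = max(np.arange(10.0, 60.0, 1.0), key=lambda f0: overlap(chirp(f0)))
print("best-matching template parameter f0 =", best_f0)
```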

    Cornell SXS, the Simulating eXtreme Spacetimes (SXS) project

    Using numerical relativity to model cosmology has its own challenges compared to simulating black-hole mergers, which are just single astrophysical events. Some qualitative cosmological questions can be answered by reasonably small-scale simulations, and there are state-of-the-art “N-body” simulations that use Newtonian gravity to follow trillions of independent masses over billions of years to see where gravity takes them. But general relativity offers at least one big advantage over Newtonian gravity – it is local.

    The difficulty with calculating the gravity experienced by any particular mass in a Newtonian simulation is that you need to add up the effects of all the other masses. Even Isaac Newton himself regarded this “action at a distance” as a failing of his model, since it means that information travels from one side of the simulated universe to the other instantly, violating the speed-of-light limit. In general relativity, however, all the equations are “local”, which means that to determine the gravity at any time or location you only need to know what the gravity and matter distribution were nearby just moments before. This should, in other words, simplify the numerical calculations.
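    The "sum over all the other masses" can be seen in the shortest possible direct-sum N-body sketch below (particle numbers, masses, and the softening length are arbitrary illustrations). Every particle's acceleration involves every other particle, which is what makes the Newtonian calculation global, in contrast to the local field updates of general relativity.

```python
# Direct-sum Newtonian gravity: O(N^2) pairwise interactions per time step.

import numpy as np

G = 6.674e-11                                   # gravitational constant [SI]
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0e20, size=(256, 3))   # particle positions [m]
mass = np.full(256, 1.0e40)                     # particle masses [kg]

def accelerations(pos: np.ndarray, mass: np.ndarray, soft: float = 1.0e17) -> np.ndarray:
    """Acceleration of each particle from the sum over all other particles."""
    diff = pos[None, :, :] - pos[:, None, :]    # separation vectors r_j - r_i
    dist2 = np.sum(diff**2, axis=-1) + soft**2  # softened squared distances
    inv_d3 = dist2**-1.5
    np.fill_diagonal(inv_d3, 0.0)               # no self-interaction
    return G * np.einsum("ij,j,ijk->ik", inv_d3, mass, diff)

acc = accelerations(pos, mass)
print("acceleration of particle 0 [m/s^2]:", acc[0])
```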

    Recently, the three of us at Kenyon College and Case Western Reserve University showed that the cosmological problem is finally becoming tractable (Phys. Rev. Lett. 116 251301 and Phys. Rev. D 93 124059). Just days after our paper appeared, Eloisa Bentivegna at the University of Catania in Italy and Marco Bruni at the University of Portsmouth, UK, had similar success (Phys. Rev. Lett. 116 251302). The two groups each presented the results of low-resolution simulations, where grid points are separated by 40 million light-years, with only long-wavelength perturbations. The simulations followed the universe for only a short time by cosmic standards – long enough only for the universe to somewhat more than double in size – but both tracked the evolution of these perturbations in full general relativity with no simplifications or approximations whatsoever. As the eminent Italian cosmologist Sabino Matarese wrote in Nature Physics, “the era of general relativistic numerical simulations in cosmology ha[s] begun”.

    These preliminary studies are still a long way from competing with modern N-body simulations for resolution, duration or dynamic range. To do so will require advances in the software so that the code can run on much larger computer clusters. We will also need to make the code more stable numerically so that it can model much longer periods of cosmic expansion. The long-term goal is for our numerical simulations to match as far as possible the actual evolution of the universe and its contents, which means using the full theory of general relativity. But given that our existing simulations using full general relativity have revealed no fluctuations driving the accelerated expansion of the universe, it appears instead that accelerated expansion will need new physics – whether dark energy or a modified gravitational theory.

    Both groups also observe what appear to be small corrections to the dynamics of space–time when compared with simple perturbation theory. Bentivegna and Bruni studied the collapse of structures in the early universe and suggested that they appear to coalesce somewhat more quickly than in the standard simplified theory.

    Future perfect

    Drawing specific conclusions about simulations is a subtle matter in general relativity. At the mathematical heart of the theory is the principle of “co-ordinate invariance”, which essentially says that the laws of physics should be the same no matter what set of labels you use for the locations and times of events. We are all familiar with milder versions of this symmetry: we wouldn’t expect the equations governing basic scientific laws to depend on whether we measure our positions in, say, New York or London, and we don’t need new versions of science textbooks whenever we switch from standard time to daylight savings time and back. Co-ordinate invariance in the context of general relativity is just a more extreme version of that, but it means we must ensure that any information we extract from our simulations does not depend on how we label the points in our simulations.

    Our Ohio group has taken particular care with this subtlety by sending simulated beams of light from distant points in the distant past at the speed of light through space–time to arrive at the here and now. We then use those beams to simulate observations of the expansion history of our universe. The universe that emerges exhibits an average behaviour that agrees with a corresponding smooth, homogeneous model, but with inhomogeneous structures on top. These additional structures contribute to deviations in observable quantities across the simulated observer’s sky that should soon be accessible to real observers.

    This work is therefore just the start of a journey. Creating codes that are accurate and sensitive enough to make realistic predictions for future observational programmes – such as the all-sky surveys to be carried out by the Large Synoptic Survey Telescope or the Euclid satellite – will require us to study larger volumes of space.


    LSST Camera, built at SLAC



    LSST telescope, currently under construction on Cerro Pachón, a 2,682-meter-high mountain in the Coquimbo Region of northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.

    ESA/Euclid spacecraft

    These studies will also have to incorporate ultra-large-scale structures some hundreds of millions of light-years across as well as much smaller-scale structures, such as galaxies and clusters of galaxies. They will also have to follow these volumes for longer stretches of time than is currently possible.

    All this will require us to introduce some of the same refinements that made it possible to predict the gravitational-wave ripples produced by a merging black hole, such as adaptive mesh refinement to resolve the smaller structures like galaxies, and N-body simulations to allow matter to flow naturally across these structures. These refinements will let us characterize more precisely and more accurately the statistical properties of galaxies and clusters of galaxies – as well as the observations we make of them – taking general relativity fully into account. Doing so will, however, require clusters of computers with millions of cores, rather than the hundreds we use now.

    These improvements to code will take time, effort and collaboration. Groups around the world – in addition to the two mentioned – are likely to make important contributions. Numerical general-relativistic cosmology is still in its infancy, but the next decade will see huge strides to make the best use of the new generation of cosmological surveys that are being designed and built today. This work will either give us increased confidence in our own scientific genesis story – ΛCDM – or teach us that we still have a lot more thinking to do about how the universe got itself to where it is today.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    PhysicsWorld is a publication of the Institute of Physics. The Institute of Physics is a leading scientific society. We are a charitable organisation with a worldwide membership of more than 50,000, working together to advance physics education, research and application.

    We engage with policymakers and the general public to develop awareness and understanding of the value of physics and, through IOP Publishing, we are world leaders in professional scientific communications.
    IOP Institute of Physics

     
  • richardmitnick 12:29 pm on May 6, 2017
    Tags: Supercomputing

    From Universe Today: “Faster Supercomputer! NASA Announces the High Performance Fast Computing Challenge” 

    universe-today

    Universe Today

    5 May , 2017
    Matt Williams

    Looking to the future of space exploration, NASA and TopCoder have launched the “High Performance Fast Computing Challenge” to improve the performance of their Pleiades supercomputer. Credit: NASA/MSFC

    For decades, NASA’s Aeronautics Research Mission Directorate (ARMD) has been responsible for developing the technologies that put satellites into orbit, landed astronauts on the Moon, and sent robotic missions to other planets. Unfortunately, after many years of supporting NASA missions, some of their machinery is getting on in years and is in need of an upgrade.

    Consider the Pleiades supercomputer, the distributed-memory machine that is responsible for conducting modeling and simulations for NASA missions. Despite being one of the fastest supercomputers in the world, Pleiades will need to be upgraded in order to stay up to the task in the years ahead. Hence, NASA has come together with TopCoder (with the support of HeroX) to launch the High Performance Fast Computing Challenge (HPFCC).

    With a prize purse of $55,000, NASA and TopCoder are seeking programmers and computer specialists to help them upgrade Pleiades so it can perform computations faster. Specifically, they want to improve its FUN3D software so that flow analyses that previously took months can be done in days or hours. In short, they want to speed up their supercomputers by a factor of 10 to 1,000 while relying on the existing hardware, and without any decrease in accuracy.

    The addition of Haswell processors in 2015 increased the theoretical peak processing capability of Pleiades from 4.5 petaflops to 5.3 petaflops. Credit: NASA

    Those hoping to enter need to be familiar with FUN3D software, which is used to solve the nonlinear partial differential equations (the Navier-Stokes equations) used for steady and unsteady flow computations. These include large eddy simulations in computational fluid dynamics (CFD), which are of particular importance when it comes to supersonic aircraft, space flight, and the development of launch vehicles and planetary reentry systems.
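    For reference, the Navier-Stokes system referred to above can be written schematically in conservation form (the precise equation set, turbulence model, and boundary treatment in any given FUN3D run depend on its configuration):

```latex
% Compressible Navier-Stokes equations in conservation form (schematic):
\begin{align*}
  \partial_t \rho + \nabla\!\cdot(\rho\,\mathbf{u}) &= 0, \\
  \partial_t(\rho\,\mathbf{u}) + \nabla\!\cdot\bigl(\rho\,\mathbf{u}\otimes\mathbf{u} + p\,\mathbf{I}\bigr) &= \nabla\!\cdot\boldsymbol{\tau}, \\
  \partial_t E + \nabla\!\cdot\bigl[(E + p)\,\mathbf{u}\bigr] &= \nabla\!\cdot\bigl(\boldsymbol{\tau}\,\mathbf{u} - \mathbf{q}\bigr),
\end{align*}
% with density \rho, velocity \mathbf{u}, pressure p, total energy E, viscous
% stress tensor \boldsymbol{\tau}, and heat flux \mathbf{q}.
```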

    NASA has partnered to launch this challenge with TopCoder, the world’s largest online community of designers, developers and data scientists. Since it was founded in 2001, this company has hosted countless online competitions (known as “single round matches”, or SRMs) designed to foster better programming. They also host weekly competitions to stimulate developments in graphic design.

    Overall, the HPFCC will consist of two challenges – the Ideation Challenge and the Architecture Challenge. For the Ideation Challenge (hosted by NASA), competitors must propose ideas that can help optimize the Pleiades source code. As they state, this may include (but is not limited to) “exploiting algorithmic developments in such areas as grid adaptation, higher-order methods and efficient solution techniques for high performance computing hardware.”

    The computation of fluid dynamics is of particular importance when plotting space launches and reentry. Credit: NASA/JPL-Caltech

    The Architecture Challenge (hosted by TopCoder), is focused less on strategy and more on measurable improvements. As such, participants will be tasked with showing how to optimize processing in order to reduce the overall time and increase the efficiency of computing models. Ideally, says TopCoder, this would include “algorithm optimization of the existing code base, inter-node dispatch optimization, or a combination of the two.”

    NASA is providing $20,000 in prizes for the Ideation challenge, with $10,000 awarded for first place, and two runner-up awards of $5000 each. TopCoder, meanwhile, is offering $35,000 for the Architecture challenge – a top prize of $15,000 for first place, $10,000 for second place, with $10,000 set aside for the Qualified Improvement Candidate Prize Pool.

    The competition will remain open to submissions until June 29th, 2017, at which point, the judging will commence. This will wrap up on August 7th, and the winners of both competitions will be announced on August 9th. So if you are a coder, computer engineer, or someone familiar with FUN3D software, be sure to head on over to HeroX and accept the challenge!

    Human space exploration continues to advance, with missions planned for the Moon, Mars, and beyond. With an ever-expanding presence in space and new challenges awaiting us, it is necessary that we have the right tools to make it all happen. By leveraging improvements in computer programming, we can ensure that one of the most important aspects of mission planning remains up to task!

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 7:35 am on April 28, 2017
    Tags: Quasar pairs, Ripples in cosmic web measured using rare double quasars, Supercomputing

    From UCSC: “Ripples in cosmic web measured using rare double quasars” 

    UC Santa Cruz

    UC Santa Cruz

    [PREVIOUSLY COVERED HERE .]

    April 27, 2017
    Julie Cohen
    stephens@ucsc.edu

    Astronomers identified rare pairs of quasars right next to each other on the sky and measured subtle differences in the absorption of intergalactic atoms measured along the two sightlines. This enabled them to detect small-scale fluctuations in primeval hydrogen gas.(Credit: UC Santa Barbara)

    Snapshot of a supercomputer simulation showing part of the cosmic web, 11.5 billion years ago. The researchers created this and other models of the universe and directly compared them with quasar pair data in order to measure the small-scale ripples in the cosmic web. The cube is 24 million light-years on a side. © J. Oñorbe / MPIA

    The most barren regions of the universe are the far-flung corners of intergalactic space. In these vast expanses between the galaxies, a diffuse haze of hydrogen gas left over from the Big Bang is spread so thin there’s only one atom per cubic meter. On the largest scales, this diffuse material is arranged in a vast network of filamentary structures known as the “cosmic web,” its tangled strands spanning billions of light years and accounting for the majority of atoms in the Universe.

    Now a team of astronomers including J. Xavier Prochaska, professor of astronomy and astrophysics at UC Santa Cruz, has made the first measurements of small-scale ripples in this primeval hydrogen gas. Although the regions of the cosmic web they studied lie nearly 11 billion light years away, they were able to measure variations in its structure on scales 100,000 times smaller, comparable to the size of a single galaxy. The researchers presented their findings in a paper published April 27 in Science.

    Intergalactic gas is so tenuous that it emits no light of its own. Instead astronomers study it indirectly, by observing how it selectively absorbs the light coming from faraway sources known as quasars. Quasars constitute a brief hyper-luminous phase of the galactic life-cycle, powered by the infall of matter onto a galaxy’s central supermassive black hole. They thus act like cosmic lighthouses—bright, distant beacons that allow astronomers to study the intergalactic atoms residing between the quasars’ locations and Earth.

    Because these hyper-luminous episodes last only a tiny fraction of a galaxy’s lifetime, quasars are correspondingly rare on the sky, and are typically separated by hundreds of millions of light years from each other. In order to probe the cosmic web on much smaller scales, the astronomers exploited a fortuitous cosmic coincidence: they identified exceedingly rare pairs of quasars, right next to each other on the sky, and measured subtle differences in the absorption of intergalactic atoms along the two sightlines.

    “One of the biggest challenges was developing the mathematical and statistical tools to quantify the tiny differences we measure in this new kind of data,” said Alberto Rorai, a post-doctoral researcher at the University of Cambridge and lead author of the study. Rorai developed these tools as part of the research for his doctoral degree, and applied them to spectra of quasars obtained by the team on the largest telescopes in the world, including the 10-meter Keck telescopes at the W. M. Keck Observatory on Mauna Kea, Hawaii.
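    The kind of statistic involved can be pictured with a short synthetic example: two nearby sightlines share large-scale absorption structure plus their own noise, and one measures how correlated the flux fluctuations are. Everything below is invented data; the published analysis uses calibrated spectra and far more careful statistics.

```python
# Toy cross-sightline correlation of Lyman-alpha-forest flux fluctuations.

import numpy as np

rng = np.random.default_rng(42)
n_pix = 2000
shared = rng.normal(size=n_pix)                        # structure common to both sightlines
flux_a = 0.8 * shared + 0.6 * rng.normal(size=n_pix)   # sightline A plus its own noise
flux_b = 0.8 * shared + 0.6 * rng.normal(size=n_pix)   # sightline B plus its own noise

def cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized correlation of the flux fluctuations along two sightlines."""
    da, db = a - a.mean(), b - b.mean()
    return float(np.dot(da, db) / (np.linalg.norm(da) * np.linalg.norm(db)))

print("cross-sightline correlation:", round(cross_correlation(flux_a, flux_b), 3))
```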

    The astronomers compared their measurements to supercomputer models that simulate the formation of cosmic structures from the Big Bang to the present.

    “The input to our simulations are the laws of physics and the output is an artificial universe which can be directly compared to astronomical data. I was delighted to see that these new measurements agree with the well-established paradigm for how cosmic structures form,” said Jose Oñorbe, a post-doctoral researcher at the Max Planck Institute for Astronomy, who led the supercomputer simulation effort. On a single laptop, these complex calculations would have required almost a thousand years to complete, but modern supercomputers enabled the researchers to carry them out in just a few weeks.

    “One reason why these small-scale fluctuations are so interesting is that they encode information about the temperature of gas in the cosmic web just a few billion years after the Big Bang,” said Joseph Hennawi, a professor of physics at UC Santa Barbara who led the search for quasar pairs.

    Astronomers believe that the matter in the universe went through phase transitions billions of years ago, which dramatically changed its temperature. These phase transitions, known as cosmic reionization, occurred when the collective ultraviolet glow of all stars and quasars in the universe became intense enough to strip electrons off of the atoms in intergalactic space. How and when reionization occurred is one of the biggest open questions in the field of cosmology, and these new measurements provide important clues that will help narrate this chapter of the history of the universe.

    Telescopes in this study:

    Keck Observatory, Mauna Kea, Hawaii, USA

    ESO/VLT at Cerro Paranal, with an elevation of 2,635 metres (8,645 ft) above sea level

    Carnegie 6.5 meter Magellan Baade and Clay Telescopes located at Carnegie’s Las Campanas Observatory, Chile.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    UCO Lick Shane Telescope
    UCO Lick Shane Telescope interior
    Shane Telescope at UCO Lick Observatory, UCSC

    Lick Automated Planet Finder telescope, Mount Hamilton, CA, USA


    UC Santa Cruz campus
    The University of California, Santa Cruz, opened in 1965 and grew, one college at a time, to its current (2008-09) enrollment of more than 16,000 students. Undergraduates pursue more than 60 majors supervised by divisional deans of humanities, physical & biological sciences, social sciences, and arts. Graduate students work toward graduate certificates, master’s degrees, or doctoral degrees in more than 30 academic fields under the supervision of the divisional and graduate deans. The dean of the Jack Baskin School of Engineering oversees the campus’s undergraduate and graduate engineering programs.

    UCSC is the home base for the Lick Observatory.

    Lick Observatory’s Great Lick 91-centimeter (36-inch) telescope housed in the South (large) Dome of main building

    Search for extraterrestrial intelligence expands at Lick Observatory
    New instrument scans the sky for pulses of infrared light
    March 23, 2015
    By Hilary Lebow
    The NIROSETI instrument saw first light on the Nickel 1-meter Telescope at Lick Observatory on March 15, 2015. (Photo by Laurie Hatch) UCSC Lick Nickel telescope

    Astronomers are expanding the search for extraterrestrial intelligence into a new realm with detectors tuned to infrared light at UC’s Lick Observatory. A new instrument, called NIROSETI, will soon scour the sky for messages from other worlds.

    “Infrared light would be an excellent means of interstellar communication,” said Shelley Wright, an assistant professor of physics at UC San Diego who led the development of the new instrument while at the University of Toronto’s Dunlap Institute for Astronomy & Astrophysics.

    Wright worked on an earlier SETI project at Lick Observatory as a UC Santa Cruz undergraduate, when she built an optical instrument designed by UC Berkeley researchers. The infrared project takes advantage of new technology not available for that first optical search.

    Infrared light would be a good way for extraterrestrials to get our attention here on Earth, since pulses from a powerful infrared laser could outshine a star, if only for a billionth of a second. Interstellar gas and dust are almost transparent to near-infrared light, so these signals can be seen from great distances. It also takes less energy to send information using infrared signals than with visible light.

    UCSC alumna Shelley Wright, now an assistant professor of physics at UC San Diego, discusses the dichroic filter of the NIROSETI instrument. (Photo by Laurie Hatch)

    Frank Drake, professor emeritus of astronomy and astrophysics at UC Santa Cruz and director emeritus of the SETI Institute, said there are several additional advantages to a search in the infrared realm.

    “The signals are so strong that we only need a small telescope to receive them. Smaller telescopes can offer more observational time, and that is good because we need to search many stars for a chance of success,” said Drake.

    The only downside is that extraterrestrials would need to be transmitting their signals in our direction, Drake said, though he sees this as a positive side to that limitation. “If we get a signal from someone who’s aiming for us, it could mean there’s altruism in the universe. I like that idea. If they want to be friendly, that’s who we will find.”

    Scientists have searched the skies for radio signals for more than 50 years and expanded their search into the optical realm more than a decade ago. The idea of searching in the infrared is not a new one, but instruments capable of capturing pulses of infrared light only recently became available.

    “We had to wait,” Wright said. “I spent eight years waiting and watching as new technology emerged.”

    Now that technology has caught up, the search will extend to stars thousands of light years away, rather than just hundreds. NIROSETI, or Near-Infrared Optical Search for Extraterrestrial Intelligence, could also uncover new information about the physical universe.

    “This is the first time Earthlings have looked at the universe at infrared wavelengths with nanosecond time scales,” said Dan Werthimer, UC Berkeley SETI Project Director. “The instrument could discover new astrophysical phenomena, or perhaps answer the question of whether we are alone.”

    NIROSETI will also gather more information than previous optical detectors by recording levels of light over time so that patterns can be analyzed for potential signs of other civilizations.
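    In spirit, that analysis amounts to recording photon counts in nanosecond bins and flagging brief excursions far above the background. The sketch below is a generic illustration only; the rates, threshold, and injected pulses are made up, and the real NIROSETI pipeline will differ in its details.

```python
# Generic nanosecond pulse search: flag bins that stand far above background.

import numpy as np

rng = np.random.default_rng(7)
n_bins = 1_000_000                           # one million 1-ns bins = 1 ms of data
background_rate = 2.0                        # mean photon counts per bin (toy value)
counts = rng.poisson(background_rate, n_bins)
counts[rng.integers(0, n_bins, 3)] += 40     # inject three bright nanosecond pulses

threshold = background_rate + 8.0 * np.sqrt(background_rate)   # ~8-sigma cut
candidates = np.flatnonzero(counts > threshold)
print("candidate pulse bins:", candidates)
```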

    “Searching for intelligent life in the universe is both thrilling and somewhat unorthodox,” said Claire Max, director of UC Observatories and professor of astronomy and astrophysics at UC Santa Cruz. “Lick Observatory has already been the site of several previous SETI searches, so this is a very exciting addition to the current research taking place.”

    NIROSETI will be fully operational by early summer and will scan the skies several times a week on the Nickel 1-meter telescope at Lick Observatory, located on Mt. Hamilton east of San Jose.

    The NIROSETI team also includes Geoffrey Marcy and Andrew Siemion from UC Berkeley; Patrick Dorval, a Dunlap undergraduate, and Elliot Meyer, a Dunlap graduate student; and Richard Treffers of Starman Systems. Funding for the project comes from the generous support of Bill and Susan Bloomfield.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition


     
  • richardmitnick 10:08 am on April 26, 2017
    Tags: Building the Bridge to Exascale, ECP - Exascale Computing Project, Supercomputing

    From OLCF at ORNL: “Building the Bridge to Exascale” 

    i1

    Oak Ridge National Laboratory

    OLCF

    April 18, 2017 [Where was this hiding?]
    Katie Elyce Jones

    Building an exascale computer—a machine that could solve complex science problems at least 50 times faster than today’s leading supercomputers—is a national effort.

    To oversee the rapid research and development (R&D) of an exascale system by 2023, the US Department of Energy (DOE) created the Exascale Computing Project (ECP) last year. The project brings together experts in high-performance computing from six DOE laboratories with the nation’s most powerful supercomputers—including Oak Ridge, Argonne, Lawrence Berkeley, Lawrence Livermore, Los Alamos, and Sandia—and project members work closely with computing facility staff from the member laboratories.

    ORNL IBM Summit supercomputer depiction.

    At the Exascale Computing Project’s (ECP’s) annual meeting in February 2017, Oak Ridge Leadership Computing Facility (OLCF) staff discussed OLCF resources that could be leveraged for ECP research and development, including the facility’s next flagship supercomputer, Summit, expected to go online in 2018.

    At the first ECP annual meeting, held January 29–February 3 in Knoxville, Tennessee, about 450 project members convened to discuss collaboration in breakout sessions focused on project organization and upcoming R&D milestones for applications, software, hardware, and exascale systems focus areas. During facility-focused sessions, senior staff from the Oak Ridge Leadership Computing Facility (OLCF) met with ECP members to discuss opportunities for the project to use current petascale supercomputers, test beds, prototypes, and other facility resources for exascale R&D. The OLCF is a DOE Office of Science User Facility located at DOE’s Oak Ridge National Laboratory (ORNL).

    “The ECP’s fundamental responsibilities are to provide R&D to build exascale machines more efficiently and to prepare the applications and software that will run on them,” said OLCF Deputy Project Director Justin Whitt. “The facilities’ responsibilities are to acquire, deploy, and operate the machines. We are currently putting advanced test beds and prototypes in place to evaluate technologies and enable R&D efforts like those in the ECP.”

    ORNL has a unique connection to the ECP. The Tennessee-based laboratory is the location of the project office that manages collaboration within the ECP and among its facility partners. ORNL’s Laboratory Director Thom Mason delivered the opening talk at the conference, highlighting the need for coordination in a project of this scope.

    On behalf of facility staff, Mark Fahey, director of operations at the Argonne Leadership Computing Facility, presented the latest delivery and deployment plans for upcoming computing resources during a plenary session. From the OLCF, Project Director Buddy Bland and Director of Science Jack Wells provided a timeline for the availability of Summit, OLCF’s next petascale supercomputer, which is expected to go online in 2018; it will be at least 5 times more powerful than the OLCF’s 27-petaflop Titan supercomputer.

    ORNL Cray XK7 Titan Supercomputer.

    “Exascale hardware won’t be around for several more years,” Wells said. “The ECP will need access to Titan, Summit, and other leadership computers to do the work that gets us to exascale.”

    Wells said he was able to highlight the spring 2017 call for Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, proposals, which will give 2-year projects the first opportunity for computing time on Summit. OLCF staff also introduced a handful of computing architecture test beds—including the developmental environment for Summit known as Summitdev, NVIDIA’s deep learning and accelerated analytics system DGX-1, an experimental cluster of ARM 64-bit compute nodes, and a Cray XC40 cluster of 168 nodes known as Percival—that are now available for OLCF users.

    In addition to leveraging facility resources for R&D, the ECP must understand the future needs of facilities to design an exascale system that is ready for rigorous computational science simulations. Facilities staff can offer insight about the level of performance researchers will expect from science applications on exascale systems and estimate the amount of space and electrical power that will be available in the 2023 timeframe.

    “Getting to capable exascale systems will require careful coordination between the ECP and the user facilities,” Whitt said.

    One important collaboration so far was the development of a request for information, or RFI, for exascale R&D that the ECP released in February to industry vendors. The RFI enables the ECP to evaluate potential software and hardware technologies for exascale systems—a step in the R&D process that facilities often undertake. Facilities will later release requests for proposals when they are ready to begin building exascale systems.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    i2

    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to processing an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.
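    The division of labour can be felt even in a high-level language: array-at-a-time arithmetic (the data-parallel style GPUs are built for) versus element-at-a-time control flow (the style CPUs tolerate well). This is only an analogy in Python, not Titan's programming model, and the timings are indicative rather than meaningful benchmarks.

```python
# Data-parallel versus sequential styles, illustrated with NumPy and a loop.

import time
import numpy as np

x = np.random.rand(2_000_000)

t0 = time.perf_counter()
y_vectorized = 3.0 * x * x + 2.0 * x + 1.0        # one simple operation over many elements
t1 = time.perf_counter()

y_looped = np.empty_like(x)
for i in range(x.size):                           # element at a time, heavy per-step overhead
    y_looped[i] = 3.0 * x[i] * x[i] + 2.0 * x[i] + 1.0
t2 = time.perf_counter()

print(f"array-at-a-time: {t1 - t0:.3f} s, element-at-a-time: {t2 - t1:.3f} s")
```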

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 9:54 am on April 23, 2017
    Tags: Computer modelling, Modified Newtonian Dynamics (MOND), Simulating galaxies, Supercomputing

    From Durham: “Simulated galaxies provide fresh evidence of dark matter” 

    Durham U bloc

    Durham University

    21 April 2017
    No writer credit

    A simulated galaxy is pictured, showing the main ingredients that make up a galaxy: the stars (blue), the gas from which the stars are born (red), and the dark matter halo that surrounds the galaxy (light grey). No image credit.

    Further evidence of the existence of dark matter – the mysterious substance that is believed to hold the Universe together – has been produced by cosmologists at Durham University.

    Using sophisticated computer modelling techniques, the research team simulated the formation of galaxies in the presence of dark matter and were able to demonstrate that their size and rotation speed were linked to their brightness in a similar way to observations made by astronomers.

    One of the simulations is pictured, showing the main ingredients that make up a galaxy: the stars (blue), the gas from which the stars are born (red), and the dark matter halo that surrounds the galaxy (light grey).

    Alternative theories

    Until now, theories of dark matter have predicted a much more complex relationship between the size, mass and brightness (or luminosity) of galaxies than is actually observed, which has led dark matter sceptics to propose alternative theories that seem a better fit with what we see.

    The research, led by Dr Aaron Ludlow of the Institute for Computational Cosmology, is published in the academic journal Physical Review Letters.

    Most cosmologists believe that more than 80 per cent of the total mass of the Universe is made up of dark matter – a mysterious particle that has so far not been detected but that explains many of the properties of the Universe, such as the cosmic microwave background measured by the Planck satellite.

    CMB per ESA/Planck

    ESA/Planck

    Convincing explanations

    Alternative theories include Modified Newtonian Dynamics, or MOND. While MOND does not explain some observations of the Universe as convincingly as dark matter theory, it has, until now, provided a simpler description of the coupling between brightness and rotation velocity observed in galaxies of all shapes and sizes.

    The Durham team used powerful supercomputers to model the formation of galaxies of various sizes, compressing billions of years of evolution into a few weeks, in order to demonstrate that the existence of dark matter is consistent with the observed relationship between mass, size and luminosity of galaxies.
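    For a feel for what linking rotation speed to mass and brightness means in practice, the sketch below is a minimal illustration (not the Durham team's code) of the basic physics such simulations resolve: a galaxy's circular velocity at radius r follows from the total mass enclosed within r, stars, gas, and dark matter halo combined, and it is rotation curves like this that get compared against the observed scaling with luminosity. The mass values are hypothetical, chosen only to be Milky Way-like.

```python
import math

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def circular_velocity(enclosed_mass_msun, radius_kpc):
    """v_c = sqrt(G * M(<r) / r): rotation speed set by all the mass inside r."""
    return math.sqrt(G * enclosed_mass_msun / radius_kpc)

# Hypothetical mass budget at r = 20 kpc for a Milky Way-like galaxy:
stars = 5e10        # stellar mass in solar masses
gas = 1e10          # cold gas
dark_matter = 2e11  # dark matter enclosed within 20 kpc

v_baryons_only = circular_velocity(stars + gas, 20.0)
v_with_halo = circular_velocity(stars + gas + dark_matter, 20.0)

print(f"rotation speed, baryons only:     {v_baryons_only:.0f} km/s")
print(f"rotation speed, with dark matter: {v_with_halo:.0f} km/s")
```

    The jump from roughly 115 km/s with baryons alone to roughly 235 km/s once the halo is included is precisely the kind of dark matter signature the simulated galaxies reproduce.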

    Long-standing problem resolved

    Dr Ludlow said: “This solves a long-standing problem that has troubled the dark matter model for over a decade. The dark matter hypothesis remains the main explanation for the source of the gravity that binds galaxies. Although the particles are difficult to detect, physicists must persevere.”

    Durham University collaborated on the project with Leiden University in the Netherlands, Liverpool John Moores University in England, and the University of Victoria in Canada. The research was funded by the European Research Council, the Science and Technology Facilities Council, the Netherlands Organisation for Scientific Research, COFUND, and The Royal Society.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Durham U campus

    Durham University is distinctive – a residential collegiate university with long traditions and modern values. We seek the highest distinction in research and scholarship and are committed to excellence in all aspects of education and transmission of knowledge. Our research and scholarship affect every continent. We are proud to be an international scholarly community which reflects the ambitions of cultures from around the world. We promote individual participation, providing a rounded education in which students, staff and alumni gain both the academic and the personal skills required to flourish.

     
  • richardmitnick 6:01 am on March 30, 2017 Permalink | Reply
    Tags: , , CARC - Center for Advanced Research Computing, , Supercomputing, U New Mexico   

    From LANL: “LANL donation adding to UNM supercomputing power” 

    LANL bloc

    Los Alamos National Laboratory

    1

    University of New Mexico

    A new computing system, Wheeler, to be donated to The University of New Mexico Center for Advanced Research Computing (CARC) by Los Alamos National Laboratory (LANL), will put the “super” in supercomputing.

    1
    The new Cray system is nine times more powerful than the combined computing power of the four machines it is replacing, said CARC interim director Patrick Bridges.

    The machine was acquired from LANL through the National Science Foundation-sponsored PRObE project, which is run by the New Mexico Consortium (NMC). The NMC, comprising UNM, New Mexico State University, and New Mexico Tech, engages universities and industry in scientific research that serves the nation’s interest and increases the role of LANL in science, education, and economic development.

    The new system includes:

    Over 500 nodes, each featuring two quad-core 2.66 GHz Intel Xeon 5550 CPUs and 24 GB of memory
    Over 4,000 cores and 12 terabytes of RAM
    45-50 trillion floating-point operations per second (45-50 teraflops)
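    Those headline figures are roughly self-consistent. The back-of-the-envelope check below assumes a nominal node count just above 500 and four double-precision operations per core per clock cycle, typical for Nehalem-era Xeons; both are assumptions for illustration, not quoted specifications.

```python
# Rough sanity check of the quoted specs (all inputs are assumptions for illustration)
nodes = 525                 # "over 500 nodes" -- assumed figure
cores_per_node = 2 * 4      # two quad-core Xeon 5550 CPUs per node
clock_ghz = 2.66
flops_per_cycle = 4         # double-precision operations per core per cycle (assumed)
ram_per_node_gb = 24

total_cores = nodes * cores_per_node
total_ram_tb = nodes * ram_per_node_gb / 1024
peak_teraflops = total_cores * clock_ghz * flops_per_cycle / 1000

print(f"cores: {total_cores}")            # ~4,200 -- "over 4,000 cores"
print(f"RAM:   {total_ram_tb:.1f} TB")    # ~12 TB
print(f"peak:  {peak_teraflops:.0f} TF")  # ~45 teraflops, in the quoted 45-50 range
```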

    Additional memory, storage, and specialized compute facilities to augment this system are also being planned.

    “This is roughly 20 percent more powerful than any other remaining system at UNM,” Bridges said. “Not only will the new machine be easier to administer and maintain, but also easier for students, faculty, and staff to use. The machine will provide cutting-edge computation for users and will be the fastest of all the machines.”

    Andree Jacobson, chief information officer of the NMC, says he is pleased that the donation will benefit educational efforts.

    “Through a very successful collaboration between the National Science Foundation, New Mexico Consortium, and the Los Alamos National Laboratory called PRObE, we’ve been able to repurpose this retired machine to significantly improve the research computing environment in New Mexico,” he said. “It is truly wonderful to see old computers get a new life, and also an outstanding opportunity to assist the New Mexico universities.”

    To make space for the new machine, the Metropolis, Pequeña, and Ulam systems at UNM will be phased out over the next couple of months. As they are taken offline, the new machine will be installed and brought online. Users of existing systems and their research will be transitioned to the new machine as part of this process.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Los Alamos National Laboratory’s mission is to solve national security challenges through scientific excellence.

    LANL campus

    Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, The Babcock & Wilcox Company, and URS for the Department of Energy’s National Nuclear Security Administration.

    Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.

    Operated by Los Alamos National Security, LLC for the U.S. Dept. of Energy’s NNSA

    DOE Main

    NNSA

     
  • richardmitnick 1:13 pm on March 29, 2017 Permalink | Reply
    Tags: , Supercomputing, , , What's next for Titan?   

    From OLCF via TheNextPlatform: “Scaling Deep Learning on an 18,000 GPU Supercomputer” 

    i1

    Oak Ridge National Laboratory

    OLCF

    2

    TheNextPlatform

    March 28, 2017
    Nicole Hemsoth


    ORNL Cray XK7 Titan Supercomputer

    It is one thing to scale a neural network on a single GPU or even a single system with four or eight GPUs. But it is another thing entirely to push it across thousands of nodes. Most centers doing deep learning have relatively small GPU clusters for training and certainly nothing on the order of the Titan supercomputer at Oak Ridge National Laboratory.

    In the past, the emphasis on machine learning scalability has often been on node counts for single-model runs. This is useful for some applications, but as neural networks become more integrated into existing workflows, including those in HPC, there is another way to consider scalability. Interestingly, the lesson comes from an HPC application area like weather modeling where, instead of one monolithic model to predict climate, an ensemble of forecasts run in parallel on a massive supercomputer is meshed together for the best result. Using this ensemble method on deep neural networks allows for scalability across thousands of nodes, with the end result derived from an average of the ensemble, something that is acceptable in an area that does not require the kind of precision (in more ways than one) that some HPC calculations do.
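    A minimal sketch of that ensemble idea, assuming an MPI environment via mpi4py: each rank trains its own network independently, all ranks evaluate the same held-out data, and the per-rank predictions are averaged across the ensemble at the end. The train_model placeholder stands in for the actual Caffe networks used at ORNL.

```python
# Sketch of ensemble-style scaling: one independently trained model per MPI rank,
# with predictions averaged across ranks at the end.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def train_model(seed):
    """Placeholder: returns a 'model' whose behaviour depends on its random seed."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=10)
    return lambda x: x @ weights

model = train_model(seed=rank)        # each rank trains its own ensemble member
x_test = np.ones((100, 10))           # the same held-out data on every rank
local_pred = model(x_test)

# Average the predictions of all ensemble members across ranks.
summed = np.zeros_like(local_pred)
comm.Allreduce(local_pred, summed, op=MPI.SUM)
ensemble_pred = summed / size

if rank == 0:
    print("ensemble prediction shape:", ensemble_pred.shape)
```

    Because each member trains independently, the only required communication is the final averaging step, which is what lets the pattern scale to thousands of nodes.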

    This approach has been used on the Titan supercomputer at Oak Ridge, which is a powerhouse for deep learning training given its high GPU count. Titan’s 18,688 Tesla K20X GPUs have proven useful for a large number of scientific simulations and are now pulling double duty on deep learning frameworks, including Caffe, to boost the capabilities of HPC simulations (classification, noise filtering, and so on). The lab’s next-generation supercomputer, the future “Summit” machine (expected to be operational at the end of 2017), will provide even more GPU power with the “Volta” generation Tesla graphics coprocessors from Nvidia, high-bandwidth memory, NVLink for faster data movement, and IBM Power9 CPUs.


    ORNL IBM Summit supercomputer depiction

    ORNL researchers used this ensemble approach to neural networks and were able to stretch them across all of the GPUs in the machine. This is a notable feat, even for the types of large simulations that are built to run on big supercomputers. What is interesting is that while the frameworks might come from the deep learning world (Caffe in ORNL’s case), the node-to-node communication is rooted in HPC. As we have described before, MPI is still the best method out there for fast communication across InfiniBand-connected nodes, and, like researchers elsewhere, ORNL has adapted it to deep learning at scale.

    Right now, the team is using each individual node to train its own deep learning network, but all of those different networks need the same data if they are training from the same set. The question is how to feed that same data to over 18,000 different GPUs at almost the same time, on a system that wasn’t designed with that in mind. The answer is a custom MPI-based layer that can divvy up the data and distribute it. With the coming Summit supercomputer, the successor to Titan that will sport six Volta GPUs per node, the other problem is multi-GPU scaling, something application teams across HPC are tackling as well.
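    The article does not publish ORNL’s custom layer, but the pattern it describes, reading data once and fanning it out over MPI rather than having every node hit the filesystem, looks roughly like the hedged sketch below. The load path is stubbed out with random data; in practice it would be a disk or burst-buffer read on one rank.

```python
# Sketch of MPI-based data fan-out: rank 0 obtains a batch once, then broadcasts
# it so every GPU node trains on identical data. Not ORNL's actual layer.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

BATCH_SHAPE = (256, 3, 224, 224)   # images x channels x height x width

if rank == 0:
    # In the real setting this would be a disk (or burst-buffer) read on one rank,
    # e.g. batch = load_training_batch()  -- a hypothetical helper, not ORNL's code.
    batch = np.random.rand(*BATCH_SHAPE).astype(np.float32)
else:
    batch = np.empty(BATCH_SHAPE, dtype=np.float32)

comm.Bcast(batch, root=0)   # every rank now holds an identical training batch

# Each rank would now hand `batch` to the deep learning framework on its GPU.
if rank == 0:
    print("broadcast", batch.nbytes / 1e6, "MB to", comm.Get_size(), "ranks")
```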

    Ultimately, the success of MPI for deep learning at this scale will depend on how many messages the system and MPI can handle, since results must be exchanged between nodes on top of thousands of synchronous updates per training iteration. Each iteration will cause a number of neurons within the network to be updated, so if the network is spread across multiple nodes, all of those updates have to be communicated. That is a large enough task on its own, but there is also the delay of the data that needs to be transferred to and from disk (although a burst buffer can be of use here). “There are also new ways of looking at MPI’s guarantees for robustness, which limits certain communication patterns. HPC needs this, but neural networks are more fault-tolerant than many HPC applications,” Patton says. “Going forward, the same I/O is being used to communicate between the nodes and from disk, so when the datasets are large enough the bandwidth could quickly dwindle.”
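    Those per-iteration synchronous updates are the familiar data-parallel training pattern: every node computes gradients on its shard of the data, the gradients are summed across all nodes, and only then are the weights updated. A framework-agnostic sketch of one such step, with the backward pass stubbed out, gives a sense of the communication involved.

```python
# One synchronous update step of data-parallel training across MPI ranks.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()

weights = np.zeros(1_000_000, dtype=np.float32)   # toy parameter vector
learning_rate = 0.01

def compute_local_gradient(w):
    """Stand-in for a framework's backward pass on this rank's shard of data."""
    return np.random.rand(w.size).astype(np.float32)

local_grad = compute_local_gradient(weights)
global_grad = np.empty_like(local_grad)

# This all-reduce is the per-iteration synchronization described above; its cost
# grows with both the size of the network and the number of nodes involved.
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
weights -= learning_rate * (global_grad / size)   # average gradient, then update
```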

    In addition to their work scaling deep neural networks across Titan, the team has also developed a method of automatically designing neural networks for use across multiple datasets. Before, a network designed for image recognition could not be reused for speech, but their own auto-designing code has scaled beyond 5,000 (single GPU) nodes on Titan with up to 80 percent accuracy.

    “The algorithm is evolutionary, so it can take design parameters of a deep learning network and evolve those automatically,” Robert Patton, a computational analytics scientist at Oak Ridge, tells The Next Platform. “We can take a dataset that no one has looked at before and automatically generate a network that works well on that dataset.”
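    The evolutionary search itself is not spelled out in the article, but such an approach generally follows the loop sketched below: keep a population of candidate network designs, score each one (the evaluate stub here stands in for training a candidate on a GPU node and reporting validation accuracy), keep the fittest, and mutate them to form the next generation.

```python
# Toy evolutionary search over network design parameters (illustrative, not ORNL's code).
import random

def random_design():
    """A candidate set of network design parameters."""
    return {"layers": random.randint(2, 10),
            "filters": random.choice([16, 32, 64, 128]),
            "learning_rate": 10 ** random.uniform(-4, -1)}

def mutate(design):
    """Re-roll one randomly chosen parameter of a parent design."""
    child = dict(design)
    key = random.choice(list(child))
    child[key] = random_design()[key]
    return child

def evaluate(design):
    """Dummy fitness: stands in for 'train this network and return validation accuracy'."""
    return random.random()

population = [random_design() for _ in range(20)]
for generation in range(10):
    scored = sorted(population, key=evaluate, reverse=True)
    parents = scored[:5]                                   # keep the fittest designs
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print("best design found:", max(population, key=evaluate))
```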

    Since developing the auto-generating neural networks, Oak Ridge researchers have been working with key application groups that can benefit from the noise filtering and data classification that large-scale neural nets can provide. These include high-energy particle physics, where they are working with Fermi National Lab to classify neutrinos and subatomic particles. “Simulations produce so much data and it’s too hard to go through it all or even keep it all on disk,” says Patton. “We want to identify things that are interesting in data in real time in a simulation so we can snapshot parts of the data in high resolution and go back later.”

    It is with an eye on “Summit” and the challenges of programming the system that teams at Oak Ridge are swiftly figuring out where deep learning fits into existing HPC workflows and how to maximize the hardware they’ll have on hand.

    “We started taking notice of deep learning in 2012 and things really took off then, in large part because of the move of those algorithms to the GPU, which allowed researchers to speed the development process,” Patton explains. “There has since been a lot of progress made toward tackling some of the hardest problems and by 2014, we started seeing that if one GPU is good for deep learning, what could we do with 18,000 of them on the Titan supercomputer.”

    While large supercomputers like Titan have the hybrid GPU/CPU horsepower for deep learning at scale, they are not built for these kinds of workloads. Some hardware changes in Summit will go a long way toward speeding through some bottlenecks, but the right combination of hardware might include some non-standard accelerators like neuromorphic devices and other chips to bolster training or inference. “Right now, if we were to use machine learning in real-time for HPC applications, we still have the problem of training. We are loading the data from disk and the processing can’t continue until the data comes off disk, so we are excited for Summit, which will give us the ability to get the data off disk faster in the nodes, which will be thicker, denser and have more memory and storage,” Patton says.

    “It takes a lot of computation on expensive HPC systems to find the distinguishing features in all the noise,” says Patton. “The problem is, we are throwing away a lot of good data. For a field like materials science, for instance, it’s not unlikely for them to pitch more than 90 percent of their data because it’s so noisy and they lack the tools to deal with it.” He says this is also why his teams are looking at integrating novel architectures to offload to, including neuromorphic and quantum computers—something we will talk about more later this week in an interview with ORNL collaborator, Thomas Potok.

    [I WANT SOMEONE TO TELL ME WHAT HAPPENS TO TITAN NEXT.]

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    i2

    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid-architecture systems, combining graphics processing units (GPUs) with the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to processing an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 12:31 pm on March 21, 2017 Permalink | Reply
    Tags: , Breaking the supermassive black hole speed limit, , Supercomputing   

    From LANL: “Breaking the supermassive black hole speed limit” 

    LANL bloc

    Los Alamos National Laboratory

    March 21, 2017
    Kevin Roark
    Communications Office
    (505) 665-9202
    knroark@lanl.gov

    1
    Quasar growing under intense accretion streams. No image credit

    A new computer simulation helps explain the existence of puzzling supermassive black holes observed in the early universe. The simulation is based on a computer code used to understand the coupling of radiation and certain materials.

    “Supermassive black holes have a speed limit that governs how fast and how large they can grow,” said Joseph Smidt of the Theoretical Design Division at Los Alamos National Laboratory. “The relatively recent discovery of supermassive black holes in the early development of the universe raised a fundamental question: how did they get so big so fast?”

    Using computer codes developed at Los Alamos for modeling the interaction of matter and radiation related to the Lab’s stockpile stewardship mission, Smidt and colleagues created a simulation of collapsing stars that resulted in supermassive black holes forming in less time than expected, cosmologically speaking, in the first billion years of the universe.

    “It turns out that while supermassive black holes have a growth speed limit, certain types of massive stars do not,” said Smidt. “We asked, what if we could find a place where stars could grow much faster, perhaps to the size of many thousands of suns; could they form supermassive black holes in less time?”
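    The “speed limit” Smidt mentions is usually identified with Eddington-limited accretion, where radiation pressure caps how quickly a black hole can feed. A back-of-the-envelope estimate using standard textbook constants (this is not the Los Alamos radiation-hydrodynamics code) shows why quasars seen in the first billion years are such a puzzle, and why starting from far more massive stellar seeds helps.

```python
# Why Eddington-limited growth makes billion-solar-mass quasars at z ~ 7 puzzling.
# Standard constants in CGS; a textbook estimate, not the LANL simulation.
import math

G = 6.674e-8            # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10            # speed of light [cm/s]
m_p = 1.673e-24         # proton mass [g]
sigma_T = 6.652e-25     # Thomson cross-section [cm^2]
epsilon = 0.1           # assumed radiative efficiency
yr = 3.156e7            # seconds per year

# Salpeter (e-folding) time for Eddington-limited accretion
t_salpeter = (epsilon / (1 - epsilon)) * sigma_T * c / (4 * math.pi * G * m_p)

seed_mass, final_mass = 1e2, 1e9            # solar masses: stellar seed -> quasar
e_folds = math.log(final_mass / seed_mass)
growth_time_myr = e_folds * t_salpeter / yr / 1e6

print(f"e-folding time: {t_salpeter / yr / 1e6:.0f} Myr")
print(f"time to grow 100 -> 1e9 Msun: {growth_time_myr:.0f} Myr")
# ~800 Myr -- comparable to the age of the universe when these quasars are seen,
# which is why much heavier stellar seeds shorten the required growth time.
```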

    It turns out the Los Alamos computer model not only confirms the possibility of speedy supermassive black hole formation, but also fits many other phenomena of black holes that are routinely observed by astrophysicists. The research shows that the simulated supermassive black holes also interact with galaxies in the same way that is observed in nature, matching star formation rates, galaxy density profiles, and thermal and ionization rates in gases.

    “This was largely unexpected,” said Smidt. “I thought this idea of growing a massive star in a special configuration and forming a black hole with the right kind of masses was something we could approximate, but to see the black hole inducing star formation and driving the dynamics in ways that we’ve observed in nature was really icing on the cake.”

    A key mission area at Los Alamos National Laboratory is understanding how radiation interacts with certain materials. Because supermassive black holes produce huge quantities of hot radiation, their behavior helps test computer codes designed to model the coupling of radiation and matter. The codes are used, along with large- and small-scale experiments, to assure the safety, security, and effectiveness of the U.S. nuclear deterrent.

    “We’ve gotten to a point at Los Alamos,” said Smidt, “with the computer codes we’re using, the physics understanding, and the supercomputing facilities, that we can do detailed calculations that replicate some of the forces driving the evolution of the Universe.”

    Research paper available at https://arxiv.org/pdf/1703.00449.pdf

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Los Alamos National Laboratory’s mission is to solve national security challenges through scientific excellence.

    LANL campus
    Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, The Babcock & Wilcox Company, and URS for the Department of Energy’s National Nuclear Security Administration.

    Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.

    Operated by Los Alamos National Security, LLC for the U.S. Dept. of Energy’s NNSA

    DOE Main

    NNSA

     