Tagged: Supercomputing

  • richardmitnick 6:07 pm on June 20, 2016
    Tags: Supercomputing

    Rutgers New Supercomputer Ranked #2 among Big Ten Universities, #8 among U.S. Academic Institutions by the Top500 List 

    The updated Top 500 ranking of the world’s most powerful supercomputers issued today ranks Rutgers’ new academic supercomputer #2 among Big Ten universities, #8 among U.S. academic institutions, #49 among academic institutions globally, and #165 among all supercomputers worldwide.

    The Top 500 project provides a reliable basis for tracking and detecting trends in high-performance computing. Twice each year it assembles and releases a list of the sites operating the 500 most powerful computer systems in the world.

    Rutgers’ new supercomputer, which is named “Caliburn,” is the most powerful system in the state. It was built with a $10 million award to Rutgers from the New Jersey Higher Education Leasing Fund. HighPoint Solutions of Bridgewater, N.J., chosen through a competitive bidding process, is the lead contractor; the system manufacturer and integrator is Super Micro Computer Inc. of San Jose, Calif.

    Source: Rutgers New Supercomputer Ranked #2 among Big Ten Universities, #8 among U.S. Academic Institutions by the Top500 List


     
  • richardmitnick 11:14 am on June 3, 2016
    Tags: Supercomputing

    From ALCF: “3D simulations illuminate supernova explosions” 

    ANL Lab
    News from Argonne National Laboratory

    June 1, 2016
    Jim Collins

    Top: This visualization is a volume rendering of a massive star’s radial velocity. In previous 1D simulations, none of the structure seen here would be present.

    Bottom: Magnetohydrodynamic turbulence powered by neutrino-driven convection behind the stalled shock of a core-collapse supernova simulation. This simulation shows that the presence of rotation and weak magnetic fields dramatically impacts the development of the supernova mechanism as compared to non-rotating, non-magnetic stars. The nascent neutron star is just barely visible in the center below the turbulent convection.

    Credit:
    Sean M. Couch, Michigan State University

    Researchers from Michigan State University are using Mira to perform large-scale 3D simulations of the final moments of a supernova’s life cycle.

    MIRA, the IBM Blue Gene/Q supercomputer at the Argonne Leadership Computing Facility

    While the 3D simulation approach is still in its infancy, early results indicate that the models are providing a clearer picture of the mechanisms that drive supernova explosions than ever before.

    In the landmark television series “Cosmos,” astronomer Carl Sagan famously proclaimed, “we are made of star stuff,” in reference to the ubiquitous impact of supernovas.

    At the end of their life cycles, these massive stars explode in spectacular fashion, scattering their guts—which consist of carbon, iron, and basically all other natural elements—across the cosmos. These elements go on to form new stars, solar systems, and everything else in the universe (including the building blocks for life on Earth).

    Despite this fundamental role in cosmology, the mechanisms that drive supernova explosions are still not well understood.

    “If we want to understand the chemical evolution of the entire universe and how the stuff that we’re made of was processed and distributed throughout the universe, we have to understand the supernova mechanism,” said Sean Couch, assistant professor of physics and astronomy at Michigan State University.

    To shed light on this complex phenomenon, Couch is leading an effort to use Mira, the Argonne Leadership Computing Facility’s (ALCF’s) 10-petaflops supercomputer, to carry out some of the largest and most detailed 3D simulations ever performed of core-collapse supernovas. The ALCF is a U.S. Department of Energy (DOE) Office of Science User Facility.

    After millions of years of burning ever-heavier elements, these super-giant stars (at least eight solar masses, or eight times the mass of the sun) eventually run out of nuclear fuel and develop an iron core. No longer able to support themselves against their own immense gravitational pull, they start to collapse. But a process, not yet fully understood, intervenes that reverses the collapse and causes the star to explode.

    “What theorists like me are trying to understand is that in-between step,” Couch said. “How do we go from this collapsing iron core to an explosion?”

    Through his work at the ALCF, Couch and his team are developing and demonstrating a high-fidelity 3D simulation approach that is providing a more realistic look at this “in-between step” than previous supernova simulations.

    While this 3D method is still in its infancy, Couch’s early results have been promising. In 2015, his team published a paper* in the Astrophysical Journal Letters, detailing their 3D simulations of the final three minutes of iron core growth in a 15 solar-mass star. They found that more accurate representations of the star’s structure and the motion generated by turbulent convection (measured at several hundred kilometers per second) play a substantial role at the point of collapse.

    “Not surprisingly, we’re showing that more realistic initial conditions have a significant impact on the results,” Couch said.

    Adding another dimension

    Despite the fact that stars rotate, have magnetic fields, and are not perfect spheres, most 1D and 2D supernova simulations to date have modeled non-rotating, non-magnetic, spherically symmetric stars. Scientists were forced to take this simplified approach because modeling supernovas is an extremely computationally demanding task. Such simulations involve highly complex multiphysics calculations and extreme timescales (the stars evolve over millions of years, yet the supernova mechanism occurs in a second).

    According to Couch, working with unrealistic initial conditions has led to difficulties in triggering robust and consistent explosions in simulations—a long-standing challenge in computational astrophysics.

    However, thanks to recent advances in computing hardware and software, Couch and his peers are making significant strides toward more accurate supernova simulations by employing the 3D approach.

    The emergence of petascale supercomputers like Mira has made it possible to include high-fidelity treatments of rotation, magnetic fields, and other complex physics processes that were not feasible in the past.

    “Generally when we’ve done these kinds of simulations in the past, we’ve ignored the fact that magnetic fields exist in the universe because when you add them into a calculation, it increases the complexity by about a factor of two,” Couch said. “But with our simulations on Mira, we’re finding that magnetic fields can add a little extra kick at just the right time to help push the supernova toward explosion.”

    Advances to the team’s open-source FLASH hydrodynamics code have also aided simulation efforts. Couch, a co-developer of FLASH, was involved in porting and optimizing the code for Mira as part of the ALCF’s Early Science Program in 2012. For his current project, Couch continues to collaborate with ALCF computational scientists to enhance the performance, scalability, and capabilities of FLASH for certain tasks. For example, ALCF staff modified the code that writes Hierarchical Data Format, version 5 (HDF5) files, which sped up I/O performance by about a factor of 10.
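
    As an illustration of the kind of I/O tuning described above, the sketch below writes a simulation field to a chunked HDF5 file with h5py. It is not FLASH’s actual output code; the dataset name, array shape, and chunk size are hypothetical.

    ```python
    # Illustrative sketch only: chunked HDF5 output with h5py.
    # The dataset name, shape, and chunk size below are hypothetical,
    # not taken from FLASH or the ALCF modifications described above.
    import numpy as np
    import h5py

    nx = ny = nz = 128                    # hypothetical local block size
    density = np.random.rand(nx, ny, nz)  # stand-in for a simulation field

    with h5py.File("checkpoint_0000.h5", "w") as f:
        # Chunk shape strongly affects write throughput; a poor choice can
        # cost an order of magnitude, which is the kind of gap I/O tuning closes.
        dset = f.create_dataset(
            "density",
            shape=density.shape,
            dtype="f8",
            chunks=(32, 32, 32),          # hypothetical chunk shape
        )
        dset[...] = density
        dset.attrs["time"] = 0.0          # example metadata attribute
    ```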

    But even with today’s high-performance computing hardware and software, it is not yet feasible to include high-fidelity treatments of all the relevant physics in a single simulation; that would require a future exascale system, Couch said. For their ongoing simulations, Couch and his team have been forced to make a number of approximations, including a reduced nuclear network and simulating only one eighth of the full star.

    “Our simulations are only a first step toward truly realistic 3D simulations of supernovae,” Couch said. “But they are already providing a proof-of-principle that the final minutes of a massive star’s evolution can and should be simulated in 3D.”

    The team’s results were published in Astrophysical Journal Letters in a 2015 paper titled “The Three-Dimensional Evolution to Core Collapse of a Massive Star.” The study also used computing resources at the Texas Advanced Computing Center at the University of Texas at Austin.

    Couch’s supernova research began at the ALCF with a Director’s Discretionary award and now continues with computing time awarded through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. This work is being funded by the DOE Office of Science and the National Science Foundation.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition
    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit http://science.energy.gov/user-facilities/basic-energy-sciences/.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 10:14 am on April 5, 2016
    Tags: Cosmic Origins, Supercomputing

    From Science Node: “Toward a realistic cosmic evolution” 

    Science Node bloc
    Science Node

    Courtesy Cosmology and Astroparticle Physics Group, University of Geneva, and the Swiss National Supercomputing Center (CSCS).

    23 Mar, 2016 [Just popped up]
    Simone Ulmer

    Scientists exploring the universe have at their disposal research facilities such as the Laser Interferometer Gravitational-Wave Observatory (LIGO) — which recently achieved the breakthrough detection of gravitational waves — as well as telescopes and space probes.

    MIT/Caltech Advanced aLIGO Hanford Washington USA installation

    ESO/VLT

    Keck Observatory, Mauna Kea, Hawaii, USA

    NASA/ESA Hubble Telescope

    NASA/Spitzer Telescope

    Considering that the Big Bang does not lend itself to experimental re-enactment, researchers must use supercomputers like Piz Daint at the Swiss National Supercomputing Center (CSCS) to simulate the evolution of cosmic structures.

    Piz Daint supercomputer of the Swiss National Supercomputing Center (CSCS)


    Access mp4 video here.
    The Piz Daint supercomputer calculated 4,096³ grid points and 67 billion particles to help scientists visualize these gravitational waves. Courtesy Cosmology and Astroparticle Physics Group University of Geneva and the Swiss National Supercomputing Center.

    This entails modeling a complex, dynamical system that acts at vastly different scales of magnitude and contains a gigantic number of particles. With the help of such simulations, researchers can determine the movement of those particles and hence their formation into structures under the influence of gravitational forces at cosmological scales.

    To date, simulations like these have been entirely based on Newton’s law of gravitation. Yet this is formulated for classical physics and mechanics. It operates within an absolute space-time, where the cosmic event horizon of the expanding universe does not exist. It is also of no use in describing gravitational waves, or the rotation of space-time known as ‘frame-dragging’. Yet in the real expanding universe, space-time is dynamical. And, according to the general theory of relativity, masses such as stars or planets can give it curvature.

    Consistent application of the general theory of relativity

    Led by postdoctoral researcher Julian Adamek and PhD student David Daverio, under the supervision of Martin Kunz and Ruth Durrer, researchers in the Cosmology and Astroparticle Physics Group at the University of Geneva set out to develop a realistic code: one whose equations make consistent use of the general theory of relativity in simulations of cosmic structure evolution, which entails calculating gravitational waves as well as frame-dragging.

    The research team presents the code and the results in the current issue of the journal Nature Physics.

    An image of the flow field where moving masses cause space-time to be pulled along slightly (frame-dragging). The yellow-orange collections are regions of high particle density, corresponding to the clustered galaxies of the real universe. Courtesy Cosmology and Astroparticle Physics Group, University of Geneva, and the Swiss National Supercomputing Center.

    To allow existing simulations to model cosmological structure formation, one needs to calculate approximately how fast the universe would be expanding at any given moment. That result can then be fed into the simulation.
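
    Concretely, the background expansion rate that gets fed into a Newtonian simulation is usually taken from the Friedmann equation. A minimal flat, matter-plus-cosmological-constant form is shown below as an illustration; the exact parameter choices of any given simulation may differ.

    \[ H^{2}(a) = \left(\frac{\dot a}{a}\right)^{2} = H_{0}^{2}\left[\Omega_{m}\,a^{-3} + \Omega_{\Lambda}\right], \]

    where a is the scale factor, H_0 the present-day Hubble constant, and \Omega_m and \Omega_\Lambda the matter and dark-energy density parameters.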

    “The traditional methods work well for non-relativistic matter such as atomic building blocks and cold dark matter, as well as at a small scale where the cosmos can be considered homogeneous and isotropic,” says Kunz.

    But given that Newtonian physics knows no cosmic horizon, the method has only limited applicability at large scales or to neutrinos, gravitational waves, and similar relativistic matter. Since this is an approximation to a dynamical system, it may happen that a simulation of the creation of the cosmos shows neutrinos moving at faster-than-light speeds. Such simulations are therefore subject to uncertainty.

    Self-regulating calculations

    With the new method the system might now be said to regulate itself and exclude such errors, explains Kunz. In addition, the numerical code can be used for simulating various models that bring into play relativistic sources such as dynamical dark energy, relativistic particles and topological defects, all the way to core collapse supernovae (stellar explosions).

    There are two parts to the simulation code. David Daverio was instrumental in developing and refining the part named ‘LATfield2’ so that it performs highly parallel, efficient calculations on a supercomputer. This library manages the basic tools for field-based particle-mesh N-body codes: the grid spanning the simulation space, the particles and fields acting on it, and the fast Fourier transforms needed to solve the model’s constituent equations. Those equations were developed largely by Julian Adamek.
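
    The particle-mesh idea that LATfield2 supports can be sketched in a few lines: deposit particles onto a grid, solve a field equation with a fast Fourier transform, and read the result back. The toy NumPy code below illustrates only that idea (here with an ordinary Poisson equation); it is not LATfield2, which is a parallel C++ library, and none of the names below come from it.

    ```python
    # Toy particle-mesh step: nearest-grid-point deposit plus an FFT-based
    # Poisson solve on a periodic box. Purely illustrative; grid size, box
    # length, and particle count are arbitrary.
    import numpy as np

    N = 64                                  # grid points per side (hypothetical)
    L = 1.0                                 # box length, arbitrary units
    pos = np.random.rand(10_000, 3) * L     # hypothetical particle positions

    # 1) Deposit particles onto the mesh (nearest grid point).
    idx = (pos / L * N).astype(int) % N
    rho = np.zeros((N, N, N))
    np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    delta = rho / rho.mean() - 1.0          # density contrast on the mesh

    # 2) Solve nabla^2 phi = delta in Fourier space: phi_k = -delta_k / k^2.
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                       # avoid division by zero at k = 0
    phi_k = -np.fft.fftn(delta) / k2
    phi_k[0, 0, 0] = 0.0                    # fix the zero mode
    phi = np.real(np.fft.ifftn(phi_k))      # potential back on the mesh
    ```

    A production code does the same steps with higher-order deposit and interpolation schemes and distributed-memory FFTs, which is exactly the machinery a library like LATfield2 provides.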

    These equations form the basis of the second part of the code, ‘gevolution,’ which ensures the calculations take the general theory of relativity into account. They describe the interactions between matter, space, and time, expressing gravitation in terms of curved four-dimensional space-time.

    “Key to the simulation are the metric describing space-time curvature, and the stress-energy tensor describing the distribution of matter,” says Kunz.
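
    For readers who want the quoted quantities in symbols: a standard weak-field (Poisson-gauge) line element from cosmological perturbation theory contains exactly the ingredients named above: the scalar potentials, the frame-dragging vector, and the tensor (gravitational-wave) modes. The form below is generic; the paper’s sign and gauge conventions may differ.

    \[ \mathrm{d}s^{2} = a^{2}(\tau)\Big[ -(1+2\Psi)\,\mathrm{d}\tau^{2} - 2B_{i}\,\mathrm{d}x^{i}\,\mathrm{d}\tau + \big((1-2\Phi)\,\delta_{ij} + h_{ij}\big)\,\mathrm{d}x^{i}\,\mathrm{d}x^{j} \Big], \qquad G_{\mu\nu} = 8\pi G\, T_{\mu\nu}, \]

    where \Psi and \Phi are the scalar potentials, B_i encodes frame-dragging, h_{ij} carries the gravitational waves, and T_{\mu\nu} is the stress-energy tensor describing the matter distribution.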

    The largest simulation conducted on Piz Daint consisted of a cube with 4,096³ grid points and 67 billion particles. The scientists simulated regions with weak gravitational fields and other weak relativistic effects using the new code. Thus, for the first time it was possible to fully calculate the gravitational waves and rotating space-time induced by structure formation.


    Access mp4 video here.
    Spin cycle. A visualization of the rotation of space-time. Courtesy Cosmology and Astroparticle Physics Group University of Geneva and the Swiss National Supercomputing Center.

    The scientists compared the results with those they computed using a conventional, Newtonian code, and found only minor differences. Accordingly, it appears that structure formation in the universe has little impact on its rate of expansion.

    “For the conventional standard model to work, however, dark energy has to be a cosmological constant and thus have no dynamics,” says Adamek. Based on current knowledge, this is by no means established. “Our method now facilitates the consistent simulation and study of alternative scenarios.”

    Elegant approach

    With the new method, the researchers have managed — without significantly complicating the computational effort — to consistently integrate the general theory of relativity, 100 years after its formulation by Albert Einstein, with the dynamical simulation of structure formation in the universe. The researchers say that their method of implementing the general theory of relativity is an elegant approach to calculating a realistic distribution of radiation or very high-velocity particles in a way that considers gravitational waves and the rotation of space-time.

    General relativity and cosmic structure formation

    Julian Adamek, David Daverio, Ruth Durrer & Martin Kunz

    Affiliations

    Département de Physique Théorique & Center for Astroparticle Physics, Université de Genève, 24 Quai E. Ansermet, 1211 Genève 4, Switzerland
    Julian Adamek, David Daverio, Ruth Durrer & Martin Kunz
    African Institute for Mathematical Sciences, 6 Melrose Road, Muizenberg 7945, South Africa
    Martin Kunz

    Contributions

    J.A. worked out the equations in our approximation scheme and implemented the cosmological code gevolution. He also produced the figures. D.D. developed and implemented the particle handler for the LATfield2 framework. R.D. contributed to the development of the approximation scheme and the derivation of the equations. M.K. proposed the original idea. All authors discussed the research and helped with writing the paper.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 2:45 pm on March 30, 2016
    Tags: Supercomputing

    From SN: “Simulating stars with less computing power” 

    Science Node bloc
    Science Node

    30 Mar, 2016
    David Lugmayer

    For the first time, a research team has harnessed the JUQUEEN supercomputer to find a way to simulate the fusion of heavier elements within stars.

    JUQUEEN – Jülich Blue Gene Q IBM supercomputer

    An international collaboration of researchers has developed a new method to simulate the creation of elements inside stars. The method was developed by researchers from the University of Bonn and the University of Bochum in Germany, working with North Carolina State University, Mississippi State University, and the Jülich research center. The goal of their work was to devise a way to allow simulations of this type to be conducted with less computational power. With this method they were able to model a more complex process than was previously possible.

    A large part of a star’s life is governed by the process of thermonuclear fusion, through which hydrogen atoms are converted into helium at the core of the star. But fusion also creates a host of other elements in the core of the star, produced by the fusion of the nuclei of helium atoms, which are also known as alpha particles.

    But when scientists want to observe these processes they come up against a problem: the conditions inside the core of a star (15 million degrees Celsius in the case of our sun) are not reproducible inside a laboratory. Thus, the only way to recreate the processes inside a star is to use ‘ab-initio’ computer simulations.

    To allow more effective ab-initio simulation, the team devised a new technique that involves simulating the nucleons (the subatomic particles that comprise the nucleus of atoms) on a virtual lattice grid instead of in free space. This allowed for very efficient parallel calculation on supercomputers and significantly lowered the computational demands of the simulation.

    Using this technique, with the help of the supercomputer JUQUEEN at the Jülich Supercomputing Center, a simulation of the scattering and deflection of two helium nuclei was carried out that involved a grand total of eight nucleons. This may not sound extraordinary, but it is in fact unprecedented, as up until now even the fastest supercomputers in the world could only simulate the very lightest of elements involving a maximum of five total nucleons.

    The problem comes when the number of nucleons simulated is increased. Each of these particles interacts with every other particle present, which must be simulated along with the quantum state of each particle. “All existing methods have an exponential scaling of computing resources with the number of particles,” explains Ulf Meißner from the University of Bonn.

    The difference this makes to the simulation process is astounding: this particular simulation required roughly two million core hours of computing using the new method, which on a supercomputer as powerful as JUQUEEN could take only a number of days to run. However, the same simulation run with older methods would take JUQUEEN several thousand years to complete.
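
    A quick back-of-the-envelope conversion makes the quoted numbers concrete. The article does not say how many cores the run used; the partition size below is a hypothetical Blue Gene/Q allocation.

    ```python
    # Back-of-the-envelope: convert the quoted core-hour cost to wall-clock time.
    # The actual core count used is not stated in the article; 65,536 cores is a
    # hypothetical partition of a Blue Gene/Q machine such as JUQUEEN.
    core_hours = 2_000_000      # quoted cost with the new lattice method
    cores_used = 65_536         # hypothetical partition size

    hours = core_hours / cores_used
    print(f"~{hours:.0f} hours, i.e. roughly {hours / 24:.1f} days")
    # ~31 hours, i.e. roughly 1.3 days -- smaller partitions stretch this to
    # several days, consistent with the estimate quoted above, while the older
    # exponentially scaling methods would be out of reach entirely.
    ```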

    “This is a major step in nuclear theory,” says Meißner. He explains that this new method makes more advanced ab-initio simulations of element generation in stars possible. The next step Meißner and his colleagues are working towards is the ab-initio calculation of the “holy grail of nuclear astrophysics”: the process through which oxygen is generated in stars.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 10:57 am on March 18, 2016
    Tags: Supercomputing

    From Node: “Why should I believe your HPC research?” 

    Science Node bloc
    Science Node

    16 Mar, 2016
    Lorena Barba

    The strategy for scientific computing today is roughly the same as it was at its historical beginnings. When do we have evidence that claims to knowledge originating from simulation are justified?

    Supercomputing propels science forward in fields like climate change, precision medicine, and astrophysics. It is considered a vital part of the scientific endeavor today.

    The strategy for scientific computing: Start with a trusted mathematical model, transform it into a computable form, and express the algorithm in computer code that is executed to produce a result. This result is inferred to give information about the original physical system.

    In this way, computer simulations originate new claims to scientific knowledge. But when do we have evidence that claims to knowledge originating from simulation are justified? Questions like these were raised by Eric Winsberg in his book Science in the Age of Computer Simulation.

    In many engineering applications of computer simulations, we are used to speaking about verification and validation (V&V). Verification means confirming the simulation results match the solutions to the mathematical model. Validation means confirming that the simulation represents the physical phenomenon well. In other words, V&V separates the issues of solving the equations right, versus solving the right equations.
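
    One common way verification is made concrete (not spelled out in the article, but standard practice) is a grid-convergence study: run the solver on successively refined grids against a known exact solution and check that the observed order of accuracy matches the scheme’s design order. A minimal sketch, with made-up error numbers for a nominally second-order scheme:

    ```python
    # Minimal grid-convergence (verification-style) check.
    # The grid spacings and error norms below are made up for illustration.
    import math

    h_values = [0.1, 0.05, 0.025]           # grid spacings: h, h/2, h/4
    errors   = [4.0e-3, 1.0e-3, 2.5e-4]     # error vs. an exact solution

    pairs = list(zip(h_values, errors))
    for (h1, e1), (h2, e2) in zip(pairs, pairs[1:]):
        p = math.log(e1 / e2) / math.log(h1 / h2)   # observed order of accuracy
        print(f"h: {h1} -> {h2}, observed order ~ {p:.2f}")
    # An observed order near the design order (here ~2) is evidence the code is
    # "solving the equations right" -- i.e., verification.
    ```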

    If a published piece of computational research reports a careful V&V study, we are likely to trust the results more. But is it enough? Does it really produce reliable results, which we trust to create knowledge?

    Texas Stampede supercomputer. Texas Advanced Computing Center

    Thirty years ago, the same issues were being raised about experiments: How do we come to rationally believe in an experimental result? Allan Franklin wrote about the strategies that experimental scientists use to provide grounds for rational belief in experimental results. For example: confidence in an instrument increases if we can use it to get results that are expected. Or we gain confidence in an experimental result if it can be replicated with a different instrument/apparatus.

    The question of whether we have evidence that claims to scientific knowledge stemming from simulation are justified is not as clear-cut as V&V. When we compare results with other simulations, for example, simulations that used a different algorithm or a more refined model, this does not fit neatly into V&V.

    And our work is not done when a simulation completes. Data requires interpretation, visualization, and analysis — all crucial for reproducibility. We usually try to summarize qualitative features of the system under study, and generalize these features to a class of similar phenomena (i.e. managing uncertainties).

    The new field of uncertainty quantification (UQ) aims to give mathematical grounds for confidence in simulation results. It is a response to the complicated nature of justifying the use of simulation results to draw conclusions. UQ presupposes verification and informs validation.

    Verification deals with the errors that occur when converting a continuous mathematical model into a discrete one, and then to a computer code. There are known sources of errors — truncation, round-off, partial iterative convergence — and unknown sources of errors — coding mistakes, instabilities.

    Uncertainties stem from input data, modeling errors, genuine physical uncertainties, and random processes — UQ is thus associated with the validation of a model. It follows that sufficient verification should be done first, before attempting validation. But is this always done in practice, and what is meant by ‘sufficient’?
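
    In its simplest form, the uncertainty propagation that UQ formalizes can be sketched as Monte Carlo sampling: draw the uncertain inputs from assumed distributions, push each draw through the model, and summarize the spread of the outputs. The model function and input distributions below are hypothetical placeholders for an actual simulation.

    ```python
    # Minimal Monte Carlo uncertainty propagation (illustrative only).
    # The model function and input distributions are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def model(x, k):
        """Toy stand-in for an expensive simulation."""
        return np.exp(-k * x)

    # Uncertain inputs drawn from assumed distributions.
    x = rng.normal(loc=1.0, scale=0.1, size=100_000)
    k = rng.uniform(low=0.8, high=1.2, size=100_000)

    y = model(x, k)
    lo, hi = np.percentile(y, [2.5, 97.5])
    print(f"output mean {y.mean():.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
    ```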

    Verification provides evidence that the solver is fit for purpose, but this is subject to interpretation: the idea of accuracy is linked to judgments.

    Many articles discussing reproducibility in computational science place emphasis on the importance of code and data availability. But making code and data open and publicly available is not enough. To provide evidence that results from simulation are reliable requires solid V&V expertise and practice, reproducible-science methods, and carefully reporting our uncertainties and judgments.

    Supercomputing research should be executed using reproducible practices, taking good care of documentation and reporting standards, including appropriate use of statistics, and providing any research objects needed to facilitate follow-on studies.

    Even if the specialized computing system used in the research is not available to peers, conducting the research as if it will be reproduced increases trust and helps justify the new claims to knowledge.

    Computational experiments often involve deploying precise and complex software stacks, with several layers of dependencies. Multiple details must be taken care of during compilation, setting up the computational environment, and choosing runtime options. Thus, making available the source code (with a detailed mathematical description of the algorithm) is a minimum pre-requisite for reproducibility: necessary, but not sufficient.

    We also require detailed description and/or provision of:

    Dependencies
    Environment
    Automated build process
    Running scripts
    Post-processing scripts
    Secondary data generating published figures

    Not only does this practice facilitate follow-on studies, removing roadblocks for building on our work, it also enables getting at the root of discrepancies if and when another researcher attempts a full replication of our study.
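
    As a concrete (and entirely hypothetical) illustration of the items listed above, a single driver script can capture the environment, run the automated build, execute the simulation, and regenerate the published figures. Every path and command below is a placeholder, not taken from any real project.

    ```python
    # Hypothetical end-to-end reproducibility driver: record the environment,
    # build, run, and post-process in one script. All commands and paths are
    # placeholders for illustration.
    import json
    import platform
    import subprocess
    import sys

    def run(cmd):
        print("->", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1) Record the computational environment alongside the results.
    env = {
        "python": sys.version,
        "platform": platform.platform(),
        "packages": subprocess.run(
            [sys.executable, "-m", "pip", "freeze"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines(),
    }
    with open("environment.json", "w") as f:
        json.dump(env, f, indent=2)

    # 2) Automated build, run, and post-processing (placeholder commands).
    run(["make", "-C", "src"])                      # build
    run(["./src/solver", "--input", "case1.cfg"])   # run the simulation
    run([sys.executable, "postprocess.py",          # regenerate published figures
         "--data", "output/", "--figures", "figures/"])
    ```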

    How far are we from achieving this practice as standard? A recent study surveyed a sample (admittedly small) of papers submitted to a supercomputing conference: only 30% of the papers provide a link to the source code, only 40% mention the compilation process, and only 30% mention the steps taken to analyze the data.

    We have a long way to go.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 1:07 pm on March 16, 2016
    Tags: Supercomputing

    From TACC: “Wrangler Special Report” 

    TACC bloc

    Texas Advanced Computing Center

    Dell Wrangler Supercomputer Speeds through Big Data
    Data-intensive supercomputer brings new users to high performance computing for science

    TACC Wrangler

    Handling big data can sometimes feel like driving on an unpaved road for researchers with a need for speed and supercomputers.

    “When you’re in the world of data, there are rocks and bumps in the way, and a lot of things that you have to take care of,” said Niall Gaffney, a former Hubble Space Telescope scientist who now heads the Data Intensive Computing group at the Texas Advanced Computing Center (TACC).

    Gaffney led the effort to bring online a new kind of supercomputer, called Wrangler. Like the old Western cowboys who tamed wild horses, Wrangler tames beasts of big data, such as computing problems that involve analyzing thousands of files that need to be quickly opened, examined and cross-correlated.

    Wrangler fills a gap in the supercomputing resources of XSEDE, the Extreme Science and Engineering Discovery Environment, supported by the National Science Foundation (NSF). XSEDE is a collection of advanced digital resources that scientists can easily use to share and analyze the massive datasets being produced in nearly every field of research today. In 2013, NSF awarded TACC and its academic partners Indiana University and the University of Chicago $11.2 million to build and operate Wrangler, a supercomputer to handle data-intensive high performance computing.

    Wrangler was designed to work closely with the Stampede supercomputer, the 10th most powerful in the world according to the bi-annual Top500 list, and the flagship of TACC at The University of Texas at Austin (UT Austin).

    Texas Stampede Dell Supercomputer

    Stampede has computed over six million jobs for open science since it came online in 2013.

    “We kept a lot of what was good with systems like Stampede,” said Gaffney, “but added new things to it like a very large flash storage system, a very large distributed spinning disc storage system, and high speed network access. This allows people who have data problems that weren’t being fulfilled by systems like Stampede and Lonestar to be able to do those in ways that they never could before.”

    Gaffney made the analogy that supercomputers like Stampede are like racing sports cars, with fantastic compute engines optimized for going fast on smooth, well-defined race-tracks. Wrangler, on the other hand, is built like a rally car to go fast on unpaved, bumpy roads with muddy gravel.

    “If you take a Ferrari off-road you may want to change the way that the suspension is done,” Gaffney said. “You want to change the way that the entire car is put together, even though it uses the same components, to build something suitable for people who have a different job.”

    At the heart of Wrangler lie 600 terabytes of flash memory shared via PCI interconnect across Wrangler’s over 3,000 Haswell compute cores. “All parts of the system can access the same storage,” Gaffney said. “They can work in parallel together on the data that are stored inside this high-speed storage system to get larger results they couldn’t get otherwise.”

    This massive amount of flash storage comes from DSSD, a startup co-founded by Andy Bechtolsheim of Sun Microsystems fame and acquired in May of 2015 by EMC. Bechtolsheim’s influence at TACC goes back to the ‘Magnum’ InfiniBand network switch whose design he led for the now-decommissioned Ranger supercomputer, the predecessor to Stampede.

    What’s new is that DSSD took a shortcut between the CPU and the data. “The connection from the brain of the computer goes directly to the storage system. There’s no translation in between,” Gaffney said. “It actually allows people to compute directly with some of the fastest storage that you can get your hands on, with no bottlenecks in between.”

    Speeding up the gene analysis pipeline

    Gaffney recalled the hang-up scientists had with code called OrthoMCL, which combs through DNA sequences to find common genetic ancestry in seemingly unrelated species. The problem was that OrthoMCL let loose databases as wild as a bucking bronco.

    “It generates a very large database and then runs computational programs outside and has to interact with this database,” said biologist Rebecca Young of the Department of Integrative Biology and the Center for Computational Biology and Bioinformatics at UT Austin. She added, “That’s not what Lonestar and Stampede and some of the other TACC resources were set up for.”

    U Texas Lonestar supercomputer

    Young recounted how at first, using OrthoMCL with online resources, she was only able to pull out 350 comparable genes across 10 species. “When I run OrthoMCL on Wrangler, I’m able to get almost 2,000 genes that are comparable across the species,” Young said. “This is an enormous improvement from what is already available. What we’re looking to do with OrthoMCL is to allow us to make an increasing number of comparisons across species when we’re looking at these very divergent, these very ancient species separated by 450 million years of evolution.”

    “We were able to go through all of these work cases in anywhere between 15 minutes and 6 hours,” Gaffney said. “This is a game changer.”

    Gaffney added that getting results quickly lets scientists explore new and deeper questions by working with larger collections of data and driving previously unattainable discoveries.

    Tuning energy efficiency in buildings

    Computer scientist Joshua New with the Oak Ridge National Laboratory (ORNL) hopes to take advantage of Wrangler’s ability to tame big data. New is the principal investigator of the Autotune project, which creates a software version of a building and calibrates the model with over 3,000 different data inputs from sources like utility bills to generate useful information such as what an optimal energy-efficient retrofit might be.

    “Wrangler has enough horsepower that we can run some very large studies and get meaningful results in a single run,” New said. He currently uses the Titan supercomputer of ORNL to run 500,000 simulations and write 45 TB of data to disk in 68 minutes.

    ORNL’s Cray Titan supercomputer

    He said he wants to scale out his parametric studies to simulate all 125.1 million buildings in the U.S.

    “I think that Wrangler fills a specific niche for us in that we’re turning our analysis into an end-to-end workflow, where we define what parameters we want to vary,” New said. “It creates the sampling matrix. It creates the input files. It does the computationally challenging task of running all the simulations in parallel. It creates the output. Then we run our artificial intelligence and statistic techniques to analyze that data on the back end. Doing that from beginning to end as a solid workflow on Wrangler is something that we’re very excited about.”

    When Gaffney talks about storage on Wrangler, he’s talking about a lot of data storage — a 10 petabyte Lustre-based file system hosted at TACC and replicated at Indiana University. “We want to preserve data,” Gaffney said. “The system for Wrangler has been set up for making data a first-class citizen amongst what people do for research, allowing one to hold onto data and curate, share, and work with people with it. Those are the founding tenets of what we wanted to do with Wrangler.”

    Shedding light on dark energy

    “Data is really the biggest challenge with our project,” said UT Austin astronomer Steve Finkelstein. His NSF-funded project is called HETDEX, the Hobby-Eberly Telescope Dark Energy Experiment.

    U Texas Hobby-Eberly Telescope

    It’s the largest survey of galaxies ever attempted. Scientists expect HETDEX to map over a million galaxies in three dimensions, in the process discovering thousands of new galaxies. The main goal is to study dark energy, a mysterious force pushing galaxies apart.

    “Every single night that we observe — and we plan to observe more or less every single night for at least three years — we’re going to make 200 GB of data,” Finkelstein said. It’ll measure the spectra of 34,000 points of skylight every six minutes.

    “On Wrangler is our pipeline,” Finkelstein said. “It’s going to live there. As the data comes in, it’s going to have a little routine that basically looks for new data, and as it comes in every six minutes or so it will process it. By the end of the night it will actually be able to take all the data together to find new galaxies.”
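
    The “little routine that looks for new data” can be as simple as a polling loop. The sketch below only illustrates that pattern; the directory name, file pattern, and processing step are hypothetical, and this is not the actual HETDEX pipeline.

    ```python
    # Illustrative polling loop: watch a directory for new exposures and process
    # each one once. Directory, file pattern, and process() are hypothetical.
    import time
    from pathlib import Path

    INCOMING = Path("incoming")      # hypothetical drop directory for new data
    POLL_SECONDS = 360               # roughly every six minutes

    def process(exposure: Path) -> None:
        print(f"processing {exposure.name}")   # placeholder for the real reduction

    INCOMING.mkdir(exist_ok=True)
    seen = set()
    while True:
        for exposure in sorted(INCOMING.glob("*.fits")):
            if exposure.name not in seen:
                process(exposure)
                seen.add(exposure.name)
        time.sleep(POLL_SECONDS)
    ```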

    Human origins buried in fossil data

    Another example of a new HPC user Wrangler enables is an NSF-funded science initiative called PaleoCore. It hopes to take advantage of Wrangler’s swiftness with databases to build a repository for scientists to dig through geospatially-aware data on all fossils related to human origins. This would combine older digital collections in formats like Excel worksheets and SQL databases with newer ways of gathering data such as real-time fossil GPS information collected from iPhones or iPads.

    “We’re looking at big opportunities in linked open data,” PaleoCore principal investigator Denne Reed said. Reed is an associate professor in the Department of Anthropology at UT Austin.

    Linked open data allows for queries to get meaning from the relationships of seemingly disparate pieces of data. “Wrangler is the type of platform that enables that,” Reed said. “It enables us to store large amounts of data, both in terms of photo imagery, satellite imagery and related things that go along with geospatial data. Then also, it allows us to start looking at ways to effectively link those data with other data repositories in real time.”

    Data analytics for science

    Wrangler’s shared memory supports data analytics on the Hadoop and Apache Spark frameworks. “Hadoop is a big buzzword in all of data science at this point,” Gaffney said. “We have all of that and are able to configure the system to be able to essentially be like the Google Search engines are today in data centers. The big difference is that we are servicing a few people at a time, as opposed to Google.”
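
    For readers unfamiliar with these frameworks, the sketch below shows what a minimal Spark analysis looks like from the user’s side; the input file and column name are hypothetical, and nothing here is specific to Wrangler’s configuration.

    ```python
    # Minimal PySpark example of framework-based data analysis.
    # "observations.csv" and the "object_type" column are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("wrangler-demo").getOrCreate()

    # Read a large CSV in parallel and aggregate by a categorical column.
    df = spark.read.csv("observations.csv", header=True, inferSchema=True)
    summary = df.groupBy("object_type").count().orderBy("count", ascending=False)
    summary.show()

    spark.stop()
    ```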

    Users bring data in and out of Wrangler in one of the fastest ways possible. Wrangler connects to Internet2, an optical network which provides 100 gigabits per second of throughput to most of the other academic institutions around the country.

    What’s more, TACC has tools and techniques to transfer their data in parallel. “It’s sort of like being at the supermarket,” explained Gaffney. “If there’s only one lane open, it is just as fast as one person checking you out. But if you go in and have 15 lanes open, you can spread that traffic across and get more people through in less time.”

    A new user community for supercomputers

    Biologists, astronomers, energy efficiency experts, and paleontologists are just a small slice of the new user community Wrangler aims to attract.

    Wrangler is also more web-enabled than is typical in high performance computing. A web portal allows users to manage the system and gives the ability to use web interfaces such as VNC, RStudio, and Jupyter Notebooks to support more desktop-like user interactions with the system.

    “We need these bigger systems for science,” Gaffney said. “We need more kinds of systems. And we need more kinds of users. That’s where we’re pushing towards with these sort of portals. This is going to be the new face, I believe, for many of these systems that we’re moving forward with now. Much more web-driven, much more graphical, much less command line driven.”

    “The NSF shares with TACC great pride in Wrangler’s continuing delivery of world-leading technical throughput performance as an operational resource available to the open science community in specific characteristics most responsive to advance data-focused research,” said Robert Chadduck, the program officer overseeing the NSF award.

    Wrangler is primed to lead the way in computing the bumpy world of data-intensive science research. “There are some great systems and great researchers out there who are doing groundbreaking and very important work on data, to change the way we live and to change the world,” Gaffney said. “Wrangler is pushing forth on the sharing of these results, so that everybody can see what’s going on.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Texas Advanced Computing Center (TACC) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

     
  • richardmitnick 1:14 pm on March 12, 2016
    Tags: Supercomputing

    From Eos: “Which Geodynamo Models Will Work Best on Next-Gen Computers?” 

    Eos news bloc

    Eos

    11 March 2016
    Terri Cook

    Magnetic field in a geodynamo simulation, created by Hiroaki Matsui using Calypso code

    Scientists have long sought to understand the origin and development of Earth’s geomagnetic field, which is continually generated by convection in the Earth’s conductive liquid outer core. Numerical modeling, so-called geodynamo simulations, has played an important role in this quest, but the extremely high resolution required for these models prevents current versions from replicating realistic, Earth-like conditions. As a result, fundamental questions about the outer core’s dynamics are left unanswered.

    Despite the need for more efficient computation, most current geodynamo models incorporate computing structures that can hinder the parallel processing necessary to achieve this. To evaluate which numerical models will most effectively operate on the next generation of “petascale” supercomputers, Matsui et al. ran identical tests of 15 numerical geomagnetic models, then compared their performance and accuracy to two standard benchmarks.

    They found that models using two- or three-dimensional parallel processing are capable of running efficiently on 16,384 processor cores—the maximum number available in the Texas Advanced Computing Center’s Stampede, one of the world’s most powerful supercomputers.

    Texas Stampede Supercomputer

    The authors further extrapolated that methods simulating the expansion of spherical harmonics—the mathematical equations describing functions on a sphere’s surface—combined with two-dimensional parallel processing will offer the best available tools for modeling the Earth’s magnetic field during simulations using up to 10⁷ processor cores.
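
    The spherical-harmonic expansion referred to here is the standard one: a field on a spherical surface is written as a sum over degree and order. A generic form is shown below; the truncation degree and normalization conventions vary between the codes compared.

    \[ f(\theta,\varphi) = \sum_{\ell=0}^{\ell_{\max}} \sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m}(\theta,\varphi), \]

    and in such codes the two-dimensional parallelization typically splits the work over the radial grid and these (\ell, m) coefficients.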

    According to the researchers, future work is needed to clarify several outstanding points, including determining which methods of variable time stepping are most efficient and exact and how accurately models will be able to simulate the turbulent flow presumed to occur in the outer core. Solving such challenges should greatly improve simulations of Earth’s magnetic field, as well as those of other planets and stars. (Geochemistry, Geophysics, Geosystems, doi:10.1002/2015GC006159, 2016)

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Eos is the leading source for trustworthy news and perspectives about the Earth and space sciences and their impact. Its namesake is Eos, the Greek goddess of the dawn, who represents the light shed on understanding our planet and its environment in space by the Earth and space sciences.

     
  • richardmitnick 3:50 pm on February 17, 2016
    Tags: Supercomputing

    From Science Node: “Blue Waters solves Yellowstone mystery” 

    Science Node bloc
    Science Node

    17 Feb, 2016
    Lance Farrell

    Once Lijun Liu saw the proximity of subduction zones to supervolcanoes, the game was afoot. His NSF-funded research project is a harbinger of the age of numerical exploration.

    The Old Faithful geyser at Yellowstone National Park has thrilled park visitors for over a century, but it wasn’t until this year that scientists figured out the geophysical factors powering it.

    With over 2 million visitors annually, Yellowstone remains one of the most popular nature destinations in the US. Spanning an area of almost 3,500 square miles, the park sits atop the Yellowstone Caldera. This caldera is the largest supervolcano in North America, and is responsible for the park’s geothermal activity.

    Caldera at Yellowstone

    Until last week, most geologists had explained this activity with the so-called mantle plume hypothesis. This elegant theory proposed an idealized situation where hot columns of mantle rock rose from the core-mantle boundary all the way to the surface, fueling the supervolcano and the geothermal geysers.

    Supervolcanoes worldwide
    Neighborhood watch. A map of the worldwide distribution of supervolcanoes. The light reddish lines are subduction zones, the thick blue lines are mid-ocean ridges, and the size of the circles scales with the magnitude of supervolcanoes. Courtesy Lijun Liu.

    This theory didn’t sit well with Lijun Liu, assistant professor in the department of Geology at the University of Illinois, however. “If you look at the distribution of supervolcanoes globally, you’ll find something very interesting. You will see that most if not all of them are sitting close to a subduction zone,” Liu observes. “This close vicinity made me wonder if there were any internal relations between them, and I thought it was necessary and intriguing to further investigate this.”

    Solving the mystery

    To investigate the formation of Yellowstone volcanism, Liu and co-author Tiffany Leonard turned to the supercomputers at the Texas Advanced Computing Center (TACC) and the National Center for Supercomputing Applications (NCSA). Using Stampede for benchmarking work in 2014, and Blue Waters for modeling in 2015, Liu and Leonard ran 100 models, each requiring a few hundred core hours. The models weren’t too computationally intensive, using only 256 cores and generating only about 10 terabytes of data. In subsequent research, Blue Waters, more attuned to extreme-scale calculations, has allowed Liu to scale experiments up to 10,000 cores.

    TACC Stampede

    NCSA Blue Waters supercomputer

    To make the Yellowstone discovery, Liu received valuable technical assistance from the Extreme Science and Engineering Discovery Environment (XSEDE). “We have been using XSEDE machines from very early on, Lonestar, Ranger, and, more lately, Stampede. We got a lot of assistance from XSEDE on installing the code, so by the time we got to this particular project we were pretty fluent using the code.”

    Liu and Leonard’s models, recently published in an American Geophysical Union journal, simulated 40 million years of North American geological activity. By using the most well accepted history of surface plate motion and matching the complex mantle structure seen today with geophysical imaging techniques, Liu’s team imposed two powerful constraints to make sure their models didn’t deviate from reality. The models left little doubt that the flows of mantle beneath Yellowstone are actually modulated by moving plates rather than a single mantle plume.

    Analytical evolution

    According to Liu, prior to high-performance computing (HPC), debates about Yellowstone volcanic activity were like the proverbial blind men touching and describing the elephant. Without HPC, scientists lacked the geophysical data or imaging techniques to see under the surface. Most of the models of that time relied heavily on surface records only.

    “Numerical simulations are so important, especially now we are moving away from simple explanations and analytical solutions,” Liu admits. “We are definitely in the numerical era now. Most of these problems we couldn’t have solved a few years ago.”

    But with the advent of HPC and seismic tomography about 10 years ago, geologists were finally able to peer into the subsurface. By 2010, the scientific landscape had shifted dramatically when the nationwide seismic experiment called EarthScope, funded by the US National Science Foundation (NSF), unearthed an unprecedented amount of data and correspondingly good imagery of the underlying mantle.

    From these images, geologists could see not only localized slow seismic structures called putative plumes, but also widespread fast anomalies often called slabs, or subducting oceanic plates. This breakthrough has created the opportunity for more questions, spawning even more models and hypotheses. Because of the complexity of the system, this is a situation ripe for HPC, Liu reasons.

    Understanding the volcanism powering Yellowstone is important because if this supervolcano erupts, it will affect a large area of the US. “That’s a real threat and a real natural hazard,” Liu quips. “But more seriously, if a mantle plume is powering the Yellowstone flows, then in theory its distribution could be more random — it can form almost anywhere. So it is possible that people in the Midwest who never worry about volcanoes are sitting right above a mantle plume.”

    But if the subduction process is more responsible for Yellowstone, and most of us sit further away from the subduction zone, we can rest a little bit easier.

    Liu’s research was made possible by funding from the NSF, support that procured not only supercomputing time but also student assistance. Providing an educational advantage is the more important benefit of NSF support, Liu says.

    In sum, Liu is convinced of the importance of HPC to the future of geological analysis. “HPC and models with a multidisciplinary theme should be the trend and should be encouraged for future research because this is really the way to solve complex natural systems like Yellowstone.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 9:33 pm on January 14, 2016
    Tags: Supercomputing

    From Symmetry: “Exploring the dark universe with supercomputers” 

    Symmetry

    01/14/16
    Katie Elyce Jones

    Next-generation telescopic surveys will work hand-in-hand with supercomputers to study the nature of dark energy.

    The 2020s could see a rapid expansion in dark energy research.

    For starters, two powerful new instruments will scan the night sky for distant galaxies. The Dark Energy Spectroscopic Instrument, or DESI, will measure the distances to about 35 million cosmic objects, and the Large Synoptic Survey Telescope, or LSST, will capture high-resolution videos of nearly 40 billion galaxies.

    DESI Dark Energy Spectroscopic Instrument
    LBL DESI

    LSST Exterior
    LSST Telescope
    LSST Camera
    LSST, the building that will house it in Chile, and the camera, being built at SLAC

    Both projects will probe how dark energy—the phenomenon that scientists think is causing the universe to expand at an accelerating rate—has shaped the structure of the universe over time.

    But scientists use more than telescopes to search for clues about the nature of dark energy. Increasingly, dark energy research is taking place not only at mountaintop observatories with panoramic views but also in the chilly, humming rooms that house state-of-the-art supercomputers.

    The central question in dark energy research is whether it exists as a cosmological constant—a repulsive force that counteracts gravity, as Albert Einstein suggested a century ago—or if there are factors influencing the acceleration rate that scientists can’t see. Alternatively, Einstein’s theory of gravity [General Relativity] could be wrong.

    “When we analyze observations of the universe, we don’t know what the underlying model is because we don’t know the fundamental nature of dark energy,” says Katrin Heitmann, a senior physicist at Argonne National Laboratory. “But with computer simulations, we know what model we’re putting in, so we can investigate the effects it would have on the observational data.”
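
    To make Heitmann's point concrete, here is a minimal Python sketch (not HACC or any production code) of how a chosen dark energy model feeds into a prediction: a flat universe containing matter plus dark energy with a constant equation of state w, where w = -1 corresponds to the cosmological constant. All parameter values are illustrative.

```python
# Minimal sketch (not HACC): expansion rate H(z) for a flat universe with
# matter plus dark energy of constant equation of state w.
# H(z)^2 = H0^2 [ Om (1+z)^3 + (1 - Om) (1+z)^(3(1+w)) ]
# w = -1 recovers the cosmological constant; other values are toy alternatives.
import numpy as np

H0 = 70.0      # Hubble constant in km/s/Mpc (illustrative value)
Om = 0.3       # matter density fraction (illustrative value)

def hubble(z, w=-1.0):
    """Expansion rate H(z) for constant dark-energy equation of state w."""
    matter = Om * (1.0 + z) ** 3
    dark_energy = (1.0 - Om) * (1.0 + z) ** (3.0 * (1.0 + w))
    return H0 * np.sqrt(matter + dark_energy)

z = np.linspace(0.0, 2.0, 5)
for w in (-1.0, -0.9):   # cosmological constant vs. a toy alternative model
    print(f"w = {w}: H(z) =", np.round(hubble(z, w), 1), "km/s/Mpc")
```

    Changing w changes the predicted expansion history, and in a full simulation that in turn changes how cosmic structure grows, which is the kind of model-dependent signature Heitmann describes.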

    Temp 2
    A simulation shows how matter is distributed in the universe over time. Katrin Heitmann, et al., Argonne National Laboratory

    Growing a universe

    Heitmann and her Argonne colleagues use their cosmology code, called HACC, on supercomputers to simulate the structure and evolution of the universe. The supercomputers needed for these simulations are built from hundreds of thousands of connected processors and typically crunch well over a quadrillion calculations per second.

    The Argonne team recently finished a high-resolution simulation of the universe expanding and changing over 13 billion years, most of its lifetime. Now the data from their simulations is being used to develop processing and analysis tools for the LSST, and packets of data are being released to the research community so cosmologists without access to a supercomputer can make use of the results for a wide range of studies.

    Risa Wechsler, a scientist at SLAC National Accelerator Laboratory and a professor at Stanford University, is co-spokesperson of the DESI experiment. Wechsler is producing simulations that are being used to interpret measurements from the ongoing Dark Energy Survey, as well as to develop analysis tools for future experiments like DESI and LSST.

    Dark Energy Survey
    Dark Energy Camera
    CTIO Victor M Blanco 4m Telescope
    DES, the DECam camera, built at FNAL, and the Victor M. Blanco 4-meter telescope in Chile that houses the camera.

    “By testing our current predictions against existing data from the Dark Energy Survey, we are learning where the models need to be improved for the future,” Wechsler says. “Simulations are our key predictive tool. In cosmological simulations, we start out with an early universe that has tiny fluctuations, or changes in density, and gravity allows those fluctuations to grow over time. The growth of structure becomes more and more complicated and is impossible to calculate with pen and paper. You need supercomputers.”
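
    The sketch below illustrates Wechsler's description in miniature, under toy assumptions: a few hundred particles with tiny random density fluctuations, evolved with direct-summation gravity and a leapfrog integrator. Production codes such as HACC use vastly more particles, cosmic expansion, periodic boxes, and far more sophisticated particle-mesh and tree algorithms; every number here is illustrative.

```python
# Toy illustration of structure growth: start with near-uniform particles whose
# only density fluctuations are random (Poisson) ones, and let gravity amplify
# them. Direct-sum N-body with no expansion and no periodic box; not a
# production method. Units are arbitrary (G = 1).
import numpy as np

rng = np.random.default_rng(42)
n, steps, dt, soft = 200, 200, 0.01, 0.05   # particles, steps, step size, softening

pos = rng.uniform(0, 1, size=(n, 3))        # near-uniform box with tiny fluctuations
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)

def accelerations(pos):
    """Softened pairwise gravitational accelerations."""
    diff = pos[None, :, :] - pos[:, None, :]            # r_j - r_i for every pair
    inv_r3 = ((diff ** 2).sum(-1) + soft ** 2) ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                       # no self-force
    return (diff * inv_r3[..., None] * mass[None, :, None]).sum(axis=1)

def clustering(pos, bins=4):
    """Variance of counts-in-cells: grows as particles clump together."""
    cells, _ = np.histogramdd(pos % 1.0, bins=bins, range=[(0, 1)] * 3)
    return cells.var()

acc = accelerations(pos)
for step in range(steps):                               # leapfrog (kick-drift-kick)
    vel += 0.5 * dt * acc
    pos += dt * vel
    acc = accelerations(pos)
    vel += 0.5 * dt * acc
    if step % 50 == 0:
        print(f"step {step:3d}: counts-in-cells variance = {clustering(pos):.3f}")
```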

    Supercomputers have become extremely valuable for studying dark energy because—unlike dark matter, which scientists might be able to create in particle accelerators—dark energy can only be observed at the galactic scale.

    “With dark energy, we can only see its effect between galaxies,” says Peter Nugent, division deputy for scientific engagement at the Computational Cosmology Center at Lawrence Berkeley National Laboratory.

    Trial and error bars

    “There are two kinds of errors in cosmology,” Heitmann says. “Statistical errors, meaning we cannot collect enough data, and systematic errors, meaning that there is something in the data that we don’t understand.”

    Computer modeling can help reduce both.

    DESI will collect about 10 times more data than its predecessor, the Baryon Oscillation Spectroscopic Survey, and LSST will generate 30 laptops’ worth of data each night. But even these enormous data sets do not fully eliminate statistical error.

    LBL BOSS
    LBL BOSS telescope

    Simulation can support observational evidence by modeling similar conditions to see if the same results appear consistently.

    “We’re basically creating the same size data set as the entire observational set, then we’re creating it again and again—producing up to 10 to 100 times more data than the observational sets,” Nugent says.
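
    Here is a minimal sketch of that idea, with a toy stand-in for any real survey statistic: generate many mock realizations of the same measurement and use their scatter as the statistical error bar. The quantities and sizes below are assumptions chosen only for illustration.

```python
# Minimal sketch of the "create it again and again" idea: many mock
# realizations of the same measurement yield an estimate of its statistical
# scatter. The "measurement" here is a toy stand-in, not a survey statistic.
import numpy as np

rng = np.random.default_rng(0)
true_value = 1.0                      # toy underlying quantity (illustrative)
n_mocks = 100                         # many independent mock data sets

def mock_measurement():
    """One simulated realization: the true value plus sampling noise."""
    sample = rng.normal(true_value, 0.2, size=1000)
    return sample.mean()

mocks = np.array([mock_measurement() for _ in range(n_mocks)])
print(f"mean over mocks           = {mocks.mean():.4f}")
print(f"estimated statistical error = {mocks.std(ddof=1):.4f}")
```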

    Processing such large amounts of data requires sophisticated analyses. Simulations make this possible.

    To program the tools that will compare observational and simulated data, researchers first have to model what the sky will look like through the lens of the telescope. In the case of LSST, this is done before the telescope is even built.

    After populating a simulated universe with galaxies that are similar in distribution and brightness to real galaxies, scientists modify the results to account for the telescope’s optics, Earth’s atmosphere, and other limiting factors. By simulating the end product, they can efficiently process and analyze the observational data.
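
    As a rough illustration of that forward-modeling step (and nothing like a real LSST image simulation), the sketch below blurs an idealized simulated sky with a Gaussian point-spread function standing in for the optics and atmosphere, then adds detector noise. The PSF width and noise level are made-up values.

```python
# Minimal sketch of forward-modeling a telescope: smear an idealized simulated
# sky with a Gaussian point-spread function (a stand-in for optics plus
# atmosphere) and add detector noise. All numbers are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
image = np.zeros((128, 128))

# "True" sky: a handful of point-like sources with random positions and fluxes.
for _ in range(20):
    y, x = rng.integers(0, 128, size=2)
    image[y, x] += rng.uniform(50, 500)

psf_sigma = 1.5                        # blur from seeing + optics, in pixels
noise_level = 2.0                      # detector/sky noise, arbitrary units

observed = gaussian_filter(image, psf_sigma)            # what the telescope smears
observed += rng.normal(0.0, noise_level, image.shape)   # what the detector adds

print("true peak flux:", image.max().round(1),
      "-> observed peak:", observed.max().round(1))
```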

    Simulations are also an ideal way to tackle many sources of systematic error in dark energy research. By all appearances, dark energy acts as a repulsive force. But if other, inconsistent properties of dark energy emerge in new data or observations, different theories and a way of validating them will be needed.

    “If you want to look at theories beyond the cosmological constant, you can make predictions through simulation,” Heitmann says.

    A conventional way to test new scientific theories is to introduce change into a system and compare it to a control. But in the case of cosmology, we are stuck in our universe, and the only way scientists may be able to uncover the nature of dark energy—at least in the foreseeable future—is by unleashing alternative theories in a virtual universe.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 10:17 am on December 21, 2015 Permalink | Reply
    Tags: , Benchmarks, , Supercomputing   

    From Sandia: “Supercomputer benchmark gains adherents” 


    Sandia Lab

    1
    Sandia National Laboratories researcher Mike Heroux, developer of the High Performance Conjugate Gradients program that uses complex criteria to rank supercomputers. (Photo by Randy Montoya)

    More than 60 supercomputers were ranked by the emerging tool, termed the High Performance Conjugate Gradients (HPCG) benchmark, in ratings released at the annual supercomputing meeting SC15 in late November. Eighteen months earlier, only 15 supercomputers were on the list.

    “HPCG is designed to complement the traditional High Performance Linpack (HPL) benchmark used as the official metric for ranking the top 500 systems,” said Sandia National Laboratories researcher Mike Heroux, who developed the HPCG program in collaboration with Jack Dongarra and Piotr Luszczek from the University of Tennessee.

    The current HPCG list contains many of the same entries as the top 50 systems on Linpack’s TOP500, but it significantly shuffles the HPL rankings, indicating that HPCG puts different system characteristics through their paces.

    This is because the different measures provided by HPCG and HPL act as bookends on the performance spectrum of a given system, said Heroux. “While HPL tests supercomputer speed in solving relatively straightforward problems, HPCG’s more complex criteria test characteristics such as high-performance interconnects, memory systems and fine-grain cooperative threading that are important to a different and broader set of applications.”
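
    For readers curious what the benchmark's namesake algorithm looks like, here is a plain conjugate gradient loop in Python. It is not the HPCG reference code, which adds preconditioning and a specific sparse 3D test problem, but it shows why such workloads stress memory systems and interconnects: each iteration is dominated by a sparse matrix-vector product and a few dot products rather than dense floating-point arithmetic.

```python
# A plain (unpreconditioned) conjugate gradient solver for A x = b, with A
# symmetric positive definite. Not the HPCG reference implementation; shown
# only to illustrate the memory-bound structure of each iteration.
import numpy as np
from scipy.sparse import diags

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                     # sparse matrix-vector product (the hot spot)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small 1D Laplacian test problem (HPCG itself uses a much larger 3D analogue).
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```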

    Heroux said only time will tell whether supercomputer manufacturers and users gravitate toward HPCG as a useful test. “All major vendor computing companies have invested heavily in optimizing our benchmark. All participating system owners have dedicated machine time to make runs. These investments are the strongest confirmation that we have developed something useful.

    “Many benchmarks have been proposed as complements or even replacements for Linpack,” he said. “We have had more success than previous efforts. But there is still a lot of work to keep the effort going.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Sandia Campus
    Sandia National Laboratory

    Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.

     