Tagged: Supercomputing

  • richardmitnick 2:54 pm on November 25, 2015 Permalink | Reply
Tags: Supercomputing

    From Science Node: “Supercomputers put photosynthesis in the spotlight” 


    David Lugmayer

    Courtesy Julia Schwab, Pixabay (CC0 Public Domain).

    Photosynthesis is one of the most important processes on Earth, essential to the existence of much life on our planet. But for all its importance, scientists still do not understand some of the small-scale processes of how plants absorb light.

    An international team, led by researchers from the University of the Basque Country (UPV/EHU) in Spain, has conducted detailed simulations of the processes behind photosynthesis. Working in collaboration with several other universities and institutions, the researchers are using supercomputers to better understand how photosynthesis functions at the most basic level.

Photosynthesis is fundamental to much life on Earth. By converting energy from our sun into a chemical form that can be stored, it sustains the plethora of plant life that covers the globe. Without photosynthesis, plants would not exist, and neither would the animals that depend on them for food and oxygen. During photosynthesis, carbon dioxide and water are converted into carbohydrates and oxygen. This process requires energy, which sunlight provides.

    Over half of the sunlight that green plants capture for use in photosynthesis is absorbed by a complex of chlorophyll molecules and proteins called the light-harvesting complex (LHC II). Yet the scientific community still does not fully understand how this molecule acts when it absorbs photons of light.

    The LHC II molecule, visualized here, is a complex of proteins and chlorophyll molecules. It is responsible for capturing over 50% of the solar energy absorbed for the process of photosynthesis. Image courtesy Joaquim Jornet-Somoza and colleagues (CC BY 3.0)

To help illuminate this mystery, the team at UPV/EHU is simulating the LHC II molecule using a quantum mechanical theory called ‘real-space time-dependent density functional theory’ (TDDFT), implemented in a software package called ‘Octopus’. Simulating LHC II is an impressive feat considering that the molecule is composed of more than 17,000 atoms, each of which must be simulated individually.

Because of the size and complexity of the study, some of the TDDFT calculations required significant computing resources. Two supercomputers, MareNostrum III and Hydra, played an important role in the experiment. Joaquim Jornet-Somoza, a postdoctoral researcher from the University of Barcelona in Spain, explains why: “The memory storage needed to solve the equations, and the number of algorithmic operations increases exponentially with the number of electrons that are involved. For that reason, the use of supercomputers is essential for our goal. The use of parallel computing reduces the execution time and makes resolving quantum mechanical equations feasible.” In total, 2.6 million core hours were used for the study.

    MareNostrum III


However, to run these simulations, several issues first had to be sorted out, and the Octopus software code had to be extensively optimized to cope with a calculation of this size. “Our group has worked on the enhancement of the Octopus package to run on parallel-computing systems,” says Jornet.

The simulations, comprising thousands of atoms, are reported to be the largest of their kind performed to date. Nevertheless, the team is still working towards simulating the full 17,000 atoms of the LHC II complex. “The maximum number of atoms simulated in our calculations was 6,025, all of them treated at the TDDFT level. These calculations required the use of 5,120 processors, and around 10 TB of memory,” explains Jornet.
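Jornet’s figures lend themselves to a quick back-of-the-envelope check. A minimal sketch, assuming (since the article does not say) that the 2.6 million core hours were spent largely at the 5,120-processor scale:

```python
# Rough arithmetic on the quoted resources; the split of core hours
# across runs is an assumption, not something the article states.
CORE_HOURS = 2.6e6   # total core hours used for the study
CORES = 5_120        # processors used for the largest (6,025-atom) run
MEMORY_TB = 10       # total memory for that run

wall_clock_hours = CORE_HOURS / CORES           # if all hours ran at this scale
memory_per_core_gb = MEMORY_TB * 1024 / CORES   # TB -> GB, spread over cores

print(f"~{wall_clock_hours:.0f} hours (~{wall_clock_hours / 24:.0f} days) of wall-clock time")
print(f"~{memory_per_core_gb:.1f} GB of memory per core")
```

At roughly 2 GB per core, the 10 TB footprint fits within typical supercomputer node memory, which is part of what makes spreading the problem across thousands of cores feasible.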

    The implications of the study are twofold, says Jornet. From a photosynthetic perspective, it shows that the LHC II complex has evolved to optimize the capture of light energy. From a computational perspective, the team successfully applied quantum mechanical simulations on a system comprised of thousands of atoms, paving the way for similar studies on large systems.

    The study, published in the journal Physical Chemistry Chemical Physics, proposed that studying the processes behind photosynthesis could also yield applied benefits. One such benefit is the optimization of crop production. Enhanced understanding of photosynthesis could also potentially be used to improve solar power technologies or the production of hydrogen fuel.

See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

STEM Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

  • richardmitnick 11:38 am on November 20, 2015 Permalink | Reply
Tags: Supercomputing

    From BNL: “Supercomputing the Strange Difference Between Matter and Antimatter” 

    Brookhaven Lab

    November 20, 2015
    Karen McNulty Walsh, (631) 344-8350
    Peter Genzer, (631) 344-3174

    Members of the “RIKEN-Brookhaven-Columbia” Collaboration who participated in this work (seated L to R): Taku Izubuchi (RIKEN BNL Research Center, or RBRC, and Brookhaven Lab), Christoph Lehner (Brookhaven), Robert Mawhinney (Columbia University), Amarjit Soni (Brookhaven), Norman Christ (Columbia), Christopher Kelly (RBRC), Chulwoo Jung (Brookhaven); (standing L to R): Sergey Syritsyn (RBRC), Tomomi Ishikawa (RBRC), Luchang Jin (Columbia), Shigemi Ohta (RBRC), and Seth Olsen (Columbia). Mawhinney, Soni, and Christ were the founding members of the collaboration, along with Thomas Blum (not shown, now at the University of Connecticut).

    Supercomputers such as Brookhaven Lab’s Blue Gene/Q were essential for completing the complex calculation of direct CP symmetry violation. The same calculation would have required two thousand years using a laptop.

    An international team of physicists including theorists from the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory has published the first calculation of direct “CP” symmetry violation—how the behavior of subatomic particles (in this case, the decay of kaons) differs when matter is swapped out for antimatter. Should the prediction represented by this calculation not match experimental results, it would be conclusive evidence of new, unknown phenomena that lie outside of the Standard Model—physicists’ present understanding of the fundamental particles and the forces between them.

The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    The current result—reported in the November 20 issue of Physical Review Letters—does not yet indicate such a difference between experiment and theory, but scientists expect the precision of the calculation to improve dramatically now that they’ve proven they can tackle the task. With increasing precision, such a difference—and new physics—might still emerge.

“This so-called ‘direct’ symmetry violation is a tiny effect, showing up in just a few particle decays in a million,” said Brookhaven physicist Taku Izubuchi, a member of the team performing the calculation. Results from the first, less difficult part of this calculation were reported by the same group in 2012. However, it is only now, with completion of the second part of this calculation—which was hundreds of times more difficult than the first—that a comparison with the measured size of direct CP violation can be made. This final part of the calculation required more than 200 million core processing hours on supercomputers, “and would have required two thousand years using a laptop,” Izubuchi said.
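The laptop comparison is easy to sanity-check. A minimal sketch of the conversion, assuming a laptop running around the clock:

```python
# Sanity check on the "two thousand years using a laptop" figure.
CORE_HOURS = 200e6          # supercomputer core hours for the calculation
HOURS_PER_YEAR = 24 * 365   # a laptop running nonstop

single_core_years = CORE_HOURS / HOURS_PER_YEAR
print(f"on one core: ~{single_core_years:,.0f} years")

# The quoted 2,000 years corresponds to a laptop sustaining roughly
# this many cores' worth of throughput, flat out:
implied_cores = CORE_HOURS / (2_000 * HOURS_PER_YEAR)
print(f"implied sustained parallelism: ~{implied_cores:.0f} cores")
```

A single core would need over 20,000 years; the quoted figure implies a laptop delivering about a dozen cores’ worth of sustained throughput.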

The calculation determines the size of the symmetry-violating effect as predicted by the Standard Model; the result was compared with experimental values that were firmly established in 2000 at the European Organization for Nuclear Research (CERN) and Fermi National Accelerator Laboratory.

    “This is an especially important place to compare with the Standard Model because the small size of this effect increases the chance that other, new phenomena may become visible,” said Robert Mawhinney of Columbia University.

“Although the result from this direct CP violation calculation is consistent with the experimental measurement, revealing no inconsistency with the Standard Model, the calculation is ongoing, with an accuracy that is expected to increase twofold within two years,” said Peter Boyle of the University of Edinburgh. “This leaves open the possibility that evidence for new phenomena, not described by the Standard Model, may yet be uncovered.”

    Matter-antimatter asymmetry

    Physicists’ present understanding of the universe requires that particles and their antiparticles (which have the same mass but opposite charge) behave differently. Only with matter-antimatter asymmetry can they hope to explain why the universe, which was created with equal parts of matter and antimatter, is filled mostly with matter today. Without this asymmetry, matter and antimatter would have annihilated one another leaving a cold, dim glow of light with no material particles at all.

    The first experimental evidence for the matter-antimatter asymmetry known as CP violation was discovered in 1964 at Brookhaven Lab. This Nobel-Prize-winning experiment also involved the decays of kaons, but demonstrated what is now referred to as “indirect” CP violation. This violation arises from a subtle imperfection in the two distinct types of neutral kaons.

    The target of the present calculation is a phenomenon that is even more elusive: a one-part-in-a-million difference between the matter and antimatter decay probabilities. The small size of this “direct” CP violation made its experimental discovery very difficult, requiring 36 years of intense experimental effort following the 1964 discovery of “indirect” CP violation.

    While these two examples of matter-antimatter asymmetry are of very different size, they are related by a remarkable theory for which physicists Makoto Kobayashi and Toshihide Maskawa were awarded the 2008 Nobel Prize in physics. The theory provides an elegant and simple explanation of CP violation that manages to explain both the 1964 experiment and later CP-violation measurements in experiments at the KEK laboratory in Japan and the SLAC National Accelerator Laboratory in California.

    “This new calculation provides another test of this theory—a test that the Standard Model passes, at least at the present level of accuracy,” said Christoph Lehner, a Brookhaven Lab member of the team.

    Although the Standard Model does successfully relate the matter-antimatter asymmetries seen in the 1964 and later experiments, this Standard-Model asymmetry is insufficient to explain the preponderance of matter over antimatter in the universe today.

    “This suggests that a new mechanism must be responsible for the preponderance of matter of which we are made,” said Christopher Kelly, a member of the team from the RIKEN BNL Research Center (RBRC). “This one-part-per-million, direct CP violation may be a good place to first see it. The approximate agreement between this new calculation and the 2000 experimental results suggests that we need to look harder, which is exactly what the team performing this calculation plans to do.”

This calculation was carried out on the Blue Gene/Q supercomputers at the RIKEN BNL Research Center (RBRC), at Brookhaven National Laboratory, at the Argonne Leadership Class Computing Facility (ALCF) at Argonne National Laboratory, and at the DiRAC facility at the University of Edinburgh. The research was carried out by Ziyuan Bai, Norman Christ, Robert Mawhinney, and Daiqian Zhang of Columbia University; Thomas Blum of the University of Connecticut; Peter Boyle and Julien Frison of the University of Edinburgh; Nicolas Garron of Plymouth University; Chulwoo Jung, Christoph Lehner, and Amarjit Soni of Brookhaven Lab; Christopher Kelly and Taku Izubuchi of the RBRC and Brookhaven Lab; and Christopher Sachrajda of the University of Southampton. The work was funded by the U.S. Department of Energy’s Office of Science, by the RIKEN Laboratory of Japan, and by the U.K. Science and Technology Facilities Council. The ALCF is a DOE Office of Science User Facility.

See the full article here.


One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

  • richardmitnick 4:02 pm on November 19, 2015 Permalink | Reply
Tags: Supercomputing

    From LLNL: “Tri-lab collaboration that will bring Sierra supercomputer to Lab recognized” 

    Lawrence Livermore National Laboratory

    Sierra is the next in a long line of supercomputers at Lawrence Livermore National Laboratory.

    The collaboration of Oak Ridge, Argonne and Lawrence Livermore (CORAL) that will bring the Sierra supercomputer to the Lab in 2018 has been recognized by HPCWire with an Editor’s Choice Award for Best HPC Collaboration between Government and Industry.

The award was received in the DOE booth at Supercomputing 2015 (SC15) by Doug Wade, head of the Advanced Simulation and Computing (ASC) program, and by representatives from Oak Ridge and Argonne. HPCWire is an online news service that covers the high performance computing (HPC) industry.

    CORAL represents an innovative procurement strategy pioneered by Livermore that couples acquisition with R&D non-recurring engineering (NRE) contracts that make it possible for vendors to assume greater risks in their proposals than they would otherwise for an HPC system that is several years out. Delivery of Sierra is expected in late 2017 with full deployment in 2018. This procurement strategy has since been widely adopted by DOE labs.

    CORAL’s industry partners include IBM, NVIDIA and Mellanox. In addition to bringing Sierra to Livermore, CORAL will bring an HPC system called Summit to Oak Ridge National Laboratory and a system called Aurora to Argonne National Laboratory.

    Summit supercomputer

    Aurora supercomputer

    Sierra will be an IBM system expected to exceed 120 petaflops (120 quadrillion floating point operations per second) and will serve NNSA’s ASC program, an integral part of stockpile stewardship.

In other SC15 news, LLNL’s 20-petaflop (20 quadrillion floating point operations per second) IBM Blue Gene Q Sequoia system was again ranked No. 3 on the Top500 list of the world’s most powerful supercomputers released Tuesday. For the third year running, the Chinese Tianhe-2 (Milky Way-2) supercomputer holds the No. 1 ranking on the list, followed by Titan at Oak Ridge National Laboratory. LLNL’s 5-petaflop Vulcan, also a Blue Gene Q system, dropped out of the top 10 on the list and is now ranked No. 12.

    IBM Blue Gene Q Sequoia system

Tianhe-2 supercomputer

    Titan supercomputer

    The United States has five of the top 10 supercomputers on the Top500 and four of those are DOE and NNSA systems. In addition to China, other countries with HPC systems in the top 10 include Germany, Japan, Switzerland and Saudi Arabia.

See the full article here.


Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

  • richardmitnick 9:48 am on November 16, 2015 Permalink | Reply
Tags: Q Continuum, Supercomputing

    From ANL: “Researchers model birth of universe in one of largest cosmological simulations ever run” 

    News from Argonne National Laboratory

    October 29, 2015
    Louise Lerner

This series shows the evolution of the universe as simulated by a run called the Q Continuum, performed on the Titan supercomputer and led by Argonne physicist Katrin Heitmann. These images give an impression of the detail in the matter distribution in the simulation. At first the matter is very uniform, but over time gravity acts on the dark matter, which begins to clump more and more, and in the clumps, galaxies form. Image by Heitmann et al.

    Researchers are sifting through an avalanche of data produced by one of the largest cosmological simulations ever performed, led by scientists at the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory.

    The simulation, run on the Titan supercomputer at DOE’s Oak Ridge National Laboratory, modeled the evolution of the universe from just 50 million years after the Big Bang to the present day — from its earliest infancy to its current adulthood. Over the course of 13.8 billion years, the matter in the universe clumped together to form galaxies, stars, and planets; but we’re not sure precisely how.


These kinds of simulations help scientists understand dark energy, a form of energy that affects the expansion rate of the universe, as well as the distribution of galaxies, which are composed of ordinary matter along with dark matter, a mysterious kind of matter that no instrument has directly measured so far.

    Galaxies have halos surrounding them, which may be composed of both dark and regular matter. This image shows a substructure within a halo in the Q Continuum simulation, with “subhalos” marked in different colors. Image by Heitmann et al.

    Intensive sky surveys with powerful telescopes, like the Sloan Digital Sky Survey and the new, more detailed Dark Energy Survey, show scientists where galaxies and stars were when their light was first emitted.

SDSS telescope at Apache Point, NM, USA

Dark Energy Camera (DECam) on the CTIO Victor M. Blanco 4m Telescope in Chile, where it is housed

    And surveys of the Cosmic Microwave Background [CMB], light remaining from when the universe was only 300,000 years old, show us how the universe began — “very uniform, with matter clumping together over time,” said Katrin Heitmann, an Argonne physicist who led the simulation.

Cosmic Microwave Background (Planck)

    The simulation fills in the temporal gap to show how the universe might have evolved in between: “Gravity acts on the dark matter, which begins to clump more and more, and in the clumps, galaxies form,” said Heitmann.

    Called the Q Continuum, the simulation involved half a trillion particles — dividing the universe up into cubes with sides 100,000 kilometers long. This makes it one of the largest cosmology simulations at such high resolution. It ran using more than 90 percent of the supercomputer. For perspective, typically less than one percent of jobs use 90 percent of the Mira supercomputer at Argonne, said officials at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility. Staff at both the Argonne and Oak Ridge computing facilities helped adapt the code for its run on Titan.

    “This is a very rich simulation,” Heitmann said. “We can use this data to look at why galaxies clump this way, as well as the fundamental physics of structure formation itself.”

    Analysis has already begun on the two and a half petabytes of data that were generated, and will continue for several years, she said. Scientists can pull information on such astrophysical phenomena as strong lensing, weak lensing shear, cluster lensing and galaxy-galaxy lensing.
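For a sense of scale, the stored output works out to only a few kilobytes per particle. A quick sketch, assuming (the article does not specify) that the 2.5 PB covers all stored snapshots and uses binary units:

```python
# Rough data volume per simulated particle; snapshot coverage and
# binary (1024-based) units are assumptions, not article statements.
DATA_PB = 2.5
PARTICLES = 0.55e12   # "half a trillion" particles

bytes_total = DATA_PB * 1024**5                  # PB -> bytes
per_particle_kb = bytes_total / PARTICLES / 1024
print(f"~{per_particle_kb:.1f} KB stored per particle across the run")
```

That a half-trillion-particle run compresses to roughly 5 KB per particle reflects how selectively such simulations must write out their state.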

    The code to run the simulation is called Hardware/Hybrid Accelerated Cosmology Code (HACC), which was first written in 2008, around the time scientific supercomputers broke the petaflop barrier (a quadrillion operations per second). HACC is designed with an inherent flexibility that enables it to run on supercomputers with different architectures.
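HACC itself combines a particle-mesh solver with short-range force corrections, and its source is not reproduced here. The toy sketch below only illustrates the core operation every cosmological N-body code performs, computing gravity on each particle and advancing the system in time, using a naive O(N²) direct sum with made-up units and softening:

```python
# Toy N-body integrator: a minimal stand-in for the gravity step in
# cosmology codes. Production codes like HACC replace the O(N^2)
# direct sum below with particle-mesh plus short-range corrections.
import numpy as np

def nbody_step(pos, vel, mass, dt, g=1.0, soft=1e-2):
    """One kick-drift-kick leapfrog step for N particles (direct summation)."""
    def accel(p):
        d = p[np.newaxis, :, :] - p[:, np.newaxis, :]   # pairwise offsets d[i,j] = p[j]-p[i]
        r2 = (d ** 2).sum(axis=-1) + soft ** 2          # softened squared distances
        np.fill_diagonal(r2, np.inf)                    # exclude self-interaction
        m_over_r3 = mass[np.newaxis, :, np.newaxis] / r2[..., np.newaxis] ** 1.5
        return g * (d * m_over_r3).sum(axis=1)          # acceleration on each particle
    vel = vel + 0.5 * dt * accel(pos)   # half kick
    pos = pos + dt * vel                # drift
    vel = vel + 0.5 * dt * accel(pos)   # half kick
    return pos, vel

rng = np.random.default_rng(0)
n = 64                                  # tiny demo; Q Continuum used ~5.5e11
pos = rng.standard_normal((n, 3))
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)
for _ in range(10):
    pos, vel = nbody_step(pos, vel, mass, dt=0.01)
```

Production codes avoid the direct sum precisely because half a trillion particles make O(N²) pair interactions hopeless; the flexibility the article mentions lies in how HACC maps the long-range and short-range pieces onto different hardware.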

    Details of the work are included in the study, The Q continuum simulation: harnessing the power of GPU accelerated supercomputers, published in August in the Astrophysical Journal Supplement Series by the American Astronomical Society. Other Argonne scientists on the study included Nicholas Frontiere, Salman Habib, Adrian Pope, Hal Finkel, Silvio Rizzi, Joe Insley and Suman Bhattacharya, as well as Chris Sewell at DOE’s Los Alamos National Laboratory.

    This work was supported by the DOE Office of Science (Scientific Discovery through Advanced Computing (SciDAC) jointly by High Energy Physics and Advanced Scientific Computing Research ) and used resources of the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory, a DOE Office of Science User Facility. The work presented here results from an award of computer time provided by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program at the OLCF.

See the full article here.

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit http://science.energy.gov/user-facilities/basic-energy-sciences/.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science


  • richardmitnick 5:26 pm on November 12, 2015 Permalink | Reply
Tags: Supercomputing

    From LLNL: “State grant enables energy-saving retrofit of Lawrence Livermore computing clusters” 

    Lawrence Livermore National Laboratory

    Nov. 12, 2015
    Don Johnston

    Anna Maria Bailey, LLNL high performance computing facility manager, with the Cab supercomputer that will be retrofitted with liquid cooling in January. Photos by Julie Russell/LLNL

    Supercomputers at Lawrence Livermore National Laboratory (LLNL) will be retrofitted with liquid cooling systems under a California Energy Commission (CEC) grant to assess potential energy savings.

Asetek, a leading provider of energy-efficient liquid cooling systems for data centers, servers and HPC clusters, has received a $3.5 million grant from the CEC for retrofits at two California high performance computing (HPC) centers. A second, yet-to-be-disclosed California data center also will be retrofitted under the grant next year. Energy savings and the associated cost reductions are critical to data centers and supercomputing facilities around the world.

    “We are excited about this important project,” said John Hamill, Asetek vice president for worldwide sales. “Not only will it benefit LLNL, but the results gathered from the project will be used to improve energy efficiency of data centers worldwide.”

    Lawrence Livermore’s CAB supercomputer will undergo retrofitting in January during the first phase of the project.

    Lawrence Livermore’s CAB supercomputer

CAB, a Linux commodity cluster delivering 431 teraflops (trillions of floating point operations per second) for unclassified computing, is currently air cooled. Later next year, one of the recently announced Commodity Technology System (CTS-1) clusters slated for installation at Lawrence Livermore also will be fitted with an emerging Asetek liquid cooling technology as the second phase of this grant.


    “This is an exciting project with Asetek that will help advance the state-of-the-art in energy savings for data centers,” said Anna Maria Bailey, LLNL’s HPC facilities manager and a co-organizer of the annual Energy Efficient Working Group workshop conducted at the upcoming Supercomputing Conference. “As part of this project, we will measure savings and assess such potential benefits as improved computational performance.”

LLNL has entered into a “work for others” contract with Asetek to undertake the retrofits. The selected systems for the first phase of the project, all currently air cooled, will be retrofitted with Asetek’s all-in-one liquid cooling technology. The liquid cooling technology is used to reduce power consumption, greenhouse gas emissions and noise in data centers, servers and HPC systems.

The project and many other similar energy-efficient HPC efforts will be discussed during the Energy Efficient High Performance Computing Working Group’s all-day workshop at SC15, 9 a.m. to 5:30 p.m. Monday, Nov. 16, in Salon A of the Hilton Hotel, Austin, Texas. Bailey is the co-chair of the group.

    “Reducing power consumption and the associated costs at data centers and high performance computing facilities is a leading concern of the HPC community,” Bailey said, noting that “addressing this issue is critical as we develop ever more powerful next-generation supercomputers.”
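The article does not quote efficiency figures, but facility-level savings of the kind Bailey describes are usually framed in terms of Power Usage Effectiveness (PUE), the ratio of total facility power to IT power. A sketch with purely hypothetical numbers, not LLNL figures:

```python
# How data-center savings from liquid cooling are typically quantified,
# via PUE (total facility power / IT power). All values below are
# hypothetical illustrations, not measurements from the LLNL project.
IT_POWER_MW = 2.0        # hypothetical IT load of one cluster
PUE_AIR = 1.5            # plausible air-cooled facility
PUE_LIQUID = 1.2         # plausible liquid-cooled retrofit
HOURS_PER_YEAR = 24 * 365

saved_mw = IT_POWER_MW * (PUE_AIR - PUE_LIQUID)   # overhead power avoided
saved_mwh = saved_mw * HOURS_PER_YEAR             # annual energy savings

print(f"~{saved_mw:.1f} MW of cooling overhead avoided")
print(f"~{saved_mwh:,.0f} MWh per year")
```

Measurements like the ones LLNL plans to gather are what turn such rough estimates into validated, facility-specific numbers.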

    Asetek is a leading provider of energy efficient liquid cooling systems for data centers, servers, workstations, gaming and high performance PCs.

See the full article here.


  • richardmitnick 2:14 pm on October 19, 2015 Permalink | Reply
Tags: Supercomputing

    From Science Node: “How did the universe get here?” 


    09 Oct, 2015
    Lance Farrell

    Courtesy DEUS-FUR; V Reverdy.

    The big bang model explains the history of the universe, but scientists are looking to an unseen force called dark energy to explain the universe’s accelerating expansion. To find dark energy’s cosmic fingerprints, scientists simulated the entire expansion of the universe on the Curie supercomputer.

    Curie supercomputer designed by Bull for GENCI

    To answer the world’s oldest question, researchers at the Dark Energy Universe Simulation: Full Universe Runs (DEUS-FUR) project did something new. Using the Curie supercomputer they were the first to simulate the unfolding of the entire universe to see how dark energy might lie hidden in plain sight.

    Using 50 million hours of supercomputing time (3,500 years if performed on a single computer), the DEUS group was able to calculate the trajectory of about 550 billion dark matter particles, each the size of our galaxy. Courtesy DEUS Consortium.

    Explaining the evolution of our universe is a timeless task. One hundred years ago, the theory of general relativity launched a scientific method to model the cosmos. From this revolution in thought came the notion of the big bang — a singularity of infinite density that expanded about 14 billion years ago, creating the known universe in its wake. With the discovery of an accelerating universe, the big bang model has grown complicated; dark energy is one of the prevailing models to account for the speeding expansion.

    But simulating the unfolding of the whole universe is a tremendous computational feat, taxing even the most powerful computers. DEUS scientists ran full universe models on 4,752 nodes and 300 TB of memory on Curie, one of Europe’s first petascale supercomputers. To highlight the numerical challenge behind the simulations, the team updated their findings last month in the International Journal of High Performance Computing Applications.
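A quick check on the quoted DEUS-FUR resources, treating the 50 million hours as core hours (the article does not make this explicit; its 3,500-year figure presumably assumes a “single computer” somewhat faster than one core):

```python
# Quick arithmetic on the quoted DEUS-FUR resources.
CPU_HOURS = 50e6      # "50 million hours of supercomputing time"
NODES = 4_752         # Curie nodes used for the full-universe runs
MEMORY_TB = 300       # total memory for the runs
HOURS_PER_YEAR = 24 * 365

print(f"one core, nonstop: ~{CPU_HOURS / HOURS_PER_YEAR:,.0f} years")
print(f"memory per node: ~{MEMORY_TB * 1024 / NODES:.0f} GB")
```

At roughly 65 GB of memory per node, the runs pressed close to the capacity of a petascale machine of Curie’s generation.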

    “In our domain, high-performance computing is our only way to build universes, let them evolve to explore theoretical hypotheses, connect them to observations, and understand how observed phenomena emerge from fundamental laws,” says Vincent Reverdy, numerical cosmologist in the Department of Astronomy at the University of Illinois at Urbana-Champaign and co-author of the DEUS-FUR study.

    To reveal the hidden imprint of dark energy and chart the historical development of cosmic structures (clusters and superclusters of galaxies), the DEUS project contrasted three competing dark energy models, each suggesting a different history of structure formation.

Map of voids and superclusters within 500 million light years from the Milky Way. Source: http://www.atlasoftheuniverse.com/nearsc.html; author Richard Powell (08/11/09).

    By simulating the path of photons through star clusters, the DEUS group learned that even slight deviations between models can change the way observers perceive the universe — not to mention the results they obtain while measuring cosmological distances.

    The simulations are a first in numerical cosmology, but the technical effort to achieve them points to applicability for compilers, geo-localization software, and artificial intelligence, Reverdy says. As satisfying as the applications might be, for Reverdy, the real prize is found in discovery.

    “For me, the goal of research is, before anything else, cultural and philosophical,” he says, cautious of appearing pompous. “Physics tells us something fundamental about what Nature is, or more exactly about what Nature is not. This project is a very small piece in the large puzzle of understanding gravity and the accelerating expansion of the universe.”

    Curie, one of Europe’s first petascale supercomputers. Together with earlier runs by the DEUS group, scientists now have simulations scaling from less than 1/100 the size of the Milky Way up to the entirety of the observable universe, the first suite of simulations to span this range. Courtesy DEUS-FUR; V. Reverdy.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

  • richardmitnick 2:51 pm on August 27, 2015 Permalink | Reply
    Tags: , , , Supercomputing   

    From NERSC: “NERSC, Cray Move Forward With Next-Generation Scientific Computing” 

    NERSC Logo

    April 22, 2015
    Jon Bashor, jbashor@lbl.gov, 510-486-5849

    The Cori Phase 1 system will be the first supercomputer installed in the new Computational Research and Theory Facility now in the final stages of construction at Lawrence Berkeley National Laboratory.

    The U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing (NERSC) Center and Cray Inc. announced today that they have finalized a new contract for a Cray XC40 supercomputer that will be the first NERSC system installed in the newly built Computational Research and Theory facility at Lawrence Berkeley National Laboratory.


    This supercomputer will be used as Phase 1 of NERSC’s next-generation system named “Cori” in honor of biochemist and Nobel Laureate Gerty Cori. Expected to be delivered this summer, the Cray XC40 supercomputer will feature the Intel Haswell processor. The second phase, the previously announced Cori system, will be delivered in mid-2016 and will feature the next-generation Intel Xeon Phi™ processor “Knights Landing,” a self-hosted, manycore processor with on-package high bandwidth memory that offers more than 3 teraflop/s of double-precision peak performance per single socket node.

    NERSC serves as the primary high performance computing facility for the Department of Energy’s Office of Science, supporting some 6,000 scientists annually on more than 700 projects. This latest contract represents the Office of Science’s ongoing commitment to supporting computing to address challenges such as developing new energy sources, improving energy efficiency, understanding climate change and analyzing massive data sets from observations and experimental facilities around the world.

    “This is an exciting year for NERSC and for NERSC users,” said Sudip Dosanjh, director of NERSC. “We are unveiling a brand new, state-of-the-art computing center and our next-generation supercomputer, designed to help our users begin the transition to exascale computing. Cori will allow our users to take their science to a level beyond what our current systems can do.”

    “NERSC and Cray share a common vision around the convergence of supercomputing and big data, and Cori will embody that overarching technical direction with a number of unique, new technologies,” said Peter Ungaro, president and CEO of Cray. “We are honored that the first supercomputer in NERSC’s new center will be our flagship Cray XC40 system, and we are also proud to be continuing and expanding our longstanding partnership with NERSC and the U.S. Department of Energy as we chart our course to exascale computing.”
    Support for Data-Intensive Science

    A key goal of the Cori Phase 1 system is to support the increasingly data-intensive computing needs of NERSC users. Toward this end, Phase 1 of Cori will feature more than 1,400 Intel Haswell compute nodes, each with 128 gigabytes of memory per node. The system will provide about the same sustained application performance as NERSC’s Hopper system, which will be retired later this year. The Cori interconnect will have a dragonfly topology based on the Aries interconnect, identical to NERSC’s Edison system.

    However, Cori Phase 1 will have twice as much memory per node as NERSC’s current Edison supercomputer (a Cray XC30 system) and will include a number of advanced features designed to accelerate data-intensive applications:

    Large number of login/interactive nodes to support applications with advanced workflows
    Immediate access queues for jobs requiring real-time data ingestion or analysis
    High-throughput and serial queues can handle a large number of jobs for screening, uncertainty quantification, genomic data processing, image processing and similar parallel analysis
    Network connectivity that allows compute nodes to interact with external databases and workflow controllers
    The first half of an approximately 1.5 terabytes/sec NVRAM-based Burst Buffer for high bandwidth low-latency I/O
    A Cray Lustre-based file system with over 28 petabytes of capacity and 700 gigabytes/second I/O bandwidth

    In addition, NERSC is collaborating with Cray on two ongoing R&D efforts to maximize Cori’s data potential by enabling higher bandwidth transfers in and out of the compute node, high-transaction rate data base access, and Linux container virtualization functionality on Cray compute nodes to allow custom software stack deployment.

    “The goal is to give users as familiar a system as possible, while also allowing them the flexibility to explore new workflows and paths to computation,” said Jay Srinivasan, the Computational Systems Group lead. “The Phase 1 system is designed to enable users to start running their workload on Cori immediately, while giving data-intensive workloads from other NERSC systems the ability to run on a Cray platform.”
    Burst Buffer Enhances I/O

    A key element of Cori Phase 1 is Cray’s new DataWarp technology, which accelerates application I/O and addresses the growing performance gap between compute resources and disk-based storage. This capability, often referred to as a “Burst Buffer,” is a layer of NVRAM designed to move data more quickly between processor and disk and allow users to make the most efficient use of the system. Cori Phase 1 will feature approximately 750 terabytes of capacity and approximately 750 gigabytes/second of I/O bandwidth. NERSC, Sandia and Los Alamos national laboratories and Cray are collaborating to define use cases and test early software that will provide the following capabilities:

    Improve application reliability (checkpoint-restart)
    Accelerate application I/O performance for small blocksize I/O and analysis files
    Enhance quality of service by providing dedicated I/O acceleration resources
    Provide fast temporary storage for out-of-core applications
    Serve as a staging area for jobs requiring large input files or persistent fast storage between coupled simulations
    Support post-processing analysis of large simulation data as well as in situ and in transit visualization and analysis using the Burst Buffer nodes
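The checkpoint-restart use case in the list above is easy to sketch. The toy loop below is a schematic illustration only: on Cori the checkpoint would target the NVRAM Burst Buffer rather than a local JSON file, and all names here are hypothetical. The idea is simply that an interrupted job resumes from its last saved state instead of step zero:

```python
import json
import os

def run_simulation(steps, checkpoint="state.json"):
    """Toy checkpoint-restart loop: periodically persist state to fast
    storage so a crashed job can resume from the last checkpoint. The
    'work' per step is a stand-in arithmetic update."""
    state = {"step": 0, "value": 0.0}
    if os.path.exists(checkpoint):
        with open(checkpoint) as f:
            state = json.load(f)  # resume from the last saved step
    while state["step"] < steps:
        state["value"] += state["step"] * 0.5  # stand-in for real work
        state["step"] += 1
        if state["step"] % 10 == 0:  # checkpoint every 10 steps
            with open(checkpoint, "w") as f:
                json.dump(state, f)
    return state
```

Because each checkpoint captures the full state, re-running after a failure recomputes at most the steps since the last save, which is exactly the reliability benefit the Burst Buffer's bandwidth makes cheap.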

    Combining Extreme Scale Data Analysis and HPC on the Road to Exascale

    As previously announced, Phase 2 of Cori will be delivered in mid-2016 and will be combined with Phase 1 on the same high speed network, providing a unique resource. When fully deployed, Cori will contain more than 9,300 Knights Landing compute nodes and more than 1,900 Haswell nodes, along with the file system and a 2X increase in the applications I/O acceleration.

    “In the scientific computing community, the line between large scale data analysis and simulation and modeling is really very blurred,” said Katie Antypas, head of NERSC’s Scientific Computing and Data Services Department. “The combined Cori system is the first system to be specifically designed to handle the full spectrum of computational needs of DOE researchers, as well as emerging needs in which data- and compute-intensive work are part of a single workflow. For example, a scientist will be able to run a simulation on the highly parallel Knights Landing nodes while simultaneously performing data analysis using the Burst Buffer on the Haswell nodes. This is a model that we expect to be important on exascale-era machines.”

    NERSC is funded by the Office of Advanced Scientific Computing Research in the DOE’s Office of Science.

    See the full article here.


    The National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is a division of the Lawrence Berkeley National Laboratory, located in Berkeley, California. NERSC itself is located at the UC Oakland Scientific Facility in Oakland, California.

    More than 5,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

    The NERSC Hopper system, a Cray XE6 with a peak theoretical performance of 1.29 Petaflop/s. To highlight its mission, powering scientific discovery, NERSC names its systems for distinguished scientists. Grace Hopper was a pioneer in the field of software development and programming languages and the creator of the first compiler. Throughout her career she was a champion for increasing the usability of computers, understanding that their power and reach would be limited unless they were made more user friendly.

    (Historical photo of Grace Hopper courtesy of the Hagley Museum & Library, PC20100423_201. Design: Caitlin Youngquist/LBNL Photo: Roy Kaltschmidt/LBNL)

    NERSC is known as one of the best-run scientific computing facilities in the world. It provides some of the largest computing and storage systems available anywhere, but what distinguishes the center is its success in creating an environment that makes these resources effective for scientific research. NERSC systems are reliable and secure, and provide a state-of-the-art scientific development environment with the tools needed by the diverse community of NERSC users. NERSC offers scientists intellectual services that empower them to be more effective researchers. For example, many of our consultants are themselves domain scientists in areas such as material sciences, physics, chemistry and astronomy, well-equipped to help researchers apply computational resources to specialized science problems.

  • richardmitnick 4:13 pm on August 17, 2015 Permalink | Reply
    Tags: , , , Supercomputing   

    From isgtw: “Simplifying and accelerating genome assembly” 

    international science grid this week

    August 12, 2015
    Linda Vu

    To extract meaning from a genome, scientists must reconstruct portions — a time-consuming process akin to rebuilding the sentences and paragraphs of a book from snippets of text. But by applying novel algorithms and high-performance computational techniques to the cutting-edge de novo genome assembly tool Meraculous, a team of scientists has simplified and accelerated genome assembly — reducing a months-long process to mere minutes.

    “The new parallelized version of Meraculous shows unprecedented performance and efficient scaling up to 15,360 processor cores for the human and wheat genomes on NERSC’s Edison supercomputer,” says Evangelos Georganas. “This performance improvement sped up the assembly workflow from days to seconds.” Courtesy NERSC.

    Researchers from the Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have made this gain by ‘parallelizing’ the DNA code — sometimes billions of bases long — to harness the processing power of supercomputers, such as the US Department of Energy’s National Energy Research Scientific Computing Center’s (NERSC’s) Edison system. (Parallelizing means splitting up tasks to run on the many nodes of a supercomputer at once.)

    “Using the parallelized version of Meraculous, we can now assemble the entire human genome in about eight minutes,” says Evangelos Georganas, a UC Berkeley graduate student. “With this tool, we estimate that the output from the world’s biomedical sequencing capacity could be assembled using just a portion of the Berkeley-managed NERSC’s Edison supercomputer.”

    Supercomputers: A game changer for assembly

    High-throughput next-generation DNA sequencers allow researchers to look for biological solutions — and for the most part, these machines are very accurate at recording the sequence of DNA bases. Sometimes errors do occur, however. These errors complicate analysis by making it harder to assemble genomes and identify genetic mutations. They can also lead researchers to misinterpret the function of a gene.

    Researchers use a technique called shotgun sequencing to identify these errors. This involves taking numerous copies of a DNA strand, breaking it up into random smaller pieces and then sequencing each piece separately. For a particularly complex genome, this process can generate several terabytes of data.

    To identify data errors quickly and effectively, the Berkeley Lab and UC Berkeley team use ‘Bloom filters‘ and massively parallel supercomputers. “Applying Bloom filters has been done before, but what we have done differently is to get Bloom filters to work with distributed memory systems,” says Aydin Buluç, a research scientist in Berkeley Lab’s Computational Research Division (CRD). “This task was not trivial; it required some computing expertise to accomplish.”
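The mechanics of that screening step can be sketched in plain Python. The toy below is a single-process illustration, not the team's distributed-memory implementation: a Bloom filter records first sightings of k-mers, so exact counts are kept only for k-mers that recur and are therefore less likely to be one-off sequencing errors.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a bit array plus several hash functions.
    Membership tests can give false positives but never false
    negatives, which is why pipelines pair it with an exact second
    pass for items the filter claims to have seen."""
    def __init__(self, size=1 << 20, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = bytearray(size // 8)

    def _positions(self, item):
        for seed in range(self.hashes):
            digest = hashlib.md5(f"{seed}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def frequent_kmers(reads, k=5):
    """Keep exact counts only for k-mers seen more than once;
    singletons (likely sequencing errors) stay in the Bloom filter
    and never enter the exact dictionary."""
    seen_once = BloomFilter()
    counts = {}
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if kmer in seen_once:
                counts[kmer] = counts.get(kmer, 0) + 1
            else:
                seen_once.add(kmer)
    return counts
```

The memory win is that the vast population of error k-mers costs only a few bits each, rather than a full dictionary entry; distributing that bit array across nodes is the hard part the Berkeley team solved.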

    The team also developed solutions for parallelizing data input and output (I/O). “When you have several terabytes of data, just getting the computer to read your data and output results can be a huge bottleneck,” says Steven Hofmeyr, a research scientist in CRD who developed these solutions. “By allowing the computer to download the data in multiple threads, we were able to speed up the I/O process from hours to minutes.”
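The multi-threaded I/O idea can likewise be illustrated in miniature. The sketch below uses threads within one process only (the actual pipeline coordinates reads across distributed-memory nodes): it splits a file into byte ranges, reads them concurrently, and reassembles the pieces in order.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_read(path, workers=4):
    """Read a file in fixed byte ranges across several threads, then
    join the chunks in their original order. A toy version of
    multi-threaded input; chunk sizes and worker counts are
    illustrative."""
    size = os.path.getsize(path)
    chunk = max(1, -(-size // workers))  # ceiling division

    def read_range(offset):
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(chunk)

    offsets = range(0, size, chunk)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(read_range, offsets))  # order preserved
    return b"".join(parts)
```

Because `pool.map` preserves input order, the reassembled bytes match a single sequential read; the benefit on real hardware comes from overlapping many in-flight requests against a parallel file system.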

    The assembly process

    Once errors are removed, researchers can begin the genome assembly. This process relies on computer programs to join k-mers — short DNA sequences consisting of a fixed number (K) of bases — at overlapping regions, so they form a continuous sequence, or contig. If the genome has previously been sequenced, scientists can use the recorded gene annotations as a reference to align the reads. If not, they need to create a whole new catalog of contigs through de novo assembly.
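A stripped-down version of the contig-joining step helps make this concrete. The greedy walk below follows unambiguous (k-1)-base overlaps between k-mers; real assemblers such as Meraculous traverse a full de Bruijn graph and handle branches and repeats far more carefully, so treat this purely as a sketch:

```python
def build_contigs(kmers, k=4):
    """Greedy contig extension: index k-mers by their (k-1)-base
    prefix, then extend each unused k-mer one base at a time as long
    as exactly one unused successor overlaps its suffix. Stops at
    branches and dead ends."""
    prefix_index = {}
    for kmer in kmers:
        prefix_index.setdefault(kmer[:k - 1], []).append(kmer)

    used = set()
    contigs = []
    for kmer in sorted(kmers):  # sorted for deterministic output
        if kmer in used:
            continue
        contig = kmer
        used.add(kmer)
        while True:
            suffix = contig[-(k - 1):]
            candidates = [nxt for nxt in prefix_index.get(suffix, [])
                          if nxt not in used]
            if len(candidates) != 1:  # branch or dead end: stop here
                break
            nxt = candidates[0]
            contig += nxt[-1]
            used.add(nxt)
        contigs.append(contig)
    return contigs
```

For example, the 4-mers ACGT and CGTC overlap in CGT and join into the single contig ACGTC.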

    “If assembling a single genome is like piecing together one novel, then assembling metagenomic data is like rebuilding the Library of Congress,” says Jarrod Chapman. Pictured: Human Chromosomes. Courtesy Jane Ades, National Human Genome Research Institute.

    De novo assembly is memory-intensive, and until recently was resistant to parallelization in distributed memory. Many researchers turned to specialized large-memory nodes, several terabytes in size, to do this work, but even the largest commercially available memory nodes are not big enough to assemble massive genomes. Even with supercomputers, it still took hours, days or even months to assemble a single genome.

    To make efficient use of massively parallel systems, Georganas created a novel algorithm for de novo assembly that takes advantage of the one-sided communication and Partitioned Global Address Space (PGAS) capabilities of the UPC (Unified Parallel C) programming language. PGAS lets researchers treat the physically separate memories of each supercomputer node as one address space, reducing the time and energy spent swapping information between nodes.

    Tackling the metagenome

    Now that computation is no longer a bottleneck, scientists can try a number of different parameters and run as many analyses as necessary to produce very accurate results. This breakthrough means that Meraculous could also be used to analyze metagenomes — microbial communities recovered directly from environmental samples. This work is important because many microbes exist only in nature and cannot be grown in a laboratory. These organisms may be the key to finding new medicines or viable energy sources.

    “Analyzing metagenomes is a tremendous effort,” says Jarrod Chapman, who developed Meraculous at the US Department of Energy’s Joint Genome Institute (managed by the Berkeley Lab). “If assembling a single genome is like piecing together one novel, then assembling metagenomic data is like rebuilding the Library of Congress. Using Meraculous to effectively do this analysis would be a game changer.”

    –iSGTW is becoming the Science Node. Watch for our new branding and website this September.

    See the full article here.



  • richardmitnick 11:24 am on August 11, 2015 Permalink | Reply
    Tags: , IBM Watson, , Supercomputing   

    From MIT Tech Review: “Why IBM Just Bought Billions of Medical Images for Watson to Look At” 

    MIT Technology Review

    August 11, 2015
    Mike Orcutt

    IBM seeks to transform image-based diagnostics by combining its cognitive computing technology with a massive collection of medical images.

    IBM says that Watson, its artificial-intelligence technology, can use advanced computer vision to process huge volumes of medical images. Now Watson has its sights set on using this ability to help doctors diagnose diseases faster and more accurately.


    Last week IBM announced it would buy Merge Healthcare for a billion dollars. If the deal is finalized, this would be the third health-care data company IBM has bought this year (see “Meet the Health-Care Company IBM Needed to Make Watson More Insightful”). Merge specializes in handling all kinds of medical images, and its service is used by more than 7,500 hospitals and clinics in the United States, as well as clinical research organizations and pharmaceutical companies. Shahram Ebadollahi, vice president of innovation and chief science officer for IBM’s Watson Health Group, says the acquisition is part of an effort to draw on many different data sources, including anonymized, text-based medical records, to help physicians make treatment decisions.

    Merge’s data set contains some 30 billion images, which is crucial to IBM because its plans for Watson rely on a technology, called deep learning, that trains a computer by feeding it large amounts of data.

    Watson won Jeopardy! by using advanced natural-language processing and statistical analysis to interpret questions and provide the correct answers. Deep learning was added to Watson’s skill set more recently (see “IBM Pushes Deep Learning with a Watson Upgrade”). This new approach to artificial intelligence involves teaching computers to spot patterns in data by processing it in ways inspired by networks of neurons in the brain (see “Breakthrough Technologies 2013: Deep Learning”). The technology has already produced very impressive results in speech recognition (see “Microsoft Brings Star Trek’s Voice Translator to Life”) and image recognition (see “Facebook Creates Software That Matches Faces Almost as Well as You Do”).

    IBM’s researchers think medical image processing could be next. Images are estimated to make up as much as 90 percent of all medical data today, but it can be difficult for physicians to glean important information from them, says John Smith, senior manager for intelligent information systems at IBM Research.

    One of the most promising near-term applications of automated image processing, says Smith, is in detecting melanoma, a type of skin cancer. Diagnosing melanoma can be difficult, in part because there is so much variation in the way it appears in individual patients. By feeding a computer many images of melanoma, it is possible to teach the system to recognize very subtle but important features associated with the disease. The technology IBM envisions might be able to compare a new image from a patient with many others in a database and then rapidly give the doctor important information, gleaned from the images as well as from text-based records, about the diagnosis and potential treatments.

    Finding cancer in lung CT scans is another good example of how such technology could help diagnosis, says Jeremy Howard, CEO of Enlitic, a one-year-old startup that is also using deep learning for medical image processing (see “A Startup Hopes to Teach Computers to Spot Tumors in Medical Scans”). “You have to scroll through hundreds and hundreds of slices looking for a few little glowing pixels that appear and disappear, and that takes a long time, and it is very easy to make a mistake,” he says. Howard says his company has already created an algorithm capable of identifying relevant characteristics of lung tumors more accurately than radiologists can.

    Howard says the biggest barrier to using deep learning in medical diagnostics is that so much of the data necessary for training the systems remains isolated in individual institutions, and government regulations can make it difficult to share that information. IBM’s acquisition of Merge, with its billions of medical images, could help address that problem.


    See the full article here.


    The mission of MIT Technology Review is to equip its audiences with the intelligence to understand a world shaped by technology.

  • richardmitnick 2:23 pm on July 31, 2015 Permalink | Reply
    Tags: , , Supercomputing   

    From NSF: “Super news for supercomputers” 

    National Science Foundation

    July 30, 2015
    No Writer Credit


    This week, President Obama issued an executive order establishing the National Strategic Computing Initiative (NSCI) to ensure that the United States continues its leadership in high-performance computing over the coming decades.

    The National Science Foundation is proud to serve as one of the three lead agencies for the NSCI, working alongside the Department of Energy (DOE) and the Department of Defense (DOD) to maximize the benefits of high-performance computing research, development, and deployment across the federal government and in collaboration with academia and industry.

    NSF has been a leader in high-performance computing, and advanced cyberinfrastructure more generally, for nearly four decades.

    That was then… A Cray supercomputer in the mid-1980s at the National Center for Supercomputing Applications, located at the University of Illinois, Urbana-Champaign. Credit: NCSA, University of Illinois at Urbana-Champaign

    This is now… Blue Waters, launched in 2013, is one of the most powerful supercomputers in the world, and the fastest supercomputer on a university campus. Credit: NCSA, University of Illinois at Urbana-Champaign

    (The term “high-performance computing” refers to systems that, through a combination of processing capability and storage capacity, can solve computational problems that are beyond the capability of small- to medium-scale systems.)

    Over the last four decades, the benefits of advanced computing to our nation have been great.

    Whether helping to solve fundamental mysteries of the Universe…
    Simulations show the evolution of a white dwarf star as it is being disrupted by a massive black hole. Credit: Tamara Bogdanovic, Georgia Tech

    determining the underlying mechanisms of disease and prevention…
    Simulations of the human immunodeficiency virus (HIV) help researchers develop new antiretroviral drugs that suppress the HIV virus. Credit: Theoretical and Computational Biophysics Group, University of Illinois at Urbana-Champaign

    or improving the prediction of natural disasters and saving lives…
    3-D supercomputer simulations of earthquake data have found hidden rock structures deep under East Asia. Credit: Min Song, Rice University

    high-performance computing has been a necessary tool in the toolkit of scientists and engineers.

    NSF has the unique ability to ensure that our nation’s computing infrastructure is guided by the problems that scientists face working at the frontiers of science and engineering, and that our investments are informed by advances in state-of-the-art technologies and groundbreaking computer science research.

    By providing researchers and educators throughout the U.S. with access to cyberinfrastructure – the hardware, software, networks and people that make massive computing possible – NSF has accelerated the pace of discovery and innovation in all fields of inquiry. This holistic and collaborative high-performance computing ecosystem has transformed all areas of science and engineering and society at-large.

    In the new Strategic Initiative, NSF will continue to play a central role in computationally-enabled scientific advances, the development of the broader HPC ecosystem for making those scientific discoveries, and the development of a high-skill, high-tech workforce who can use high-performance computing for the good of the nation.

    We at NSF recognize that advancing discoveries and innovations demands a bold, sustainable, and comprehensive national strategy that is responsive to increasing computing demands, emerging technological challenges, and growing international competition.

    The National Strategic Computing Initiative paves the way toward a concerted, collective effort to examine the opportunities and challenges for the future of HPC.

    We look forward to working with other federal agencies, academic institutions, industry, and the scientific community to realize a vibrant future for HPC over the next 15 years, and to continue to power our nation’s ability to be the discovery and innovation engine of the world!

    See the full article here.


    The National Science Foundation (NSF) is an independent federal agency created by Congress in 1950 “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense…we are the funding source for approximately 24 percent of all federally supported basic research conducted by America’s colleges and universities. In many fields such as mathematics, computer science and the social sciences, NSF is the major source of federal backing.”

