Tagged: Supercomputing

  • richardmitnick 3:33 pm on August 16, 2016 Permalink | Reply
    Tags: Energy Department to invest $16 million in computer design of materials, Supercomputing

    From ORNL: “Energy Department to invest $16 million in computer design of materials” 


    Oak Ridge National Laboratory

    August 16, 2016
    Dawn Levy, Communications
    levyd@ornl.gov
    865.576.6448

    Paul Kent of Oak Ridge National Laboratory directs the Center for Predictive Simulation of Functional Materials. No image credit.

    The U.S. Department of Energy announced today that it will invest $16 million over the next four years to accelerate the design of new materials through the use of supercomputers.

    Two four-year projects—one team led by DOE’s Oak Ridge National Laboratory (ORNL), the other team led by DOE’s Lawrence Berkeley National Laboratory (LBNL)—will take advantage of superfast computers at DOE national laboratories by developing software to design fundamentally new functional materials destined to revolutionize applications in alternative and renewable energy, electronics, and a wide range of other fields. The research teams include experts from universities and other national labs.

    The new grants—part of DOE’s Computational Materials Sciences (CMS) program begun in 2015 as part of the U.S. Materials Genome Initiative—reflect the enormous recent growth in computing power and the increasing capability of high-performance computers to model and simulate the behavior of matter at the atomic and molecular scales.

    The teams are expected to develop sophisticated and user-friendly open-source software that captures the essential physics of relevant systems and can be used by the broader research community and by industry to accelerate the design of new functional materials.

    “Given the importance of materials to virtually all technologies, computational materials science is a critical area in which the United States needs to be competitive in the twenty-first century and beyond through global leadership in innovation,” said Cherry Murray, director of DOE’s Office of Science, which is funding the research. “These projects will both harness DOE existing high-performance computing capabilities and help pave the way toward ever-more sophisticated software for future generations of machines.”

    “ORNL researchers will partner with scientists from national labs and universities to develop software to accurately predict the properties of quantum materials with novel magnetism, optical properties and exotic quantum phases that make them well-suited to energy applications,” said Paul Kent of ORNL, director of the Center for Predictive Simulation of Functional Materials, which includes partners from Argonne, Lawrence Livermore, Oak Ridge and Sandia National Laboratories and North Carolina State University and the University of California–Berkeley. “Our simulations will rely on current petascale and future exascale capabilities at DOE supercomputing centers. To validate the predictions about material behavior, we’ll conduct experiments and use the facilities of the Advanced Photon Source [ANL/APS], Spallation Neutron Source and the Nanoscale Science Research Centers.”

    ANL/APS

    ORNL Spallation Neutron Source

    Said the center’s thrust leader for prediction and validation, Olle Heinonen, “At Argonne, our expertise in combining state-of-the-art, oxide molecular beam epitaxy growth of new materials with characterization at the Advanced Photon Source and the Center for Nanoscale Materials will enable us to offer new and precise insight into the complex properties important to materials design. We are excited to bring our particular capabilities in materials, as well as expertise in software, to the center so that the labs can comprehensively tackle this challenge.”

    Researchers are expected to make use of the 30-petaflop/s Cori supercomputer now being installed at the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab, the 27-petaflop/s Titan computer at the Oak Ridge Leadership Computing Facility (OLCF) and the 10-petaflop/s Mira computer at the Argonne Leadership Computing Facility (ALCF).

    NERSC Cray Cori supercomputer

    ORNL Cray Titan supercomputer

    MIRA IBM Blue Gene/Q supercomputer at the Argonne Leadership Computing Facility

    OLCF, ALCF and NERSC are all DOE Office of Science User Facilities. One petaflop/s is 10^15, or a million times a billion, floating-point operations per second.

    In addition, a new generation of machines is scheduled for deployment between 2016 and 2019 that will take peak performance as high as 200 petaflops. Ultimately the software produced by these projects is expected to evolve to run on exascale machines, capable of 1,000 petaflops and projected for deployment in the mid-2020s.
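    To put these rates in perspective, here is a small back-of-envelope sketch (not from the article; the workload size is purely illustrative) that converts petaflop/s into idealized time-to-solution for a fixed number of floating-point operations:

    ```python
    PFLOPS = 1e15  # one petaflop/s = 10**15 floating-point operations per second

    def hours_at_peak(total_ops, machine_pflops):
        """Idealized time to run a fixed operation count at peak rate."""
        return total_ops / (machine_pflops * PFLOPS) / 3600.0

    work = 1e21  # arbitrary illustrative workload of 10**21 operations
    for name, pf in [("Titan (27 PF)", 27),
                     ("planned ~200 PF system", 200),
                     ("exascale (1,000 PF)", 1000)]:
        print(f"{name:>24}: {hours_at_peak(work, pf):6.2f} hours at peak")
    ```

    Real applications reach only a fraction of peak, but the relative scaling from petascale to exascale is the point.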

    LLNL IBM Sierra supercomputer

    ORNL IBM Summit supercomputer (depiction)

    ANL Cray Aurora supercomputer

    Research will combine theory and software development with experimental validation, drawing on the resources of multiple DOE Office of Science User Facilities, including the Advanced Light Source [ALS] at LBNL, the Advanced Photon Source at Argonne National Laboratory (ANL), the Spallation Neutron Source at ORNL, and several of the five Nanoscience Research Centers across the DOE National Laboratory complex.

    LBL/ALS interior

    The new research projects will begin in Fiscal Year 2016. They expand the ongoing CMS research effort, which began in FY 2015 with three initial projects, led respectively by ANL, Brookhaven National Laboratory and the University of Southern California.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 1:45 pm on August 16, 2016 Permalink | Reply
    Tags: Big PanDA, Supercomputing

    From BNL: “Big PanDA Tackles Big Data for Physics and Other Future Extreme Scale Scientific Applications” 

    Brookhaven Lab

    August 16, 2016
    Karen McNulty Walsh
    kmcnulty@bnl.gov
    (631) 344-8350
    Peter Genzer
    (631) 344-3174
    genzer@bnl.gov

    A workload management system developed by a team including physicists from Brookhaven National Laboratory taps into unused processing time on the Titan supercomputer at the Oak Ridge Leadership Computing Facility to tackle complex physics problems. New funding will help the group extend this approach, giving scientists in other data-intensive fields access to valuable supercomputing resources.

    A billion times per second, particles zooming through the Large Hadron Collider (LHC) at CERN, the European Organization for Nuclear Research, smash into one another at nearly the speed of light, emitting subatomic debris that could help unravel the secrets of the universe.

    CERN/LHC Map
    CERN LHC Grand Tunnel
    CERN LHC particles
    LHC at CERN

    Collecting the data from those collisions and making it accessible to more than 6000 scientists in 45 countries, each potentially wanting to slice and analyze it in their own unique ways, is a monumental challenge that pushes the limits of the Worldwide LHC Computing Grid (WLCG), the current infrastructure for handling the LHC’s computing needs. With the move to higher collision energies at the LHC, the demand just keeps growing.

    To help meet this unprecedented demand and supplement the WLCG, a group of scientists working at U.S. Department of Energy (DOE) national laboratories and collaborating universities has developed a way to fit some of the LHC simulations that demand high computing power into untapped pockets of available computing time on one of the nation’s most powerful supercomputers—similar to the way tiny pebbles can fill the empty spaces between larger rocks in a jar. The group—from DOE’s Brookhaven National Laboratory, Oak Ridge National Laboratory (ORNL), University of Texas at Arlington, Rutgers University, and University of Tennessee, Knoxville—just received $2.1 million in funding for 2016-2017 from DOE’s Advanced Scientific Computing Research (ASCR) program to enhance this “workload management system,” known as Big PanDA, so it can help handle the LHC data demands and be used as a general workload management service at DOE’s Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility at ORNL.

    “The implementation of these ideas in an operational-scale demonstration project at OLCF could potentially increase the use of available resources at this Leadership Computing Facility by five to ten percent,” said Brookhaven physicist Alexei Klimentov, a leader on the project. “Mobilizing these previously unusable supercomputing capabilities, valued at millions of dollars per year, could quickly and effectively enable cutting-edge science in many data-intensive fields.”

    Proof-of-concept tests using the Titan supercomputer at Oak Ridge National Laboratory have been highly successful. This Leadership Computing Facility typically handles large jobs that are fit together to maximize its use. But even when fully subscribed, some 10 percent of Titan’s computing capacity might be sitting idle—too small to take on another substantial “leadership class” job, but just right for handling smaller chunks of number crunching. The Big PanDA (for Production and Distributed Analysis) system takes advantage of these unused pockets by breaking up complex data analysis jobs and simulations for the LHC’s ATLAS and ALICE experiments and “feeding” them into the “spaces” between the leadership computing jobs.

    CERN/ATLAS detector

    CERN/ALICE detector
    When enough capacity is available to run a new big job, the smaller chunks get kicked out and reinserted to fill in any remaining idle time.
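    The backfill idea can be illustrated with a deliberately simplified sketch (hypothetical code, not the actual PanDA implementation): small simulation tasks are slotted into whatever nodes a leadership-class job leaves idle, and anything that does not fit waits in the queue until space opens up.

    ```python
    def backfill(total_nodes, leadership_job_nodes, small_jobs):
        """Toy backfill: place the big job first, then pack small jobs
        (name, nodes) into the leftover capacity; the rest are requeued."""
        free = total_nodes - leadership_job_nodes
        scheduled, requeued = [], []
        for name, nodes in small_jobs:
            if nodes <= free:
                scheduled.append(name)   # runs in an otherwise idle pocket
                free -= nodes
            else:
                requeued.append(name)    # reinserted later when space opens
        return scheduled, requeued, free

    # Illustrative numbers only: a big job leaves ~1,700 of Titan's nodes idle.
    ran, waiting, idle = backfill(
        18_688, 17_000,
        [("atlas_sim_1", 800), ("atlas_sim_2", 600), ("alice_sim_1", 400)])
    print(ran, waiting, idle)  # ['atlas_sim_1', 'atlas_sim_2'] ['alice_sim_1'] 288
    ```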

    “Our team has managed to access opportunistic cycles available on Titan with no measurable negative effect on the supercomputer’s ability to handle its usual workload,” Klimentov said. He and his collaborators estimate that up to 30 million core hours or more per month may be harvested using the Big PanDA approach. From January through July of 2016, ATLAS detector simulation jobs ran for 32.7 million core hours on Titan, using only opportunistic, backfill resources. The results of the supercomputing calculations are shipped to and stored at the RHIC & ATLAS Computing Facility, a Tier 1 center for the WLCG located at Brookhaven Lab, so they can be made available to ATLAS researchers across the U.S. and around the globe.

    The goal now is to translate the success of the Big PanDA project into operational advances that will enhance how the OLCF handles all of its data-intensive computing jobs. This approach will provide an important model for future exascale computing, increasing the coherence between the technology base used for high-performance, scalable modeling and simulation and that used for data-analytic computing.

    “This is a novel and unique approach to workload management that could run on all current and future leadership computing facilities,” Klimentov said.

    Specifically, the new funding will help the team develop a production scale operational demonstration of the PanDA workflow within the OLCF computational and data resources; integrate OLCF and other leadership facilities with the Grid and Clouds; and help high-energy and nuclear physicists at ATLAS and ALICE—experiments that expect to collect 10 to 100 times more data during the next 3 to 5 years—achieve scientific breakthroughs at times of peak LHC demand.

    As a unifying workload management system, Big PanDA will also help integrate Grid, leadership-class supercomputers, and Cloud computing into a heterogeneous computing architecture accessible to scientists all over the world as a step toward a global cyberinfrastructure.

    “The integration of heterogeneous computing centers into a single federated distributed cyberinfrastructure will allow more efficient utilization of computing and disk resources for a wide range of scientific applications,” said Klimentov, noting how the idea mirrors Aristotle’s assertion that “the whole is greater than the sum of its parts.”

    This project is supported by the DOE Office of Science.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 12:44 pm on August 13, 2016 Permalink | Reply
    Tags: Extreme Science and Engineering Discovery Environment (XSEDE), Supercomputing

    From Science Node: “Opening the spigot at XSEDE” 

    Science Node

    09 Aug, 2016
    Ken Chiacchia

    A boost from sequencing technologies and computational tools is in store for scientists studying how cells change which of their genes are active.

    Researchers using the Extreme Science and Engineering Discovery Environment (XSEDE) collaboration of supercomputing centers have reported advances in reconstructing cells’ transcriptomes — the genes activated by ‘transcribing’ them from DNA into RNA.

    The work aims to clarify the best practices in assembling transcriptomes, which ultimately can aid researchers throughout the biomedical sciences.

    Digital detectives. Researchers from Texas A&M are using XSEDE resources to manage the data from transcriptome assembly. Studying transcriptomes will offer critical clues of how cells change their behavior in response to disease processes.

    “It’s crucial to determine the important factors that affect transcriptome reconstruction,” says Noushin Ghaffari of AgriLife Genomics and Bioinformatics, at Texas A&M University. “This work will particularly help generate more reliable resources for scientists studying non-model species” — species not previously well studied.

    Ghaffari is principal investigator in an ongoing project whose preliminary findings and computational aspects were presented at the XSEDE16 conference in Miami in July. She is leading a team of students and supercomputing experts from Texas A&M, Indiana University, and the Pittsburgh Supercomputing Center (PSC).

    The scientists sought to improve the quality and efficiency of assembling transcriptomes, and they tested their work on two real data sets of Sequencing Quality Control Consortium (SEQC) RNA-Seq data: one of cancer cell lines and one of brain tissues from 23 human donors.

    What’s in a transcriptome?

    The transcriptome of a cell at a given moment changes as it reacts to its environment. Transcriptomes offer critical clues about how cells change their behavior in response to disease processes like cancer, or normal bodily signals like hormones.

    Assembling a transcriptome is a big undertaking with current technology, though. Scientists must start with samples containing tens or hundreds of thousands of RNA molecules that are each thousands of RNA ‘base units’ long. Trouble is, most of the current high-speed sequencing technologies can only read a couple hundred bases at one time.

    So researchers must first chemically cut the RNA into small pieces, sequence it, remove RNA not directing cell activity, and then match the overlapping fragments to reassemble the original RNA molecules.

    Harder still, they must identify and correct sequencing mistakes, and deal with repetitive sequences that make the origin and number of repetitions of a given RNA sequence unclear.
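    As a rough illustration of the “match the overlapping fragments” step, here is a toy greedy overlap-merge (a sketch only, not one of the production assemblers evaluated in the study):

    ```python
    def overlap(a, b, min_len=3):
        """Length of the longest suffix of a that matches a prefix of b."""
        start = 0
        while True:
            start = a.find(b[:min_len], start)
            if start == -1:
                return 0
            if b.startswith(a[start:]):
                return len(a) - start
            start += 1

    def greedy_assemble(reads):
        """Repeatedly merge the pair of reads with the largest overlap."""
        reads = list(reads)
        while len(reads) > 1:
            best_len, best_i, best_j = 0, None, None
            for i, a in enumerate(reads):
                for j, b in enumerate(reads):
                    if i != j:
                        olen = overlap(a, b)
                        if olen > best_len:
                            best_len, best_i, best_j = olen, i, j
            if best_len == 0:
                break  # no overlaps left; remaining fragments stay separate
            merged = reads[best_i] + reads[best_j][best_len:]
            reads = [r for k, r in enumerate(reads)
                     if k not in (best_i, best_j)] + [merged]
        return reads

    print(greedy_assemble(["ACGGTC", "GTCAAT", "AATGC"]))  # ['ACGGTCAATGC']
    ```

    Real assemblers must also handle sequencing errors, repeats, and data volumes far beyond what a quadratic toy like this can manage, which is why the large-memory systems described below matter.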

    While software tools exist to undertake all of these tasks, Ghaffari’s report was the most comprehensive yet to examine a variety of factors that affect assembly speed and accuracy when these tools are combined in a start-to-finish workflow.

    Heavy lifting

    The most comprehensive study of its kind, the report used data from SEQC to assemble a transcriptome, incorporating many quality control steps to ensure results were accurate. The process required vast amounts of computer memory, made possible by PSC’s high-memory supercomputers Blacklight, Greenfield, and now the new Bridges system’s 3-terabyte ‘large memory nodes.’

    Blacklight supercomputer at the Pittsburgh Supercomputing Center.

    Bridges HPE/Intel supercomputer

    Bridges, a new PSC supercomputer, is designed for unprecedented flexibility and ease of use. It will include database and web servers to support gateways, collaboration, and powerful data management functions. Courtesy Pittsburgh Supercomputing Center.

    “As part of this work, we are running some of the largest transcriptome assemblies ever done,” says coauthor Philip Blood of PSC, an expert in XSEDE’s Extended Collaborative Support Service. “Our effort focused on running all these big data sets many different ways to see what factors are important in getting the best quality. Doing this required the large memory nodes on Bridges, and a lot of technical expertise to manage the complexities of the workflow.”

    During the study, the team concentrated on optimizing the speed of data movement from storage to memory to the processors and back.

    They also incorporated new verification steps to avoid perplexing errors that arise when wrangling big data through complex pipelines. Future work will include the incorporation of ‘checkpoints’ — storing the computations regularly so that work is not lost if a software error happens.
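    A minimal sketch of the checkpointing idea (hypothetical file name and placeholder step function, not the team’s pipeline) might look like this:

    ```python
    import os
    import pickle

    CKPT = "pipeline_checkpoint.pkl"   # hypothetical checkpoint file

    def do_step(step, state):
        """Stand-in for one unit of real pipeline work."""
        state["last_completed"] = step
        return state

    def run(total_steps, every=100):
        # Resume from the most recent checkpoint if one exists.
        start, state = 0, {}
        if os.path.exists(CKPT):
            with open(CKPT, "rb") as f:
                start, state = pickle.load(f)
        for step in range(start, total_steps):
            state = do_step(step, state)
            if (step + 1) % every == 0:           # periodic checkpoint
                with open(CKPT, "wb") as f:
                    pickle.dump((step + 1, state), f)
        return state

    run(1000)  # if interrupted and rerun, it resumes from the last checkpoint
    ```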

    Ultimately, Blood adds, the scientists would like to put all the steps of the process into an automated workflow that will make it easy for other biomedical researchers to replicate.

    The work promises a better understanding of how living organisms respond to disease, environment and evolutionary changes, the scientists reported.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 8:34 am on July 23, 2016 Permalink | Reply
    Tags: PPPL and Princeton join high-performance software project, Supercomputing

    From PPPL: “PPPL and Princeton join high-performance software project” 


    PPPL

    July 22, 2016
    John Greenwald

    Co-principal investigators William Tang and Bei Wang. (Photo by Elle Starkman/Office of Communications)

    Princeton University and the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) are participating in the accelerated development of a modern high-performance computing code, or software package. Supporting this development is the Intel Parallel Computing Center (IPCC) Program, which provides funding to universities and laboratories to improve high-performance software capabilities for a wide range of disciplines.

    The project updates the GTC-Princeton (GTC-P) code, which was originally developed for fusion research applications at PPPL and has evolved into highly portable software that is deployed on supercomputers worldwide. The National Science Foundation (NSF) strongly supported advances in the code from 2011 through 2014 through the “G8” international extreme scale computing program, which represented the United States and seven other highly industrialized countries during that period.

    New activity

    Heading the new IPCC activity for the University’s Princeton Institute for Computational Science & Engineering (PICSciE) is William Tang, a PPPL physicist and PICSciE principal investigator (PI). Working with Tang is Co-PI Bei Wang, Associate Research Scholar at PICSciE, who leads this accelerated modernization effort. Joining them in the project are Co-PIs Carlos Rosales of the NSF’s Texas Advanced Computing Center at the University of Texas at Austin and Khaled Ibrahim of the Lawrence Berkeley National Laboratory.

    Dell PowerEdge Stampede supercomputer, Texas Advanced Computing Center, U. Texas Austin (9.6 PF)

    The current GTC-P code has advanced understanding of turbulence and confinement of the superhot plasma that fuels fusion reactions in doughnut-shaped facilities called tokamaks.

    PPPL NSTX tokamak

    Understanding and controlling fusion fuel turbulence is a grand challenge of fusion science, and great progress has been made in recent years. Turbulence can determine how effectively a fusion reactor will contain energy generated by fusion reactions, and thus can strongly influence the eventual economic attractiveness of a fusion energy system. Further progress on the code will enable researchers to study conditions that arise as tokamaks increase in size to the enlarged dimensions of ITER — the flagship international fusion experiment under construction in France.

    ITER tokamak

    Access to Intel computer clusters

    Through the IPCC, Intel will provide access to systems for exploring the modernization of the code. Included will be clusters equipped with the most recent Intel “Knights Landing” (KNL) central processing chips.

    The upgrade will become part of the parent GTC code, which is led by Prof. Zhihong Lin of the University of California, Irvine, with Tang as co-PI. That code is also being modernized and will be proposed, together with GTC-P, to be included in the early science portfolio for the Aurora supercomputer.

    Cray Aurora supercomputer to be built for ANL

    Aurora will begin operations at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility at Argonne National Laboratory, in 2019. Powering Aurora will be Intel “Knights Hill” processing chips.

    Last year, the GTC and GTC-P codes were selected to be developed as an early science project designed for the Summit supercomputer that will be deployed at Oak Ridge Leadership Computing Facility, also a DOE Office of Science User Facility, at Oak Ridge National Laboratory in 2018.

    IBM Summit supercomputer

    That modernization project differs from the one to be proposed for Aurora because Summit is being built around architecture powered by NVIDIA Volta graphics processing units and IBM Power 9 central processing chips.

    Moreover, the code planned for Summit will be designed to run on the Aurora platform as well.

    Boost U.S. computing power

    The two new machines will boost U.S. computing power far beyond Titan, the current leading U.S. supercomputer at Oak Ridge that can perform 27 quadrillion — or million billion — calculations per second. Summit and Aurora are expected to perform some 200 quadrillion and 180 quadrillion calculations per second, respectively. Said Tang: “These new machines hold tremendous promise for helping to accelerate scientific discovery in many application domains, including fusion, that are of vital importance to the country.”

    PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov (link is external).

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University. PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

     
  • richardmitnick 11:16 am on July 8, 2016 Permalink | Reply
    Tags: Supercomputing

    From Oak Ridge: “New 200-petaflop supercomputer to succeed Titan at ORNL” 


    Oak Ridge National Laboratory

    Depiction of ORNL IBM Summit supercomputer

    A new 200-petaflop supercomputer will succeed Titan at Oak Ridge National Laboratory, and it could be available to scientists and researchers in 2018, a spokesperson said this week.

    The new IBM supercomputer, named Summit, could have about double the computing power of what is now the world’s fastest machine, a Chinese system named Sunway TaihuLight, according to a semiannual list of the world’s top supercomputers released in June.

    Sunway TaihuLight is capable of 93 petaflops, according to the list, the TOP500 list. A petaflop is one quadrillion calculations per second. That’s 1,000 trillion calculations per second.

    Summit, which is expected to start operating at ORNL early in 2018, is one of three supercomputers that the U.S. Department of Energy expects to exceed 100 petaflops at its laboratories in 2018. The three planned systems are:

    the 200-petaflop Summit at ORNL, which is expected to be available to users in early 2018;

    a 150-petaflop machine known as Sierra at Lawrence Livermore National Laboratory near San Francisco in mid-2018;

    IBM Sierra supercomputer depiction

    and
    a 180-petaflop supercomputer called Aurora at Argonne National Laboratory near Chicago in late 2018.

    Cray Aurora supercomputer depiction

    “High performance computing remains an integral priority for the Department of Energy,” DOE Under Secretary Lynn Orr said. “Since 1993, our national supercomputing capabilities have grown exponentially by a factor of 300,000 to produce today’s machines like Titan at Oak Ridge National Lab. DOE has continually supported many of the world’s fastest, most powerful super-computers, and shared its facilities with universities and businesses ranging from auto manufacturers to pharmaceutical companies, enabling unimaginable economic benefits and leaps in science and technology, including the development of new materials for batteries and near zero-friction lubricants.”

    The supercomputers have also allowed the United States to maintain a safe, secure, and effective nuclear weapon stockpile, said Orr, DOE under secretary for science and energy.

    “DOE continues to lead in software and real world applications important to both science and industry,” he said. “Investments such as these continue to play a crucial role in U.S. economic competitiveness, scientific discovery, and national security.”

    At 200 petaflops, Summit would have at least five times as much power as ORNL’s 27-petaflop Titan. That system was the world’s fastest in November 2012 and recently achieved 17.59 petaflops on a test used by the TOP500 list that was released in June.

    Titan is used for research in areas such as materials research, nuclear energy, combustion, and climate science.

    “For several years, Titan has been the most scientifically productive in the world, allowing academic, government, and industry partners to do remarkable research in a variety of scientific fields,” ORNL spokesperson Morgan McCorkle said.

    Summit will be installed in a building close to Titan. Titan will continue operating while Summit is built and begins operating, McCorkle said.

    “That will ensure that scientific users have access to computing resources during the transition,” she said.

    Titan will then be decommissioned, McCorkle said.

    She said the total contract value for the new Summit supercomputer with all options and maintenance is $280 million. The U.S. Department of Energy is funding the project.

    McCorkle said the Oak Ridge Leadership Computing Facility at ORNL has been working with IBM, Nvidia, and Mellanox since 2014 to develop Summit.

    Like Titan, a Cray system, Summit will be part of the Oak Ridge Leadership Computing Facility, or OLCF. Researchers from around the world will be able to submit proposals to use the computer for a wide range of scientific applications, McCorkle said.

    She said the delivery of Summit will start at ORNL next year. Summit will be a hybrid computing system that uses traditional central processing units, or CPUs, and graphics processing units, or GPUs, which were first created for computer games.

    “We’re already scaling applications that will allow Summit to deliver an order of magnitude more science with at least 200 petaflops of compute power,” McCorkle said. “Early in 2018, users from around the world will have access to this resource.”

    Summit will have more than five times the computational power of Titan’s 18,688 nodes, using only about 3,400 nodes. Each Summit node will have IBM POWER9 CPUs and NVIDIA Volta GPUs connected with NVIDIA’s high-speed NVLinks and a huge amount of memory, according to the OLCF.
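    A quick back-of-envelope calculation with the figures quoted above (planned numbers, so only indicative) shows why far fewer nodes can still deliver far more aggregate power:

    ```python
    titan_pflops, titan_nodes = 27.0, 18_688     # Titan peak and node count
    summit_pflops, summit_nodes = 200.0, 3_400   # planned Summit figures

    titan_per_node = titan_pflops * 1e15 / titan_nodes     # FLOP/s per node
    summit_per_node = summit_pflops * 1e15 / summit_nodes

    print(f"Titan : {titan_per_node:.2e} FLOP/s per node")
    print(f"Summit: {summit_per_node:.2e} FLOP/s per node")
    print(f"each Summit node ~ {summit_per_node / titan_per_node:.0f}x a Titan node")
    ```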

    Titan is also a hybrid system that combines CPUs with GPUs. That combination allowed the more powerful Titan to fit into the same space as Jaguar, an earlier supercomputer at ORNL, while using only slightly more electricity. That’s important because supercomputers can consume megawatts of power.

    China now has the top two supercomputers. Sunway TaihuLight was capable of 93 petaflops, and Tianhe-2, an Intel-based system ranked number two in the world, achieved 33.86 petaflops, according to the June version of the TOP500 list.

    But as planned, all three of the new DOE supercomputers would be more powerful than the top two Chinese systems.

    However, DOE officials said it’s not just about the hardware.

    “The strength of the U.S. program lies not just in hardware capability, but also in the ability to develop software that harnesses high-performance computing for real-world scientific and industrial applications,” DOE said. “American scientists have used DOE supercomputing capability to improve the performance of solar cells, to design new materials for batteries, to model the melting of ice sheets, to help optimize land use for biofuel crops, to model supernova explosions, to develop a near zero-friction lubricant, and to improve laser radiation treatments for cancer, among countless other applications.”

    Extensive work is already under way to prepare software and “real-world applications” to ensure that the new machines bring an immediate benefit to American science and industry, DOE said.

    “Investments such as these continue to play a crucial role in U.S. economic competitiveness, scientific discovery, and national security,” the department said.

    DOE said its supercomputers have more than 8,000 active users each year from universities, national laboratories, and industry.

    Among the supercomputer uses that DOE cited:

    Pratt & Whitney used the Argonne Leadership Computing Facility to improve the fuel efficiency of its Pure Power turbine engines.
    Boeing used the Oak Ridge Leadership Computing Facility to study the flow of debris to improve the safety of a thrust reverser for its new 787 Dreamliner.
    General Motors used the Oak Ridge Leadership Computing Facility to accelerate research on thermoelectric materials to help increase vehicle fuel efficiency.
    Procter & Gamble used the Argonne Leadership Computing Facility to learn more about the molecular mechanisms of bubbles—important to the design of a wide range of consumer products.
    General Electric used the Oak Ridge Leadership Computing Facility to improve the efficiency of its world-leading turbines for electricity generation.
    Navistar, NASA, the U.S. Air Force, and other industry leaders collaborated with scientists from Lawrence Livermore National Lab to develop technologies that increase semi-truck fuel efficiency by 17 percent.

    Though it was once the top supercomputer, Titan was bumped to number two behind Tianhe-2 in June 2013. It dropped to number three this June.

    As big as a basketball court, Titan is 10 times faster than Jaguar, the computer system it replaced. Jaguar, which was capable of about 2.5 petaflops, had ranked as the world’s fastest computer in November 2009 and June 2010.

    The new top supercomputer, Sunway TaihuLight, was developed by the National Research Center of Parallel Computer Engineering and Technology, or NRCPC, and installed at the National Supercomputing Center in Wuxi, China.

    Tianhe-2 was developed by China’s National University of Defense Technology.

    In the United States, DOE said its Office of Science and National Nuclear Security Administration are collaborating with other U.S. agencies, industry, and academia to pursue the goals of what is known as the National Strategic Computing Initiative:

    accelerating the delivery of “exascale” computing;
    increasing the coherence between the technology base used for modeling and simulation and that for data analytic computing;
    charting a path forward to a post-Moore’s Law era; and
    building the overall capacity and capability of an enduring national high-performance computing ecosystem.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 7:06 pm on July 7, 2016 Permalink | Reply
    Tags: OLCF, Supercomputing

    From Oak Ridge: “One Billion Processor Hours Awarded to 22 Projects through ALCC” 


    Oak Ridge National Laboratory

    July 5, 2016
    Maleia Wood

    2016–17 ALCC projects allocated time on OLCF’s world-class resources

    ORNL OLCF

    ALCC’s mission is to provide high-performance computing resources to projects that align with DOE’s broad energy mission, with an emphasis on high-risk, high-return simulations.

    The US Department of Energy (DOE) Office of Science has awarded nearly 1 billion processor hours to 22 projects at the Oak Ridge Leadership Computing Facility (OLCF)— a DOE Office of Science User Facility located at DOE’s Oak Ridge National Laboratory—through the DOE Office of Advanced Scientific Computing Research Leadership Computing Challenge (ALCC).

    ALCC’s mission is to provide high-performance computing resources to projects that align with DOE’s broad energy mission, with an emphasis on high-risk, high-return simulations. The ALCC program allocates up to 30 percent of the computational resources at the OLCF and the Argonne Leadership Computing Facility, as well as up to 10 percent at the National Energy Research Scientific Computing Center.

    “In addition to supporting the DOE mission, the program also seeks to broaden the field of researchers able to use some of the world’s fastest and most powerful supercomputers like Titan at the OLCF,” said Jack Wells, OLCF director of science.

    The ALCC grants 1-year awards and supports scientists from industry, academia, and national laboratories who are advancing scientific and technological research in energy-related fields. Past ALCC allocations contributed to scientific discovery in energy efficiency, computer science, climate modeling, materials science, bioenergy, and basic research.

    The 2016 projects will continue that reputation of discovery with topics that range from the biology of neurotransmitters to the search for affordable catalysts to the development of future clean energy technology. Scientific domains represented among the awards include biology, climate science, engineering, computer science, nuclear fusion, cosmology, materials science, nuclear engineering, and nuclear physics.

    ORNL Cray Titan supercomputer

    Awards on Titan—ranging from 9 million to 167 million processor hours—went to projects that include the following:

    Materials Science. Catalysis plays a critical role in the current energy landscape and could enable future clean energy technologies. A catalyst facilitates a chemical reaction and increases the reaction rate, but the substance isn’t consumed in the reaction and is, therefore, available to ease subsequent reactions. Breakthroughs in catalyst design paradigms could significantly increase energy efficiency. The search for effective and affordable catalysts is critical in both chemistry and materials science, as well as in large-scale industrial concerns.

    The computing resources on Titan will allow a team led by Efthimios Kaxiras from Harvard University to use high-throughput computation to generate datasets that will augment scientists’ abilities to predict useful catalysts. The team will focus on using nanoporous gold, which is an active, stable, and highly selective catalyst, in a particular reaction—anhydrous dehydrogenation of methanol to formaldehyde (a common chemical with multiple industrial uses).

    Catalysts traditionally have been developed using experimental trial and error or by testing known catalysts for similar reactions. The primary obstacles to designing novel catalysts are twofold: the complexity of catalytic materials and the vast number of possible candidate materials.

    The team is searching for an alloy catalyst that can produce formaldehyde from methanol without producing water, which involves energy-intensive separation steps. Because it is impossible to experimentally synthesize and test tens of thousands of possible bimetallic catalysts, the researchers will use Titan to computationally perform the screening.
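    Conceptually, a high-throughput screen of this kind is a loop over candidate compositions with an expensive first-principles calculation at its core. The sketch below is purely schematic: the metal list and the scoring function are placeholders, not the team’s actual candidate set or method.

    ```python
    from itertools import combinations

    CANDIDATE_METALS = ["Au", "Ag", "Cu", "Pd", "Pt", "Ni"]  # illustrative only

    def predicted_activity(alloy):
        """Placeholder for an expensive first-principles calculation that would
        estimate activity/selectivity for methanol-to-formaldehyde conversion."""
        return sum(ord(c) for element in alloy for c in element) % 97 / 97.0

    alloys = list(combinations(CANDIDATE_METALS, 2))
    ranked = sorted(alloys, key=predicted_activity, reverse=True)
    print("top candidates for closer study:", ranked[:3])
    ```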

    Biology. A team led by Cornell University’s Harel Weinstein seeks to determine the functional properties and energy-conserving mechanisms of cellular membrane proteins called neurotransmitter transporters. Specifically, the team hopes to uncover the biological machinery of neurotransmitter sodium symporters, a family of neurotransmitter transporters responsible for the release and reuptake of chemical signals between neurons.

    A major focus of the team is the dopamine transporter (DAT), the gatekeeper for the neurotransmitter dopamine that is associated with reward-motivated behavior. By simulating DAT, Weinstein and his collaborators hope not only to learn how cells harness energy to move molecules against a concentration gradient but also to uncover potential strategies for treating DAT-related disorders such as addiction and depression.

    Using molecular dynamics and high-performance computing, the team will be able to gain a clearer picture of how the transporter works at the molecular level—how energy is gained, stored, and used. Additionally, simulation of updated models could shed light on DAT mutations related to diseases such as autism, Parkinson’s disease, and attention deficit hyperactivity disorder, which have been shown to be affected by malfunctions of the neurotransmission process.

    Nuclear Physics. The accurate description of nuclear fission is relevant to a number of fields, including basic energy, nuclear waste disposal, national security, and nuclear forensics. The current theoretical description is based on limited models that rely on mathematical shortcuts and constraints and on a large collection of experimental data accumulated since 1939.

    Because many aspects of nuclear fission cannot be probed in the laboratory, devising a microscopic theory based on the fundamental properties of nuclear forces is a highly desirable goal. A team led by the University of Washington’s Aurel Bulgac will use Titan to study the sensitivity of fission fragment properties and fission dynamics using a novel theoretical approach that extends the widely used density functional theory (DFT) to superfluid nuclei. In particular, the team will focus on fission fragment excitation energies and total kinetic energy, which are difficult to extract using phenomenological models.

    In previous work involving Titan, the team developed a real-time DFT extension that explicitly includes the full dynamics of the crucial pairing correlations. Applying the method to a fissioning plutonium-240 nucleus, the team determined that the final stages of fission last about 10 times longer than previously calculated. The code is one of the first in nuclear theory to take full advantage of GPU accelerators.

    Engineering. The High Performance Computing for Manufacturing (HPC4Mfg) program pairs US manufacturers with national labs’ world-class computing experts and advanced computing resources to address key challenges in US manufacturing. The solutions resulting from this collaboration will have broad industry and national impact.

    Three companies—Global Foundries, General Electric, and United Technologies Research Center—will use Titan as part of the HPC4Mfg program seeking to deliver solutions that can revolutionize the manufacturing industry through energy efficiency and increased innovation.

    Oak Ridge National Laboratory is supported by the US Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


     
  • richardmitnick 5:32 pm on June 29, 2016 Permalink | Reply
    Tags: LLNL IBM Sierra supercomputer, Supercomputing

    From LLNL: “Lawrence Livermore National Laboratory dedicates new supercomputer facility” 


    Lawrence Livermore National Laboratory

    Officials from the Department of Energy’s National Nuclear Security Administration (link is external) (NNSA) and government representatives today dedicated a new supercomputing facility at Lawrence Livermore National Laboratory (LLNL).

    The $9.8 million modular and sustainable facility provides the Laboratory flexibility to accommodate future advances in computer technology and meet a rapidly growing demand for unclassified high-performance computing (HPC). The facility houses supercomputing systems in support of NNSA’s Advanced Simulation and Computing (ASC) program. ASC is an essential and integral part of NNSA’s Stockpile Stewardship Program to ensure the safety, security and effectiveness of the nation’s nuclear deterrent without additional underground testing.

    “High performance computing is absolutely essential to the science and engineering that underpins our work in stockpile stewardship and national security. The unclassified computing capabilities at this facility will allow us to engage the young talent in academia on which NNSA’s future mission work will depend,” said NNSA Administrator Lt. Gen. Frank G. Klotz USAF (Ret.).

    Also in attendance at the dedication was Livermore Mayor John Marchand. Charles Verdon, LLNL principal associate director for Weapons and Complex Integration, presided over the ceremony.

    Kim Cupps, Computing department head at Lawrence Livermore National Laboratory, gives a tour of the new computing facility.

    “The opening of this new facility underscores the vitality of Livermore’s world-class efforts to advance the state of the art in high performance computing,” said Bill Goldstein, LLNL director. “This facility provides the Laboratory the flexibility to accommodate future computing architectures and optimize their efficient use for applications to urgent national and global challenges.”

    Located on Lawrence Livermore’s east side, the new facility adjoins the Livermore Valley Open Campus. The open campus, which lies outside LLNL’s high-security perimeter, is home to LLNL’s High Performance Computing Innovation Center and facilitates collaboration with industry and academia to foster the innovation of new technologies.

    The new dual-level building consists of a 6,000-square-foot machine floor flanked by support space. The main computer structure is flexible in design to allow for expansion and the testing of future computer technology advances.

    The facility is now home to some of the systems acquired as part of the Commodity Technology Systems-1 (CTS-1) procurement announced in October. Delivery of those systems began in April. The Laboratory also intends to house, in FY18, a powerful but smaller unclassified companion to the IBM “Sierra” system.

    LLNL IBM Sierra supercomputer

    It will support academic alliances, as well as other efforts of national importance, including the DOE-wide exascale computing project. The Sierra supercomputer will be delivered to Livermore starting in late 2017 under the tri-lab Collaboration of Oak Ridge, Argonne and Livermore (CORAL) multi-system procurement announced in November 2014. The Sierra system is expected to be capable of about 150 petaflops (quadrillion floating-point operations per second).

    In-house modeling and simulation expertise in energy-efficient building design was used in drawing up the specifications for the facility, including heating, ventilation and air conditioning systems that meet federal sustainable design requirements to promote energy conservation. The flexible design will accommodate future liquid cooling solutions for HPC systems. The building is able to scale to 7.5 megawatts of electric power to support future platforms and was designed so that power and mechanical resources can be added as HPC technologies evolve.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration

     
  • richardmitnick 6:07 pm on June 20, 2016 Permalink | Reply
    Tags: Supercomputing

    Rutgers New Supercomputer Ranked #2 among Big Ten Universities, #8 among U.S. Academic Institutions by the Top500 List 

    The updated Top 500 ranking of the world’s most powerful supercomputers issued today ranks Rutgers’ new academic supercomputer #2 among Big Ten universities, #8 among U.S. academic institutions, #49 among academic institutions globally, and #165 among all supercomputers worldwide.

    The Top 500 project provides a reliable basis for tracking and detecting trends in high-performance computing. Twice each year it assembles and releases a list of the sites operating the 500 most powerful computer systems in the world.

    Rutgers’ new supercomputer, which is named “Caliburn,” is the most powerful system in the state. It was built with a $10 million award to Rutgers from the New Jersey Higher Education Leasing Fund. The lead contractor, chosen after a competitive bidding process, is HighPoint Solutions of Bridgewater, N.J. The system manufacturer and integrator is Super Micro Computer Inc. of San Jose, Calif.

    Source: Rutgers New Supercomputer Ranked #2 among Big Ten Universities, #8 among U.S. Academic Institutions by the Top500 List


     
  • richardmitnick 11:14 am on June 3, 2016 Permalink | Reply
    Tags: Supercomputing

    From ALCF: “3D simulations illuminate supernova explosions” 

    News from Argonne National Laboratory

    June 1, 2016
    Jim Collins

    Top: This visualization is a volume rendering of a massive star’s radial velocity. In previous 1D simulations, none of the structure seen here would be present.


    Bottom: Magnetohydrodynamic turbulence powered by neutrino-driven convection behind the stalled shock of a core-collapse supernova simulation. This simulation shows that the presence of rotation and weak magnetic fields dramatically impacts the development of the supernova mechanism as compared to non-rotating, non-magnetic stars. The nascent neutron star is just barely visible in the center below the turbulent convection.

    Credit:
    Sean M. Couch, Michigan State University

    Researchers from Michigan State University are using Mira to perform large-scale 3D simulations of the final moments of a supernova’s life cycle.

    MIRA IBM Blue Gene/Q supercomputer at the Argonne Leadership Computing Facility

    While the 3D simulation approach is still in its infancy, early results indicate that the models are providing a clearer picture of the mechanisms that drive supernova explosions than ever before.

    In the landmark television series “Cosmos,” astronomer Carl Sagan famously proclaimed, “we are made of star stuff,” in reference to the ubiquitous impact of supernovas.

    At the end of their life cycles, these massive stars explode in spectacular fashion, scattering their guts—which consist of carbon, iron, and basically all other natural elements—across the cosmos. These elements go on to form new stars, solar systems, and everything else in the universe (including the building blocks for life on Earth).

    Despite this fundamental role in cosmology, the mechanisms that drive supernova explosions are still not well understood.

    “If we want to understand the chemical evolution of the entire universe and how the stuff that we’re made of was processed and distributed throughout the universe, we have to understand the supernova mechanism,” said Sean Couch, assistant professor of physics and astronomy at Michigan State University.

    To shed light on this complex phenomenon, Couch is leading an effort to use Mira, the Argonne Leadership Computing Facility’s (ALCF’s) 10-petaflops supercomputer, to carry out some of the largest and most detailed 3D simulations ever performed of core-collapse supernovas. The ALCF is a U.S. Department of Energy (DOE) Office of Science User Facility.

    After millions of years of burning ever-heavier elements, these super-giant stars (at least eight solar masses, or eight times the mass of the sun) eventually run out of nuclear fuel and develop an iron core. No longer able to support themselves against their own immense gravitational pull, they start to collapse. But a process, not yet fully understood, intervenes that reverses the collapse and causes the star to explode.

    “What theorists like me are trying to understand is that in-between step,” Couch said. “How do we go from this collapsing iron core to an explosion?”

    Through his work at the ALCF, Couch and his team are developing and demonstrating a high-fidelity 3D simulation approach that is providing a more realistic look at this “in-between step” than previous supernova simulations.

    While this 3D method is still in its infancy, Couch’s early results have been promising. In 2015, his team published a paper in the Astrophysical Journal Letters, detailing their 3D simulations of the final three minutes of iron core growth in a 15 solar-mass star. They found that more accurate representations of the star’s structure and the motion generated by turbulent convection (measured at several hundred kilometers per second) play a substantial role at the point of collapse.

    “Not surprisingly, we’re showing that more realistic initial conditions have a significant impact on the results,” Couch said.

    Adding another dimension

    Despite the fact that stars rotate, have magnetic fields, and are not perfect spheres, most 1D and 2D supernova simulations to date have modeled non-rotating, non-magnetic, spherically symmetric stars. Scientists were forced to take this simplified approach because modeling supernovas is an extremely computationally demanding task. Such simulations involve highly complex multiphysics calculations and extreme timescales (the stars evolve over millions of years, yet the supernova mechanism occurs in a second).

    According to Couch, working with unrealistic initial conditions has led to difficulties in triggering robust and consistent explosions in simulations—a long-standing challenge in computational astrophysics.

    However, thanks to recent advances in computing hardware and software, Couch and his peers are making significant strides toward more accurate supernova simulations by employing the 3D approach.

    The emergence of petascale supercomputers like Mira has made it possible to include high-fidelity treatments of rotation, magnetic fields, and other complex physics processes that were not feasible in the past.

    “Generally when we’ve done these kinds of simulations in the past, we’ve ignored the fact that magnetic fields exist in the universe because when you add them into a calculation, it increases the complexity by about a factor of two,” Couch said. “But with our simulations on Mira, we’re finding that magnetic fields can add a little extra kick at just the right time to help push the supernova toward explosion.”

    Advances to the team’s open-source FLASH hydrodynamics code have also aided the simulation effort. Couch, a co-developer of FLASH, was involved in porting and optimizing the code for Mira as part of the ALCF’s Early Science Program in 2012. For his current project, Couch continues to collaborate with ALCF computational scientists to enhance the performance, scalability, and capabilities of FLASH for specific tasks. For example, ALCF staff modified the code that writes Hierarchical Data Format (HDF5) files, speeding up I/O performance by about a factor of 10.
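
    As a rough illustration only (FLASH itself is a Fortran code, and the article does not detail the actual ALCF changes), writing a block of simulation data to an HDF5 checkpoint file might look like the following Python/h5py sketch; the file name, dataset name, and chunk shape are illustrative assumptions.

        import numpy as np
        import h5py  # HDF5 bindings for Python

        # Illustrative 3D block of simulation data (e.g., density on a local grid).
        density = np.random.rand(128, 128, 128)

        # One chunked dataset per checkpoint; chunk layout is one of the knobs
        # that typically matters for HDF5 write performance.
        with h5py.File("checkpoint_0001.h5", "w") as f:
            dset = f.create_dataset("density", data=density,
                                    chunks=(32, 32, 32), compression="gzip")
            dset.attrs["time"] = 0.0  # simulation time of this checkpoint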

    But even with today’s high-performance computing hardware and software, it is not yet feasible to include high-fidelity treatments of all the relevant physics in a single simulation; that would require a future exascale system, Couch said. For their ongoing simulations, Couch and his team have been forced to make a number of approximations, including a reduced nuclear network and simulating only one eighth of the full star.

    “Our simulations are only a first step toward truly realistic 3D simulations of supernovae,” Couch said. “But they are already providing a proof of principle that the final minutes of a massive star’s evolution can and should be simulated in 3D.”

    The team’s results were published in Astrophysical Journal Letters in a 2015 paper titled “The Three-Dimensional Evolution to Core Collapse of a Massive Star.” The study also used computing resources at the Texas Advanced Computing Center at the University of Texas at Austin.

    Couch’s supernova research began at the ALCF with a Director’s Discretionary award and now continues with computing time awarded through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. This work is being funded by the DOE Office of Science and the National Science Foundation.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition
    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit http://science.energy.gov/user-facilities/basic-energy-sciences/.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 10:14 am on April 5, 2016 Permalink | Reply
    Tags: , Cosmic Origins, , Supercomputing   

    From Science Node: “Toward a realistic cosmic evolution” 

    Science Node bloc
    Science Node

    1
    Courtesy Cosmology and Astroparticle Physics Group, University of Geneva, and the Swiss National Supercomputing Center (CSCS).

    23 Mar, 2016 [Just popped up]
    Simone Ulmer

    Scientists exploring the universe have at their disposal research facilities such as the Laser Interferometer Gravitational-Wave Observatory (LIGO), which recently achieved the breakthrough detection of gravitational waves, as well as telescopes and space probes.

    MIT/Caltech Advanced aLIGO Hanford Washington USA installation

    ESO/VLT

    Keck Observatory, Mauna Kea, Hawaii, USA

    NASA/ESA Hubble Telescope

    NASA/Spitzer Telescope

    Considering that the Big Bang does not lend itself to experimental re-enactment, researchers must use supercomputers like the Piz Daint of the Swiss National Supercomputing Center (CSCS) to simulate the evolution of cosmic structures.

    Piz Daint supercomputer of the Swiss National Supercomputing Center (CSCS)


    Access mp4 video here .
    The Piz Daint supercomputer calculated 4,096³ grid points and 67 billion particles to help scientists visualize these gravitational waves. Courtesy Cosmology and Astroparticle Physics Group University of Geneva and the Swiss National Supercomputing Center.

    This entails modeling a complex, dynamical system that acts at vastly different scales of magnitude and contains a gigantic number of particles. With the help of such simulations, researchers can determine the movement of those particles and hence their formation into structures under the influence of gravitational forces at cosmological scales.
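
    As a toy illustration of that basic idea (production cosmology codes use far more sophisticated particle-mesh methods than this), a handful of self-gravitating particles can be advanced with a direct-summation force calculation and a leapfrog (“kick-drift-kick”) time step; the units, particle count, and softening length below are arbitrary assumptions.

        import numpy as np

        def accelerations(pos, mass, G=1.0, soft=0.05):
            # Direct-summation gravitational acceleration on every particle,
            # softened to avoid divergences at very small separations.
            diff = pos[None, :, :] - pos[:, None, :]      # pairwise separation vectors
            dist2 = (diff ** 2).sum(-1) + soft ** 2
            inv_d3 = dist2 ** -1.5
            np.fill_diagonal(inv_d3, 0.0)                 # no self-force
            return G * (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

        rng = np.random.default_rng(0)
        pos = rng.uniform(-1.0, 1.0, size=(8, 3))         # toy initial positions
        vel = np.zeros((8, 3))
        mass = np.ones(8)

        dt, acc = 0.01, accelerations(pos, mass)
        for _ in range(1000):                             # leapfrog: kick-drift-kick
            vel += 0.5 * dt * acc
            pos += dt * vel
            acc = accelerations(pos, mass)
            vel += 0.5 * dt * acc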

    To date, simulations like these have been based entirely on Newton’s law of gravitation. But Newtonian gravity belongs to classical mechanics: it operates within an absolute space-time, in which the cosmic event horizon of the expanding universe does not exist, and it cannot describe gravitational waves or the rotation of space-time known as ‘frame-dragging’. In the real expanding universe, however, space-time is dynamical, and according to the general theory of relativity, masses such as stars or planets give it curvature.

    Consistent application of the general theory of relativity

    Researchers in the Cosmology and Astroparticle Physics Group at the University of Geneva, led by postdoctoral researcher Julian Adamek and PhD student David Daverio under the supervision of Martin Kunz and Ruth Durrer, set out to develop a more realistic code: one whose equations make consistent use of the general theory of relativity when simulating cosmic structure evolution, which entails calculating gravitational waves as well as frame-dragging.

    The research team presents the code and the results in the current issue of the journal Nature Physics.

    4
    An image of the flow field where moving masses cause space-time to be pulled along slightly (frame-dragging). The yellow-orange collections are regions of high particle density, corresponding to the clustered galaxies of the real universe. Courtesy Cosmology and Astroparticle Physics Group, University of Geneva, and the Swiss National Supercomputing Center (CSCS).

    To allow existing simulations to model cosmological structure formation, one needs to calculate approximately how fast the universe would be expanding at any given moment. That result can then be fed into the simulation.
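
    For a flat Lambda-CDM background, for example, that expansion rate follows from the Friedmann equation; the minimal sketch below assumes illustrative parameter values rather than anything used in the study.

        import numpy as np

        H0 = 67.0                            # Hubble constant today, km/s/Mpc (assumed)
        Omega_m, Omega_r = 0.31, 9e-5        # matter and radiation density parameters (assumed)
        Omega_L = 1.0 - Omega_m - Omega_r    # cosmological constant term, assuming a flat universe

        def hubble(a):
            # Friedmann equation: H(a) = H0 * sqrt(Om/a^3 + Or/a^4 + OL)
            return H0 * np.sqrt(Omega_m / a**3 + Omega_r / a**4 + Omega_L)

        print(hubble(1.0))   # expansion rate today (scale factor a = 1)
        print(hubble(0.5))   # expansion rate at redshift z = 1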

    “The traditional methods work well for non-relativistic matter such as atomic building blocks and cold dark matter, as well as at a small scale where the cosmos can be considered homogeneous and isotropic,” says Kunz.

    But given that Newtonian physics knows no cosmic horizon, the method has only limited applicability at large scales or to neutrinos, gravitational waves, and similar relativistic matter. Since this is an approximation to a dynamical system, it may happen that a simulation of the creation of the cosmos shows neutrinos moving at faster-than-light speeds. Such simulations are therefore subject to uncertainty.

    Self-regulating calculations

    With the new method the system might now be said to regulate itself and exclude such errors, explains Kunz. In addition, the numerical code can be used for simulating various models that bring into play relativistic sources such as dynamical dark energy, relativistic particles and topological defects, all the way to core collapse supernovae (stellar explosions).

    There are two parts to the simulation code. David Daverio was instrumental in developing and refining the part named ‘LATfield2’ so that it performs highly parallel, efficient calculations on a supercomputer. This library provides the basic tools for field-based particle-mesh N-body codes: the grid spanning the simulation space, the particles and fields acting on it, and the fast Fourier transform needed to solve the model’s constituent equations, which were worked out largely by Julian Adamek.
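
    A much-simplified sketch of that field-based particle-mesh idea (a generic stand-in, not LATfield2 itself) deposits particles onto a grid with a cloud-in-cell scheme and then solves the Poisson equation for the gravitational potential with a fast Fourier transform; the grid size, box length, and units are assumptions made for illustration.

        import numpy as np

        N, box = 64, 1.0                        # grid points per side, box length (assumed)
        dx = box / N
        pos = np.random.rand(10000, 3) * box    # toy particle positions
        mass = np.ones(len(pos))

        # Cloud-in-cell (CIC) mass deposit onto the periodic grid.
        rho = np.zeros((N, N, N))
        cell = pos / dx - 0.5
        i0 = np.floor(cell).astype(int)
        frac = cell - i0
        for ox in (0, 1):
            for oy in (0, 1):
                for oz in (0, 1):
                    w = (np.abs(1 - ox - frac[:, 0]) *
                         np.abs(1 - oy - frac[:, 1]) *
                         np.abs(1 - oz - frac[:, 2]))
                    idx = (i0 + [ox, oy, oz]) % N
                    np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), mass * w)
        rho /= dx ** 3

        # Solve  laplacian(phi) = 4*pi*G*(rho - mean)  in Fourier space.
        G = 1.0
        k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
        kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
        k2 = kx ** 2 + ky ** 2 + kz ** 2
        k2[0, 0, 0] = 1.0                       # avoid dividing by zero at k = 0
        phi_k = -4 * np.pi * G * np.fft.fftn(rho - rho.mean()) / k2
        phi_k[0, 0, 0] = 0.0                    # fix the zero mode
        phi = np.real(np.fft.ifftn(phi_k))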

    Those constituent equations form the second part of the code, ‘gevolution,’ which ensures that the calculations take the general theory of relativity into account: they describe the interaction between matter and curved four-dimensional space-time, the framework in which general relativity expresses gravitation.

    “Key to the simulation are the metrics describing space-time curvature, and the stress-energy tensor describing distribution of matter,” says Kunz.
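
    In the approach described in the paper, the geometry is expanded around a Friedmann-Lemaître background in Poisson gauge; schematically, to first order in the perturbations,

        ds^2 = a^2(\tau)\,\{ -(1+2\Psi)\,d\tau^2 - 2B_i\,dx^i\,d\tau + [(1-2\Phi)\,\delta_{ij} + h_{ij}]\,dx^i\,dx^j \},

    where a(\tau) is the scale factor, the scalar potentials \Phi and \Psi reduce to the Newtonian potential in the weak-field limit, B_i encodes frame-dragging, and h_{ij} carries the gravitational waves; the matter distribution enters through the stress-energy tensor T_{\mu\nu} on the source side of Einstein’s equations. (The sign conventions shown here follow common usage and may differ in detail from the paper.)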

    The largest simulation conducted on Piz Daint consisted of a cube with 4,096³ grid points and 67 billion particles. Using the new code, the scientists simulated regions with weak gravitational fields and other weak relativistic effects. Thus, for the first time, it was possible to fully calculate the gravitational waves and the rotation of space-time induced by structure formation.


    Access mp4 video here .
    Spin cycle. A visualization of the rotation of space-time. Courtesy Cosmology and Astroparticle Physics Group University of Geneva and the Swiss National Supercomputing Center.

    The scientists compared the results with those they computed using a conventional, Newtonian code, and found only minor differences. Accordingly, it appears that structure formation in the universe has little impact on its rate of expansion.

    “For the conventional standard model to work, however, dark energy has to be a cosmological constant and thus have no dynamics,” says Adamek. Based on current knowledge, this is by no means established. “Our method now facilitates the consistent simulation and study of alternative scenarios.”

    Elegant approach

    With the new method, the researchers have managed — without significantly complicating the computational effort — to consistently integrate the general theory of relativity, 100 years after its formulation by Albert Einstein, with the dynamical simulation of structure formation in the universe. The researchers say that their method of implementing the general theory of relativity is an elegant approach to calculating a realistic distribution of radiation or very high-velocity particles in a way that considers gravitational waves and the rotation of space-time.

    General relativity and cosmic structure formation

    Julian Adamek, David Daverio, Ruth Durrer & Martin Kunz

    Affiliations

    Département de Physique Théorique & Center for Astroparticle Physics, Université de Genève, 24 Quai E. Ansermet, 1211 Genève 4, Switzerland
    Julian Adamek, David Daverio, Ruth Durrer & Martin Kunz
    African Institute for Mathematical Sciences, 6 Melrose Road, Muizenberg 7945, South Africa
    Martin Kunz

    Contributions

    J.A. worked out the equations in our approximation scheme and implemented the cosmological code gevolution. He also produced the figures. D.D. developed and implemented the particle handler for the LATfield2 framework. R.D. contributed to the development of the approximation scheme and the derivation of the equations. M.K. proposed the original idea. All authors discussed the research and helped with writing the paper.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     