Tagged: Supercomputing

  • richardmitnick 12:17 pm on September 21, 2018
    Tags: LLNL/LBNL team named as Gordon Bell Award finalists for work on modeling neutron lifespans, Supercomputing

    From Lawrence Livermore National Laboratory: “LLNL/LBNL team named as Gordon Bell Award finalists for work on modeling neutron lifespans” 

    From Lawrence Livermore National Laboratory

    Sept. 20, 2018
    Jeremy Thomas
    thomas244@llnl.gov
    925-422-5539

    Beta decay, the decay of a neutron (n) to a proton (p) with the emission of an electron (e) and an electron-anti-neutrino (ν). In the figure gA is depicted as the white node on the red line. The square grid indicates the lattice. Image by Evan Berkowitz/Forschungszentrum Jülich/Institut für Kernphysik/Institute for Advanced Simulation

    A team of scientists and physicists headed by the Lawrence Livermore and Lawrence Berkeley national laboratories has been named as one of six finalists for the prestigious 2018 Gordon Bell Award, one of the world’s top honors in supercomputing.

    Using the Department of Energy’s newest supercomputers, LLNL’s Sierra and Oak Ridge’s Summit, a team led by computational theoretical physicists Pavlos Vranas of LLNL and André Walker-Loud of LBNL developed an improved algorithm and code that can more precisely determine the lifetime of a neutron, an achievement that could lead to discovering new, previously unknown physics, researchers said.

    LLNL SIERRA IBM supercomputer

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    The team’s approach involves simulating the fundamental theory of quantum chromodynamics (QCD) on a fine grid of space-time points called the lattice. QCD theory describes how particles like quarks and gluons make up protons and neutrons.

    The lifetime of the neutron (a free neutron decays after about 15 minutes, on average) is important because it has a profound effect on the mass composition of the universe, Vranas explained. Using previous-generation supercomputers at ORNL and LLNL, the team was the first to calculate the nucleon axial coupling, a quantity (denoted gA) directly related to the neutron lifetime, to 1 percent precision. Two different real-world experiments have each measured the neutron lifetime to an accuracy of about 0.1 percent, yet their results disagree, a discrepancy that researchers believe may be related to new physics affecting each experiment.

    To resolve this discrepancy, Vranas and his team have moved their calculation onto the new-generation supercomputers Sierra and Summit, aiming to improve its precision to better than 1 percent and get closer to the experimental results. The team has fully optimized their codes for the new CPU (Central Processing Unit)/GPU (Graphics Processing Unit) architectures of the two supercomputers, which involved developing an algorithm that exponentially speeds up calculations, a method for optimally distributing GPU resources, and a job manager that allows CPU and GPU jobs to be interleaved.

    “New machines like Sierra and Summit are disruptively fast and require the ability to manage and process more tasks, amounting to about a factor of 10 increase. As we move toward exascale, job management is becoming a huge factor for success. With Sierra and Summit, we will be able to run hundreds of thousands of jobs and generate several petabytes of data in a few days — a volume that is too much for the current standard management methods,” said LBNL’s Walker-Loud. “The fact that we have an extremely fast GPU code (QUDA) and were able to wrap our entire lattice QCD scientific application with new job managers we wrote (METAQ and MPI_JM) got us to the Gordon Bell finalist stage, I believe.”
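
    As a rough illustration of the job-interleaving idea described above, here is a minimal sketch of a scheduler that keeps separate CPU and GPU work queues draining concurrently, so CPU-only analysis tasks are backfilled while GPU jobs run. This is not the team’s METAQ or MPI_JM code; the task names are invented and only Python’s standard library is used.

```python
# Minimal, hypothetical sketch of interleaving CPU and GPU tasks so that
# neither resource sits idle. This is NOT METAQ or MPI_JM; it only
# illustrates the scheduling idea using Python's standard library.
import queue
import threading
import time

def run_task(task, resource):
    # Placeholder for launching a real solver (e.g., a lattice-QCD kernel).
    print(f"[{resource}] starting {task}")
    time.sleep(0.1)          # pretend the task takes some time
    print(f"[{resource}] finished {task}")

def worker(resource, tasks):
    # Each worker drains its own queue; the CPU and GPU queues run
    # concurrently, so CPU work is "backfilled" while GPU jobs are in flight.
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            return
        run_task(task, resource)
        tasks.task_done()

if __name__ == "__main__":
    gpu_tasks = queue.Queue()
    cpu_tasks = queue.Queue()
    for i in range(4):
        gpu_tasks.put(f"propagator-solve-{i}")   # GPU-heavy work (invented names)
        cpu_tasks.put(f"contraction-{i}")        # CPU-only analysis work

    threads = [
        threading.Thread(target=worker, args=("GPU", gpu_tasks)),
        threading.Thread(target=worker, args=("CPU", cpu_tasks)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```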

    The resulting axial coupling calculation, Vranas said, will provide the neutron lifetime that the fundamental theory of QCD predicts. Any deviations from the theory may be signs of new physics beyond current understanding of nature and the reach of the Large Hadron Collider.

    “We’ve demonstrated that we can use this next generation of computers efficiently, at about 15-20 percent of peak speed,” Vranas said. “This research takes us further, and now with these computers we can move forward with precision better than one percent, in an attempt to find new physics. This is an exciting time.”

    On Sierra and Summit, the team reached sustained performance of about 20 petaFLOPS (FLOPS stands for floating-point operations per second), or roughly 15 percent of Sierra’s peak. The team also found that the number of calculations it could complete scales linearly as more of the machines’ GPUs are used, a solid indication that using more of the GPUs will result in even faster calculations. That, in turn, will produce more data and therefore improve the precision of the neutron-lifetime calculation, researchers said.

    “Every time a new supercomputer comes along it just amazes you,” Vranas said. “These systems are significantly different than their predecessors, and it was quite an effort on the code side to make this happen. This is important science and Sierra and Summit will accelerate it in a meaningful and impactful way.”

    LLNL postdoctoral researcher Arjun Gambhir contributed to the research. Co-authors include Evan Berkowitz (Institute for Advanced Simulation, Jülich Supercomputing Centre), M.A. Clark (NVIDIA), Ken McElvain (LBNL and University of California, Berkeley), Amy Nicholson (University of North Carolina), Enrico Rinaldi (RIKEN-Brookhaven National Laboratory), Chia Cheng Chang (LBNL), Bálint Joó (Thomas Jefferson National Accelerator Facility), Thorsten Kurth (NERSC/LBNL) and Kostas Orginos (College of William and Mary).

    The Gordon Bell Prize is awarded each year to recognize outstanding achievements in high performance computing, with an emphasis on rewarding innovations in science applications, engineering and large-scale data analytics.

    Other finalists include an LBNL-led collaboration using exascale deep learning on Summit to identify extreme weather patterns; a team from ORNL that developed a genomics application on Summit capable of determining the genetic architectures for chronic pain and opioid addiction at up to five orders of magnitude beyond the current state-of-the-art; an ORNL team that used an artificial intelligence system to automatically develop a deep learning network on Summit capable of identifying information from raw electron microscopy data; a team from the University of Tokyo that applied artificial intelligence and trans-precision arithmetic to accelerate simulations of earthquakes in cities; and a team led by China’s Tsinghua University that developed a framework for efficiently utilizing an entire petascale system to process multi-trillion edge graphs in seconds.

    The Gordon Bell winner will be announced at the 2018 International Conference for High Performance Computing, Networking, Storage and Analysis (SC18) in Dallas this November.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration
    Lawrence Livermore National Laboratory (LLNL) is an American federal research facility in Livermore, California, United States, founded by the University of California, Berkeley in 1952. A Federally Funded Research and Development Center (FFRDC), it is primarily funded by the U.S. Department of Energy (DOE) and managed and operated by Lawrence Livermore National Security, LLC (LLNS), a partnership of the University of California, Bechtel, BWX Technologies, AECOM, and Battelle Memorial Institute in affiliation with the Texas A&M University System. In 2012, the laboratory had the synthetic chemical element livermorium named after it.

    LLNL is self-described as “a premier research and development institution for science and technology applied to national security.” Its principal responsibility is ensuring the safety, security and reliability of the nation’s nuclear weapons through the application of advanced science, engineering and technology. The Laboratory also applies its special expertise and multidisciplinary capabilities to preventing the proliferation and use of weapons of mass destruction, bolstering homeland security and solving other nationally important problems, including energy and environmental security, basic science and economic competitiveness.

    The Laboratory is located on a one-square-mile (2.6 km2) site at the eastern edge of Livermore. It also operates a 7,000-acre (28 km2) remote experimental test site, called Site 300, situated about 15 miles (24 km) southeast of the main lab site. LLNL has an annual budget of about $1.5 billion and a staff of roughly 5,800 employees.

    LLNL was established in 1952 as the University of California Radiation Laboratory at Livermore, an offshoot of the existing UC Radiation Laboratory at Berkeley. It was intended to spur innovation and provide competition to the nuclear weapon design laboratory at Los Alamos in New Mexico, home of the Manhattan Project that developed the first atomic weapons. Edward Teller and Ernest Lawrence,[2] director of the Radiation Laboratory at Berkeley, are regarded as the co-founders of the Livermore facility.

    The new laboratory was sited at a former naval air station of World War II. It was already home to several UC Radiation Laboratory projects that were too large for its location in the Berkeley Hills above the UC campus, including one of the first experiments in the magnetic approach to confined thermonuclear reactions (i.e. fusion). About half an hour southeast of Berkeley, the Livermore site provided much greater security for classified projects than an urban university campus.

    Lawrence tapped 32-year-old Herbert York, a former graduate student of his, to run Livermore. Under York, the Lab had four main programs: Project Sherwood (the magnetic-fusion program), Project Whitney (the weapons-design program), diagnostic weapon experiments (both for the Los Alamos and Livermore laboratories), and a basic physics program. York and the new lab embraced the Lawrence “big science” approach, tackling challenging projects with physicists, chemists, engineers, and computational scientists working together in multidisciplinary teams. Lawrence died in August 1958 and shortly after, the university’s board of regents named both laboratories for him, as the Lawrence Radiation Laboratory.

    Historically, the Berkeley and Livermore laboratories have had very close relationships on research projects, business operations, and staff. The Livermore Lab was established initially as a branch of the Berkeley laboratory. The Livermore lab was not officially severed administratively from the Berkeley lab until 1971. To this day, in official planning documents and records, Lawrence Berkeley National Laboratory is designated as Site 100, Lawrence Livermore National Lab as Site 200, and LLNL’s remote test location as Site 300.[3]

    The laboratory was renamed Lawrence Livermore Laboratory (LLL) in 1971. On October 1, 2007 LLNS assumed management of LLNL from the University of California, which had exclusively managed and operated the Laboratory since its inception 55 years before. The laboratory was honored in 2012 by having the synthetic chemical element livermorium named after it. The LLNS takeover of the laboratory has been controversial. In May 2013, an Alameda County jury awarded over $2.7 million to five former laboratory employees who were among 430 employees LLNS laid off during 2008.[4] The jury found that LLNS breached a contractual obligation to terminate the employees only for “reasonable cause.”[5] The five plaintiffs also have pending age discrimination claims against LLNS, which will be heard by a different jury in a separate trial.[6] There are 125 co-plaintiffs awaiting trial on similar claims against LLNS.[7] The May 2008 layoff was the first layoff at the laboratory in nearly 40 years.[6]

    On March 14, 2011, the City of Livermore officially expanded the city’s boundaries to annex LLNL and move it within the city limits. The unanimous vote by the Livermore city council expanded Livermore’s southeastern boundaries to cover 15 land parcels covering 1,057 acres (4.28 km2) that comprise the LLNL site. The site was formerly an unincorporated area of Alameda County. The LLNL campus continues to be owned by the federal government.

    LLNL/NIF


    DOE Seal
    NNSA

     
  • richardmitnick 11:29 am on September 21, 2018
    Tags: Andrew Peterson, Brown awarded $3.5M to speed up atomic-scale computer simulations, Computational power is growing rapidly which lets us perform larger and more realistic simulations, Different simulations often have the same sets of calculations underlying them, so finding what can be re-used saves a lot of time and money, Supercomputing

    From Brown University: “Brown awarded $3.5M to speed up atomic-scale computer simulations” 

    Brown University
    From Brown University

    September 20, 2018
    Kevin Stacey
    kevin_stacey@brown.edu
    401-863-3766

    Andrew Peterson. No photo credit.

    With a new grant from the U.S. Department of Energy, a Brown University-led research team will use machine learning to speed up atom-level simulations of chemical reactions and the properties of materials.

    “Simulations provide insights into materials and chemical processes that we can’t readily get from experiments,” said Andrew Peterson, an associate professor in Brown’s School of Engineering who will lead the work.

    “Computational power is growing rapidly, which lets us perform larger and more realistic simulations. But as the size of the simulations grows, the time involved in running them can grow exponentially. This paradox means that even with the growth in computational power, our field still cannot perform truly large-scale simulations. Our goal is to speed those simulations up dramatically — ideally by orders of magnitude — using machine learning.”

    The grant provides $3.5 million for the work over four years. Peterson will work with two Brown colleagues — Franklin Goldsmith, assistant professor of engineering, and Brenda Rubenstein, assistant professor of chemistry — as well as researchers from Carnegie Mellon, Georgia Tech and MIT.

    The idea behind the work is that different simulations often have the same sets of calculations underlying them. Peterson and his colleagues aim to use machine learning to find those underlying similarities and fast-forward through them.

    “What we’re doing is taking the results of calculations from prior simulations and using them to predict the outcome of calculations that haven’t been done yet,” Peterson said. “If we can eliminate the need to do similar calculations over and over again, we can speed things up dramatically, potentially by orders of magnitude.”
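
    As a toy illustration of that reuse idea (a sketch only, with made-up data, and not the team’s AMP package or funded workflow), a surrogate model can be trained on energies from calculations that have already been run and queried in place of new ones, falling back to the expensive calculation only when the model is uncertain:

```python
# Toy sketch of the "reuse prior calculations" idea: train a surrogate model on
# energies from simulations that have already been run, then query it instead
# of redoing similar calculations, falling back to the real solver only when
# the model is unsure. Illustrative only; the data and expensive_calculation()
# stand-in are invented, and this is not the AMP package itself.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_calculation(x):
    # Stand-in for a costly atomistic calculation (e.g., a DFT energy evaluation).
    return np.sin(3.0 * x) + 0.1 * x**2

# "Prior simulations": descriptors and energies we have already paid for.
X_prior = np.linspace(0.0, 2.0, 15).reshape(-1, 1)
y_prior = expensive_calculation(X_prior).ravel()

surrogate = GaussianProcessRegressor(normalize_y=True).fit(X_prior, y_prior)

# New configurations we would like energies for.
X_new = np.array([[0.37], [1.42], [3.50]])   # the last one is far outside prior data
pred, std = surrogate.predict(X_new, return_std=True)

for x, e, s in zip(X_new.ravel(), pred, std):
    if s < 0.05:   # confident: reuse the surrogate prediction
        print(f"x={x:.2f}: surrogate energy {e:.3f} (skip full calculation)")
    else:          # uncertain: run the expensive calculation (and could retrain)
        print(f"x={x:.2f}: uncertain (std={s:.3f}), running full calculation "
              f"-> {expensive_calculation(x):.3f}")
```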

    The team will focus their work initially on simulations of electrocatalysis — the kinds of chemical reactions that are important in devices like fuel cells and batteries. These are complex, often multi-step reactions that are fertile ground for simulation-driven research, Peterson says.

    Atomic-scale simulations have already proven useful in Peterson’s own work on the design of new catalysts. In a recent example, Peterson worked with Brown chemist Shouheng Sun on a gold nanoparticle catalyst that can perform a reaction necessary for converting carbon dioxide into useful forms of carbon. Peterson’s simulations showed it was the sharp edges of the oddly shaped catalyst that were particularly active for the desired reaction.

    “That led us to change the geometry of the catalyst to a nanowire — something that’s basically all edges — to maximize its reactivity,” Peterson said. “We might have eventually tried a nanowire by trial and error, but because of the computational insights we were able to get there much more quickly.”

    The researchers will use a software package that Peterson’s research group developed previously as a starting point. The software, called AMP (Atomistic Machine-learning Package), is open-source and already widely used in the simulation community, Peterson says.

    The Department of Energy grant will bring atomic-scale simulations — and the insights they produce — to bear on ever larger and more complex simulations. And while the work under the grant will focus on electrocatalysis, the tools the team develops should be widely applicable to other types of material and chemical simulations.

    Peterson is hopeful that the investment that the federal government is making in machine learning will be repaid by making better use of valuable computing resources.

    “Modern supercomputers cost millions of dollars to build, and simulation time on them is precious,” Peterson said. “If we’re able to free up time on those machines for additional simulations to be run, that translates into vastly increased return-on-investment for those machines. It’s real money.”

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Welcome to Brown

    Brown U Robinson Hall
    Located in historic Providence, Rhode Island and founded in 1764, Brown University is the seventh-oldest college in the United States. Brown is an independent, coeducational Ivy League institution comprising undergraduate and graduate programs, plus the Alpert Medical School, School of Public Health, School of Engineering, and the School of Professional Studies.

    With its talented and motivated student body and accomplished faculty, Brown is a leading research university that maintains a particular commitment to exceptional undergraduate instruction.

    Brown’s vibrant, diverse community consists of 6,000 undergraduates, 2,000 graduate students, 400 medical school students, more than 5,000 summer, visiting and online students, and nearly 700 faculty members. Brown students come from all 50 states and more than 100 countries.

    Undergraduates pursue bachelor’s degrees in more than 70 concentrations, ranging from Egyptology to cognitive neuroscience. Anything’s possible at Brown—the university’s commitment to undergraduate freedom means students must take responsibility as architects of their courses of study.

     
  • richardmitnick 8:54 am on September 8, 2018
    Tags: Google Dataset Search, JASMIN supercomputer, NERC, Supercomputing, UK dataset expertise informs Google's new dataset search

    From Science and Technology Facilities Council: “UK dataset expertise informs Google’s new dataset search” 


    From Science and Technology Facilities Council

    6 September 2018

    False colour image of Europe captured by Sentinel 3. (Credit: contains modified Copernicus Sentinel data, 2018)

    ESA Sentinel 3

    Experts from UK Research and Innovation have contributed to a search tool newly launched by Google that aims to help scientists, policy makers and other user groups more easily find the data required for their work and their stories, or simply to satisfy their intellectual curiosity.

    In today’s world, scientists in many disciplines and a growing number of journalists live and breathe data. There are many thousands of data repositories on the web, providing access to millions of datasets; and local and national governments around the world publish their data as well. As part of the UK Research and Innovation commitment to easy access to data, their experts worked with Google to help develop the Dataset Search, launched today.

    Similar to how Google Scholar works, Dataset Search lets users find datasets wherever they’re hosted, whether it’s a publisher’s site, a digital library, or an author’s personal web page.

    Google approached UK Research and Innovation’s Natural Environment Research Council (NERC) and Science and Technology Facilities Council (STFC) to help ensure their world-leading environmental datasets were included. These organisations’ heritage of managing huge, complex datasets on the atmosphere, oceans, climate change and even the solar system (data managed by Dr Sarah Callaghan, the Data and Programme Manager at UKRI’s national space laboratory, STFC RAL Space) led to them working with Google on the project.

    Dr Sarah Callaghan said: “In RAL Space we manage, archive and distribute thousands of terabytes of data to make it available to scientific researchers and other interested parties. My experience making datasets findable, usable and interoperable enabled me to advise Google on their Dataset Search and how to best display their search results.”

    “I was able to draw on my work with NERC and STFC datasets, not only in just archiving and managing data for the long term and the scientific record, but also helping users to understand if a dataset is the right one for their purposes.”

    Temperature of Europe during the April 2018 heatwave. (Credit: contains modified Copernicus Sentinel data, 2018)

    To create Dataset Search, Google developed guidelines for dataset providers to describe their data in a way that search engines can better understand the content of their pages. These guidelines include salient information about datasets: who created the dataset, when it was published, how the data was collected, what the terms are for using the data, etc. This enables search engines to collect and link this information, analyse where different versions of the same dataset might be, and find publications that may be describing or discussing the dataset. The approach is based on an open standard for describing this information (schema.org). Many STFC and NERC datasets for environmental data are already described in this way and are particularly good examples of findable, user-friendly datasets.
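
    For illustration, a dataset description following those schema.org guidelines might look like the snippet below, here built as a Python dictionary and serialized to the JSON-LD that a dataset landing page would embed. The dataset name, creator and URLs are invented examples, not an actual STFC or NERC record.

```python
# Hypothetical example of schema.org "Dataset" markup of the kind Google's
# Dataset Search crawls. The name, creator and URLs below are invented for
# illustration; real STFC/NERC records carry their own metadata.
import json

dataset_metadata = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example European land-surface temperature, April 2018",
    "description": "Gridded land-surface temperatures derived from "
                   "Copernicus Sentinel-3 observations (illustrative example).",
    "creator": {"@type": "Organization", "name": "Example Data Centre"},
    "datePublished": "2018-06-01",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "NetCDF",
        "contentUrl": "https://data.example.org/sentinel3/lst_april_2018.nc",
    },
}

# Embedding this JSON-LD in a <script type="application/ld+json"> tag on the
# dataset's landing page is what lets search engines index and link it.
print(json.dumps(dataset_metadata, indent=2))
```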

    “Standardised ways of describing data allow us to help researchers by building tools and services that make it easier to find and use data,” said Dr Callaghan. “If people don’t know what datasets exist, they won’t know how to look for what they need to solve their environmental problems. For example, an ecologist might not know where to go to find, or how to access, the rainfall data needed to understand a changing habitat. Making data easier to find will help introduce researchers from a variety of disciplines to the vast amount of data I and my colleagues manage for NERC and STFC.”

    The new Google Dataset Search offers references to most datasets in environmental and social sciences, as well as data from other disciplines including government data and data provided by news organisations.

    Professor Tim Wheeler, Director of Research and Innovation at NERC, said: “NERC is constantly working to raise awareness of the wealth of environmental information held within its Data Centres, and to improve access to it. This new tool will make it easier than ever for the public, business and science professionals to find and access the data that they’re looking for. We want to get as many people as possible interested in and able to benefit from data collected by the environmental science that we fund.”

    NERC JASMIN supercomputer based at STFC’s Rutherford Appleton Laboratory (Credit: STFC)

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    STFC Hartree Centre

    Helping build a globally competitive, knowledge-based UK economy

    We are a world-leading multi-disciplinary science organisation, and our goal is to deliver economic, societal, scientific and international benefits to the UK and its people – and more broadly to the world. Our strength comes from our distinct but interrelated functions:

    Universities: we support university-based research, innovation and skills development in astronomy, particle physics, nuclear physics, and space science
    Scientific Facilities: we provide access to world-leading, large-scale facilities across a range of physical and life sciences, enabling research, innovation and skills training in these areas
    National Campuses: we work with partners to build National Science and Innovation Campuses based around our National Laboratories to promote academic and industrial collaboration and translation of our research to market through direct interaction with industry
    Inspiring and Involving: we help ensure a future pipeline of skilled and enthusiastic young people by using the excitement of our sciences to encourage wider take-up of STEM subjects in school and future life (science, technology, engineering and mathematics)

    We support an academic community of around 1,700 in particle physics, nuclear physics, and astronomy including space science, who work at more than 50 universities and research institutes in the UK, Europe, Japan and the United States, including a rolling cohort of more than 900 PhD students.

    STFC-funded universities produce physics postgraduates with outstanding high-end scientific, analytic and technical skills who on graduation enjoy almost full employment. Roughly half of our PhD students continue in research, sustaining national capability and creating the bedrock of the UK’s scientific excellence. The remainder – much valued for their numerical, problem solving and project management skills – choose equally important industrial, commercial or government careers.

    Our large-scale scientific facilities in the UK and Europe are used by more than 3,500 users each year, carrying out more than 2,000 experiments and generating around 900 publications. The facilities provide a range of research techniques using neutrons, muons, lasers and x-rays, and high performance computing and complex analysis of large data sets.

    They are used by scientists across a huge variety of science disciplines ranging from the physical and heritage sciences to medicine, biosciences, the environment, energy, and more. These facilities provide a massive productivity boost for UK science, as well as unique capabilities for UK industry.

    Our two Campuses are based around our Rutherford Appleton Laboratory at Harwell in Oxfordshire, and our Daresbury Laboratory in Cheshire – each of which offers a different cluster of technological expertise that underpins and ties together diverse research fields.

    The combination of access to world-class research facilities and scientists, office and laboratory space, business support, and an environment which encourages innovation has proven a compelling combination, attracting start-ups, SMEs and large blue chips such as IBM and Unilever.

    We think our science is awesome – and we know students, teachers and parents think so too. That’s why we run an extensive Public Engagement and science communication programme, ranging from loans to schools of Moon Rocks, funding support for academics to inspire more young people, embedding public engagement in our funded grant programme, and running a series of lectures, travelling exhibitions and visits to our sites across the year.

    Ninety per cent of physics undergraduates say that they were attracted to the course by our sciences, and applications for physics courses are up – despite an overall decline in university enrolment.

     
  • richardmitnick 10:34 am on September 6, 2018
    Tags: Supercomputing

    From Science Node: “Putting neutrinos on ice” 

    From Science Node

    29 Aug, 2018
    Ken Chiacchia
    Jan Zverina

    IceCube Collaboration/Google Earth: PGC/NASA, U.S. Geological Survey, Data SIO, NOAA, U.S. Navy, NGA, GEBCO, Landsat/Copernicus.

    Identification of a cosmic-ray source by the IceCube Neutrino Observatory depends on global collaboration.

    Four billion years ago—before the first life had developed on Earth—a massive black hole shot out a proton at nearly the speed of light.

    Fast forward—way forward—to 45.5 million years ago. At that time, the Antarctic continent had begun to accumulate an ice sheet. Eventually Antarctica would capture 61 percent of the fresh water on Earth.

    Thanks to XSEDE resources and help from XSEDE Extended Collaborative Support Service (ECSS) experts, scientists running the IceCube Neutrino Observatory in Antarctica and their international partners have taken advantage of those events to answer a hundred-year-old scientific mystery: Where do cosmic rays come from?

    U Wisconsin IceCube neutrino observatory

    U Wisconsin ICECUBE neutrino detector at the South Pole

    IceCube employs more than 5,000 detectors lowered on 86 strings into almost 100 holes in the Antarctic ice. (Credit: NSF/B. Gudbjartsson, IceCube Collaboration)

    Lunar Icecube

    IceCube DeepCore annotated

    IceCube PINGU annotated


    DM-Ice II at IceCube annotated

    Making straight the path

    First identified in 1912, cosmic rays have puzzled scientists. The higher in the atmosphere you go, the more of them you can measure. The Earth’s thin shell of air, scientists came to realize, was protecting us from potentially harmful radiation that filled space. Most cosmic ray particles consist of a single proton. That’s the smallest positively charged particle of normal matter.

    Cosmic ray particles are ridiculously powerful. Gonzalo Merino, computing facilities manager for the Wisconsin IceCube Particle Astrophysics Center at the University of Wisconsin-Madison (UW), compares the force of a proton accelerated by the LHC, the world’s largest atom-smasher, to that of a mosquito flying into a person.

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    By comparison, the “Oh-My-God” cosmic ray particle detected by the University of Utah in 1991 hit with the force of a baseball flying at 58 miles per hour.

    Because cosmic-ray particles are electrically charged, they are pushed and pulled by every magnetic field they encounter along the way. Cosmic rays do not travel in a straight line, particularly if they come from some powerful object far away in the Universe, so you can’t figure out where they originated by their direction when they hit Earth.

    Particle-physics theorists came to the rescue.

    “If cosmic rays hit any matter around them, the collision will generate secondary products,” Merino says. “A byproduct of any high-energy interaction with the protons that make up much of a cosmic ray will be neutrinos.”

    Neutrinos respond to gravity and to what’s known as the weak subatomic force, like most matter. But they aren’t affected by the electromagnetic forces that send cosmic rays on a drunkard’s walk. Scientists realized that the intense showers of protons at the source of cosmic rays had to be hitting matter nearby, producing neutrinos that can be tracked back to their source.

    The shape of water

    But if the matter that makes up your instrument can’t interact with an incoming neutrino, how are you going to detect it? The answer lay in making the detector big.

    “The probability that a neutrino will interact with matter is extremely low, but not zero,” Merino explains. “If you want to see neutrinos, you need to build a huge detector so that they collide with matter at a reasonable rate.”

    Multimessenger astronomy combines information from different cosmic messengers—cosmic rays, neutrinos, gamma rays, and gravitational waves—to learn about the distant and extreme universe. Courtesy IceCube Collaboration.

    Enter the Antarctic ice sheet. The ice here is nearly pure water and could be used as a detector. From 2005 through 2010, a UW-led team created the IceCube Neutrino Observatory by drilling 86 holes deep into the ice and re-freezing strings of detectors in the holes. The finished instrument consists of 5,160 optical sensors suspended in a huge cube of ice six-tenths of a mile on each side.

    The IceCube scientists weren’t quite ready to detect cosmic-ray-associated neutrinos yet. While the IceCube observatory was nearly pure water, it wasn’t completely pure. As a natural formation, its transparency might differ a bit from spot to spot, which could affect detection.

    “Progress in understanding the precise optical properties of the ice leads to increasing complexity in simulating the propagation of photons in the instrument and to a better overall performance of the detector,” says Francis Halzen, a UW professor of physics and the lead scientist for the IceCube Neutrino Observatory.

    GPUs to the rescue

    The collaborators simulated the effects of neutrinos hitting the ice using traditional supercomputers containing standard central processing units (CPUs). They realized, though, that portions of their computations would instead work faster on graphics-processing units (GPUs), invented to improve video-game animation.

    “We realized that a part of the simulation is a very good match for GPUs,” Merino says. “These computations run 100 to 300 times faster on GPUs than on CPUs.”
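
    A rough sense of why photon propagation maps so well to GPUs: every simulated photon scatters independently, so millions of them can be stepped in parallel. The toy random walk below (plain NumPy, with invented scattering and absorption parameters, not IceCube’s production photon code) shows that data-parallel structure.

```python
# Toy, data-parallel photon random walk in ice. Every photon is independent,
# which is why this kind of workload maps so well onto GPUs. Parameters are
# invented for illustration; this is not IceCube's production photon code.
import numpy as np

rng = np.random.default_rng(42)

n_photons = 1_000_000
scatter_len = 25.0     # metres between scatters (made-up value)
absorb_prob = 0.02     # chance a photon is absorbed at each scatter (made-up)

positions = np.zeros((n_photons, 3))
alive = np.ones(n_photons, dtype=bool)

for step in range(50):
    # Draw isotropic scattering directions for all photons at once.
    directions = rng.normal(size=(n_photons, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)

    step_len = rng.exponential(scatter_len, size=(n_photons, 1))
    positions += alive[:, None] * directions * step_len

    # Absorb a random fraction of the surviving photons.
    alive &= rng.random(n_photons) > absorb_prob

print(f"{alive.sum()} of {n_photons} photons still propagating after 50 scatters; "
      f"mean distance from origin "
      f"{np.linalg.norm(positions[alive], axis=1).mean():.1f} m")
```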

    UW-Madison’s own GPU cluster and GPU systems on collaborators’ campuses helped, but they weren’t enough.


    Then Merino had a talk with XSEDE ECSS expert Sergiu Sanielevici from the Pittsburgh Supercomputing Center (PSC), lead of XSEDE’s Novel and Innovative Projects.

    Pittsburgh Supercomputing Center 3000 cores, 6 TFLOPS

    Sanielevici filled him in on the large GPU capability of XSEDE supercomputing systems. The IceCube team wound up using a number of XSEDE machines for GPU and CPU computations: Bridges at PSC, Comet at the San Diego Supercomputer Center (SDSC), XStream at Stanford University and the collection of clusters available through the Open Science Grid Consortium.

    Bridges at PSC

    SDSC Dell Comet supercomputer at San Diego Supercomputer Center (SDSC)

    Stanford U Cray Xstream supercomputer

    The IceCube scientists could not assume that their computer code would run well on the XSEDE systems. Their massive and complex flow of calculations could have slowed down considerably had it clashed with the new machines. ECSS expertise was critical to making the integration smooth.

    “XSEDE’s resources integrated seamlessly; that was very important for us,” Merino says. “XSEDE has been very collaborative, extremely open in facilitating that integration.”

    Paydirt

    Their detector built and simulated, the IceCube scientists had to wait for it to detect a cosmic neutrino. On Sept. 22, 2017, it happened. An automated system tuned to the signature of a cosmic-ray neutrino sent a message to the members of the IceCube Collaboration, an international team with more than 300 scientists in 12 countries.

    This was important. A single neutrino detection would not have been proof by itself. Scientists at observatories that detect other types of radiation expected from cosmic rays needed to look at the same spot in the sky.

    A blazar is a type of active galaxy with one of its jets pointing toward us. It emits both neutrinos and gamma rays that could be detected by the IceCube Neutrino Observatory as well as by other telescopes on Earth and in space. Courtesy IceCube/NASA.

    They found multiple types of radiation coming from the same spot in the sky as the neutrino. At this spot was a “blazar” called TXS 0506+056, about 4 billion light years from Earth. A type of active galactic nucleus (AGN), a blazar is a huge black hole sitting in the center of a distant galaxy, flaring as it eats the galaxy’s matter. Blazars are AGNs that happen to be pointed straight at us.

    The scientists think that the vast forces surrounding the black hole are likely the catapult that shot cosmic-ray particles on their way toward Earth. After a journey of 4 billion years across the vastness of space, one of the neutrinos created by those particles blazed a path through IceCube’s detector.

    The IceCube scientists went back over nine and a half years of detector data collected before they’d set up their automated warning. They found several earlier detections from TXS 0506+056, greatly raising their confidence.

    The findings led to two papers in the prestigious journal Science in July 2018. Future work will focus on confirming that blazars are the source—or at least a major source—of the high-energy particles that fill the Universe.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 5:57 pm on September 5, 2018
    Tags: Supercomputing

    From PPPL and ALCF: “Artificial intelligence project to help bring the power of the sun to Earth is picked for first U.S. exascale system” 


    From PPPL

    and

    Argonne Lab

    Argonne National Laboratory ALCF

    August 27, 2018
    John Greenwald

    Deep Learning Leader William Tang. (Photo by Elle Starkman/Office of Communications.)

    To capture and control the process of fusion that powers the sun and stars in facilities on Earth called tokamaks, scientists must confront disruptions that can halt the reactions and damage the doughnut-shaped devices.

    PPPL NSTX-U

    Now an artificial intelligence system under development at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University to predict and tame such disruptions has been selected as an Aurora Early Science project by the Argonne Leadership Computing Facility, a DOE Office of Science User Facility.

    Depiction of ANL ALCF Cray Shasta Aurora supercomputer

    The project, titled “Accelerated Deep Learning Discovery in Fusion Energy Science,” is one of 10 Early Science Projects on data science and machine learning for the Aurora supercomputer, which is set to become the first U.S. exascale system upon its expected arrival at Argonne in 2021. The system will be capable of performing a quintillion (10^18) calculations per second — 50-to-100 times faster than the most powerful supercomputers today.

    Fusion combines light elements

    Fusion combines light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei — in reactions that generate massive amounts of energy. Scientists aim to replicate the process for a virtually inexhaustible supply of power to generate electricity.

    The goal of the PPPL/Princeton University project is to develop a method that can be experimentally validated for predicting and controlling disruptions in burning plasma fusion systems such as ITER — the international tokamak under construction in France to demonstrate the practicality of fusion energy. “Burning plasma” refers to self-sustaining fusion reactions that will be essential for producing continuous fusion energy.

    Heading the project will be William Tang, a principal research physicist at PPPL and a lecturer with the rank and title of professor in the Department of Astrophysical Sciences at Princeton University. “Our research will utilize capabilities to accelerate progress that can only come from the deep learning form of artificial intelligence,” Tang said.

    Networks analogous to a brain

    Deep learning, unlike other types of computational approaches, can be trained to solve, with accuracy and speed, highly complex problems that require realistic image resolution. Associated software consists of multiple layers of interconnected neural networks that are analogous to simple neurons in a brain. Each node in a network identifies a basic aspect of data that is fed into the system and passes the results along to other nodes that identify increasingly complex aspects of the data. The process continues until the desired output is achieved in a timely way.

    The PPPL/Princeton deep-learning software is called the “Fusion Recurrent Neural Network (FRNN),” composed of convolutional and recurrent neural nets that allow a user to train a computer to detect items or events of interest. The software seeks to speedily predict when disruptions will break out in large-scale tokamak plasmas, and to do so in time for effective control methods to be deployed.

    The project has greatly benefited from access to the huge disruption-relevant database of the Joint European Torus (JET) in the United Kingdom, the largest and most powerful tokamak in the world today.

    Joint European Torus, at the Culham Centre for Fusion Energy in the United Kingdom

    The FRNN software has advanced from smaller computer clusters to supercomputing systems that can deal with such vast amounts of complex disruption-relevant data. Analysis of these data aims to identify key pre-disruption conditions, guided by insights from first-principles-based theoretical simulations, enabling the “supervised machine learning” capability of deep learning to produce accurate predictions with sufficient warning time.
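
    As a minimal sketch of the kind of architecture described above (invented layer sizes and random stand-in signals, not the actual FRNN implementation), a recurrent network can read a sequence of plasma diagnostics and emit a disruption-risk score at every time step:

```python
# Minimal sketch of a recurrent disruption predictor: it reads a time series of
# plasma diagnostic signals and outputs a disruption-risk score at every step.
# Shapes, layer sizes, and the random "signals" are invented for illustration;
# this is not the actual FRNN implementation.
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    def __init__(self, n_signals=14, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=n_signals, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # per-time-step risk score

    def forward(self, x):                  # x: (batch, time, n_signals)
        out, _ = self.rnn(x)
        return torch.sigmoid(self.head(out)).squeeze(-1)  # (batch, time)

model = DisruptionPredictor()
signals = torch.randn(8, 200, 14)          # 8 shots, 200 time steps, 14 diagnostics
risk = model(signals)                      # risk[i, t] lies in (0, 1)

# A control system would raise an alarm once the risk crosses a threshold,
# ideally early enough before the disruption for mitigation to act.
alarm_times = (risk > 0.9).float().argmax(dim=1)
print(risk.shape, alarm_times)
```

    In practice the alarm threshold and required lead time would be tuned against targets such as the ITER requirements described below.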

    Access to Tiger computer cluster

    The project has gained from access to Tiger, a high-performance Princeton University cluster equipped with advanced image-resolution GPUs that have enabled the deep learning software to advance to the Titan supercomputer at Oak Ridge National Laboratory and to powerful international systems such as the Tsubame 3.0 supercomputer in Tokyo, Japan.

    Tiger supercomputer at Princeton University

    ORNL Cray XK7 Titan Supercomputer

    Tsubame 3.0 supercomputer in Tokyo, Japan

    The overall goal is to achieve the challenging requirements for ITER, which will need predictions to be 95 percent accurate with less than 5 percent false alarms at least 30 milliseconds or longer before disruptions occur.


    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    The team will continue to build on advances that are currently supported by the DOE while preparing the FRNN software for Aurora exascale computing. The researchers will also move forward with related developments on the SUMMIT supercomputer at Oak Ridge.

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    Members of the team include Julian Kates-Harbeck, a graduate student at Harvard University and a DOE Office of Science Computational Science Graduate Fellow (CSGF) who is the chief architect of the FRNN. Researchers include Alexey Svyatkovskiy, a big-data and machine learning expert who will continue to collaborate after moving from Princeton University to Microsoft; Eliot Feibush, a big data analyst and computational scientist at PPPL and Princeton; and Kyle Felker, a CSGF member who will soon graduate from Princeton University and rejoin the FRNN team as a post-doctoral research fellow at Argonne National Laboratory.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition


    PPPL campus

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University. PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

     
  • richardmitnick 10:30 am on August 29, 2018
    Tags: Supercomputing, T.A.C.C.

    From Texas Advanced Computing Center: “New Texas supercomputer to push the frontiers of science” 


    From Texas Advanced Computing Center

    August 29, 2018
    Aaron Dubrow

    National Science Foundation awards $60 million to the Texas Advanced Computing Center to build nation’s fastest academic supercomputer.


    A new supercomputer, known as Frontera (Spanish for “frontier”), will begin operations in 2019 [That’s pretty fast]. It will allow the nation’s academic researchers to make important discoveries in all fields of science, from astrophysics to zoology, and further establishes The University of Texas at Austin’s leadership in advanced computing.

    The National Science Foundation (NSF) announced today that it has awarded $60 million to the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for the acquisition and deployment of a new supercomputer that will be the fastest at any U.S. university and among the most powerful in the world.

    The new system, known as Frontera (Spanish for “frontier”), will begin operations in 2019. It will allow the nation’s academic researchers to make important discoveries in all fields of science, from astrophysics to zoology, and further establishes The University of Texas at Austin’s leadership in advanced computing.

    Image from a global simulation of Earth’s mantle convection enabled by the NSF-funded Stampede supercomputer. The Frontera system will allow researchers to incorporate more observations into simulations, leading to new insights into the main drivers of plate motion. [Courtesy of ICES, UT Austin]

    “Supercomputers — like telescopes for astronomy or particle accelerators for physics — are essential research instruments that are needed to answer questions that can’t be explored in the lab or in the field,” said Dan Stanzione, TACC executive director. “Our previous systems have enabled major discoveries, from the confirmation of gravitational wave detections by the Laser Interferometer Gravitational-wave Observatory to the development of artificial-intelligence-enabled tumor detection systems. Frontera will help science and engineering advance even further.”

    “For over three decades, NSF has been a leader in providing the computing resources our nation’s researchers need to accelerate innovation,” said NSF Director France Córdova. “Keeping the U.S. at the forefront of advanced computing capabilities and providing researchers across the country access to those resources are key elements in maintaining our status as a global leader in research and education. This award is an investment in the entire U.S. research ecosystem that will enable leap-ahead discoveries.”

    Frontera is the latest in a string of successful awards and deployments by TACC with support from NSF. Since 2006, TACC has built and operated three supercomputers that debuted in the Top 10 most powerful systems in the world: Ranger (2008), Stampede1 (2012) and Stampede2 (2017). Three other systems debuted in the Top 25.

    If completed today, Frontera would be the fifth most powerful system in the world, the third fastest in the U.S. and the largest at any university. For comparison, Frontera will be about twice as powerful as Stampede2 (currently the fastest university supercomputer) and 70 times as fast as Ranger, which operated until 2013. To match what Frontera will compute in just one second, a person would have to perform one calculation every second for about a billion years.

    Industrial scale simulations of novel boiler designs (above) are needed to make them cleaner and more cost effective. Systems like Frontera will make it possible to use computation to evaluate new designs much more quickly before they are built. [Courtesy: the University of Utah, the University of California, Berkeley, and Brigham Young University]

    “Today’s NSF award solidifies the University of Texas’ reputation as the nation’s leader in academic supercomputing,” said Gregory L. Fenves, president of UT Austin. “UT is proud to serve the research community with the world-class capabilities of TACC, and we are excited to contribute to the many discoveries Frontera will enable.”

    Anticipated early projects on Frontera include analyses of particle collisions from the Large Hadron Collider, global climate modeling, improved hurricane forecasting and multi-messenger astronomy.

    The primary computing system will be provided by Dell EMC and powered by Intel processors. Data Direct Networks will contribute the primary storage system, and Mellanox will provide the high-performance interconnect for the machine. NVIDIA, GRC (Green Revolution Cooling) and the cloud providers Amazon, Google, and Microsoft will also have roles in the project.

    “The new Frontera system represents the next phase in the long-term relationship between TACC and Dell EMC, focused on applying the latest technical innovation to truly enable human potential,” said Thierry Pellegrino, vice president of Dell EMC High Performance Computing. “The substantial power and scale of this new system will help researchers from Austin and across the U.S. harness the power of technology to spawn new discoveries and advancements in science and technology for years to come.”

    “Accelerating scientific discovery lies at the foundation of the TACC’s mission, and enabling technologies to advance these discoveries and innovations is a key focus for Intel,” said Patricia Damkroger, Vice President in Intel’s Data Center Group and General Manager, Extreme Computing Group. “We are proud that the close partnership we have built with TACC will continue with TACC’s selection of next-generation Intel Xeon Scalable processors as the compute engine for their flagship Frontera system.”

    Faculty at the Institute for Computational Engineering and Sciences (ICES) at UT Austin will lead the world-class science applications and technology team, with partners from the California Institute of Technology, Cornell University, Princeton University, Stanford University, the University of Chicago, the University of Utah and the University of California, Davis.

    Experienced technologists and operations partners from the sites above as well as The Ohio State University, the Georgia Institute of Technology and Texas A&M University will ensure the system runs effectively in all areas, including security, user engagement and workforce development.

    “With its massive computing power, memory, bandwidth, and storage, Frontera will usher in a new era of computational science and engineering in which data and models are integrated seamlessly to yield new understanding that could not have been achieved with either alone,” said Omar Ghattas, director of the Center for Computational Geosciences in ICES and co-principal investigator on the award.

    Frontera’s name alludes to “Science, the Endless Frontier,” the title of a 1945 report to President Harry Truman by Vannevar Bush that led to the creation of the National Science Foundation.

    “NSF was born out of World War II and the idea that science, and scientists, had enabled our nation to win the war, and continued innovation would be required to ‘win the peace’,” said Stanzione. “Many of the frontiers of research today can be advanced only by computing, and Frontera will be an important tool to solve grand challenges that will improve our nation’s health, well-being, competitiveness and security.”

    Frontera will enter production in the summer of 2019 and will operate for five years. In addition to serving as a resource for the nation’s scientists and engineers, the award will support efforts to test and demonstrate the feasibility of an even larger future leadership-class system, 10 times as fast as Frontera, to potentially be deployed as Phase 2 of the project.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    The Texas Advanced Computing Center (TACC) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell PowerEdge Stampede supercomputer at The University of Texas at Austin, Texas Advanced Computing Center (9.6 PF)

    TACC HPE Apollo 8000 Hikari supercomputer


    TACC DELL EMC Stampede2 supercomputer


     
  • richardmitnick 10:16 am on August 16, 2018
    Tags: 3-D simulations of double-detonation Type Ia supernovas reveal dynamic burning, Supercomputing, Supernova research, Titan Helps Researchers Explore Explosive Star Scenarios

    From Oak Ridge Leadership Computing Facility: “Titan Helps Researchers Explore Explosive Star Scenarios – 3-D simulations of double-detonation Type Ia supernovas reveal dynamic burning”


    Oak Ridge National Laboratory

    From Oak Ridge Leadership Computing Facility

    8.16.18
    Jonathan Hines

    Exploding stars may seem like an unlikely yardstick for measuring the vast distances of space, but astronomers have been mapping the universe for decades using these stellar eruptions, called supernovas, with surprising accuracy.

    This is an artist’s impression of the SN 1987A remnant. The image is based on real data and reveals the cold, inner regions of the remnant, in red, where tremendous amounts of dust were detected and imaged by ALMA. This inner region is contrasted with the outer shell, lacy white and blue circles, where the blast wave from the supernova is colliding with the envelope of gas ejected from the star prior to its powerful detonation. Image credit: ALMA / ESO / NAOJ / NRAO / Alexandra Angelich, NRAO / AUI / NSF.

    ESO/NRAO/NAOJ ALMA Array in Chile in the Atacama at Chajnantor plateau, at 5,000 metres

    NRAO/Karl V Jansky VLA, on the Plains of San Agustin fifty miles west of Socorro, NM, USA, at an elevation of 6970 ft (2124 m)

    Type Ia supernovas—exploding white dwarf stars—are considered the most reliable distance markers for objects beyond our local group of galaxies. Because all Type Ia supernovas give off about the same amount of light, their distance can be inferred by the light intensity observed from Earth.

    A white dwarf fed by a normal star reaches the critical mass and explodes as a type Ia supernova. Credit: NASA/CXC/M Weiss

    These so-called standard candles are critical to astronomers’ efforts to map the cosmos. It’s been estimated that Type Ia supernovas can be used to calculate distances to within 10 percent accuracy, good enough to help scientists determine that the expansion of the universe is accelerating, a discovery that garnered the Nobel Prize in 2011.

    “Outflows” (red), regions where plumes of hot gas escape the intense nuclear burning at a star’s surface, form at the onset of convection in the helium shell of some white dwarf stars. This visualization depicts early convection on the surface of white dwarf stars of different masses. (Image credit: Adam Jacobs, Stony Brook University)

    But despite their reputation for uniformity, exploding white dwarfs contain subtle differences that scientists are working to explain using supercomputers.

    A team led by Michael Zingale of Stony Brook University is exploring the physics of Type Ia supernovas using the Titan supercomputer at the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory. Titan is the flagship machine of the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL. The team’s latest research focuses on a specific class of Type Ia supernovas known as double-detonation supernovas, a process by which a single star explodes twice.

    This year, the team completed a three-dimensional (3-D), high-resolution investigation of the thermonuclear burning a double-detonation white dwarf undergoes before explosion. The study expands upon the team’s initial 3-D simulation of this supernova scenario, which was carried out in 2013.

    “In 3-D simulations we can see the region of convective burning drill down deeper and deeper into the star under the right conditions,” said Adam Jacobs, a graduate student on Zingale’s team. “Higher mass and more burning force the convection to be more violent. These results will be useful in future studies that explore the subsequent explosion in three-dimensional detail.”

    By capturing the genesis of a Type Ia supernova, Zingale’s team is laying the foundation for the first physically realistic start-to-finish, double-detonation supernova simulation. Beyond capturing the incredible physics of an exploding star, the creation of a robust end-to-end model would help astronomers understand stellar phenomena observed in our night sky and improve the accuracy of cosmological measurements.

    These advances, in addition to helping us orient ourselves in the universe, could shed light on some of humanity’s biggest questions about how the universe formed, how we came to be, and where we’re going.

    An Explosive Pairing

    All Type Ia supernovas begin with a dying star gravitationally bound to a stellar companion. White dwarfs are the remnants of Sun-like stars that have spent most of their nuclear fuel. Composed mostly of carbon and oxygen, white dwarfs pack a mass comparable to that of the Sun in a star that’s about the size of the Earth.

    Left to its own devices, a lone white dwarf will smolder into darkness. But when a white dwarf is paired with a companion star, a cosmic dance ensues that’s destined for fireworks.

    To become a supernova, a white dwarf must collide with or siphon off the mass of its companion. The nature of the companion—perhaps a Sun-like star, a red giant star, or another white dwarf—and the properties of its orbit play a large role in determining the supernova trigger.

    In the classic setup, known as the single-degenerate scenario, a white dwarf takes on the maximum amount of mass it can handle—about 1.4 times the mass of the Sun, a constraint known as the Chandrasekhar limit. The additional mass increases pressure within the white dwarf’s core, reigniting nuclear fusion. Heat builds up within the star over time until it can no longer escape the star’s surface fast enough. A moving flame front of burning gases emerges, engulfing the star and causing its explosion.

    This model gave scientists a strong explanation for the uniformity of Type Ia supernovas, but further tests and observational data gathered by astronomers suggested there was more to the story.

    “To reach the Chandrasekhar limit, a white dwarf has to gain mass at just the right rate so that it grows without losing mass, for example by triggering an explosion,” Jacobs said. “It’s difficult for the classic model to explain all we know today. The community is more and more of the belief that there are going to be multiple progenitor systems that lead to a Type Ia system.”

    The double-detonation scenario, a current focus of Zingale’s team, is one such alternative. In this model, a white dwarf builds up helium on its surface. The helium can be acquired in multiple ways: stealing hydrogen from a Sun-like companion and burning it into helium, siphoning helium directly from a helium white dwarf, or attracting the helium-rich core remnant of a dying Sun-like star. The buildup of helium on the white dwarf’s surface can cause a detonation before reaching the Chandrasekhar limit. The force of this sub-Chandrasekhar detonation triggers a second detonation in the star’s carbon–oxygen core.

    “If you have a thick helium shell, the explosion doesn’t look like a normal Type Ia supernova,” Jacobs said. “But if the helium shell is very thin, you can get something that does.”

    To test this scenario, Zingale’s team simulated 18 different double-detonation models using the subsonic hydrodynamics code MAESTRO. The simulations were carried out under a 50-million core-hour allocation on Titan, a Cray XK7 with a peak performance of 27 petaflops (or 27 quadrillion calculations per second), awarded through the Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program. DOE’s Office of Nuclear Physics also supported the team’s work.

    By varying the mass of the helium shell and carbon–oxygen core in each model, MAESTRO calculated a range of thermonuclear dynamics that potentially could lead to detonation. Additionally, the team experimented with “hot” and “cold” core temperatures—about 10 million and 1 million degrees Celsius, respectively.
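
    The bookkeeping for such a survey is straightforward. The sketch below lays out an 18-model grid in the same spirit, with placeholder masses (the article does not give the team’s actual shell and core masses, so the numbers here are illustrative only):

        from itertools import product

        # Hypothetical parameter ranges for a double-detonation survey; only the
        # "hot"/"cold" temperatures (~1e7 and ~1e6 degrees) come from the article.
        core_masses_msun = [0.8, 0.9, 1.0]       # carbon-oxygen core mass (solar masses)
        shell_masses_msun = [0.01, 0.05, 0.10]   # helium shell mass (solar masses)
        core_temps = {"cold": 1.0e6, "hot": 1.0e7}

        models = [
            {"core_mass": mc, "shell_mass": ms, "core_temp": temp, "label": label}
            for (mc, ms), (label, temp) in product(
                product(core_masses_msun, shell_masses_msun), core_temps.items()
            )
        ]
        print(len(models), "models")  # 3 x 3 x 2 = 18, matching the size of the team's study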

    In three-dimensional detail, the team was able to capture the formation of “hot spots” on the sub-Chandrasekhar star’s surface, regions where the star cannot shed the heat of burning helium fast enough. The simulations indicated that this buildup could lead to a runaway reaction if the conditions are right, Jacobs said.

    “We know that all nuclear explosions depend on a star’s temperature and density. The question is whether the shell dynamics of the double-detonation model can yield the temperature and density needed for an explosion,” Jacobs said. “Our study suggests that it can.”

    Using the OLCF’s analysis cluster Rhea, Zingale’s team was able to visualize this relationship for the first time.

    Bigger and Better

    Before translating its findings to the next step of double detonation, called the ignition-to-detonation phase, Zingale’s team is upgrading MAESTRO to calculate more realistic physics, an outcome that will enhance the fidelity of its simulations. On Titan, this means equipping the CPU-only code to leverage GPUs, which are highly parallel, highly efficient processors that can take on heavy calculation loads.

    Working with the OLCF’s Oscar Hernandez, the team was able to offload one of MAESTRO’s most demanding tasks: tracking nucleosynthesis, the nucleus-merging, energy-releasing process inside stars. For the double-detonation problem, MAESTRO calculates a network of three elements—helium, carbon, and oxygen. By leveraging the GPUs, Zingale’s team could increase that number to around 10. Early efforts using the OpenACC compiler directives included in the PGI compiler indicated that a speedup of around 400 percent was attainable for this part of the code.
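
    The production code relies on OpenACC directives and the PGI compiler, but the shape of the per-cell work can be sketched in Python. The toy three-species (helium, carbon, oxygen) network below uses made-up rate coefficients and schematic molar abundances; it is not MAESTRO’s actual network, but it shows the kind of right-hand side that must be evaluated for every grid cell, which is why offloading it to GPUs pays off:

        from scipy.integrate import solve_ivp

        def network_rhs(t, y, k3a=1.0e-3, kac=5.0e-4):
            # Toy network in molar abundances: 3 He4 -> C12 (triple-alpha) and
            # C12 + He4 -> O16 (alpha capture). k3a and kac are made-up rate
            # coefficients, not physical reaction rates.
            y_he, y_c, y_o = y
            r_3a = k3a * y_he ** 3
            r_ac = kac * y_he * y_c
            return [-3.0 * r_3a - r_ac, r_3a - r_ac, r_ac]

        # Start from pure helium and integrate with a stiff solver, since real
        # reaction networks are stiff.
        sol = solve_ivp(network_rhs, (0.0, 1.0e3), [1.0, 0.0, 0.0], method="BDF")
        print("final abundances (He, C, O):", sol.y[:, -1])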

    The GPU effort benefits the team’s investigation of not only Type Ia supernovas but also other astrophysical phenomena. As part of its current INCITE proposal, Zingale’s team is exploring Type I x‑ray bursts, a recurring explosive event triggered by the buildup of hydrogen and helium on the surface of a neutron star, the densest and smallest type of star in the universe.

    “Right now our reaction network for x-ray bursts includes 11 nuclei. We want to go up to 40. That requires about a factor of 16 more computational power that only the GPUs can give us,” Zingale said.

    Maximizing the power of current-generation supercomputers will position codes like MAESTRO to better take advantage of the next generation of machines. Summit, the OLCF’s next GPU-equipped leadership system, is expected to deliver at least five times the performance of Titan.

    “Ultimately, we hope to understand how convection behaves in these stellar systems,” Zingale said. “Now we want to do bigger and better, and Titan is what we need to achieve that.”

    Related publications:

    A. Jacobs, M. Zingale, A. Nonaka, A. Almgren, and J. Bell, “Low Mach Number Modeling of Convection in Helium Shells on Sub-Chandrasekhar White Dwarfs II: Bulk Properties of Simple Models.” arXiv preprint: http://arxiv.org/abs/1507.06696.

    M. Zingale, C. Malone, A. Nonaka, A. Almgren, and J. Bell, “Comparisons of Two- and Three-Dimensional Convection in Type I X-ray Bursts.” The Astrophysical Journal 807, no. 1 (2015): 60–71, doi:10.1088/0004-637X/807/1/60.

    M. Zingale, A. Nonaka, A. Almgren, J. Bell, C. Malone, and R. Orvedahl, “Low Mach Number Modeling of Convection in Helium Shells on Sub-Chandrasekhar White Dwarfs. I. Methodology.” The Astrophysical Journal 764, no. 1 (2013): 97–110, doi:10.1088/0004-637X/764/1/97.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

    i2

    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid-architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to process an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 11:37 am on July 31, 2018 Permalink | Reply
    Tags: Cori at NERSC, , , , Supercomputing   

    From PPPL: “Newest supercomputer to help develop fusion energy in international device” 


    From PPPL

    July 25, 2018
    John Greenwald

    Scientists led by Stephen Jardin, principal research physicist and head of the Computational Plasma Physics Group at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), have won 40 million core hours of supercomputer time to simulate plasma disruptions that can halt fusion reactions and damage fusion facilities, so that scientists can learn how to stop them. The PPPL team will apply its findings to ITER, the international tokamak under construction in France to demonstrate the practicality of fusion energy. The results could help ITER operators mitigate the large-scale disruptions the facility inevitably will face.

    ITER Tokamak in Saint-Paul-lès-Durance, which is in southern France

    Receipt of the highly competitive 2018 ASCR Leadership Computing Challenge (ALCC) award entitles the physicists to simulate the disruption on Cori, the newest and most powerful supercomputer at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory.

    NERSC Cray Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    NERSC, a U.S. Department of Energy Office of Science user facility, is a world leader in accelerating scientific discovery through computation.

    Model the entire disruption

    “Our objective is to model development of the entire disruption from stability to instability to completion of the event,” said Jardin, who has led previous studies of plasma breakdowns. “Our software can now simulate the full sequence of an ITER disruption, which could not be done before.”

    Fusion, the power that drives the sun and stars, is the fusing of light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei — that generates massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.

    The award of 40 million core hours on Cori, a supercomputer named for Nobel Prize-winning biochemist Gerty Cori with hundreds of thousands of cores acting in parallel, will enable the physicists to complete in weeks what a single-core laptop computer would need thousands of years to accomplish. The high-performance computing machine will scale up simulations for ITER and perform other tasks that less powerful computers would be unable to complete.
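
    The “thousands of years” figure is simple arithmetic, shown below as a back-of-the-envelope check; the 100,000-core job size is a notional stand-in (the article says only that Cori has hundreds of thousands of cores), and perfect parallel efficiency is assumed:

        core_hours = 40e6
        hours_per_year = 24 * 365.25

        serial_years = core_hours / hours_per_year
        print(f"one core, running nonstop: ~{serial_years:,.0f} years")  # ~4,600 years

        # Spread across a notional 100,000-core job with perfect efficiency:
        parallel_days = core_hours / 100_000 / 24
        print(f"100,000 cores: ~{parallel_days:.0f} days of wall-clock time")  # ~17 days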

    On Cori the team will run the M3D-C1 code primarily developed by Jardin and PPPL physicist Nate Ferraro. The code, developed and upgraded over a decade, will evolve the disruption simulation forward in a realistic manner to produce quantitative results. PPPL now uses the code to perform similar studies for current fusion facilities for validation.

    The simulations will also cover strategies for the mitigation of ITER disruptions, which could develop from start to finish within roughly a tenth of a second. Such strategies require a firm understanding of the physics behind mitigations, which the PPPL team aims to create. Together with Jardin and Ferraro on the team are physicist Isabel Krebs and computational scientist Jin Chen.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition


    PPPL campus

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University. PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

     
  • richardmitnick 1:57 pm on July 7, 2018 Permalink | Reply
    Tags: , , , , , Supercomputing   

    From MIT News: “Project to elucidate the structure of atomic nuclei at the femtoscale” 

    MIT News
    MIT Widget

    From MIT News

    July 6, 2018
    Scott Morley | Laboratory for Nuclear Science

    1
    The image is an artist’s visualization of a nucleus as studied in numerical simulations, created using DeepArt neural network visualization software. Image courtesy of the Laboratory for Nuclear Science.

    Laboratory for Nuclear Science project selected to explore machine learning for lattice quantum chromodynamics.

    The Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, has selected 10 data science and machine learning projects for its Aurora Early Science Program (ESP). Set to be the nation’s first exascale system upon its expected 2021 arrival, Aurora will be capable of performing a quintillion calculations per second, making it 10 times more powerful than the fastest computer that currently exists.

    Depiction of ANL ALCF Cray Shasta Aurora supercomputer

    The Aurora ESP, which commenced with 10 simulation-based projects in 2017, is designed to prepare key applications, libraries, and infrastructure for the architecture and scale of the exascale supercomputer. Researchers in the Laboratory for Nuclear Science’s Center for Theoretical Physics have been awarded funding for one of the projects under the ESP. Associate professor of physics William Detmold, assistant professor of physics Phiala Shanahan, and principal research scientist Andrew Pochinsky will use new techniques developed by the group, coupling novel machine learning approaches and state-of-the-art nuclear physics tools, to study the structure of nuclei.

    Shanahan, who began as an assistant professor at MIT this month, says that the support and early access to frontier computing that the award provides will allow the group, for the first time, to study the possible interactions of dark matter particles with nuclei from our fundamental understanding of particle physics. That work will provide critical input for experimental searches aiming to unravel the mysteries of dark matter while also giving insight into fundamental particle physics.

    “Machine learning coupled with the exascale computational power of Aurora will enable spectacular advances in many areas of science,” Detmold adds. “Combining machine learning with lattice quantum chromodynamics calculations of the strong interactions between the fundamental particles that make up protons and nuclei, our project will enable a new level of understanding of the femtoscale world.”

    See the full article here.


    Please help promote STEM in your local schools.


    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 1:35 pm on July 5, 2018 Permalink | Reply
    Tags: , Some of our biggies, Supercomputing, University at Buffalo’s Center for Computational Research, XD Metrics on Demand (XDMoD) tool from U Buffalo   

    From Science Node: “Getting the most out of your supercomputer” 

    Science Node bloc
    From Science Node

    02 Jul, 2018
    Kevin Jackson

    1
    No image caption or credit.

    As the name implies, supercomputers are pretty special machines. Researchers from every field seek out their high-performance capabilities, but time spent using such a device is expensive. As recently as 2015, it took the same amount of energy to run Tianhe-2, the world’s second-fastest supercomputer [now #4], for a year as it did to power a 13,501-person town in Mississippi.

    China’s Tianhe-2 Kylin Linux TH-IVB-FEP supercomputer at National Supercomputer Center, Guangzhou, China

    And that’s not to mention the initial costs associated with purchase, as well as salaries for staff to help run and support the machine. Supercomputers are kept incredibly busy by their users, often oversubscribed, with thousands of jobs in the queue waiting for others to finish.

    With computing time so valuable, managers of supercomputing centers are always looking for ways to improve performance and speed throughput for users. This is where Tom Furlani and his team at the University at Buffalo’s Center for Computational Research come in.

    Thanks to a grant from the National Science Foundation (NSF) in 2010, Furlani and his colleagues have developed the XD Metrics on Demand (XDMoD) tool to help organizations improve production on their supercomputers and better understand how those machines are being used to enable science and engineering.

    “XDMoD is an incredibly useful tool that allows us not only to monitor and report on the resources we allocate, but also provides new insight into the behaviors of our researcher community,” says John Towns, PI and Project Director for the Extreme Science and Engineering Discovery Environment (XSEDE).

    Canary in the coal mine

    Modern supercomputers are complex combinations of compute servers, high-speed networks, and high-performance storage systems. Each of these areas is a potential point of underperformance or even outright failure. Add system software and the complexity only increases.

    With so much that can go wrong, a tool that can identify problems or poor performance as well as monitor overall usage is vital. XDMoD aims to fulfill that role by performing three functions:

    1. Job accounting – XDMoD provides metrics about utilization, including who is using the system and how much, what types of jobs are running, how long jobs wait in the queue, and more.

    2. Quality of service – The complex mechanisms behind HPC often mean that managers and support personnel don’t always know if everything is working correctly—or they lack the means to ensure that it is. All too often this results in users serving as “canaries in the coal mine” who identify and alert admins only after they’ve discovered an issue.

    To solve this, XDMoD launches application kernels daily to establish baseline performance for the cluster in question. If these kernels show that something that should take 30 seconds is now taking 120, support personnel know they need to investigate. XDMoD’s monitoring of the Meltdown and Spectre patches is a perfect example—the application kernels allowed system personnel to quantify the effects of the patches put in place to mitigate the chip vulnerabilities. (A minimal sketch of this kind of baseline check appears after this list.)

    3. Job-level performance – Much like job accounting, job-level performance zeroes in on usage metrics. However, this task focuses more on how well users’ codes are performing. XDMoD can measure the performance of every single job, helping users to improve the efficiency of their job or even figure out why it failed.
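
    Here is the sketch of the daily baseline comparison described in item 2, with hypothetical kernel names and an arbitrary 25 percent slowdown threshold (XDMoD’s real application kernels and alerting are far more sophisticated):

        def check_application_kernel(name, runtime_s, baseline_s, tolerance=0.25):
            # Flag a kernel run that is slower than its baseline by more than
            # `tolerance`. Illustrative only; this is not XDMoD's actual API.
            slowdown = runtime_s / baseline_s - 1.0
            if slowdown > tolerance:
                print(f"ALERT: {name} ran {slowdown:+.0%} vs baseline "
                      f"({runtime_s:.0f} s vs {baseline_s:.0f} s) -- investigate")
            else:
                print(f"ok: {name} within {tolerance:.0%} of baseline")

        # The 30-second job that suddenly takes 120 seconds trips the alert:
        check_application_kernel("io_benchmark", runtime_s=120, baseline_s=30)
        check_application_kernel("mpi_latency", runtime_s=31, baseline_s=30)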

    Furlani also expects that XDMoD will soon include a module to help quantify the return on investment (ROI) for these expensive systems by tying use of the supercomputer to its users’ external research funding.

    Thanks to its open-source code, XDMoD’s reach extends to commercial, governmental, and academic supercomputing centers worldwide, including centers in England, Spain, Belgium, Germany, and many other countries.

    Future features

    In 2015, the NSF awarded the University at Buffalo a follow-on grant to continue work on XDMoD. Among other improvements, the project will add cloud computing metrics. Cloud use is growing all the time, and jobs run in the cloud look very different in terms of the metrics they generate.

    2
    Who’s that user? XDMoD’s customizable reports help organizations better understand how their computing resources are being used to enable science and engineering. This graph depicts the allocation of resources delivered by supporting funding agency. Courtesy University at Buffalo.

    For the average HPC job, Furlani explains that the process starts with a researcher requesting resources, such as how many processors and how much memory they need. But in the cloud, a virtual machine may stop running and then start again. What’s more, a cloud-based supercomputer can increase and decrease cores and memory. This makes tracking performance more challenging.

    “Cloud computing has a beginning, but it doesn’t necessarily have a specific end,” Furlani says. “We have to restructure XDMoD’s entire backend data warehouse to accommodate that.”
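
    One simplified way to picture the accounting challenge (not XDMoD’s actual data model): a cloud job becomes a series of usage intervals whose core counts can change, with gaps where the virtual machine was stopped, and usage has to be aggregated over those intervals rather than read from a single fixed allocation record:

        from dataclasses import dataclass

        @dataclass
        class UsageInterval:
            start_h: float   # hours since job submission
            end_h: float
            cores: int

        # A hypothetical elastic cloud job: the VM runs, stops, restarts, and scales up.
        cloud_job = [
            UsageInterval(0.0, 4.0, cores=8),
            # VM stopped between hour 4 and hour 10
            UsageInterval(10.0, 16.0, cores=8),
            UsageInterval(16.0, 20.0, cores=32),  # scaled up to 32 cores
        ]

        core_hours = sum((iv.end_h - iv.start_h) * iv.cores for iv in cloud_job)
        print(f"core-hours consumed: {core_hours:.0f}")  # 4*8 + 6*8 + 4*32 = 208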

    Regardless of where XDMoD goes next, tools like this will continue to shape and redefine what supercomputers can accomplish.

    Some of our biggies:

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    No. 1 in the world.

    LLNL SIERRA IBM supercomputer

    No. 3 in the world

    ORNL Cray XK7 Titan Supercomputer

    No. 7 in the world

    NERSC Cray Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

    No. 10 in the world

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     