Tagged: ASCR Discovery

  • richardmitnick 1:38 pm on December 5, 2018 Permalink | Reply
    Tags: ASCR Discovery, Astronomical magnetism, Mira an IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility a Department of Energy user facility, NASA’s Pleiades supercomputer, Nick Featherstone - University of Colorado Boulder

    From ASCR Discovery: “Astronomical magnetism” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    Modeling solar and planetary magnetic fields is a big job that requires a big code.

    Convection models of the sun, with increasing amounts of rotation from left to right. Warm flows (red) rise to the surface while others cool (blue). These simulations are the most comprehensive high-resolution models of solar convection so far. See video here.

    Image courtesy of Nick Featherstone, University of Colorado Boulder.

    It’s easy to take the Earth’s magnetic field for granted. It’s always on the job, shielding our life-giving atmosphere from the corrosive effects of unending solar radiation. Its constant presence also gives animals – and us – clues to find our way around.

    This vital force has protected the planet since long before humans evolved, yet its source – a giant generator of heat-radiating, electricity-conducting liquid iron swirling in the core as the planet rotates – still holds mysteries. Understanding the vast and complex turbulent features of Earth’s dynamo – and that of other planets and celestial bodies – has challenged physicists for decades.

    “You can always do the problem you want to, but just a little bit,” says Nick Featherstone, research associate at the University of Colorado Boulder. Thanks to his efforts, however, researchers now have a computer code that lets them come closer than ever to simulating these features in detail across a whole planet or star. The program, known as Rayleigh, is open-source and available to anyone.
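
    Codes of this kind typically expand the flow and temperature fields in a spherical shell into spherical-harmonic modes and evolve those modes in time. The short Python sketch below is purely illustrative – it is not Rayleigh’s interface, and the test field and resolution are made up – but it shows the kind of spherical-harmonic projection such pseudo-spectral codes rely on.

        # Illustrative only: project a field on a sphere onto spherical-harmonic modes,
        # the kind of spectral representation pseudo-spectral convection codes use.
        # This is NOT Rayleigh's API; the names and the test field are made up.
        import numpy as np
        from scipy.special import sph_harm

        # Grid in SciPy's convention: theta = azimuth [0, 2pi], phi = colatitude [0, pi]
        n_phi, n_theta = 180, 360
        phi = np.linspace(0.0, np.pi, n_phi)          # colatitude
        theta = np.linspace(0.0, 2 * np.pi, n_theta)  # azimuth
        PHI, THETA = np.meshgrid(phi, theta, indexing="ij")

        # A made-up "temperature" pattern: a banded structure plus one sectoral wave
        field = np.cos(3 * PHI) + 0.5 * np.real(sph_harm(4, 4, THETA, PHI))

        # Quadrature weights for dOmega = sin(colatitude) d(colatitude) d(azimuth)
        dA = np.sin(PHI) * (np.pi / (n_phi - 1)) * (2 * np.pi / (n_theta - 1))

        def coefficient(l, m):
            """<Y_lm | field>: amplitude of mode (l, m) in the field."""
            Ylm = sph_harm(m, l, THETA, PHI)
            return np.sum(np.conj(Ylm) * field * dA)

        for l in range(5):
            for m in range(l + 1):
                c = coefficient(l, m)
                if abs(c) > 1e-3:
                    print(f"l={l} m={m} amplitude={abs(c):.3f}")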

    To demonstrate the power of Rayleigh’s algorithms, a research team has simulated the dynamics of the sun, Jupiter and Earth in unprecedented detail. The project has been supported with a Department of Energy Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program allocation of 260 million processor hours on Mira, an IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility, a Department of Energy user facility.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Earth’s liquid metal core produces a complex combination of outward (red) and inward (blue) flows in this dynamo simulation. Image courtesy of Rakesh Yadav, Harvard University.

    This big code stemmed from Featherstone’s research in solar physics. Previously scientists had used computation to model solar features on as many as a few hundred processor cores simultaneously, or in parallel. But Featherstone wanted to tackle larger problems that were intractable using available technology. “I spent a lot of time actually looking at the parallel algorithms that were used in that code and seeing where I could change things,” he says.

    When University of California, Los Angeles geophysicist Jonathan Aurnou saw Featherstone present his work at a conference in 2012, he was immediately impressed. “Nick has built this huge, huge capability,” says Aurnou, who leads the Geodynamo Working Group in the Computational Infrastructure for Geodynamics (CIG) based at the University of California, Davis. Though stars and planets can behave very differently, the dynamo in these bodies can be modeled with adjustments to the same fundamental algorithms.

    Aurnou soon recruited Featherstone to develop a community code – one researchers could share and improve – based on his earlier algorithms. The team initially performed simulations on up to 10,000 cores of NASA’s Pleiades supercomputer.

    NASA SGI Intel Advanced Supercomputing Center Pleiades Supercomputer

    But the scientists wanted to go bigger. Previous codes are like claw hammers, but “this code – it’s a 30-pound sledge,” Aurnou says. “That changes what you can swing at.”

    In 2014 Aurnou, Featherstone and their colleagues proposed three big INCITE projects focusing on three bodies in our solar system: the sun, a star; Jupiter, a gas giant planet; and Earth, a rocky planet. Mira’s 786,000 processor cores let the team scale up their calculations by a factor of 100, Featherstone says. Adds Aurnou, “You can think of Mira as a place to let codes run wild, a safari park for big codes.”

    The group focused on one problem each year, starting with Featherstone’s specialty: the sun. In its core, hydrogen atoms fuse to form helium, releasing high-energy photons that bounce around a dense core for thousands of years. They eventually diffuse to an outer convecting layer, where they warm plasma pockets, causing them to rise to the surface. Finally, the energy reaches the surface, the photosphere, where it can escape, reaching Earth as light within minutes. Like planets, the sun rotates, producing chaotic forces and its own magnetic poles that reverse every 11 years. The processes that cause this magnetic reversal remain largely unknown.

    Featherstone broke down this complex mixture of activity into components across the whole star. “What I’ve been able to do with the INCITE program is to start modeling convection in the sun both with and without rotation turned on and at very, very high resolution,” Featherstone says. The researchers plan to incorporate magnetism into the models next.

    The team then moved on to Jupiter, aiming to predict and model the results of NASA’s Juno probe, which orbits that planet. In Jupiter’s core – the innermost 95 percent – hydrogen is compressed so tightly that the electrons pop off. The mass behaves like a metal ball, Aurnou says. Its core also releases heat in an amount equal to what the planet receives from the sun. All that convective turbulence also rotates, creating a potent planetary magnetic field, he says.

    Until recent results from Juno, scientists didn’t know that surface jets on Jupiter extend deep – thousands of kilometers – into the planet. Juno’s images reveal clusters of geometric turbulence – pentagons, octagons and more – grouped around the Jovian poles.

    A model of interacting vortices simulating turbulent jets that resemble those observed on Jupiter. Yellow features are rotating counterclockwise, while blue features rotate clockwise. Image courtesy of Moritz Heimpel, University of Alberta.

    Even before the Juno results were published in March, the CIG team had simulated deep jets and their interactions with Jupiter’s surface and magnetic core. The team is well-poised to help physicists better understand these unusual stormy features, Aurnou adds. “We’re going to be using our big simulations and the analysis that we’re now carrying out to try to understand the Juno data.”

    In its third year the team modeled the behavior of Earth’s magnetic field, a system where they had far more data from observations. Nonetheless, our home still harbors geophysical puzzles. Earth has an outer core of molten iron and a hard rocky crust that contains it. The magnetic poles drift – and can even flip – but the process takes a few hundred thousand years and doesn’t occur on a regular schedule. “Earth’s magnetic field is complex – messy – both in time and space,” Aurnou says. “That mess is where all the fun is.”

    Turbulence is difficult to simulate because it includes the cumulative effects of minuscule changes coupled with processes that are occurring over large parts of a planet.

    “[In our Earth model] we’ve made, in a sense, as messy a dynamo simulation as possible,” Aurnou says. Previous researchers modeling Earth have argued that tweaks to physics were needed to explain features such as the constant magnetic-pole shifts. “We’ve actually found with our Mira runs, that, no, we don’t need any extra ingredients. We just need turbulence.”

    With these results, the team hopes to pare down simulations to incorporate the simplest set of inputs needed to understand our complex terrestrial system.

    The INCITE project results are fueling new research opportunities already. Based on the team’s solar findings, in 2017 Featherstone received a $1 million grant from NASA’s Heliophysics Grand Challenge program, which supports research into solar physics problems that require both theory and observation.

    The project shows how federal funding can dovetail to help important science reach its potential, Aurnou says. CIG originally hired Featherstone using National Science Foundation funds, which led to the INCITE grant, followed by this NASA project, which will model even more of the sun’s fundamental physics. That information could help protect astronauts from solar radiation and shield our electrical grids from damage and outages during periods of high solar activity.

    Eventually the team would like to model the reversal of magnetic poles on Earth, which requires accounting for daily rotation over hundreds of thousands of years. “That’s going to cost us,” Aurnou says. “We need to get a more efficient code for that and faster computers.”
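
    A rough sense of why that is expensive, using illustrative numbers rather than the team’s actual resolution requirements:

        # Back-of-the-envelope cost of spanning a polarity reversal while resolving
        # daily rotation. All numbers are illustrative assumptions, not the team's.
        years_to_span = 200_000             # assume a reversal unfolds over ~2e5 years
        rotations = years_to_span * 365.25  # one rotation per day
        steps_per_rotation = 1_000          # assumed time steps needed to resolve a day
        total_steps = rotations * steps_per_rotation
        print(f"{rotations:.2e} rotations -> {total_steps:.2e} time steps")
        # ~7e7 rotations and ~7e10 time steps: hence the need for a more efficient
        # code and faster computers.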

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of the U.S. Department of Energy.

     
  • richardmitnick 5:58 pm on October 17, 2018 Permalink | Reply
    Tags: ASCR Discovery, Quantum Monte Carlo (QMC) modeling, Quantum predictions

    From ASCR Discovery: “Quantum predictions” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    Mechanical strain, pressure or temperature changes or adding chemical doping agents can prompt an abrupt switch from insulator to conductor in materials such as nickel oxide (pictured here). Nickel ions (blue) and oxygen ions (red) surround a dopant ion of potassium (yellow). Quantum Monte Carlo methods can accurately predict regions where charge density (purple) will accumulate in these materials. Image courtesy of Anouar Benali, Argonne National Laboratory.

    Solving a complex problem quickly requires careful tradeoffs – and simulating the behavior of materials is no exception. To get answers that predict molecular workings feasibly, scientists must swap in mathematical approximations that speed computation at accuracy’s expense.

    But magnetism, electrical conductivity and other properties can be quite delicate, says Paul R.C. Kent of the Department of Energy’s (DOE’s) Oak Ridge National Laboratory. These properties depend on quantum mechanics, the movements and interactions of myriad electrons and atoms that form materials and determine their properties. Researchers who study such features must model large groups of atoms and molecules rather than just a few. This problem’s complexity demands boosting computational tools’ efficiency and accuracy.

    That’s where a method called quantum Monte Carlo (QMC) modeling comes in. Many other techniques approximate electrons’ behavior as an overall average, for example, rather than considering them individually. QMC, by contrast, accounts for the individual behavior of all of the electrons without major approximations, reducing systematic errors in simulations and producing reliable results, Kent says.
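
    To see what “Monte Carlo” means here, the toy below runs a textbook variational quantum Monte Carlo calculation for a single particle in a one-dimensional harmonic well – vastly simpler than QMCPACK’s many-electron machinery, with every parameter chosen only for illustration. It samples the square of a trial wavefunction with the Metropolis algorithm and averages the local energy.

        # Toy variational quantum Monte Carlo (VMC) for a 1-D harmonic oscillator,
        # H = -0.5 d^2/dx^2 + 0.5 x^2. Illustrative only; real QMC codes such as
        # QMCPACK treat thousands of interacting electrons in three dimensions.
        import numpy as np

        rng = np.random.default_rng(42)

        def local_energy(x, a):
            # Trial wavefunction psi = exp(-a x^2 / 2) gives
            # E_L(x) = a/2 + 0.5 * (1 - a^2) * x^2
            return 0.5 * a + 0.5 * (1.0 - a * a) * x * x

        def vmc_energy(a, n_steps=200_000, step=1.0):
            """Metropolis sampling of |psi|^2; returns the average local energy."""
            x, energies = 0.0, []
            for _ in range(n_steps):
                x_new = x + step * rng.uniform(-1.0, 1.0)
                # Acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
                if rng.random() < np.exp(-a * (x_new**2 - x**2)):
                    x = x_new
                energies.append(local_energy(x, a))
            # Naive error bar (ignores autocorrelation) -- fine for a sketch
            return np.mean(energies), np.std(energies) / np.sqrt(len(energies))

        for a in (0.6, 1.0, 1.4):   # a = 1 is the exact ground state (E = 0.5)
            e, err = vmc_energy(a)
            print(f"a={a:.1f}  E={e:.4f} +/- {err:.4f}")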

    Kent’s interest in QMC dates back to his Ph.D. research at Cambridge University in the 1990s. At ORNL, he recently returned to the method because advances in both supercomputer hardware and algorithms had allowed researchers to improve its accuracy.

    “We can do new materials and a wider fraction of elements across the periodic table,” Kent says. “More importantly, we can start to do some of the materials and properties where the more approximate methods that we use day to day are just unreliable.”

    Even with these advances, simulations of these types of materials – ones that include up to a few hundred atoms and thousands of electrons – require computational heavy lifting. Kent leads a DOE Basic Energy Sciences center, the Center for Predictive Simulations of Functional Materials (CPSFM), which includes researchers from ORNL, Argonne National Laboratory, Sandia National Laboratories, Lawrence Livermore National Laboratory, the University of California, Berkeley and North Carolina State University.

    Their work is supported by a DOE Innovative and Novel Computational Impact on Theory and Experiments (INCITE) allocation of 140 million processor hours, split between Oak Ridge Leadership Computing Facility’s Titan and Argonne Leadership Computing Facility’s Mira supercomputers. Both computing centers are DOE Office of Science user facilities.

    ORNL Cray Titan XK7 Supercomputer

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    To take QMC to the next level, Kent and colleagues start with materials such as vanadium dioxide that display unusual electronic behavior. At cooler temperatures, this material insulates against the flow of electricity. But at just above room temperature, vanadium dioxide abruptly changes its structure and behavior.

    Suddenly this material becomes metallic and conducts electricity efficiently. Scientists still don’t understand exactly how and why this occurs. Factors such as mechanical strain, pressure or doping the materials with other elements also induce this rapid transition from insulator to conductor.

    However, if scientists and engineers could control this behavior, these materials could be used as switches, sensors or, possibly, the basis for new electronic devices. “This big change in conductivity of a material is the type of thing we’d like to be able to predict reliably,” Kent says.

    Laboratory researchers also are studying these insulator-to-conductor transitions experimentally. That validation effort lends confidence to the predictive power of their computational methods across a range of materials. The team has built open-source software, known as QMCPACK, which is now available online and on all of the DOE Office of Science computational facilities.

    Kent and his colleagues hope to build up to high-temperature superconductors and other complex and mysterious materials. Although scientists know these materials’ broad properties, Kent says, “we can’t relate those to the actual structure and the elements in the materials yet. So that’s a really grand challenge for the condensed-matter physics field.”

    The most accurate quantum mechanical modeling methods restrict scientists to examining just a few atoms or molecules. When scientists want to study larger systems, the computation costs rapidly become unwieldy. QMC offers a compromise: a calculation’s size increases cubically relative to the number of electrons, a more manageable challenge. QMC incorporates only a few controlled approximations and can be applied to the numerous atoms and electrons needed. It’s well suited for today’s petascale supercomputers – capable of one quadrillion calculations or more each second – and tomorrow’s exascale supercomputers, which will be at least a thousand times faster. The method maps simulation elements relatively easily onto the compute nodes in these systems.
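
    A quick way to see what cubic scaling buys, with purely illustrative numbers:

        # Relative cost of growing a QMC calculation, assuming the cubic scaling
        # described above (the baseline and counts are illustrative, not measured).
        baseline_electrons = 1_000
        for electrons in (1_000, 2_000, 4_000):
            relative_cost = (electrons / baseline_electrons) ** 3
            print(f"{electrons} electrons -> ~{relative_cost:.0f}x the baseline cost")
        # Doubling the electron count costs ~8x; quadrupling costs ~64x -- steep,
        # but far gentler than the growth of the most exact alternatives.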

    The CPSFM team continues to optimize QMCPACK for ever-faster supercomputers, including OLCF’s Summit, which will be fully operational in January 2019.

    ORNL IBM AC922 SUMMIT supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

    The higher memory capacity on that machine’s Nvidia Volta GPUs – 16 gigabytes per graphics processing unit compared with 6 gigabytes on Titan – already boosts computation speed. With the help of OLCF’s Ed D’Azevedo and Andreas Tillack, the researchers have implemented improved algorithms that can double the speed of their larger calculations.

    QMCPACK is part of DOE’s Exascale Computing Project, and the team is already anticipating additional scaling challenges for running QMCPACK on future machines. To perform the desired simulations within roughly 12 hours on an exascale supercomputer, Kent estimates that they’ll need algorithms that are 30 times more scalable than those within the current version.

    Depiction of ANL ALCF Cray Shasta Aurora exascale supercomputer

    Even with improved hardware and algorithms, QMC calculations will always be expensive. So Kent and his team would like to use QMCPACK to understand where cheaper methods go wrong so that they can improve them. Then they can save QMC calculations for the most challenging problems in materials science, Kent says. “Ideally we will learn what’s causing these materials to be very tricky to model and then improve cheaper approaches so that we can do much wider scans of different materials.”

    The combination of improved QMC methods and a suite of computationally cheaper modeling approaches could lead the way to new materials and an understanding of their properties. Designing and testing new compounds in the laboratory is expensive, Kent says. Scientists could save valuable time and resources if they could first predict the behavior of novel materials in a simulation.

    Plus, he notes, reliable computational methods could help scientists understand properties and processes that depend on individual atoms that are extremely difficult to observe using experiments. “That’s a place where there’s a lot of interest in going after the fundamental science, predicting new materials and enabling technological applications.”

    Oak Ridge National Laboratory is supported by the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of the U.S. Department of Energy.

     
  • richardmitnick 11:57 am on August 22, 2018 Permalink | Reply
    Tags: ASCR Discovery, Fine-tuning physics

    From ASCR Discovery and Argonne National Lab: “Fine-tuning physics” 

    From ASCR Discovery
    ASCR – Advancing Science Through Computing

    August 2018

    Argonne applies supercomputing heft to boost precision in particle predictions.

    A depiction of a scattering event at the Large Hadron Collider. Image courtesy of Argonne National Laboratory.

    Advancing science at the smallest scales calls for vast data from the world’s most powerful particle accelerator, leavened with the precise theoretical predictions made possible through many hours of supercomputer processing.

    The combination has worked before, when scientists from the Department of Energy’s Argonne National Laboratory provided timely predictions about the Higgs particle at the Large Hadron Collider in Switzerland. Their predictions contributed to the 2012 discovery of the Higgs, the subatomic particle tied to the mechanism that gives elementary particles their mass.


    CERN CMS Higgs Event


    CERN ATLAS Higgs Event

    LHC

    CERN map


    CERN LHC Tunnel

    CERN LHC particles

    “That we are able to predict so precisely what happens around us in nature is a remarkable achievement,” Argonne physicist Radja Boughezal says. “To put all these pieces together to get a number that agrees with the measurement that was made with something so complicated as the LHC is always exciting.”

    Earlier this year, she was allocated more than 98 million processor hours on the Mira and Theta supercomputers at the Argonne Leadership Computing Facility, a DOE Office of Science user facility, through DOE’s INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    ANL ALCF Theta Cray XC40 supercomputer

    Her previous INCITE allocation helped solve problems that scientists saw as insurmountable just two or three years ago.

    These problems stem from the increasingly intricate and precise measurements and theoretical calculations associated with scrutinizing the Higgs boson and from searches for subtle deviations from the standard model that underpins the behavior of matter and energy.

    The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.


    Standard Model of Particle Physics from Symmetry Magazine

    The approach she and her associates developed led to early, high-precision LHC predictions that describe so-called strong-force interactions between quarks and gluons, the constituents of subatomic particles such as protons and neutrons.

    The theory governing strong-force interactions is called QCD, for quantum chromodynamics. In QCD, the quantity that sets the strength of the strong force is called the strong coupling constant.

    “At high energies, when collisions happen, quarks and gluons are very close to each other, so the strong force is very weak. It’s almost turned off,” Boughezal explains. Because the coupling is small at those energies, physicists can expand their predictions in powers of it – a method called perturbative expansion – and use the successive terms as a yardstick for their calculations. Perturbative expansion is “a method we have used over and over to get these predictions, and it has provided powerful tests of QCD to date.”
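
    Schematically, an observable such as a cross section is written as a power series in the strong coupling evaluated at the collision energy; this generic form (background notation, not a specific result from the article) is the yardstick Boughezal describes:

        % Generic perturbative QCD expansion of an observable in the strong coupling
        \sigma(Q) \;=\; \sigma_{0}\left[\,1 + c_{1}\,\alpha_{s}(Q) + c_{2}\,\alpha_{s}^{2}(Q) + c_{3}\,\alpha_{s}^{3}(Q) + \cdots\right],
        \qquad \alpha_{s}(Q) \ll 1 \ \text{at high energy } Q .

    Each extra power of the coupling (next-to-leading order, next-to-next-to-leading order and so on) is far costlier to compute but shrinks the theoretical uncertainty – which is where the supercomputer hours go.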

    Crucial to these tests is the N-jettiness framework Boughezal and her Argonne and Northwestern University collaborators devised to obtain high-precision predictions for particle scattering processes. Specially adapted for high-performance computing systems, the framework’s novelty stems from its incorporation of existing low-precision numerical codes to achieve part of the desired result. The scientists fill in algorithmic gaps with simple analytic calculations.
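
    For reference, the event-shape variable the framework is named for is commonly written as follows in the literature (a schematic definition added here for context, not a detail quoted from the article):

        % N-jettiness: small when an event contains exactly N pencil-like jets plus the beams
        \tau_{N} \;=\; \sum_{k} \min_{i}\left\{ \frac{2\, q_{i} \cdot p_{k}}{Q_{i}} \right\} ,

    where the sum runs over the final-state particle momenta p_k, the q_i are reference directions for the two beams and the N jets, and the Q_i are normalization scales. Roughly speaking, events with small τ_N look like exactly N jets, which is what allows the calculation to be split between existing numerical codes and the simpler analytic pieces described above.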

    The LHC data lined up completely with predictions the team had obtained from running the N-jettiness code on the Mira supercomputer at Argonne. The agreement carries important implications for the precision goals physicists are setting for future accelerators such as the proposed Electron-Ion Collider (EIC).

    “One of the things that has puzzled us for 30 years is the spin of the proton,” Boughezal says. Planners hope the EIC reveals how the spin of the proton, matter’s basic building block, emerges from its elementary constituents, quarks and gluons.

    Boughezal also is working with LHC scientists in the search for dark matter which, together with dark energy, accounts for about 96 percent of the contents of the universe. The remainder is ordinary matter, the atoms and molecules that form stars, planets and people.

    “Scientists believe that the mysterious dark matter in the universe could leave a missing energy footprint at the LHC,” she says. Such a footprint would reveal the existence of a new particle that’s currently missing from the standard model. Dark matter particles interact weakly with the LHC’s detectors. “We cannot see them directly.”

    They could, however, be produced with a jet – a spray of standard-model particles made from LHC proton collisions. “We can measure that jet. We can see it. We can tag it.” And by using simple laws of physics such as the conservation of momentum, even if the particles are invisible, scientists would be able to detect them by measuring the jet’s energy.
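
    Momentum conservation in the plane transverse to the beams is what makes this work: the visible transverse momenta must balance, so any leftover imbalance points back at something unseen. A minimal sketch with fabricated numbers (not real detector data or an official analysis tool):

        # Missing transverse momentum from momentum conservation: the vector sum of
        # all visible transverse momenta must be balanced by whatever went unseen.
        # The "event" below is fabricated for illustration.
        import math

        # Visible objects as (pt [GeV], phi [rad]) -- one hard jet plus soft debris
        visible = [(250.0, 0.10), (35.0, 2.80), (20.0, -2.50)]

        px = sum(pt * math.cos(phi) for pt, phi in visible)
        py = sum(pt * math.sin(phi) for pt, phi in visible)

        met = math.hypot(px, py)        # magnitude of the imbalance
        met_phi = math.atan2(-py, -px)  # it points opposite the visible sum

        print(f"missing transverse energy ~ {met:.1f} GeV at phi = {met_phi:.2f}")
        # A large imbalance recoiling against a tagged jet is the classic
        # "mono-jet" signature used in dark-matter searches.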

    For example, when subatomic particles called Z bosons are produced with particle jets, the bosons can decay into neutrinos, ghostly specks that rarely interact with ordinary matter. The neutrinos appear as missing energy in the LHC’s detectors, just as a dark matter particle would.

    In July 2017, Boughezal and three co-authors published a paper in the Journal of High Energy Physics. It was the first to describe new proton-structure details derived from precision high-energy Z-boson experimental data.

    
    “If you want to know whether what you have produced is actually coming from a standard model process or something else that we have not seen before, you need to predict your standard model process very well,” she says. If the theoretical predictions deviate from the experimental data, it suggests new physics at play.


    In fact, Boughezal and her associates have precisely predicted the standard model jet process and it agrees with the data. “So far we haven’t produced dark matter at the LHC.”

    Previously, however, the results were so imprecise – and the margin of uncertainty so high – that physicists couldn’t tell whether they’d produced a standard-model jet or something entirely new.

    What surprises will higher-precision calculations reveal in future LHC experiments?

    “There is still a lot of territory that we can probe and look for something new,” Boughezal says. “The standard model is not a complete theory because there is a lot it doesn’t explain, like dark matter. We know that there has to be something bigger than the standard model.”

    Argonne is managed by UChicago Argonne LLC for the DOE Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

    See the full article here.



    Please help promote STEM in your local schools.

    Stem Education Coalition

    ASCR Discovery is a publication of the U.S. Department of Energy.

     
  • richardmitnick 3:07 pm on March 15, 2017 Permalink | Reply
    Tags: ASCR Discovery, Coding a Starkiller

    From OLCF via ASCR and DOE: “Coding a Starkiller” 


    Oak Ridge National Laboratory

    OLCF

    ASCR

    March 2017

    The Titan supercomputer and a tool called Starkiller help Stony Brook University-led team simulate key moments in exploding stars.

    A volume rendering of the density after 0.6 and 0.9 solar mass white dwarfs merge. The image is derived from a calculation performed on the Oak Ridge Leadership Computing Facility’s Titan supercomputer. The model used Castro, an adaptive mesh astrophysical radiation hydrodynamics simulation code. Image courtesy of Stony Brook University / Max Katz et al.

    The spectacular Supernova 1987A, whose light reached Earth on Feb. 23 of the year it’s named for, captured the public’s fancy. It’s located in the Large Magellanic Cloud, a dwarf galaxy on the edge of the Milky Way. It had been four centuries since earthlings had witnessed light from a star exploding in our galactic neighborhood.

    Image credit: NASA.

    A supernova’s awesome light show heralds a giant star’s death, and the next supernova’s post-mortem will generate reams of data, compared to the paltry dozen or so neutrinos and X-rays harvested from the 1987 event.

    Astrophysicists Michael Zingale and Bronson Messer aren’t waiting. They’re aggressively anticipating the next supernova by leading teams in high-performance computer simulations of explosive stellar events, including different supernova types and their accompanying X-ray bursts. Zingale, of Stony Brook University, and Messer, of the Department of Energy’s Oak Ridge National Laboratory (ORNL), are in the midst of an award from the DOE Office of Science’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. It provides an allocation of 45 million processor hours of computer time on Titan, a Cray XK7 that’s one of the world’s most powerful supercomputers, at the Oak Ridge Leadership Computing Facility, or OLCF – a DOE Office of Science user facility.

    The simulations run on workhorse codes developed by the INCITE collaborators and at the DOE’s Lawrence Berkeley National Laboratory – codes that “are often modified toward specific problems,” Zingale says. “And the common problem we share with ORNL is that we have to put more and more of our algorithms on the Titan graphics processor units (GPUs),” specialized computer chips that accelerate calculations. While the phenomena they’re modeling “are really far away and on scales that are hard to imagine,” the codes have other applications closer to home: “terrestrial phenomena, like terrestrial combustion.” The team’s codes – Maestro, Castro, Chimera and FLASH – are available free to other modelers through the online code repository GitHub.

    With a previous INCITE award, the researchers realized the possibility of attacking the GPU problem together. They envisioned codes composed of multiphysics modules that compute common pieces of most kinds of explosive activities, Messer says. They dubbed the growing collection of GPU-enabled modules Starkiller.

    “Starkiller ties this INCITE project together,” he says. “We realized we didn’t want to reinvent the wheel with each new simulation.” For example, a module that tracks nuclear burning helps the researchers create larger networks for nucleosynthesis, a supernova process in which elements form in the turbulent flow on the stellar surface.

    “In the past, we were able to do only a little more than a dozen different elements, and now we’re routinely doing 150,” Messer says. “We can make the GPU run so much faster. That’s part of Titan’s advantage to us.”
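
    Under the hood, a burning module integrates a stiff system of rate equations, one per isotope, in every zone of the simulation. The sketch below integrates a deliberately tiny, made-up two-isotope “network” with SciPy – only to show the shape of the problem, not Starkiller’s actual rates, isotope list or interfaces.

        # A toy "reaction network": carbon burning as a single effective reaction
        # 12C + 12C -> 24Mg, with a constant made-up rate. Real networks track ~150
        # isotopes with temperature- and density-dependent rates; this only shows
        # the shape of the problem.
        import numpy as np
        from scipy.integrate import solve_ivp

        RATE = 1.0e2   # effective rate constant (arbitrary units; not a physical value)

        def rhs(t, y):
            y_c12, y_mg24 = y
            r = RATE * y_c12**2          # the reaction consumes two carbon nuclei
            return [-2.0 * r, r]

        # Start with pure carbon (molar abundance 1/12 per gram of 12C)
        sol = solve_ivp(rhs, t_span=(0.0, 1.0), y0=[1.0 / 12.0, 0.0],
                        method="BDF", rtol=1e-8, atol=1e-12)  # stiff-friendly solver

        y_c12, y_mg24 = sol.y[:, -1]
        print(f"final abundances: 12C = {y_c12:.4e}, 24Mg = {y_mg24:.4e}")
        # Nucleon conservation check: 12*Y(12C) + 24*Y(24Mg) should remain 1.0
        print("mass-fraction sum:", 12 * y_c12 + 24 * y_mg24)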

    Supernova 1987A, a type II supernova, arose from the gravitational collapse of a stellar core, the consistent fate of massive stars. Type Ia supernovae follow from intense thermonuclear activities that eventually drive the explosion of a white dwarf – a star that has used up all its hydrogen. Zingale’s group is focused on type Ia, Messer’s on type II. A type II leaves a remnant star; a type Ia does not.

    Stars like the sun burn hydrogen into helium and, over enormous stretches of time, burn the helium into carbon. Once our sun starts burning carbon, it will gradually peter out, Messer says, because it’s not massive enough to turn the carbon into something heavier.

    “A star begins life as a big ball of hydrogen, and its whole life is this fight between gravity trying to suck it into the middle and thermonuclear reactions keeping it supported against its own gravity,” he adds. “Once it gets to the point where it’s burning some carbon, the sun will just give up. It will blow a big smoke ring into space and become a planetary nebula, and at the center it will become a white dwarf.”

    Zingale is modeling two distinct thermonuclear modes. One is for a white dwarf in a binary system – two stars orbiting one another – that consumes additional material from its partner. As the white dwarf grows in mass, it gets hotter and denser in the center, creating conditions that drive thermonuclear reactions.

    “This star is made mostly of carbon and oxygen,” Zingale says. “When you get up to a few hundred million K, you have densities of a few billion grams per cubic centimeter. Carbon nuclei get fused and make things like neon and sodium and magnesium, and the star gets energy out in that process. We are modeling the star’s convection, the creation of a rippling burning front that converts the carbon and oxygen into heavier elements such as iron and nickel. This creates such an enormous amount of energy that it overcomes the force of gravity that’s holding the star together, and the whole thing blows apart.”

    The other mode is being modeled with former Stony Brook graduate student and INCITE co-principal investigator Max Katz, who wants to understand whether merging stars can create a burning point that leads to a supernova, as some observations suggest. His simulations feature two white dwarfs so close that they emit gravitational radiation, robbing energy from the system and causing the stars to spiral inward. Eventually, they get so close that the more massive one rips the lesser apart through tidal forces.
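
    How quickly gravitational radiation shrinks such an orbit can be estimated with the standard Peters (1964) formula for a circular binary. The numbers below are a back-of-the-envelope illustration with an assumed separation, not output from the team’s simulations.

        # Gravitational-wave merger time for a circular binary (Peters 1964):
        #   t_merge = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
        # Masses match the 0.6 + 0.9 solar-mass pair pictured above; the initial
        # separation is an assumed value chosen for illustration.
        G = 6.674e-11          # m^3 kg^-1 s^-2
        c = 2.998e8            # m/s
        M_sun = 1.989e30       # kg
        R_sun = 6.957e8        # m

        m1, m2 = 0.6 * M_sun, 0.9 * M_sun
        a = 0.05 * R_sun       # assumed orbital separation (~35,000 km)

        t_merge = 5 * c**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))
        print(f"inspiral time ~ {t_merge / 3.15e7:.2e} years")
        # Halving the separation shortens the inspiral by 2^4 = 16x, which is why
        # close double white dwarfs can spiral together and merge.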

    Zingale’s group also continues to model the convective burning on neutron star surfaces, known as X-ray bursts, providing a springboard to more in-depth studies. He says they’re the first to simulate them in three dimensions. That work and additional supernova studies were supported by the DOE Office of Science and performed at OLCF and the National Energy Research Scientific Computing Center, a DOE Office of Science user facility at Lawrence Berkeley National Laboratory.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid architecture systems – a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to process an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     