Tagged: Jaguar/Titan

  • richardmitnick 6:33 pm on March 20, 2014
    Tags: Jaguar/Titan

    From Oak Ridge via PPPL: “The Bleeding ‘Edge’ of Fusion Research” 

    March 20, 2014

    Few problems have vexed physicists like fusion, the process by which stars fuel themselves and by which researchers on Earth hope to create the energy source of the future.

    By heating the hydrogen isotopes tritium and deuterium to more than five times the temperature of the Sun’s surface, scientists create a reaction that could eventually produce electricity. It turns out, however, that confining the engine of a star to a manmade vessel and using it to produce energy is tricky business.

    Big problems, such as this one, require big solutions. Luckily, few solutions are bigger than Titan, the Department of Energy’s flagship Cray XK7 supercomputer managed by the Oak Ridge Leadership Computing Facility.

    [Image: Titan]

    [Image: Inside Titan]

    Titan allows advanced scientific applications to reach unprecedented speeds, enabling scientific breakthroughs faster than ever with only a marginal increase in power consumption. This marriage of CPUs and GPUs enables Titan, located at Oak Ridge National Laboratory (ORNL), to reach a peak performance of 27 petaflops, claiming the title of the world’s fastest computer dedicated solely to scientific research.

    [Image: PPPL fusion code]

    And fusion is at the head of the research pack. In fact, a team led by Princeton Plasma Physics Laboratory’s (PPPL’s) C.S. Chang quadrupled the performance of its XGC1 fusion code on Titan by using the machine’s GPUs alongside its CPUs. The gain over the previous CPU-only version came after a six-month performance-engineering period in which the team tuned the code to take best advantage of Titan’s revolutionary hybrid architecture.

    “In nature, there are two types of physics,” said Chang. The first is equilibrium, in which changes happen in a “closed” world toward a static state, making the calculations comparatively simple. “This science has been established for a couple hundred years,” he said. Unfortunately, plasma physics falls in the second category, in which a system has inputs and outputs that constantly drive the system to a nonequilibrium state, which Chang refers to as an “open” world.

    Most magnetic fusion research centers on a tokamak, a donut-shaped vessel that shows the most promise for magnetically confining the extremely hot and fragile plasma. The plasma constantly comes into contact with the vessel wall, losing mass and energy and reintroducing neutral particles into the plasma. As a result, equilibrium physics generally doesn’t apply at the edge, and simulating that environment is difficult using conventional computational fluid dynamics.

    [Image: TFTR, the Tokamak Fusion Test Reactor at Princeton Plasma Physics Laboratory. Image credit: Princeton.]

    Another major reason the simulations are so complex is their multiscale nature. The distance scales involved range from millimeters (what’s going on among the gyrating particles and turbulence eddies inside the plasma itself) to meters (looking at the entire vessel that contains the plasma). The time scales introduce even more complexity, as researchers want to see how the edge plasma evolves from microseconds in particle motions and turbulence fluctuations to milliseconds and seconds in its full evolution. Furthermore, these two scales are coupled. “The simulation scale has to be very large, but still has to include the small-scale details,” said Chang.

    And few machines are as capable of delivering in that regard as is Titan. “The bigger the computer, the higher the fidelity,” he said, simply because researchers can incorporate more physics, and few problems require more physics than simulating a fusion plasma.

    On the hunt for blobs

    Studying the plasma edge is critical to understanding the plasma as a whole. “What happens at the edge is what determines the steady fusion performance at the core,” said Chang. But when it comes to studying the edge, “the effort hasn’t been very successful because of its complexity,” he added.

    Chang’s team is shedding light on a long-known but little-understood phenomenon called “blobby” turbulence, in which clumps of strong plasma-density fluctuations flow together and move large amounts of edge plasma around, greatly affecting edge and core performance in the DIII-D tokamak at General Atomics in San Diego, California. DIII-D-based simulations are considered a critical stepping-stone toward a full-scale, first-principles simulation of the ITER plasma edge. ITER is a tokamak reactor to be built in France to test the scientific feasibility of fusion energy.

    [Image: ITER]

    The phenomenon was discovered more than 10 years ago and is one of the “most important things in understanding edge physics,” said Chang, adding that people have tried to model it using fluids (i.e., equilibrium physics quantities). However, because the plasma inhabits an open world, it requires first-principles, ab initio simulations. Now, for the first time, researchers have verified the existence and modeled the behavior of these blobs using a gyrokinetic code (one that solves the most fundamental plasma kinetic equations, treating the fast gyrating particle motions analytically) in the DIII-D geometry.
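    The analytic treatment of gyration works because of a timescale separation: an ion circles a field line far faster than the turbulence it rides evolves. A minimal back-of-the-envelope sketch, using the deuteron cyclotron frequency and an assumed DIII-D-scale field of 2 T (illustrative values only, not the team’s actual simulation parameters):

    ```python
    # Timescale separation behind the gyrokinetic approximation:
    # the cyclotron (gyration) period is much shorter than the
    # ~10-microsecond turbulence timescales the article cites, so the
    # fast gyration can be averaged out analytically.
    import math

    Q_E = 1.602e-19      # elementary charge, C
    M_D = 3.344e-27      # deuteron mass, kg
    B = 2.0              # magnetic field, T (assumed, DIII-D-scale)

    omega_c = Q_E * B / M_D              # ion cyclotron angular frequency, rad/s
    gyro_period = 2 * math.pi / omega_c  # one full gyration, s

    turbulence_time = 1e-5               # ~10 microseconds, per the article

    print(f"gyration period: {gyro_period:.2e} s")
    print(f"turbulence time: {turbulence_time:.2e} s")
    print(f"separation:      {turbulence_time / gyro_period:.0f}x")
    ```

    With these numbers the gyration period is tens of nanoseconds, two orders of magnitude below the turbulence timescale, which is what licenses removing the gyration phase from the kinetic equations.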

    This same first-principles approach also revealed the divertor heat load footprint. The divertor will extract heat and helium ash from the plasma, acting as a vacuum system and ensuring that the plasma remains stable and the reaction ongoing.

    These discoveries were made possible because the team’s XGC1 code exhibited highly efficient weak and strong scalability on Titan’s hybrid architecture, up to the full size of the machine. Collaborating with Ed D’Azevedo, supported by the OLCF and by the DOE Scientific Discovery through Advanced Computing (SciDAC) project Center for Edge Physics Simulation (EPSi), along with Pat Worley (ORNL), Jianying Lang (PPPL), and Seung-Hoe Ku (PPPL), also supported by EPSi, the team optimized XGC1 for Titan’s GPUs at the maximum number of nodes, boosting performance fourfold over the previous CPU-only code. This performance increase has enormous implications for predicting fusion energy efficiency in ITER.
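    Weak and strong scalability measure different things: strong scaling holds the total problem fixed and asks how much faster it runs on more nodes, while weak scaling holds the work per node fixed and asks whether runtime stays flat as nodes are added. A generic sketch of the two efficiency metrics (the timings are hypothetical, not XGC1 measurements):

    ```python
    # Strong scaling: fixed total problem; ideal efficiency = 1.0 means
    # n nodes run it n times faster than 1 node.
    def strong_scaling_efficiency(t1, tn, n):
        """Speedup over the single-node time, divided by node count."""
        return (t1 / tn) / n

    # Weak scaling: fixed work per node; ideal efficiency = 1.0 means
    # runtime does not grow as nodes (and total problem size) grow.
    def weak_scaling_efficiency(t1, tn):
        """Single-node runtime divided by n-node runtime, same work/node."""
        return t1 / tn

    # Hypothetical timings (seconds) for a code that scales well:
    print(strong_scaling_efficiency(t1=1000.0, tn=70.0, n=16))  # ~0.89
    print(weak_scaling_efficiency(t1=100.0, tn=108.0))          # ~0.93
    ```

    A code like XGC1 that keeps both metrics high out to the full machine can profitably use every node Titan offers, which is what made the full-size runs worthwhile.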

    Full-scale simulations

    “We can now use both the CPUs and GPUs efficiently in full-scale production simulations of the tokamak plasma,” said Chang.

    Furthermore, added Chang, Titan is beginning to let the researchers model physics, such as electron-scale turbulence, that was out of reach altogether as little as a year ago. Jaguar, Titan’s CPU-only predecessor, was fine for ion-scale edge turbulence because ions are slower and heavier than electrons (for which the computing requirement is roughly 60 times greater), but it fell seriously short when it came to calculating electron-scale turbulence. While Titan is still not quite powerful enough to model electrons as accurately as Chang would like, the team has developed a technique that lets it simulate electron physics approximately 10 times faster than on Jaguar.
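    The “60 times greater” figure tracks the square root of the deuteron-to-electron mass ratio: at equal temperature, electron thermal speed exceeds the deuteron’s by that factor, shrinking the timestep and length scales a simulation must resolve accordingly. A quick check of the ratio (a simplified reading of the scaling, not the article’s own derivation):

    ```python
    # Why electron-scale turbulence costs ~60x more than ion-scale:
    # at equal temperature, v_thermal ~ 1/sqrt(mass), so electrons move
    # faster than deuterons by sqrt(m_D / m_e), forcing proportionally
    # finer time (and space) resolution.
    M_D = 3.344e-27   # deuteron mass, kg
    M_E = 9.109e-31   # electron mass, kg

    ratio = (M_D / M_E) ** 0.5
    print(f"sqrt(m_D / m_e) = {ratio:.1f}")   # ~60.6
    ```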

    And they are just getting started. The researchers plan on eventually simulating the full volume plasma with electron-scale turbulence to understand how these newly modeled blobs affect the fusion core, because whatever happens at the edge determines conditions in the core. “We think this blob phenomenon will be a key to understanding the core,” said Chang, adding, “All of these are critical physics elements that must be understood to raise the confidence level of successful ITER operation. These phenomena have been observed experimentally for a long time, but have not been understood theoretically at a predictable confidence level.”

    Given that the team can currently use all of Titan’s more than 18,000 nodes, a better understanding of fusion is certainly in the works. A better understanding of blobby turbulence and its effects on plasma performance is a significant step toward that goal, proving yet again that few tools are more critical than simulation if mankind is to harness the engines of stars to meet its most pressing need: clean, abundant energy.

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.


    ScienceSprings is powered by Maingear computers

     
  • richardmitnick 7:19 am on November 24, 2011
    Tags: INCITE, Jaguar/Titan

    From Oak Ridge Lab: “Researchers Show How Proteins Help DNA Replicate Past a Damaged Site” 

    Computation and experiment reveal how protein switching provides right tool for the job


    Dawn Levy
    November 9, 2011

    Ultraviolet light, oxidants, and environmental toxins constantly assault the genome of a cell. Consequences in the absence of DNA repair could be severe: accelerated aging, degenerative disease, or cancer. Fortunately, cells have evolved sophisticated mechanisms for dealing with DNA damage, stabilizing their genomes, and ensuring their survival. Recently, a multi-institutional research team led by Ivaylo Ivanov of Georgia State University employed the Jaguar XT4 supercomputer at Oak Ridge National Laboratory (ORNL) and X-rays a billion times brighter than the sun, produced at Lawrence Berkeley National Laboratory (LBNL), to illuminate how DNA replication continues past a damaged site so a lesion can be repaired later. The combination of computation and experiment reveals conformations that ubiquitin (Ub), a small protein that binds and orients DNA-editing enzymes, can assume when it associates with a molecular “tool belt” called proliferating cell nuclear antigen (PCNA). The results appear in the October 17, 2011, online issue of Proceedings of the National Academy of Sciences.

    [Image: Jaguar]

    ‘The tool belt model is a longstanding model in the PCNA field, although it has not been conclusively proven,’ said Ivanov, whose collaborators include Susan Tsutakawa of LBNL; John Tainer of LBNL and the Scripps Research Institute; Adam Van Wynsberghe of Hamilton College; Bret Freudenthal, M. Todd Washington, and Lokesh Gakhar of the University of Iowa College of Medicine; and Christopher Weinacht and Zhihao Zhuang of the University of Delaware.

    Through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, the researchers received 2.6 million processor hours in 2009 on the Department of Energy Office of Science’s Jaguar XT4 to conduct their simulations. They also used Abe, the Intel 64 cluster at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, through the National Science Foundation’s TeraGrid.


    [Image: A model of ubiquitinated PCNA, refined using multiscale dynamics simulations and protein-docking experiments, shows ubiquitin (orange) binding in a groove above a PCNA subunit interface. Mutations known to interfere with translesion synthesis (blue and green) are directly beneath. The conformation of the J-loop (purple), a structural element of PCNA, is affected by mutations at the subunit interface. Image courtesy of Ivaylo Ivanov.]

    See the full article here.


     