Tagged: Supercomputing

  • richardmitnick 9:35 am on March 17, 2015 Permalink | Reply
    Tags: Seismic activity, Supercomputing

    From CBS: “Scientists mapping Earth in 3D, from the inside out” 

    CBS News

    March 16, 2015
    Michael Casey

    Using a technique similar to a medical CT (“CAT”) scan, researchers at Princeton are using seismic waves from earthquakes to create images of the Earth’s subterranean structures — such as tectonic plates, magma reservoirs and mineral deposits — which will help scientists better understand how earthquakes and volcanoes occur. Ebru Bozdağ, University of Nice Sophia Antipolis, and David Pugmire, Oak Ridge National Laboratory

    The wacky adventures of scientists traveling to the Earth’s core have been a favorite plot line in Hollywood over the decades, but actually getting there is mostly science fiction.

    Now, a group of scientists is using some of the world’s most powerful supercomputers to do what could be the next best thing.

    Princeton’s Jeroen Tromp and colleagues are eavesdropping on the seismic vibrations produced by earthquakes, and using the data to create a map of the Earth’s mantle, the semisolid rock that stretches to a depth of 1,800 miles, about halfway down to the planet’s center and about 300 times deeper than humans have drilled. The research could help scientists understand and predict future earthquakes and volcanic eruptions.

    “We need to scour the maps for interesting and unexpected features,” Tromp told CBS News. “But it’s really a 3D mapping expedition.”

    To do this, Tromp and his colleagues will exploit an interesting phenomenon related to seismic activity below the surface of the Earth. As seismic waves travel, they change speed depending on the density, temperature and type of rock they’re moving through, for instance slowing down when traveling through an underground aquifer or magma.

    This three-dimensional image displays contours of locations where seismic wave speeds are faster than average.
    Ebru Bozdağ, University of Nice Sophia Antipolis, and David Pugmire, Oak Ridge National Laboratory

    Thousands of seismographic stations worldwide make recordings, or seismograms, that detail the movement produced by seismic waves, which typically travel at speeds of several miles per second and last several minutes. By combining seismographic readings of roughly 3,000 quakes of magnitude 5.5 and greater, the geologists can produce a three-dimensional model of the structures under the Earth’s surface.
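    The inversion behind this kind of imaging can be sketched in a few lines. The toy example below is an illustration only, not the Princeton group's code; all sizes, names, and numbers are made up. It treats each quake-to-station ray as spending a known length in a handful of model cells, then recovers slowness (inverse wave speed) perturbations from noisy travel-time residuals with a damped least-squares fit.

```python
# Toy travel-time tomography: invert travel-time residuals for slowness
# perturbations in a small set of model cells via damped least squares.
# Purely illustrative; not the workflow used on Titan.
import numpy as np

rng = np.random.default_rng(0)

n_rays, n_cells = 200, 25                        # hypothetical rays and model cells
G = rng.uniform(0.0, 10.0, (n_rays, n_cells))    # km each ray spends in each cell
true_ds = rng.normal(0.0, 1e-3, n_cells)         # true slowness perturbations (s/km)
d = G @ true_ds + rng.normal(0.0, 1e-3, n_rays)  # noisy travel-time residuals (s)

# Damped least squares: minimize ||G m - d||^2 + lam * ||m||^2
lam = 1.0
m = np.linalg.solve(G.T @ G + lam * np.eye(n_cells), G.T @ d)

print("recovered vs. true slowness, first 5 cells:")
print(np.round(m[:5], 5))
print(np.round(true_ds[:5], 5))
```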

    For the task, Tromp’s team will use the supercomputer called Titan, which can perform more than 20 quadrillion calculations per second and is located at the Department of Energy’s Oak Ridge National Laboratory in Tennessee.

    Titan supercomputer at ORNL

    The technique, called seismic tomography, has been compared to the computerized tomography used in medical CAT scans, in which a scanner captures a series of X-ray images from different viewpoints, creating cross-sectional images that can be combined into 3D images.

    Tromp acknowledged that his research is unlikely ever to put a scientist physically in the mantle. But he said it could help seismologists do a better job of predicting the damage from future earthquakes and the possibility of volcanic activity.

    For example, they might find a fragment of a tectonic plate that broke off and sank into the mantle. The resulting map could tell seismologists more about the precise locations of underlying tectonic plates, which can trigger earthquakes when they shift or slide against each other. The maps could also reveal the locations of magma that, if it comes to the surface, causes volcanic activity.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

     
  • richardmitnick 5:30 pm on February 18, 2015 Permalink | Reply
    Tags: Enzyme studies, Supercomputing

    From UCSD: “3D Enzyme Model Provides New Tool for Anti-Inflammatory Drug Development” 

    UC San Diego

    January 26, 2015
    Heather Buschman

    Researchers develop first computer models of phospholipase A2 enzymes extracting their substrates out of the cell membrane, an early step in inflammation

    Phospholipase A2 (PLA2) enzymes are known to play a role in many inflammatory diseases, including asthma, arthritis and atherosclerosis. It then stands to reason that PLA2 inhibitors could represent a new class of anti-inflammatory medication. To better understand PLA2 enzymes and help drive therapeutic drug development, researchers at University of California, San Diego School of Medicine developed 3D computer models that show exactly how two PLA2 enzymes extract their substrates from cellular membranes. The new tool is described in a paper published online the week of Jan. 26 by the Proceedings of the National Academy of Sciences.

    Phospholipase Cleavage Sites. Note that an enzyme that displays both PLA1 and PLA2 activities is called a Phospholipase B

    “This is the first time experimental data and supercomputing technology have been used to visualize an enzyme interacting with a membrane,” said Edward A. Dennis, PhD, Distinguished Professor of Pharmacology, chemistry and biochemistry and senior author of the study. “In doing so, we discovered that binding the membrane triggers a conformational change in PLA2 enzymes and activates them. We also saw several important differences between the two PLA2 enzymes we studied — findings that could influence the design and development of specific PLA2 inhibitor drugs for each enzyme.”

    The computer simulations of PLA2 enzymes developed by Dennis and his team, including first author Varnavas D. Mouchlis, PhD, show the specific molecular interactions between PLA2 enzymes and their substrate, arachidonic acid, as the enzymes suck it up from cellular membranes.

    Make no mistake, though — the animations of PLA2 in action are not mere cartoons. They are sophisticated molecular dynamics simulations based upon previously published deuterium exchange mass spectrometry (DXMS) data on PLA2. DXMS is an experimental laboratory technique that provides molecular information about the interactions of these enzymes with membranes.
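    For readers unfamiliar with what a molecular dynamics simulation actually computes, its core is a time-stepping loop that repeatedly evaluates forces and advances positions and velocities. The sketch below is a generic velocity Verlet integrator with a toy harmonic force standing in for a real force field; it is not the production code behind the PLA2 simulations.

```python
# Generic velocity Verlet integrator -- the core loop of a molecular dynamics
# code. A toy harmonic force stands in for a real biomolecular force field.
import numpy as np

def forces(x, k=1.0):
    """Placeholder force: harmonic restoring force toward the origin."""
    return -k * x

def velocity_verlet(x, v, dt, n_steps, mass=1.0):
    f = forces(x)
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * f / mass   # half-step velocity update
        x = x + dt * v_half                # full-step position update
        f = forces(x)                      # forces at the new positions
        v = v_half + 0.5 * dt * f / mass   # complete the velocity update
    return x, v

x0 = np.array([1.0, 0.0, 0.0])             # hypothetical particle displaced 1 unit
v0 = np.zeros(3)
x, v = velocity_verlet(x0, v0, dt=0.01, n_steps=1000)
print(x, v)
```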

    “The combination of rigorous experimental data and in silico [computer] models is a very powerful tool — the experimental data guided the development of accurate 3D models, demonstrating that these two scientific fields can inform one another,” Mouchlis said.

    The liberation of arachidonic acid by PLA2 enzymes, as shown in these simulations, sets off a cascade of molecular events that result in inflammation. Aspirin and many other anti-inflammatory drugs work by inhibiting enzymes in this cascade that rely on PLA2 enzymes to provide them with arachidonic acid. That means PLA2 enzymes could potentially also be targeted to dampen inflammation at an earlier point in the process.

    Co-authors include Denis Bucher, UC San Diego, and J. Andrew McCammon, UC San Diego and Howard Hughes Medical Institute.

    This research was funded, in part, by the National Institute of General Medical Sciences at the National Institutes of Health (grants GM20501 and P41GM103712-S1), National Science Foundation (grant ACI-1053575) and Howard Hughes Medical Institute.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    UC San Diego Campus

    The University of California, San Diego (also referred to as UC San Diego or UCSD) is a public research university located in the La Jolla area of San Diego, California, in the United States. The university occupies 2,141 acres (866 ha) near the coast of the Pacific Ocean, with the main campus resting on approximately 1,152 acres (466 ha). Established in 1960 near the pre-existing Scripps Institution of Oceanography, UC San Diego is the seventh oldest of the 10 University of California campuses and offers over 200 undergraduate and graduate degree programs, enrolling about 22,700 undergraduate and 6,300 graduate students. UC San Diego is one of America’s “Public Ivy” universities, a designation that recognizes top public research universities in the United States. UC San Diego was ranked 8th among public universities and 37th among all universities in the United States, and rated the 18th Top World University by U.S. News & World Report’s 2015 rankings.

     
  • richardmitnick 4:31 am on February 18, 2015 Permalink | Reply
    Tags: Supercomputing

    From LBL: “Bigger steps: Berkeley Lab researchers develop algorithm to make simulation of ultrafast processes possible” 

    Berkeley Lab

    February 17, 2015
    Rachel Berkowitz

    When electronic states in materials are excited during dynamic processes, interesting phenomena such as electrical charge transfer can take place on quadrillionth-of-a-second, or femtosecond, timescales. Numerical simulations in real-time provide the best way to study these processes, but such simulations can be extremely expensive. For example, it can take a supercomputer several weeks to simulate a 10 femtosecond process. One reason for the high cost is that real-time simulations of ultrafast phenomena require “small time steps” to describe the movement of an electron, which takes place on the attosecond timescale – a thousand times faster than the femtosecond timescale.

    Model of ion (Cl) collision with atomically thin semiconductor (MoSe2). Collision region is shown in blue and zoomed in; red points show initial positions of Cl. The simulation calculates the energy loss of the ion based on the incident and emergent velocities of the Cl.

    To combat the high cost associated with the small-time steps, Lin-Wang Wang, senior staff scientist at the Lawrence Berkeley National Laboratory (Berkeley Lab), and visiting scholar Zhi Wang from the Chinese Academy of Sciences, have developed a new algorithm which increases the small time step from about one attosecond to about half a femtosecond. This allows them to simulate ultrafast phenomena for systems of around 100 atoms.

    “We demonstrated a collision of an ion [Cl] with a 2D material [MoSe2] for 100 femtoseconds. We used supercomputing systems for ten hours to simulate the problem – a great increase in speed,” says L.W. Wang. That represents a reduction from 100,000 time steps down to only 500. The results of the study were reported in a Physical Review Letters paper titled “Efficient real-time time-dependent DFT method and its application to a collision of an ion with a 2D material.”

    Conventional computational methods cannot be used to study systems in which electrons have been excited from the ground state, as is the case for ultrafast processes involving charge transfer. But using real-time simulations, an excited system can be modeled with time-dependent quantum mechanical equations that describe the movement of electrons.

    The traditional algorithms work by directly manipulating these equations. Wang’s new approach is to expand the equations into individual terms, based on which states are excited at a given time. The trick, which Wang has worked out, is determining how the individual terms evolve in time. The advantage is that some terms in the expanded equations can be eliminated.

    Zhi Wang (left) and Berkeley Lab’s Lin-Wang Wang (right).

    “By eliminating higher energy terms, you significantly reduce the dimension of your problem, and you can also use a bigger time step,” explains Wang, describing the key to the algorithm’s success. Solving the equations with bigger time steps reduces the computational cost and increases the speed of the simulations.
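    The idea of eliminating high-energy terms to permit bigger time steps can be illustrated with a toy quantum propagation. The sketch below is a conceptual illustration under invented assumptions, not the published algorithm: it expands a state in the eigenbasis of a fixed Hamiltonian, drops the high-energy components, and evolves only the retained terms, which is what makes a much larger time step tolerable.

```python
# Toy illustration: propagate a state in a truncated low-energy eigenbasis.
# Dropping high-energy terms shrinks the problem and allows larger time steps.
# Not the Berkeley Lab algorithm itself; a conceptual sketch only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
H = rng.normal(size=(n, n))
H = 0.5 * (H + H.T)                        # random Hermitian "Hamiltonian"
E, U = np.linalg.eigh(H)                   # eigenvalues and eigenvectors

psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)

def propagate(psi, t, n_keep):
    """Evolve psi for time t, keeping only the n_keep lowest-energy states."""
    c = U.conj().T @ psi                   # expand in the eigenbasis
    c[n_keep:] = 0.0                       # eliminate high-energy terms
    c = c * np.exp(-1j * E * t)            # exact evolution of retained terms
    return U @ c

psi_full = propagate(psi0, t=5.0, n_keep=n)     # reference: no truncation
psi_trunc = propagate(psi0, t=5.0, n_keep=40)   # truncated, far smaller problem
print("overlap with full propagation:", abs(np.vdot(psi_full, psi_trunc)))
```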

    Comparing the new algorithm with the old, slower algorithm yields similar results, e.g., the predicted energies and velocities of an atom passing through a layer of material are the same for both models. This new algorithm opens the door for efficient real-time simulations of ultrafast processes and electron dynamics, such as excitation in photovoltaic materials and ultrafast demagnetization following an optical excitation.

    The work was supported by the Department of Energy’s Office of Science and used the resources of the National Energy Research Scientific Computing center (NERSC).

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 11:43 am on January 24, 2015 Permalink | Reply
    Tags: Supercomputing

    From isgtw: “Unlocking the secrets of vertebrate evolution” 


    international science grid this week

    January 21, 2015
    Lance Farrell

    Conventional wisdom holds that snakes evolved a particular form and skeleton by losing regions in their spinal column over time. These losses were previously explained by a disruption in Hox genes responsible for patterning regions of the vertebrae.

    Paleobiologists P. David Polly, professor of geological sciences at Indiana University, US, and Jason Head, assistant professor of earth and atmospheric sciences at the University of Nebraska-Lincoln, US, overturned that assumption. Recently published in Nature, their research instead reveals that snake skeletons are just as regionalized as those of limbed vertebrates.

    Using Quarry [being taken out of service Jan 30, 2015 and replaced by Karst], a supercomputer at Indiana University, Polly and Head arrived at a compelling new explanation for why snake skeletons are so different: Vertebrates like mammals, birds, and crocodiles evolved additional skeletal regions independently from ancestors like snakes and lizards.

    Karst supercomputer at Indiana University

    “Our study finds that snakes did not require extensive modification to their regulatory gene systems to evolve their elongate bodies,” Head notes.

    Despite having no limbs and more vertebrae, snake skeletons are just as regionalized as lizards’ skeletons.

    P. David Polly. Photo courtesy Indiana University.

    Polly and Head had to overcome challenges in collection and analysis to arrive at this insight. “If you are sequencing a genome all you really need is a little scrap of tissue, and that’s relatively easy to get,” Polly says. “But if you want to do something like we have done, you not only need an entire skeleton, but also one for a whole lot of species.”

    To arrive at their conclusion, Head and Polly sampled 56 skeletons from collections worldwide. They began by photographing and digitizing the bones, then chose specific landmarks on each spinal segment. Using the digital coordinates of each vertebra, they then applied a technique called geometric morphometrics, a multivariate analysis that uses landmark coordinates to analyze an object’s shape.

    Armed with shape information, the scientists then fit a series of regressions and tracked each vertebra’s gradient over the entire spine. This led to a secondary challenge — with 36,000 landmarks applied to 3,000 digitized vertebrae, the regression analyses required to peer into the snake’s past called for a new analytical tool.

    “The computations required iteratively fitting four or more segmented regression models, each with 10 to 83 parameters, for every regional permutation of up to 230 vertebrae per skeleton. The amount of computational power required is well beyond any desktop system,” Head observes.
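    To make “iteratively fitting segmented regression models” concrete, here is a deliberately simplified sketch: a one-dimensional shape gradient along a column of vertebrae is fit with two line segments, and every candidate boundary is scored by its total squared error. The real analysis used many traits, more segments, and millions of model permutations; the data and variable names below are invented.

```python
# Minimal segmented (two-piece) regression along a vertebral column: try each
# candidate boundary, fit a line to each segment, keep the best fit.
# Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(2)
pos = np.arange(1, 101, dtype=float)             # vertebra index along the spine
shape = np.where(pos < 40, 0.02 * pos, 0.8 - 0.01 * (pos - 40))
shape = shape + rng.normal(0.0, 0.02, pos.size)  # noisy synthetic shape gradient

def sse(x, y):
    """Sum of squared errors of a straight-line fit to (x, y)."""
    slope, intercept = np.polyfit(x, y, 1)
    return float(np.sum((y - (slope * x + intercept)) ** 2))

best_err, best_b = min(
    (sse(pos[:b], shape[:b]) + sse(pos[b:], shape[b:]), b)
    for b in range(5, 96)                        # candidate regional boundaries
)
print(f"best boundary at vertebra {best_b}, total SSE {best_err:.3f}")
```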

    Researchers like Polly and Head increasingly find quantitative analyses of data sets this size require the computational resources to match. With 7.2 million different models making up the data for their study, nothing less than a supercomputer would do.

    Jason Head with ball python. Photo courtesy Craig Chandler, University of Nebraska-Lincoln.

    “Our supercomputing environments serve a broad base of users and purposes,” says David Hancock, manager of IU’s high performance systems. “We often support the research done in the hard sciences and math such as Polly’s, but we also see analytics done for business faculty, marketing and modeling for interior design projects, and lighting simulations for theater productions.”

    Analyses of the scale Polly and Head needed would have been unapproachable even a decade ago, and without US National Science Foundation support remain beyond the reach of most institutions. “A lot of the big jobs ran on Quarry,” says Polly. “To run one of these exhaustive models on a single snake took about three and a half days. Ten years ago we could barely have scratched the surface.”

    As high-performance computing resources reshape the future, scientists like Polly and Head have greater abilities to look into the past and unlock the secrets of evolution.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 5:00 pm on January 21, 2015 Permalink | Reply
    Tags: Simulation Astronomy, Supercomputing

    From isgtw: “Exploring the universe with supercomputing” 


    international science grid this week

    January 21, 2015
    Andrew Purcell

    The Center for Computational Astrophysics (CfCA) in Japan recently upgraded its ATERUI supercomputer, doubling the machine’s theoretical peak performance to 1.058 petaFLOPS. Eiichiro Kokubo, director of the center, tells iSGTW how supercomputers are changing the way research is conducted in astronomy.

    What’s your research background?

    I investigate the origin of planetary systems. I use many-body simulations to study how planets form and I also previously worked on the development of the Gravity Pipe, or ‘GRAPE’ supercomputer.

    Why is it important to use supercomputers in this work?

    In the standard scenario of planet formation, small solid bodies — known as ‘planetesimals’ — interact with one another and this causes their orbits around the sun to evolve. Collisions between these building blocks lead to the formation of rocky planets like the Earth. To understand this process, you really need to do very-large-scale many-body simulations. This is where high-performance computing comes in: supercomputers act as telescopes for phenomena we wouldn’t otherwise be able to see.

    The scales of mass, energy, and time are generally huge in astronomy. However, as supercomputers have become ever more powerful, we’ve become able to program the relevant physical processes — motion, fluid dynamics, radiative transfer, etc. — and do meaningful simulation of astronomical phenomena. We can even conduct experiments by changing parameters within our simulations. Simulation is numerical exploration of the universe!
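    At its core, a many-body planet-formation code repeatedly computes the mutual gravitational accelerations of all bodies and steps their positions and velocities forward. The sketch below is a direct-summation leapfrog step in toy units, with no collision handling or orbital setup; it illustrates the basic operation, not the GRAPE or CfCA production codes.

```python
# Direct-summation N-body leapfrog step in toy units -- the basic operation
# behind planetesimal simulations. No collisions, no orbital initial
# conditions; purely illustrative.
import numpy as np

G = 1.0                                            # gravitational constant (toy units)

def accelerations(pos, mass, eps=1e-3):
    diff = pos[None, :, :] - pos[:, None, :]       # pairwise separation vectors
    dist3 = (np.sum(diff**2, axis=-1) + eps**2) ** 1.5
    np.fill_diagonal(dist3, np.inf)                # exclude self-interaction
    return G * np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)

def leapfrog(pos, vel, mass, dt, n_steps):
    acc = accelerations(pos, mass)
    for _ in range(n_steps):
        vel = vel + 0.5 * dt * acc                 # kick
        pos = pos + dt * vel                       # drift
        acc = accelerations(pos, mass)
        vel = vel + 0.5 * dt * acc                 # kick
    return pos, vel

rng = np.random.default_rng(3)
n = 100                                            # hypothetical planetesimal count
pos = rng.normal(0.0, 1.0, (n, 3))
vel = rng.normal(0.0, 0.1, (n, 3))
mass = np.full(n, 1.0 / n)
pos, vel = leapfrog(pos, vel, mass, dt=1e-3, n_steps=100)
print("center of mass after 100 steps:", np.round(pos.mean(axis=0), 4))
```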

    How has supercomputing changed the way research is carried out?

    ‘Simulation astronomy’ has now become a third major methodological approach within the field, alongside observational and theoretical astronomy. Telescopes rely on electromagnetic radiation, but there are still many things that we cannot see even with today’s largest telescopes. Supercomputers enable us to use complex physical calculations to visualize phenomena that would otherwise remain hidden to us. Their use also gives us the flexibility to simulate phenomena across a vast range of spatial and temporal scales.

    Simulation can be used to simply test hypotheses, but it can also be used to explore new worlds that are beyond our current imagination. Sometimes you get results from a simulation that you really didn’t expect — this is often the first step on the road to making new discoveries and developing new astronomical theories.

    ATERUI has made the leap to become a petaFLOPS-scale supercomputer. Image courtesy NAOJ/Makoto Shizugami (VERA/CfCA, NAOJ).

    In astronomy, there are three main kinds of large-scale simulation: many-body, fluid dynamics, and radiative transfer. These problems can all be parallelized effectively, meaning that massively parallel computers — like the Cray XC30 system we’ve installed — are ideally suited to performing these kinds of simulations.

    “Supercomputers act as telescopes for phenomena we wouldn’t otherwise be able to see,” says Kokubo.

    What research problems will ATERUI enable you to tackle?

    There are over 100 users in our community and they are tackling a wide variety of problems. One project, for example, is looking at supernovae: having very high-resolution 3D simulations of these explosions is vital to improving our understanding. Another project is looking at the distribution of galaxies throughout the universe, and there is a whole range of other things being studied using ATERUI too.

    Since installing ATERUI, it’s been used at over 90% of its capacity, in terms of the number of CPUs running at any given time. Basically, it’s almost full every single day!

    Don’t forget, we also have the K computer here in Japan. The National Astronomical Observatory of Japan, of which the CfCA is part, is actually one of the consortium members of the K supercomputer project. As such, we have plenty of researchers using that machine as well. High-end supercomputers like K are absolutely great, but it is also important to have middle-class supercomputers dedicated to specific research fields available.

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition


     
  • richardmitnick 4:49 pm on November 26, 2014 Permalink | Reply
    Tags: Supercomputing

    From isgtw: “HPC matters: Funding, collaboration, innovation” 


    international science grid this week

    November 26, 2014
    Amber Harmon

    This month US energy secretary Ernest Moniz announced the Department of Energy will spend $325m to research extreme-scale computing and build two new GPU-accelerated supercomputers. The goal: to put the nation on a fast track to exascale computing, and thereby lead scientific research that addresses challenging issues in government, academia, and industry.

    Horst Simon, deputy director of Lawrence Berkeley National Lab in California, US. Image courtesy Amber Harmon.

    Moniz also announced funding awards, totaling $100m, for partnerships with HPC companies developing exascale technologies under the FastForward 2 program managed by Lawrence Livermore National Laboratory in California, US.

    The combined spending comes at a critical juncture, as just last week the Organization for Economic Co-operation and Development (OECD) released its 2014 Science, Technology and Industry Outlook report. With research and development budgets in advanced economies not yet fully recovered from the 2008 economic crisis, China is on track to lead the world in R&D spending by 2019.

    The DOE-sponsored CORAL collaboration among the Oak Ridge, Argonne, and Lawrence Livermore national labs will ensure each is able to deploy a supercomputer expected to provide about five times the performance of today’s top systems.

    The Summit supercomputer will outperform Titan, the Oak Ridge Leadership Computing Facility’s (OLCF) current flagship system. Research pursuits include combustion and climate science, as well as energy storage and nuclear power. “Summit builds on the hybrid multi-core architecture that the OLCF pioneered with Titan,” says Buddy Bland, director of the Summit project.

    IBM Summit and Sierra supercomputers

    The other system, Sierra, will serve the National Nuclear Security Administration’s Advanced Simulation and Computing (ASC) program. “Sierra will allow us to begin laying the groundwork for exascale systems,” says Bob Meisner, ASC program head, “as the heterogeneous accelerated node architecture represents one of the most promising architectural paths.” Argonne is expected to finalize a contract for a system at a later date.

    IBM Sierra supercomputer

    The announcements came just ahead of the 2014 International Conference for High Performance Computing, Networking, Storage and Analysis (SC14). Also ahead of SC14, organizers launched the HPC Matters campaign and announced the first HPC Matters plenary, aimed at sharing real stories about how HPC makes an everyday difference.

    When asked why the US was pushing the HPC Matters initiative, conference advisor Wilfred Pinfold, director of research and advanced technology development at Intel Federal, focused on informing and educating a broader audience. “To a large extent, growth in the use of HPC — and the benefits that come from it — will develop as more people understand in detail those benefits.” Pinfold also noted the effort the US must make to continue to lead in HPC technology. “I think other countries are catching up and there is real competition ahead — all of which is good.”

    The HPC domain is in many ways defined by two sometimes opposing drives: the push of international collaborations to solve fundamental societal issues, and the pull of national security, innovation, and economic competitiveness — a point that Horst Simon, deputy director of Lawrence Berkeley National Lab in California, US, says we shouldn’t shy away from. Simon participated in an SC14 panel discussion of international funding strategies for HPC software, noting issues the discipline needs to overcome.

    “In principle all supercomputers are easily accessible worldwide. But while our openness as an international community in principle makes it easier, it is less of a necessity that we work out how to actually work together.” This results in very soft collaboration agreements, says Simon, that go nowhere without grassroots efforts by researchers who already have relationships and are interested in working together.

    According to Irene Qualters, division director of advanced cyberinfrastructure at the US National Science Foundation, expectations are increasing. “The community we support is not only multidisciplinary and highly internationally collaborative, but researchers expect their work to have broad societal impact.” Collective motivation is so strong, Qualters notes, that we’re moving away from a history of bilateral agreements. “The ability to do multilateral and broader umbrella agreements is an important efficiency that we’re poised for.”

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition


     
  • richardmitnick 5:32 pm on November 20, 2014 Permalink | Reply
    Tags: Supercomputing

    From NSF: “A deep dive into plasma” 

    National Science Foundation

    November 20, 2014
    No Writer Credit

    Renowned physicist uses NSF-supported supercomputer and visualization resources to gain insight into plasma dynamics

    Studying the intricacies and mysteries of the sun is physicist Wendell Horton’s life’s work. A widely known authority on plasma physics, he studies the high-temperature gases of the sun, or plasma, and that work consistently leads him around the world to a diverse range of high-impact projects.

    Fusion energy is one such key scientific issue that Horton is investigating and one that has intrigued researchers for decades.

    “Fusion energy involves the same thermonuclear reactions that take place on the sun,” Horton said. “Fusing two isotopes of hydrogen to create helium releases a tremendous amount of energy–10 times greater than that of nuclear fission.”

    It’s no secret that the demand for energy around the world is outpacing the supply. Fusion energy has tremendous potential. However, harnessing the power of the sun for this burgeoning energy source requires extensive work.

    Through the Institute for Fusion Studies at The University of Texas at Austin, Horton collaborates with researchers at ITER, a fusion lab in France, and at the National Institute for Fusion Science in Japan to address these challenges. At ITER, Horton is working with researchers to build the world’s largest tokamak–the device that is leading the way to produce fusion energy in the laboratory.

    ITER tokamak

    “Inside the tokamak, we inject 10 to 100 megawatts of power to recreate the conditions of burning hydrogen as it occurs in the sun,” Horton said. “Our challenge is confining the plasma, since temperatures are up to 10 times hotter than the center of the sun inside the machine.”

    Perfecting the design of the tokamak is essential to producing fusion energy, and since it is not fully developed, Horton performs supercomputer simulations on the Stampede supercomputer at the Texas Advanced Computing Center (TACC) to model plasma flow and turbulence inside the device.

    “Simulations give us information about plasma in three dimensions and in time, so that we are able to see details beyond what we would get with analytic theory and probes and high-tech diagnostic measurements,” Horton said.

    The simulations also give researchers a more holistic picture of what is needed to improve the tokamak design. Comparing simulations with fusion experiments in nuclear labs around the world helps Horton and other researchers move even closer to this breakthrough energy source.

    Plasma in the ionosphere

    Because the mathematical theories used to understand fusion reactions have numerous applications, Horton is also investigating space plasma physics, which has important implications in GPS communications.

    GPS signaling, a complex form of communication, relies on signal transmission from satellites in space, through the ionosphere, to GPS devices located on Earth.

    “The ionosphere is a layer of the atmosphere that is subject to solar radiation,” Horton explained. “Due to the sun’s high-energy solar radiation plasma wind, nitrogen and oxygen atoms are ionized, or stripped of their electrons, creating plasma gas.”

    These plasma structures can scatter signals sent between global navigation satellites and ground-based receivers resulting in a “loss-of-lock” and large errors in the data used for navigational systems.

    Most people who use GPS navigation have experienced “loss-of-lock,” an instance of system inaccuracy. Although this usually results in a minor inconvenience for the casual GPS user, it can be devastating for emergency response teams in disaster situations or where issues of national security are concerned.

    To better understand how plasma in the ionosphere scatters signals and affects GPS communications, Horton is modeling plasma turbulence as it occurs in the ionosphere on Stampede. He is also sharing this knowledge with research institutions in the United States and abroad including the UT Space and Geophysics Laboratory.

    Seeing is believing

    Although Horton is a long-time TACC partner and Stampede user, he only recently began using TACC’s visualization resources to gain deeper insight into plasma dynamics.

    “After partnering with TACC for nearly 10 years, Horton inquired about creating visualizations of his research,” said Greg Foss, TACC Research Scientist Associate. “I teamed up with TACC research scientist, Anne Bowen, to develop visualizations from the myriad of data Horton accumulated on plasmas.”

    Since plasma behaves similarly inside of a fusion-generating tokamak and in the ionosphere, Foss and Bowen developed visualizations representing generalized plasma turbulence. The team used Maverick, TACC’s interactive visualization and data analysis system, to create the visualizations, allowing Horton to see the full 3-D structure and dynamics of plasma for the first time in his 40-year career.

    This image visualizes the effect of gravity waves on an initially relatively stable rotating column of electron density, twisting into a turbulent vortex on the verge of complete chaotic collapse. These computer-generated graphics are visualizations of data from a simulation of plasma turbulence in Earth’s ionosphere. The same physics is also applied to the research team’s investigations of turbulence in the tokamak, a device used in nuclear fusion experiments. Credit: Visualization: Greg Foss, TACC; visualization software support: Anne Bowen and Greg Abram, TACC; science: Wendell Horton and Lee Leonard, University of Texas at Austin.

    “It was very exciting and revealing to see how complex these plasma structures really are,” said Horton. “I also began to appreciate how the measurements we get from laboratory diagnostics are not adequate enough to give us an understanding of the full three-dimensional plasma structure.”

    Word of the plasma visualizations soon spread and Horton received requests from physics researchers in Brazil and researchers at AMU in France to share the visualizations and work to create more. The visualizations were also presented at the XSEDE’14 Visualization Showcase and will be featured at the upcoming SC’14 conference.

    Horton plans to continue working with Bowen and Foss to learn even more about these complex plasma structures, allowing him to further disseminate knowledge nationally and internationally. It also proves that, no matter your experience level, it’s never too late to learn something new.
    — Makeda Easter, Texas Advanced Computing Center (512) 471-8217 makeda@tacc.utexas.edu
    — Aaron Dubrow, NSF (703) 292-4489 adubrow@nsf.gov

    Investigators
    Wendell Horton
    Daniel Stanzione

    Related Institutions/Organizations
    Texas Advanced Computing Center
    University of Texas at Austin

    Locations
    Austin, Texas

    Related Programs
    Leadership-Class System Acquisition – Creating a Petascale Computing Environment for Science and Engineering

    See the full article here.

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    The National Science Foundation (NSF) is an independent federal agency created by Congress in 1950 “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense…we are the funding source for approximately 24 percent of all federally supported basic research conducted by America’s colleges and universities. In many fields such as mathematics, computer science and the social sciences, NSF is the major source of federal backing.”


    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 4:46 pm on November 19, 2014 Permalink | Reply
    Tags: Supercomputing

    From LLNL: “Lawrence Livermore tops Graph 500” 


    Lawrence Livermore National Laboratory

    Nov. 19, 2014

    Don Johnston
    johnston19@llnl.gov
    925-784-3980

    Lawrence Livermore National Laboratory scientists’ search for new ways to solve large complex national security problems led to the top ranking on Graph 500 and new techniques for solving large graph problems on small high performance computing (HPC) systems, all the way down to a single server.

    “To fulfill our missions in national security and basic science, we explore different ways to solve large, complex problems, most of which include the need to advance data analytics,” said Dona Crawford, associate director for Computation at Lawrence Livermore. “These Graph 500 achievements are a product of that work performed in collaboration with our industry partners. Furthermore, these innovations are likely to benefit the larger scientific computing community.”

    Photo from left: Robin Goldstone, Dona Crawford and Maya Gokhale with the Graph 500 certificate. Missing is Scott Futral.

    Lawrence Livermore’s Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system, achieved the world’s best performance on the Graph 500 data analytics benchmark, announced Tuesday at SC14. LLNL and IBM computer scientists attained the No. 1 ranking by completing the largest problem scale ever attempted — scale 41 — with a performance of 23.751 teraTEPS (trillions of traversed edges per second). The team employed a technique developed by IBM.

    LLNL Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system

    The Graph 500 offers performance metrics for data intensive computing or ‘big data,’ an area of growing importance to the high performance computing (HPC) community.
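    The benchmark’s figure of merit, traversed edges per second (TEPS), is easy to demonstrate at toy scale: build a graph, run a breadth-first search from a random root, and divide the number of edge traversals by the elapsed time. The sketch below uses a plain random graph rather than the official Kronecker generator and is not the Graph 500 reference code.

```python
# Toy TEPS measurement: BFS over a random undirected graph, counting edge
# scans and dividing by wall-clock time. Not the official Graph 500 code.
import random
import time
from collections import defaultdict, deque

random.seed(0)
n_vertices, n_edges = 1 << 14, 1 << 18            # a tiny "scale 14" toy graph
adj = defaultdict(list)
for _ in range(n_edges):
    u, v = random.randrange(n_vertices), random.randrange(n_vertices)
    adj[u].append(v)
    adj[v].append(u)

root = random.randrange(n_vertices)
start = time.perf_counter()
visited, queue, traversed = {root}, deque([root]), 0
while queue:
    u = queue.popleft()
    for v in adj[u]:
        traversed += 1                            # every scanned edge counts
        if v not in visited:
            visited.add(v)
            queue.append(v)
elapsed = time.perf_counter() - start
print(f"{traversed / elapsed:.3e} TEPS on this toy graph")
```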

    In addition to achieving the top Graph 500 ranking, Lawrence Livermore computer scientists also have demonstrated scalable Graph 500 performance on small clusters and even a single node. To achieve these results, Livermore computational researchers have combined innovative research in graph algorithms and data-intensive runtime systems.

    Robin Goldstone, a member of LLNL’s HPC Advanced Technologies Office, said: “These are really exciting results that highlight our approach of leveraging HPC to solve challenging large-scale data science problems.”

    The results achieved demonstrate, at two different scales, the ability to solve very large graph problems on modest sized computing platforms by integrating flash storage into the memory hierarchy of these systems. Enabling technologies were provided through collaborations with Cray, Intel, Saratoga Speed and Mellanox.

    A scale-40 graph problem, containing 17.6 trillion edges, was solved on 300 nodes of LLNL’s Catalyst cluster. Catalyst, designed in partnership with Intel and Cray, augments a standard HPC architecture with additional capabilities targeted at data intensive computing. Each Catalyst compute node features 128 gigabytes (GB) of dynamic random access memory (DRAM) plus an additional 800 GB of high performance flash storage and uses the LLNL DI-MMAP runtime that integrates flash into the memory hierarchy. With the HavoqGT graph traversal framework, Catalyst was able to store and process the 217 TB scale-40 graph, a feat that is otherwise only achievable on the world’s largest supercomputers. The Catalyst run was No. 4 in size on the list.
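    DI-MMAP is a custom LLNL runtime, but the general idea of treating flash-backed data as part of the memory hierarchy can be imitated with ordinary memory mapping. The sketch below uses numpy.memmap to index an on-disk edge list as if it were an in-memory array, with the operating system paging data in on demand; the file name, sizes, and toy query are all invented.

```python
# Rough analogy to flash in the memory hierarchy: memory-map an on-disk edge
# list and index it like RAM, letting the OS page data in from storage.
# Plain numpy.memmap for illustration; DI-MMAP itself is a custom runtime.
import numpy as np

n_edges = 10_000_000                               # hypothetical edge list (~80 MB)
rng = np.random.default_rng(4)

edges = np.memmap("edges.dat", dtype=np.uint32, mode="w+", shape=(n_edges, 2))
edges[:] = rng.integers(0, 1_000_000, size=(n_edges, 2), dtype=np.uint32)
edges.flush()
del edges                                          # drop the writable mapping

# A traversal-style pass touches the file as if it were an in-memory array;
# only the pages actually accessed need to be resident in DRAM.
view = np.memmap("edges.dat", dtype=np.uint32, mode="r", shape=(n_edges, 2))
degree_of_zero = int(np.count_nonzero(view[:, 0] == 0))
print("out-degree of vertex 0 in the toy edge list:", degree_of_zero)
```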

    DI-MMAP and HavoqGT also were used to solve a smaller, but equally impressive, scale-37 graph problem on a single server with 50 TB of network-attached flash storage. The server, equipped with four Intel E7-4870 v2 processors and 2 TB of DRAM, was connected to two Altamont XP all-flash arrays from Saratoga Speed Inc., over a high bandwidth Mellanox FDR Infiniband interconnect. The other scale-37 entries on the Graph 500 list required clusters of 1,024 nodes or larger to process the 2.2 trillion edges.

    “Our approach really lowers the barrier of entry for people trying to solve very large graph problems,” said Roger Pearce, a researcher in LLNL’s Center for Applied Scientific Computing (CASC).

    “These results collectively demonstrate LLNL’s preeminence as a full service data intensive HPC shop, from single server to data intensive cluster to world class supercomputer,” said Maya Gokhale, LLNL principal investigator for data-centric computing architectures.

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration
    DOE Seal
    NNSA
    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 1:51 pm on November 18, 2014 Permalink | Reply
    Tags: Supercomputing

    From Scientific American: “Next Wave of U.S. Supercomputers Could Break Up Race for Fastest” 

    Scientific American

    November 17, 2014
    Alexandra Witze and Nature magazine

    Once locked in an arms race with each other for the fastest supercomputers, US national laboratories are now banding together to buy their next-generation machines.

    On November 14, the Oak Ridge National Laboratory (ORNL) in Tennessee and the Lawrence Livermore National Laboratory in California announced that they will each acquire a next-generation IBM supercomputer that will run at up to 150 petaflops. That means that the machines can perform 150 million billion floating-point operations per second, at least five times as fast as the current leading US supercomputer, the Titan system at the ORNL.

    Cray Titan supercomputer at ORNL

    The new supercomputers, which together will cost $325 million, should enable new types of science for thousands of researchers who model everything from climate change to materials science to nuclear-weapons performance.

    “There is a real importance of having the larger systems, and not just to do the same problems over and over again in greater detail,” says Julia White, manager of a grant program that awards supercomputing time at the ORNL and Argonne National Laboratory in Illinois. “You can actually take science to the next level.” For instance, climate modellers could use the faster machines to link together ocean and atmospheric-circulation patterns in a regional simulation to get a much more accurate picture of how hurricanes form.

    A learning experience

    Building the most powerful supercomputers is a never-ending race. Almost as soon as one machine is purchased and installed, lab managers begin soliciting bids for the next one. Vendors such as IBM and Cray use these competitions to develop the next generation of processor chips and architectures, which shapes the field of computing more generally.

    In the past, the US national labs pursued separate paths to these acquisitions. Hoping to streamline the process and save money, clusters of labs have now joined together to put out a shared call — even those that perform classified research, such as Livermore. “Our missions differ, but we share a lot of commonalities,” says Arthur Bland, who heads the ORNL computing facility.

    In June, after the first such coordinated bid, Cray agreed to supply one machine to a consortium from the Los Alamos and Sandia national labs in New Mexico, and another to the National Energy Research Scientific Computing (NERSC) Center at the Lawrence Berkeley National Laboratory in Berkeley, California. Similarly, the ORNL and Livermore have banded together with Argonne.

    The joint bids have been a learning experience, says Thuc Hoang, programme manager for high-performance supercomputing research and operations with the National Nuclear Security Administration in Washington DC, which manages Los Alamos, Sandia and Livermore. “We thought it was worth a try,” she says. “It requires a lot of meetings about which requirements are coming from which labs and where we can make compromises.”

    At the moment, the world’s most powerful supercomputer is the 55-petaflop Tianhe-2 machine at the National Super Computer Center in Guangzhou, China. Titan is second, at 27 petaflops. An updated ranking of the top 500 supercomputers will be announced on November 18 at the 2014 Supercomputing Conference in New Orleans, Louisiana.

    When the new ORNL and Livermore supercomputers come online in 2018, they will almost certainly vault to near the top of the list, says Barbara Helland, facilities-division director of the Advanced Scientific Computing Research program at the Department of Energy (DOE) Office of Science in Washington DC.

    But more important than rankings is whether scientists can get more performance out of the new machines, says Sudip Dosanjh, director of the NERSC. “They’re all being inundated with data,” he says. “People have a desperate need to analyse that.”

    A better metric than pure calculating speed, Dosanjh says, is how much better computing codes perform on a new machine. That is why the latest machines were selected not on total speed but on how well they will meet specific computing benchmarks.

    Dual paths

    The new supercomputers, to be called Summit and Sierra, will be structurally similar to the existing Titan supercomputer. They will combine two types of processor chip: central processing units, or CPUs, which handle the bulk of everyday calculations, and graphics processing units, or GPUs, which generally handle three-dimensional computations. Combining the two means that a supercomputer can direct the heavy work to GPUs and operate more efficiently overall. And because the ORNL and Livermore will have similar machines, computer managers should be able to share lessons learned and ways to improve performance, Helland says.

    Still, the DOE wants to preserve a little variety. The third lab of the trio, Argonne, will be making its announcement in the coming months, Helland says, but it will use a different architecture from the combined CPU–GPU approach. It will almost certainly be like Argonne’s current IBM machine, which uses a lot of small but identical processors networked together. The latter approach has been popular for biological simulations, Helland says, and so “we want to keep the two different paths open”.

    Ultimately, the DOE is pushing towards supercomputers that could work at the exascale, or 1,000 times more powerful than the current petascale. Those are expected around 2023. But the more power the DOE labs acquire, the more scientists seem to want, says Katie Antypas, head of the services department at the NERSC.

    “There are entire fields that didn’t used to have a computational component to them,” such as genomics and bioimaging, she says. “And now they are coming to us asking for help.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Icon

    STEM Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     
  • richardmitnick 5:39 pm on November 12, 2014 Permalink | Reply
    Tags: Supercomputing

    From LBL: “Latest Supercomputers Enable High-Resolution Climate Models, Truer Simulation of Extreme Weather” 

    Berkeley Lab

    November 12, 2014
    Julie Chao (510) 486-6491

    Not long ago, it would have taken several years to run a high-resolution simulation on a global climate model. But using some of the most powerful supercomputers now available, Lawrence Berkeley National Laboratory (Berkeley Lab) climate scientist Michael Wehner was able to complete a run in just three months.

    What he found was that not only were the simulations much closer to actual observations, but the high-resolution models were far better at reproducing intense storms, such as hurricanes and cyclones. The study, “The effect of horizontal resolution on simulation quality in the Community Atmospheric Model, CAM5.1,” has been published online in the Journal of Advances in Modeling Earth Systems.

    “I’ve been calling this a golden age for high-resolution climate modeling because these supercomputers are enabling us to do gee-whiz science in a way we haven’t been able to do before,” said Wehner, who was also a lead author for the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). “These kinds of calculations have gone from basically intractable to heroic to now doable.”

    Michael Wehner, Berkeley Lab climate scientist

    Using version 5.1 of the Community Atmospheric Model, developed by the Department of Energy (DOE) and the National Science Foundation (NSF) for use by the scientific community, Wehner and his co-authors conducted an analysis for the period 1979 to 2005 at three spatial resolutions: 25 km, 100 km, and 200 km. They then compared those results to each other and to observations.

    One simulation generated 100 terabytes of data, or 100,000 gigabytes. The computing was performed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility. “I’ve literally waited my entire career to be able to do these simulations,” Wehner said.


    The higher resolution was particularly helpful in mountainous areas, since the models take an average of the altitude within each grid cell (cells roughly 25 km on a side for the high-resolution run versus 200 km for the low-resolution run). With more accurate representation of mountainous terrain, the higher-resolution model is better able to simulate snow and rain in those regions.
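    The effect of grid spacing on terrain is easy to see with a toy example: block-averaging the same synthetic one-dimensional mountain profile onto roughly 25 km and 200 km cells, in the spirit of what the model grids do to real topography. The numbers below are invented for illustration.

```python
# Toy illustration of grid resolution over mountains: block-average a
# synthetic topography profile onto ~25 km and ~200 km cells. The coarse
# grid flattens the peak the model "sees". Illustrative numbers only.
import numpy as np

x = np.linspace(0.0, 1000.0, 1000)                  # ~1 km sample spacing
topo = 2000.0 * np.exp(-((x - 500.0) / 40.0) ** 2)  # a narrow 2000 m ridge

def block_average(field, cells_per_block):
    """Average consecutive samples into coarse grid cells."""
    return field.reshape(-1, cells_per_block).mean(axis=1)

fine = block_average(topo, 25)                      # ~25 km grid cells
coarse = block_average(topo, 200)                   # ~200 km grid cells
print(f"peak height on ~25 km grid:  {fine.max():.0f} m")
print(f"peak height on ~200 km grid: {coarse.max():.0f} m")
```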

    “High resolution gives us the ability to look at intense weather, like hurricanes,” said Kevin Reed, a researcher at the National Center for Atmospheric Research (NCAR) and a co-author on the paper. “It also gives us the ability to look at things locally at a lot higher fidelity. Simulations are much more realistic at any given place, especially if that place has a lot of topography.”

    The high-resolution model produced stronger storms and more of them, which was closer to the actual observations for most seasons. “In the low-resolution models, hurricanes were far too infrequent,” Wehner said.

    The IPCC chapter on long-term climate change projections for which Wehner was a lead author concluded that a warming world will cause some areas to be drier and others to see more rainfall, snow, and storms. Extremely heavy precipitation was projected to become even more extreme in a warmer world. “I have no doubt that is true,” Wehner said. “However, knowing it will increase is one thing, but having a confident statement about how much and where as a function of location requires the models do a better job of replicating observations than they have.”

    Wehner says the high-resolution models will help scientists to better understand how climate change will affect extreme storms. His next project is to run the model for a future-case scenario. Further down the line, Wehner says scientists will be running climate models with 1 km resolution. To do that, they will have to have a better understanding of how clouds behave.

    “A cloud system-resolved model can reduce one of the greatest uncertainties in climate models, by improving the way we treat clouds,” Wehner said. “That will be a paradigm shift in climate modeling. We’re at a shift now, but that is the next one coming.”

    The paper’s other co-authors include Fuyu Li, Prabhat, and William Collins of Berkeley Lab; and Julio Bacmeister, Cheng-Ta Chen, Christopher Paciorek, Peter Gleckler, Kenneth Sperber, Andrew Gettelman, and Christiane Jablonowski from other institutions. The research was supported by the Biological and Environmental Division of the Department of Energy’s Office of Science.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

    ScienceSprings relies on technology from

    MAINGEAR computers

    Lenovo

    Dell

     