Tagged: Supercomputing

  • richardmitnick 9:08 pm on March 31, 2014 Permalink | Reply
    Tags: Supercomputing

    From Argonne Lab via PPPL: “Plasma Turbulence Simulations Reveal Promising Insight for Fusion Energy” 

    March 31, 2014
    By Argonne National Laboratory

    With the potential to provide clean, safe, and abundant energy, nuclear fusion has been called the “holy grail” of energy production. But harnessing energy from fusion, the process that powers the sun, has proven to be an extremely difficult challenge.

    Simulation of microturbulence in a tokamak fusion device. (Credit: Chad Jones and Kwan-Liu Ma, University of California, Davis; Stephane Ethier, Princeton Plasma Physics Laboratory)

    Scientists have been working to accomplish efficient, self-sustaining fusion reactions for decades, and significant research and development efforts continue in several countries today.

    For one such effort, researchers from the Princeton Plasma Physics Laboratory (PPPL), a DOE collaborative national center for fusion and plasma research in New Jersey, are running large-scale simulations at the Argonne Leadership Computing Facility (ALCF) to shed light on the complex physics of fusion energy. Their most recent simulations on Mira, the ALCF’s 10-petaflops Blue Gene/Q supercomputer, revealed that turbulent losses in the plasma are not as large as previously estimated.

    Good news

    This is good news for the fusion research community as plasma turbulence presents a major obstacle to attaining an efficient fusion reactor in which light atomic nuclei fuse together and produce energy. The balance between fusion energy production and the heat losses associated with plasma turbulence can ultimately determine the size and cost of an actual reactor.

    “Understanding and possibly controlling the underlying physical processes is key to achieving the efficiency needed to ensure the practicality of future fusion reactors,” said William Tang, PPPL principal research physicist and project lead.

    Tang’s work at the ALCF is focused on advancing the development of magnetically confined fusion energy systems, especially ITER, a multi-billion dollar international burning plasma experiment supported by seven governments including the United States.

    Currently under construction in France, ITER will be the world’s largest tokamak system, a device that uses strong magnetic fields to contain the burning plasma in a doughnut-shaped vacuum vessel. In tokamaks, unavoidable variations in the plasma’s ion temperature drive microturbulence, which can significantly increase the transport rate of heat, particles, and momentum across the confining magnetic field.

    “Simulating tokamaks of ITER’s physical size could not be done with sufficient accuracy until supercomputers as powerful as Mira became available,” said Tang.

    To prepare for the architecture and scale of Mira, Tim Williams of the ALCF worked with Tang and colleagues to benchmark and optimize their Gyrokinetic Toroidal Code – Princeton (GTC-P) on the ALCF’s new supercomputer. This allowed the research team to perform the first simulations of multiscale tokamak plasmas with very high phase-space resolution and long temporal duration. They are simulating a sequence of tokamak sizes up to and beyond the scale of ITER to validate the turbulent losses for large-scale fusion energy systems.

    Decades of experiments

    Decades of experimental measurements and theoretical estimates have shown turbulent losses to increase as the size of the experiment increases; this phenomenon occurs in the so-called Bohm regime. However, when tokamaks reach a certain size, it has been predicted that there will be a turnover point into a Gyro-Bohm regime, where the losses level off and become independent of size. For ITER and other future burning plasma experiments, it is important that the systems operate in this Gyro-Bohm regime.
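
    For readers who want the two regimes in symbols, the standard scalings (a textbook-level sketch, not drawn from the article itself) are usually written in terms of the ion heat diffusivity χ:

    $$ \chi_{\mathrm{Bohm}} \sim \frac{T}{eB}, \qquad \chi_{\mathrm{gB}} \sim \rho_* \, \chi_{\mathrm{Bohm}}, \qquad \rho_* \equiv \frac{\rho_i}{a} $$

    Here "gB" stands for gyro-Bohm, T is the ion temperature, B the magnetic field strength, ρi the ion gyroradius, and a the minor radius of the plasma. Simulations of this kind typically report heat losses normalized to the gyro-Bohm value: in the Bohm regime that normalized transport grows as the device gets larger, while in the gyro-Bohm regime it saturates at a size-independent level, which is the favorable behavior ITER is counting on.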

    The recent simulations on Mira led the PPPL researchers to discover that the magnitude of turbulent losses in the Gyro-Bohm regime is up to 50% lower than indicated by earlier simulations carried out at much lower resolution and significantly shorter duration. The team also found that the transition from the Bohm regime to the Gyro-Bohm regime is much more gradual as the plasma size increases. With a clearer picture of the shape of the transition curve, scientists can better understand the basic plasma physics involved in this phenomenon.

    “Determining how turbulent transport and associated confinement characteristics will scale to the much larger ITER-scale plasmas is of great interest to the fusion research community,” said Tang. “The results will help accelerate progress in worldwide efforts to harness the power of nuclear fusion as an alternative to fossil fuels.”

    This project has received computing time at the ALCF through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. The effort was also awarded pre-production time on Mira through the ALCF’s Early Science Program, which allowed researchers to pursue science goals while preparing their GTC-P code for Mira.

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 11:35 am on March 31, 2014 Permalink | Reply
    Tags: Supercomputing

    From Brookhaven Lab: “Generations of Supercomputers Pin Down Primordial Plasma” 

    March 31, 2014
    Justin Eure

    As one groundbreaking IBM system retires, a new Blue Gene supercomputer comes online at Brookhaven Lab to help precisely model subatomic interactions

    Brookhaven Lab physicists Peter Petreczky and Chulwoo Jung with technology architect Joseph DePace—who oversees operations and maintenance of the Lab’s supercomputers—in front of the Blue Gene/Q supercomputer.

    Supercomputers are constantly evolving to meet the increasing complexity of calculations ranging from global climate models to cosmic inflation. The bigger the puzzle, the more scientists and engineers push the limits of technology forward. Imagine, then, the advances driven by scientists seeking the code behind our cosmos.

    This mutual push and pull of basic science and technology plays out every day among physicists at the U.S. Department of Energy’s Brookhaven National Laboratory. The Lab’s Lattice Gauge Theory Group—led by physicist Frithjof Karsch—hunts for equations to describe the early universe and the forces binding matter together. Their search spans generations of supercomputers and parallels studies of the primordial plasma discovered and explored at Brookhaven’s Relativistic Heavy Ion Collider (RHIC).

    “You need more than just pen and paper to recreate the quantum-scale chemistry unfolding at the foundations of matter—you need supercomputers,” said Brookhaven Lab physicist Peter Petreczky. “The racks of IBM’s Blue Gene/L hosted here just retired after six groundbreaking years, but the cutting-edge Blue Gene/Q is now online to keep pushing nuclear physics forward.”

    Equations to Describe the Dawn of Time

    When RHIC smashes gold ions together at nearly the speed of light, the trillion-degree collisions melt the protons inside each atom. The quarks and gluons inside then break free for a fraction of a second, mirroring the ultra-hot conditions of the universe just microseconds after the Big Bang. This remarkable matter, called quark-gluon plasma, surprised physicists by exhibiting zero viscosity—it behaved like a perfect, friction-free liquid. But this raised new questions: how and why?

    Cosmic Microwave Background by ESA/Planck

    Armed with the right equations of state, scientists can begin to answer that question and model that perfect plasma at each instant. This very real quest revolves in part around the very artificial: computer simulations.

    “If our equations are accurate, the laws of physics hold up through the simulations and we gain a new and nuanced vocabulary to characterize and predict truly fundamental interactions,” Karsch said. “If we’re wrong, the simulation produces something very different from reality. We’re in the business of systematically eliminating uncertainties.”

    Building a Quantum Grid

    Quantum chromodynamics (QCD) is the theoretical framework that describes these particle interactions on the subatomic scale. But even the most sophisticated computer can’t replicate the full QCD complexity that plays out in reality.

    “To split that sea of information into discrete pieces, physicists developed a four-dimensional grid of space-time points called the lattice,” Petreczky said. “We increase the density of this lattice as technology evolves, because the closer we pack our lattice-bound particles, the closer we approximate reality.”

    Imagine a laser grid projected into a smoke-filled room, transforming that swirling air into individual squares. Each intersection in that grid represents a data point that can be used to simulate the flow of the actual smoke. In fact, scientists use this same lattice-based approximation in fields as diverse as climate science and nuclear fusion.

    As QCD scientists incorporated more and more subatomic details into an ever-denser grid—including the full range of quark and gluon types—the mathematical demands leapt exponentially.
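
    To get a rough feel for that growth, here is a minimal back-of-the-envelope sketch (my own illustration, not the group's production code) that simply counts space-time points for the spatial grid sizes quoted in this article; the temporal extent Nt is an assumed placeholder, since the article only gives points per spatial direction:

    ```python
    # Illustrative only: count the space-time points of an N^3 x Nt lattice
    # for the spatial grid sizes mentioned in the article (16, 32, 64).
    # Nt is an assumed temporal extent, not a figure from the article.

    def lattice_sites(n_spatial: int, n_temporal: int) -> int:
        """Number of points in a 4D lattice with n_spatial^3 spatial sites."""
        return n_spatial ** 3 * n_temporal

    Nt = 16  # hypothetical temporal extent, for illustration only
    for n in (16, 32, 64):
        print(f"{n:>2} points per direction -> {lattice_sites(n, Nt):>12,} sites")

    # Doubling the spatial resolution multiplies the site count by 8; the real
    # cost of a lattice QCD calculation climbs far faster, since finer grids
    # also demand more statistics and slower-converging solvers.
    ```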

    QCD on a Chip

    Physicist Norman Christ, a Columbia University professor and frequent Brookhaven Lab collaborator, partnered with supercomputing powerhouse IBM to tackle the unprecedented hardware challenge for QCD simulations. The new system would need a relatively small physical footprint, good temperature control, and a combination of low power and high processor density.

    The result was the groundbreaking QCDOC, or QuantumChromoDynamics On a Chip. QCDOC came online in 2004 with a processing power of 10 teraflops, or 10 trillion floating-point operations per second, a common measure of supercomputer performance.

    “The specific needs of Christ and his collaborators actually revolutionized and rejuvenated supercomputing in this country,” said physicist Berndt Mueller, who leads Brookhaven Lab’s Nuclear and Particle Physics directorate. “The new architecture developed for QCD simulations was driven by these fundamental physics questions. That group laid the foundation for generations of IBM supercomputers that routinely rank among the world’s most powerful.”

    Generations of Giants

    The first QCDOC simulations featured lattices with 16 points in each spatial direction—a strong starting point and testing ground for QCD hypotheses, but a far cry from definitive. Building on QCDOC, IBM launched its Blue Gene series of supercomputers. In fact, the chief architect for all three generations of these highly scalable, general-purpose machines was physicist Alan Gara, who did experimental work at Fermilab [Tevatron] and CERN’s Large Hadron Collider before being recruited by IBM.

    “We had the equation of state for quark-gluon plasma prepared for publication in 2007 based on QCDOC calculations,” Petreczky said, “but it was not as accurate as we hoped. Additional work on the newly installed Blue Gene/L gave us confidence that we were on the right track.”

    The New York Blue system—led by Stony Brook University and Brookhaven Lab with funding from New York State—added 18 racks of Blue Gene/L and two racks of Blue Gene/P in 2007. This 100-teraflop boost doubled the QCD model density to 32 lattice points and ran simulations some 10 million times more complex. Throughout this period, lattice theorists also used Blue Gene supercomputers at DOE’s Argonne and Lawrence Livermore national labs.

    The 600-teraflop Blue Gene/Q came online at Brookhaven Lab in 2013, packing the processing power of 18 racks of Blue Gene/P into just three racks. This new system signaled the end for Blue Gene/L, which went offline in January 2014. Both QCDOC and Blue Gene/Q were developed in close partnership with RIKEN, a leading Japanese research institution.

    “Exciting as it is, moving across multiple systems is also a bit of a headache,” Petreczky said. “Before we get to the scientific simulations, there’s a long transition period and a tremendous amount of code writing. Chulwoo Jung, one of our group members, takes on a lot of that crucial coding.”

    Pinning Down Fundamental Fabric

    Current simulations of QCD matter feature 64 spatial lattice points in each direction, allowing physicists an unprecedented opportunity to map the quark-gluon plasma created at RHIC and explore the strong nuclear force. The Lattice Gauge Theory collaboration continues to run simulations and plans to extend the equations of state to cover all the energy levels achieved at both RHIC and the Large Hadron Collider at CERN.

    The equations already ironed out by Brookhaven’s theorists apply to everything from RHIC’s friction-free superfluid to physics beyond the standard model—including the surprising spin of muons in the g-2 experiment and rare meson decays at Fermilab.

    “This is the beauty of pinning down fundamental interactions: the foundations of matter are literally universal,” Petreczky said. “And only a few groups in the world are describing this particular aspect of our universe.”

    Additional Brookhaven Lab lattice theorists include Michael Creutz, Christoph Lehner, Taku Izubuchi, Swagato Mukherjee, and Amarjit Soni.

    The Brookhaven Computational Science Center (CSC) hosts the IBM Blue Gene supercomputers and Intel clusters used by scientists across the Lab. The CSC brings together researchers in biology, chemistry, physics and medicine with applied mathematicians and computer scientists to take advantage of the new opportunities for scientific discovery made possible by modern computers. The CSC is supported by DOE’s Office of Science.

    See the full article here.

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 9:05 am on October 22, 2013 Permalink | Reply
    Tags: Supercomputing

    From D.O.E. Pulse: “A toolbox to simulate the Big Bang and beyond” 

    October 14, 2013
    Submitted by DOE’s Fermilab

    The universe is a vast and mysterious place, but thanks to high-performance computing technology scientists around the world are beginning to understand it better. They are using supercomputers to simulate how the Big Bang generated the seeds that led to the formation of galaxies such as the Milky Way.

    Courtesy of Ralf Kaehler and Tom Abel (visualization); John Wise and Tom Abel (numeric simulation).

    A new project involving DOE’s Argonne Lab, Fermilab and Berkeley Lab will allow scientists to study this vastness in greater detail with a new cosmological simulation analysis toolbox.

    Modeling the universe with a computer is very difficult, and the output of those simulations is typically very large. By anyone’s standards, this is “big data,” as each of these data sets can require hundreds of terabytes of storage space. Efficient storage and sharing of these huge data sets among scientists is paramount. Many different scientific analyses and processing sequences are carried out with each data set, making it impractical to rerun the simulations for each new study.

    This past year Argonne Lab, Fermilab and Berkeley Lab began a unique partnership on an ambitious advanced-computing project. Together the three labs are developing a new, state-of-the-art cosmological simulation analysis toolbox that takes advantage of DOE’s investments in supercomputers and specialized high-performance computing codes. Argonne’s team is led by Salman Habib, principal investigator, and Ravi Madduri, system designer. Jim Kowalkowski and Richard Gerber are the team leaders at Fermilab and Berkeley Lab.

    See the full article here.

    DOE Pulse highlights work being done at the Department of Energy’s national laboratories. DOE’s laboratories house world-class facilities where more than 30,000 scientists and engineers perform cutting-edge research spanning DOE’s science, energy, national security and environmental quality missions. DOE Pulse is distributed twice each month.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 6:02 pm on October 2, 2013 Permalink | Reply
    Tags: Supercomputing

    From isgtw: “Preparing for tomorrow’s big data” 

    Until recently, the large CERN experiments, ATLAS and CMS, owned and controlled the computing infrastructure they ran on in the US, and accessed data only when it was available locally on that hardware. However, explains [Frank] Würthwein of UC San Diego, with data-taking rates set to increase dramatically by the end of LS1 in 2015, the current operational model can no longer satisfy peak processing needs. Instead, he argues, large-scale processing centers need to be created dynamically to cope with spikes in demand. To this end, Würthwein and colleagues carried out a successful proof-of-concept study in which the Gordon Supercomputer at the San Diego Supercomputer Center was dynamically and seamlessly integrated into the CMS production system to process a 125-terabyte data set.

    SDSC’s Gordon Supercomputer. Photo: Alan Decker. Gordon is part of the National Science Foundation’s (NSF) Extreme Science and Engineering Discovery Environment, or XSEDE program, a nationwide partnership comprising 16 supercomputers and high-end visualization and data analysis resources.

    See the full article here.

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 10:03 am on September 20, 2013 Permalink | Reply
    Tags: Supercomputing

    From Brookhaven Lab: “Supercomputing the Transition from Ordinary to Extraordinary Forms of Matter” 

    September 18, 2013
    Karen McNulty Walsh

    Calculations plus experimental data help map nuclear phase diagram, offering insight into transition that mimics formation of visible matter in universe today

    To get a better understanding of the subatomic soup that filled the early universe, and how it “froze out” to form the atoms of today’s world, scientists are taking a closer look at the nuclear phase diagram. Like a map that describes how the physical state of water morphs from solid ice to liquid to steam with changes in temperature and pressure, the nuclear phase diagram maps out different phases of the components of atomic nuclei—from the free quarks and gluons that existed at the dawn of time to the clusters of protons and neutrons that make up the cores of atoms today.

    But “melting” atoms and their subatomic building blocks is far more difficult than taking an ice cube out of the freezer on a warm day. It requires huge particle accelerators like the Relativistic Heavy Ion Collider, a nuclear physics scientific user facility at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, to smash atomic nuclei together at close to the speed of light, and sophisticated detectors and powerful supercomputers to help physicists make sense of what comes out. By studying the collision debris and comparing those experimental observations with predictions from complex calculations, physicists at Brookhaven are plotting specific points on the nuclear phase diagram to reveal details of this extraordinary transition and other characteristics of matter created at RHIC.

    Nuclear Phase Diagram: This diagram maps out the different phases of nuclear matter physicists expect to exist at a range of high temperatures and densities, but the lines on this map are just a guess. Experiments have detected fluctuations in particle production that hint at where the lines might be; supercomputing calculations are helping to pin down the data points so scientists can make a more accurate map of the transition from the hadrons that make up ordinary atomic nuclei to the quark-gluon plasma of the early universe. The Relativistic Heavy Ion Collider at Brookhaven National Laboratory (RHIC) sits in the “sweet spot” for studying this transition and for detecting a possible critical point (yellow dot) at which the transition changes from continuous to discontinuous. No image credit.

    “At RHIC’s top energy, where we know we’ve essentially ‘melted’ the protons and neutrons to produce a plasma of quarks and gluons—similar to what existed some 13.8 billion years ago—protons and antiprotons are produced in nearly equal amounts,” said Frithjof Karsch, a theoretical physicist mapping out this new terrain. “But as you go to lower energies, where a denser quark soup is produced, we expect to see more protons than antiprotons, with the excess number of protons fluctuating from collision to collision.”

    By looking at millions of collision events over a wide range of energies—essentially conducting a beam energy scan—RHIC’s detectors can pick up the fluctuations as likely signatures of the transition. But they can’t measure precisely the temperatures or densities at which those fluctuations were produced—the data you need to plot points on the phase diagram map.

    “That’s where the supercomputers come in,” says Karsch.

    And, this is where we leave it to the professionals. See the full article here.

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 6:33 pm on April 11, 2013 Permalink | Reply
    Tags: Supercomputing

    From ALMA: “Virtual Observatories: Chilean Development of Astronomical Computing for ALMA” 

    The Atacama Large Millimeter/submillimeter Array (ALMA)

    “The massive amounts of data generated by astronomical observatories located in Chile have created new development opportunities to meet the need for analytical tools, and underscore the need for innovation in new fields such as astronomical data management, astronomical engineering and astronomical computing.

    ALMA Data Center

    One important contribution of astronomical computing, particularly in relation to the ALMA observatory, is the project entitled, Development of an astronomical computing platform for large-scale data management and intelligent analysis. This project is being led by Mauricio Solar of the Computer Engineering Department at the Universidad Técnica Federico Santa María (UTFSM), and will be presented this Friday, April 12 at noon in the main auditorium of the university’s campus in Valparaíso.

    ‘This development is a key step toward the creation of an ALMA regional center in Chile,’ said Jorge Ibsen, Head of the ALMA Computing Department, ‘because it creates the computer tools and skills needed for this purpose.’

    Once the Atacama Large Millimeter/submillimeter Array (ALMA) is functioning at full capacity, it will generate more than 750 GB of data each day (on the order of 250 TB/year). Chilean astronomers will need to be able to connect at a high data transmission speed, archive data that requires massive storage capacity, and analyze these data. The ultimate goal of the project is to implement a virtual observatory, that is, an open platform for virtual access to the large number of observations made by ALMA.

    The project is not only focused on data storage, but also on processing data intelligently to generate quality information that is useful to astronomers and the community in general. How can algorithms be designed to classify unusual or unknown objects discovered in ALMA observations? How can they be designed to monitor moving sources, such as asteroids and comets, or to follow the motions of the stars themselves? These are some examples of the challenges faced today, and there is also a need to prepare for the future needs of astronomical projects.”
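
    A quick sanity check of the data-volume figures quoted above (my own arithmetic, using only the numbers in the article):

    ```python
    # Check that 750 GB per day is indeed "on the order of 250 TB/year".
    gb_per_day = 750
    tb_per_year = gb_per_day * 365 / 1000  # decimal (SI) terabytes
    print(f"{gb_per_day} GB/day is about {tb_per_year:.0f} TB/year")
    # ~274 TB/year, i.e. a few hundred terabytes annually, consistent with
    # the order-of-magnitude figure given in the article.
    ```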

    See the full article here.

    The Atacama Large Millimeter/submillimeter Array (ALMA), an international partnership of Europe, North America and East Asia in cooperation with the Republic of Chile, is the largest astronomical project in existence. ALMA will be a single telescope of revolutionary design, composed initially of 66 high-precision antennas located on the Chajnantor plateau at 5,000 meters altitude in northern Chile.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 1:40 pm on March 21, 2013 Permalink | Reply
    Tags: Supercomputing

    From JPL at Caltech: “Supercomputer Helps Planck Mission Expose Ancient Light” 

    Whitney Clavin
    March 21, 2013

    “Like archeologists carefully digging for fossils, scientists with the [ESA] Planck mission are sifting through cosmic clutter to find the most ancient light in the universe.

    The Planck space telescope has created the most precise sky map ever made of the oldest light known, harking back to the dawn of time. This light, called the cosmic microwave background, has traveled 13.8 billion years to reach us. It is so faint that Planck observes every point on the sky an average of 1,000 times to pick up its glow…

    The Planck view of Cosmic Background Radiation

    … Complicating matters further is “noise” from the Planck detectors that must be taken into account.

    That’s where a supercomputer helps out. Supercomputers are the fastest computers in the world, performing massive amounts of calculations in a short amount of time.

    ‘So far, Planck has made about a trillion observations of a billion points on the sky,’ said Julian Borrill of the Lawrence Berkeley National Laboratory, Berkeley, Calif. ‘Understanding this sheer volume of data requires a state-of-the-art supercomputer.’

    Planck is a European Space Agency mission, with significant contributions from NASA. Under a unique agreement between NASA and the Department of Energy, Planck scientists have been guaranteed access to the supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center at the Lawrence Berkeley National Laboratory. The bulk of the computations for this data release were performed on the Cray XE6 system, called Hopper. This computer makes more than a quadrillion calculations per second, placing it among the fastest in the world.

    Noise must be understood, and corrected for, at each of the billion points observed repeatedly by Planck as it continuously sweeps across the sky. The supercomputer accomplishes this by running simulations of how Planck would observe the entire sky under different conditions, allowing the team to identify and isolate the noise.

    Another challenge is carefully teasing apart the signal of the relic radiation from the material lying in the foreground. It’s a big mess, as some astronomers might say, but one that a supercomputer can handle.

    The computations needed for Planck’s current data release required more than 10 million processor-hours on the Hopper computer. Fortunately, the Planck analysis codes run on tens of thousands of processors in the supercomputer at once, so this only took a few weeks.
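
    Those two figures hang together; here is a minimal check (my arithmetic, with an assumed core count, since the article says only "tens of thousands of processors"):

    ```python
    # Sanity check: 10 million processor-hours spread over tens of thousands
    # of cores really does come out to "a few weeks" of wall-clock time.
    processor_hours = 10_000_000
    cores = 30_000                  # assumed stand-in for "tens of thousands"
    wall_hours = processor_hours / cores
    print(f"~{wall_hours:.0f} hours, or about {wall_hours / (24 * 7):.1f} weeks")
    # ~333 hours, roughly two weeks of continuous running at that concurrency.
    ```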

    Planck is a European Space Agency mission, with significant participation from NASA. NASA’s Planck Project Office is based at JPL. JPL, a division of the California Institute of Technology, Pasadena, contributed mission-enabling technology for both of Planck’s science instruments. European, Canadian and U.S. Planck scientists work together to analyze the Planck data.

    Jet Propulsion Laboratory (JPL) is a federally funded research and development center and NASA field center located in the San Gabriel Valley area of Los Angeles County, California, United States. Although the facility has a Pasadena postal address, it is actually headquartered in the city of La Cañada Flintridge [1], on the northwest border of Pasadena. JPL is managed by the nearby California Institute of Technology (Caltech) for the National Aeronautics and Space Administration. The Laboratory’s primary function is the construction and operation of robotic planetary spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA’s Deep Space Network.


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 8:20 pm on March 14, 2013 Permalink | Reply
    Tags: NERSC, Supercomputing

    From Berkeley Lab: “Building the Massive Simulation Sets Essential to Planck Results” 


    Using NERSC supercomputers, Berkeley Lab scientists generate thousands of simulations to analyze the flood of data from the Planck mission

    March 14, 2013
    Paul Preuss

    “To make the most precise measurement yet of the cosmic microwave background (CMB) – the remnant radiation from the big bang – the European Space Agency’s (ESA’s) Planck satellite mission has been collecting trillions of observations of the sky since the summer of 2009. On March 21, 2013, ESA and NASA, a major partner in Planck, will release preliminary cosmology results based on Planck’s first 15 months of data. The results have required the intense creative efforts of a large international collaboration, with significant participation by the U.S. Planck Team based at NASA’s Jet Propulsion Laboratory (JPL).

    From left, Reijo Keskitalo, Aaron Collier, Julian Borrill, and Ted Kisner of the Computational Cosmology Center with some of the many thousands of simulations for Planck Full Focal Plane 6. (Photo by Roy Kaltschmidt)

    ‘NERSC supports the entire international Planck effort,’ says Julian Borrill of the Computational Research Division (CRD), who cofounded C3 in 2007 to bring together scientists from CRD and the Lab’s Physics Division. ‘Planck was given an unprecedented multi-year allocation of computational resources in a 2007 agreement between DOE and NASA, which has so far amounted to tens of millions of hours of massively parallel processing, plus the necessary data-storage and data-transfer resources.’

    JPL’s Charles Lawrence, Planck Project Scientist and leader of the U.S. team, says that ‘without the exemplary interagency cooperation between NASA and DOE, Planck would not be doing the science it’s doing today.’”

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California


    ScienceSprings is powered by MAINGEAR computers

     
  • richardmitnick 7:32 pm on January 29, 2013 Permalink | Reply
    Tags: Supercomputing

    From Stanford University: “Stanford Researchers Break Million-core Supercomputer Barrier” 

    Researchers at the Center for Turbulence Research set a new record in supercomputing, harnessing a million computing cores to model supersonic jet noise. Work was performed on the newly installed Sequoia IBM Blue Gene/Q system at Lawrence Livermore National Laboratory.

    Friday, January 25, 2013
    Andrew Myers

    Stanford Engineering’s Center for Turbulence Research (CTR) has set a new record in computational science by successfully using a supercomputer with more than one million computing cores to solve a complex fluid dynamics problem—the prediction of noise generated by a supersonic jet engine.

    Joseph Nichols, a research associate in the center, worked on the newly installed Sequoia IBM Blue Gene/Q system at Lawrence Livermore National Laboratory (LLNL), funded by the Advanced Simulation and Computing (ASC) Program of the National Nuclear Security Administration (NNSA). Sequoia once topped the list of the world’s most powerful supercomputers, boasting 1,572,864 compute cores (processors) and 1.6 petabytes of memory connected by a high-speed five-dimensional torus interconnect.
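
    Taken together, Sequoia's two headline numbers work out to roughly one gigabyte of memory per core, a useful rule of thumb when sizing a run for the machine (a quick check using only the figures above):

    ```python
    # Memory per core implied by the Sequoia figures quoted above.
    cores = 1_572_864
    memory_gb = 1.6e6    # 1.6 petabytes expressed in decimal gigabytes
    print(f"~{memory_gb / cores:.2f} GB of memory per core")  # ~1.02 GB/core
    ```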

    A floor view of the newly installed Sequoia supercomputer at the Lawrence Livermore National Laboratories. (Photo: Courtesy of Lawrence Livermore National Laboratories)

    Because of Sequoia’s impressive numbers of cores, Nichols was able to show for the first time that million-core fluid dynamics simulations are possible—and also to contribute to research aimed at designing quieter aircraft engines.

    An image from the jet noise simulation. A new design for an engine nozzle is shown in gray at left. Exhaust temperatures are in red/orange. The sound field is blue/cyan. Chevrons along the nozzle rim enhance turbulent mixing to reduce noise. (Illustration: Courtesy of the Center for Turbulence Research, Stanford University)

    Andrew Myers is associate director of communications for the Stanford University School of Engineering.

    See the full article here.

    Leland and Jane Stanford founded the University to “promote the public welfare by exercising an influence on behalf of humanity and civilization.” Stanford opened its doors in 1891, and more than a century later, it remains dedicated to finding solutions to the great challenges of the day and to preparing our students for leadership in today’s complex world. Stanford is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto. Since 1952, more than 54 Stanford faculty, staff, and alumni have won the Nobel Prize, including 19 current faculty members.

     
  • richardmitnick 2:15 pm on January 8, 2013 Permalink | Reply
    Tags: Supercomputing

    From PPPL: “PPPL physicists win supercomputing time to simulate key energy and astrophysical phenomena” 

    January 8, 2013
    John Greenwald

    “Three teams led by scientists at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have won major blocks of time on two of the world’s most powerful supercomputers. Two of the projects seek to advance the development of nuclear fusion as a clean and abundant source of energy by improving understanding of the superhot, electrically charged plasma gas that fuels fusion reactions. The third project seeks to extend understanding of a process called magnetic reconnection, which is widely believed to play a critical role in the explosive release of magnetic energy in phenomena like solar flares that can disrupt cell phone service and black out power grids.

    ‘This is great for the Laboratory,’ PPPL Director Stewart Prager said of the highly competitive, three-year awards. ‘Getting this kind of computing time allows the solution of complex equations and critical issues that wouldn’t be possible otherwise.’

    The PPPL recipients:

    A nationwide center headed by PPPL physicist C.S. Chang that is developing computer codes to simulate the dazzlingly complex conditions at the edge of magnetically confined plasmas in donut-shaped devices called tokamaks. Chang’s team, the Center for Edge Physics Simulation (EPSI), won 100 million core hours a year on Titan, a Cray XK7 machine that is housed at the DOE’s Oak Ridge National Laboratory and has been proven to perform over 17 quadrillion—or million billion—calculations a second, making it the world’s fastest supercomputer, according to the November, 2012, TOP500 list.

    Titan at ORNL

    A PPPL-led international team that is studying the rapid loss of plasma confinement caused by growing turbulence as fusion facilities become larger and more powerful. Such losses can significantly decrease the power output of fusion systems but have been shown to level off when facilities reach a certain size—a development that bodes well for future tokamaks. ‘This is very good news for ITER,’ said project leader William Tang, a PPPL physicist and Princeton University lecturer with the rank of professor in the Department of Astrophysical Sciences.

    Tang’s project, called ‘Kinetic Simulations of Fusion Energy Dynamics at the Extreme Scale,’ won 40 million core hours on Mira, an IBM Blue Gene/Q supercomputer at the DOE’s Argonne National Laboratory. Mira can calculate 10 million billion times a second, a speed that will be needed to simulate the complex processes that cause the turbulence to grow to a certain level as the plasma size increases, only to stop growing when the dimensions of the system increase further. ‘The question is a very basic one,’ said Tang. ‘What’s the physics behind this favorable trend that is expected to occur in large plasmas such as ITER? No one can presently answer this question, which will require the efficient engagement of computing at the extreme scale to properly address.’

    IBM Blue Gene/Q at Argonne

    Researchers investigating magnetic reconnection, an astrophysical phenomenon that gives rise to the northern lights, solar flares and geomagnetic storms. A team led by Amitava Bhattacharjee, head of the Theory Department at PPPL and a professor of astrophysical sciences at Princeton University, won 35 million core hours on the Titan supercomputer at Oak Ridge.

    Reconnection takes place when the magnetic field lines in merging plasmas snap apart and explosively reconnect, a process seen throughout the universe and in disruptions of plasma during fusion experiments. New insight into reconnection could lead to better predictions of geomagnetic storms and other space weather, and to greater control of experimental fusion reactions.”

    It is really wonderful to see this lab and its people thriving in this time of great questions about the future of scientific research.

    See the full article here.

    Princeton Plasma Physics Laboratory is a U.S. Department of Energy national laboratory managed by Princeton University.


    ScienceSprings is powered by MAINGEAR computers

     