Tagged: Supercomputing

  • richardmitnick 9:33 pm on January 14, 2016 Permalink | Reply
    Tags: Supercomputing

    From Symmetry: “Exploring the dark universe with supercomputers” 

    Symmetry


    01/14/16
    Katie Elyce Jones

    Next-generation telescopic surveys will work hand-in-hand with supercomputers to study the nature of dark energy.

    The 2020s could see a rapid expansion in dark energy research.

    For starters, two powerful new instruments will scan the night sky for distant galaxies. The Dark Energy Spectroscopic Instrument, or DESI, will measure the distances to about 35 million cosmic objects, and the Large Synoptic Survey Telescope, or LSST, will capture high-resolution videos of nearly 40 billion galaxies.

    DESI Dark Energy Spectroscopic Instrument
    LBL DESI

    LSST Exterior
    LSST Telescope
    LSST Camera
    LSST, the building that will house it in Chile, and the camera, being built at SLAC

    Both projects will probe how dark energy—the phenomenon that scientists think is causing the universe to expand at an accelerating rate—has shaped the structure of the universe over time.

    But scientists use more than telescopes to search for clues about the nature of dark energy. Increasingly, dark energy research is taking place not only at mountaintop observatories with panoramic views but also in the chilly, humming rooms that house state-of-the-art supercomputers.

    The central question in dark energy research is whether it exists as a cosmological constant—a repulsive force that counteracts gravity, as Albert Einstein suggested a century ago—or if there are factors influencing the acceleration rate that scientists can’t see. Alternatively, Einstein’s theory of gravity [General Relativity] could be wrong.

    “When we analyze observations of the universe, we don’t know what the underlying model is because we don’t know the fundamental nature of dark energy,” says Katrin Heitmann, a senior physicist at Argonne National Laboratory. “But with computer simulations, we know what model we’re putting in, so we can investigate the effects it would have on the observational data.”

    A simulation shows how matter is distributed in the universe over time. Katrin Heitmann, et al., Argonne National Laboratory

    Growing a universe

    Heitmann and her Argonne colleagues use their cosmology code, called HACC, on supercomputers to simulate the structure and evolution of the universe. The supercomputers needed for these simulations are built from hundreds of thousands of connected processors and typically crunch well over a quadrillion calculations per second.
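
    To give a flavor of the arithmetic such a code performs, here is a minimal, brute-force gravitational N-body step written in Python. It is only an illustrative sketch of the underlying physics (pairwise gravity nudging particles), not the HACC algorithm itself, which relies on particle-mesh and other fast methods to reach trillions of particles; the particle count, time step, and softening length below are placeholder values.

    import numpy as np

    G = 1.0  # gravitational constant in simulation units (an assumed value)

    def gravity_step(pos, vel, mass, dt, softening=0.01):
        """Advance N particles one step under mutual gravity (direct O(N^2) sum).

        pos, vel: (N, 3) arrays; mass: (N,) array. This brute-force kernel
        illustrates the physics only; production codes like HACC use
        particle-mesh and tree methods to scale far beyond this.
        """
        # Pairwise separation vectors r_ij = pos_j - pos_i
        dr = pos[None, :, :] - pos[:, None, :]            # shape (N, N, 3)
        dist2 = (dr ** 2).sum(-1) + softening ** 2        # softened squared distances
        inv_d3 = dist2 ** -1.5
        np.fill_diagonal(inv_d3, 0.0)                     # no self-force
        acc = G * (dr * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

        vel = vel + acc * dt   # kick
        pos = pos + vel * dt   # drift
        return pos, vel

    # Tiny example: 1,000 particles with random initial positions
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1.0, 1.0, (1000, 3))
    vel = np.zeros((1000, 3))
    mass = np.full(1000, 1.0 / 1000)
    pos, vel = gravity_step(pos, vel, mass, dt=1e-3)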

    The Argonne team recently finished a high-resolution simulation of the universe expanding and changing over 13 billion years, most of its lifetime. Now the data from their simulations is being used to develop processing and analysis tools for the LSST, and packets of data are being released to the research community so cosmologists without access to a supercomputer can make use of the results for a wide range of studies.

    Risa Wechsler, a scientist at SLAC National Accelerator Laboratory and Stanford University professor, is the co-spokesperson of the DESI experiment. Wechsler is producing simulations that are being used to interpret measurements from the ongoing Dark Energy Survey, as well as to develop analysis tools for future experiments like DESI and LSST.

    Dark Energy Survey
    Dark Energy Camera
    CTIO Victor M Blanco 4m Telescope
    DES, the DECam camera built at FNAL, and the Victor M. Blanco 4-meter telescope in Chile that houses the camera.

    “By testing our current predictions against existing data from the Dark Energy Survey, we are learning where the models need to be improved for the future,” Wechsler says. “Simulations are our key predictive tool. In cosmological simulations, we start out with an early universe that has tiny fluctuations, or changes in density, and gravity allows those fluctuations to grow over time. The growth of structure becomes more and more complicated and is impossible to calculate with pen and paper. You need supercomputers.”

    Supercomputers have become extremely valuable for studying dark energy because—unlike dark matter, which scientists might be able to create in particle accelerators—dark energy can only be observed at the galactic scale.

    “With dark energy, we can only see its effect between galaxies,” says Peter Nugent, division deputy for scientific engagement at the Computational Cosmology Center at Lawrence Berkeley National Laboratory.

    Trial and error bars

    “There are two kinds of errors in cosmology,” Heitmann says. “Statistical errors, meaning we cannot collect enough data, and systematic errors, meaning that there is something in the data that we don’t understand.”

    Computer modeling can help reduce both.

    DESI will collect about 10 times more data than its predecessor, the Baryon Oscillation Spectroscopic Survey, and LSST will generate 30 laptops’ worth of data each night. But even these enormous data sets do not fully eliminate statistical error.
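
    The “30 laptops’ worth” figure is easier to picture as raw bytes. A quick back-of-the-envelope conversion, assuming a hypothetical 0.5-terabyte laptop drive (that capacity is an assumption for illustration, not a number from the article):

    # Rough nightly data volume implied by "30 laptops' worth of data each night",
    # assuming a 0.5 TB laptop drive (illustrative assumption only).
    laptop_tb = 0.5
    nightly_tb = 30 * laptop_tb                      # ~15 TB per night
    yearly_pb = nightly_tb * 365 / 1000              # ~5.5 PB if it observed every night
    print(f"~{nightly_tb:.0f} TB/night, ~{yearly_pb:.1f} PB/year")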

    LBL BOSS
    LBL BOSS telescope

    Simulation can support observational evidence by modeling similar conditions to see if the same results appear consistently.

    “We’re basically creating the same size data set as the entire observational set, then we’re creating it again and again—producing up to 10 to 100 times more data than the observational sets,” Nugent says.

    Processing such large amounts of data requires sophisticated analyses. Simulations make this possible.

    To program the tools that will compare observational and simulated data, researchers first have to model what the sky will look like through the lens of the telescope. In the case of LSST, this is done before the telescope is even built.

    After populating a simulated universe with galaxies that are similar in distribution and brightness to real galaxies, scientists modify the results to account for the telescope’s optics, Earth’s atmosphere, and other limiting factors. By simulating the end product, they can efficiently process and analyze the observational data.
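
    That last step amounts to forward modeling: take an idealized image of the simulated sky, blur it with the telescope’s point-spread function (standing in for optics and atmospheric seeing), and add realistic noise. A minimal sketch of the idea follows; the PSF width, sky level, and source brightness are placeholder values, not LSST specifications.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def forward_model(true_image, psf_sigma_pix=2.0, sky_level=100.0, gain=1.0, rng=None):
        """Turn an idealized sky image into a mock observation.

        Blurs with a Gaussian point-spread function (a stand-in for optics plus
        atmosphere) and adds Poisson photon noise on top of a flat sky
        background. All parameter values here are illustrative placeholders.
        """
        rng = rng or np.random.default_rng()
        blurred = gaussian_filter(true_image, sigma=psf_sigma_pix)
        expected_counts = (blurred + sky_level) * gain
        observed = rng.poisson(expected_counts) / gain - sky_level  # sky-subtracted mock image
        return observed

    # Example: a single point-like source on a 64x64 pixel patch
    truth = np.zeros((64, 64))
    truth[32, 32] = 5000.0
    mock = forward_model(truth)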

    Simulations are also an ideal way to tackle many sources of systematic error in dark energy research. By all appearances, dark energy acts as a repulsive force. But if other, inconsistent properties of dark energy emerge in new data or observations, different theories and a way of validating them will be needed.

    “If you want to look at theories beyond the cosmological constant, you can make predictions through simulation,” Heitmann says.

    A conventional way to test new scientific theories is to introduce change into a system and compare it to a control. But in the case of cosmology, we are stuck in our universe, and the only way scientists may be able to uncover the nature of dark energy—at least in the foreseeable future—is by unleashing alternative theories in a virtual universe.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Symmetry is a joint Fermilab/SLAC publication.


     
  • richardmitnick 10:17 am on December 21, 2015 Permalink | Reply
    Tags: Benchmarks, Supercomputing

    From Sandia: “Supercomputer benchmark gains adherents” 


    Sandia Lab

    Sandia National Laboratories researcher Mike Heroux, developer of the High Performance Conjugate Gradients program that uses complex criteria to rank supercomputers. (Photo by Randy Montoya)

    More than 60 supercomputers were ranked by the emerging tool, termed the High Performance Conjugate Gradients (HPCG) benchmark, in ratings released at the annual supercomputing meeting SC15 in late November. Eighteen months earlier, only 15 supercomputers were on the list.

    “HPCG is designed to complement the traditional High Performance Linpack (HPL) benchmark used as the official metric for ranking the top 500 systems,” said Sandia National Laboratories researcher Mike Heroux, who developed the HPCG program in collaboration with Jack Dongarra and Piotr Luszczek from the University of Tennessee.

    The current list contains many of the same entries as the top 50 systems from Linpack’s TOP500 but significantly shuffles the HPL rankings, indicating that HPCG puts different system characteristics through their paces.

    This is because the different measures provided by HPCG and HPL act as bookends on the performance spectrum of a given system, said Heroux. “While HPL tests supercomputer speed in solving relatively straightforward problems, HPCG’s more complex criteria test characteristics such as high-performance interconnects, memory systems and fine-grain cooperative threading that are important to a different and broader set of applications.”
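
    At its heart, HPCG times a preconditioned conjugate gradient solve on a large sparse linear system, which stresses memory bandwidth and interconnect performance rather than raw floating-point speed. Below is a textbook, unpreconditioned conjugate gradient solver in Python showing the kind of kernel being exercised; it is a sketch only, not the HPCG reference code, which adds a multigrid preconditioner, a specific 27-point stencil problem, and MPI/OpenMP parallelism.

    import numpy as np
    import scipy.sparse as sp

    def conjugate_gradient(A, b, tol=1e-8, max_iter=500):
        """Solve A x = b for a symmetric positive-definite sparse A (textbook CG)."""
        x = np.zeros_like(b)
        r = b - A @ x                # residual
        p = r.copy()                 # search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p               # sparse matrix-vector product: the key kernel
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    # Example: a 1D Laplacian (tridiagonal, SPD), a small stand-in for HPCG's 3D stencil
    n = 1000
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    x = conjugate_gradient(A, b)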

    Heroux said only time will tell whether supercomputer manufacturers and users gravitate toward HPCG as a useful test. “All major vendor computing companies have invested heavily in optimizing our benchmark. All participating system owners have dedicated machine time to make runs. These investments are the strongest confirmation that we have developed something useful.

    “Many benchmarks have been proposed as complements or even replacements for Linpack,” he said. “We have had more success than previous efforts. But there is still a lot of work to keep the effort going.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Sandia Campus
    Sandia National Laboratory

    Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.

     
  • richardmitnick 8:48 pm on December 16, 2015 Permalink | Reply
    Tags: Hypernovae, Supercomputing

    From Science Node: “Blue Waters goes hypernova” 

    Science Node bloc
    Science Node.

    16 Dec, 2015
    Lance Farrell

    Courtesy Philipp Mösta.

    The mechanism to launch a hypernova explosion has been discovered in a simulation on the Blue Waters supercomputer. All you need is a little spin (okay, a lot of spin).

    Exploding stars look cool in the movies. Some of them provide a way to measure the size of the universe, and sure, some of these explosions create the heavy elements necessary for life as we know it. But for all these merits, you’ll not want to be anywhere in the vicinity when they explode.

    Hypernovae, the largest of these starbursts, are extremely rare but are the brightest bangs you’ll see in the night sky. Until recently, the cause of these stellar explosions was unknown, but a simulation on the Blue Waters supercomputer at the US National Center for Supercomputing Applications (NCSA) has shone a light on the mechanism responsible.

    Cray Blue Waters supercomputer


    Blue Waters visualization of the doughnut-shaped magnetic field in a collapsed, massive star, showing how in a span of 10 milliseconds the rapid rotation amps the star’s magnetic field to a million billion times our sun’s (yellow is positive, light blue is negative). Red and blue represent weaker positive and negative magnetic fields. Courtesy Philipp Mösta.

    For most of their lives, these stars are spinning, raging infernos of hydrogen. When a star dies, it runs out of fuel and the nuclear fusion inside its core shuts off. Without this constant combustion, the star loses the outward pressure supporting its gravity and it collapses.

    As the star collapses, it falls until it reaches its compacted core, and then rebounds outward through the falling stellar debris. Astrophysicists used to think this bounce caused the explosions seen in hypernovae, but computer simulations show that this shockwave does not have the energy required to launch the universe’s brightest explosions.

    And since the rebound isn’t strong enough, scientists found themselves in a quandary. “We see supernova all the time, so there must be a way this happens in nature,” says Philipp Mösta, a postdoctoral scholar with a NASA Einstein fellowship at the University of California at Berkeley. “We need to understand the details of the mechanism that revives that shockwave and drives the explosion.”

    As he worked with 3D simulations over the last few years, each bursting star model required an assumed magnetic field to kick-start the explosion. “And if you do that you can create these beautiful explosions that roughly match what one would see. That works out nicely, but the key question has always been how does the star create the strong magnetic field naturally.”

    Philipp Mösta.

    Until Mösta’s research, this answer remained hidden, awaiting a simulation strong enough to model the collapsing massive star yet with resolution fine enough to resolve the details needed to understand the amplification of the star’s magnetic field.

    That’s when Blue Waters came into the picture. Using 130,000 cores in parallel, Mösta and his team ran simulations for 100 million core hours, generating over 500 TB of data in two weeks. Thanks to the team at NCSA, Mösta found the answer that had been eluding scientists for so long.
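
    Those figures imply a sustained output rate that is easy to estimate directly from the numbers quoted above (taking the two-week duration at face value):

    # Average output rate implied by 500 TB generated over two weeks of running.
    total_tb = 500
    days = 14
    tb_per_day = total_tb / days                        # ~36 TB/day
    gb_per_second = total_tb * 1e3 / (days * 86400)     # ~0.4 GB/s sustained
    print(f"~{tb_per_day:.0f} TB/day, ~{gb_per_second:.2f} GB/s averaged over the run")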

    Beginning with a massive star in collapse, his simulation shows that, if followed at a very high resolution, the turbulence in the spinning, collapsing star is sufficient to create the strong large-scale magnetic field needed to explain the hypernova explosions.

    “We have shown how this can happen — you only need the rapid rotation and you get the magnetic field for free.”

    How rapid? Our own star rotates at its equator every 25 days or so. The massive stars responsible for the hypernova explosions, with a mass 25 times that of our sun, complete a rotation every few seconds. At that speed, the rebounding stellar material whips around at a blinding speed, transforming the star into a super-sized rotating electric generator. As the star collapses, its shrinking mass actually increases its rotational speed. Because of this velocity, within milliseconds of the rebound the massive star transforms the energy from its rotation into a magnetic field big enough to trigger the explosion known as a hypernova.
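
    The spin-up follows from conservation of angular momentum: for a fixed mass, the rotation period scales roughly with the square of the radius, so a core that shrinks by a factor of tens spins up by a factor of hundreds or thousands. A rough illustration, treating the core as a rigid rotator and using assumed sizes (the pre-collapse core radius, final radius, and starting period below are placeholder values, not numbers from the simulation):

    # Spin-up of a collapsing core from conservation of angular momentum:
    # L ~ M * R^2 / P, so for fixed mass P_final ~ P_initial * (R_final / R_initial)^2.
    # The numbers below are illustrative assumptions, not values from the study.
    initial_radius_km = 2.0e3   # assumed pre-collapse core scale (~2,000 km)
    final_radius_km = 30.0      # assumed proto-neutron-star scale (~30 km)
    initial_period_s = 2.0      # "a rotation every few seconds"

    final_period_s = initial_period_s * (final_radius_km / initial_radius_km) ** 2
    print(f"Period shrinks from {initial_period_s} s to ~{final_period_s * 1e3:.2f} ms")
    # -> a few-second spin becomes a millisecond-scale spin after collapse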

    Mösta’s discovery is one of those breakthroughs only possible through supercomputing. But even with Blue Waters, “you really need a good team to pull this off,” he says. “Sure, you feel great when you manage to find something like this, but for me this is about much more than the results. It’s about making the simulations work as a team, because this would not have been possible without my collaborators. There’s a unique set of expertise that goes into creating and running these simulations.”


    The Blue Waters supercomputer at the National Center for Supercomputing Applications provided the resolution needed for astrophysicists to spot the cause of hypernova explosions, the brightest flashes in the universe. Some massive stars spin so quickly — one revolution per second — that the turbulence created when the star runs out of gas and collapses quickly converts the rotational energy of the star into the magnetic energy needed to produce hypernovae and long gamma-ray bursts. Simulations and visualization courtesy Philipp Mösta.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 2:59 pm on December 4, 2015 Permalink | Reply
    Tags: Supercomputing

    From PI: “Unveiling the Turbulent Times of a Dying Star” 

    Perimeter Institute
    Perimeter Institute

    November 26, 2015

    Eamon O’Flynn
    Manager, Media Relations
    eoflynn@perimeterinstitute.ca
    (519) 569-7600 x5071

    Running sophisticated simulations on a powerful supercomputer, an international research team has glimpsed the unique turbulence that fuels stellar explosions.

    Supercomputer visualization of the toroidal magnetic field in a collapsed, massive star, showing how in a span of 10 milliseconds the rapid differential rotation revs up the star’s magnetic field to a million billion times that of our sun (yellow is positive, light blue is negative). Red and blue represent weaker positive and negative magnetic fields, respectively. Credit: Robert R. Sisneros (NCSA) and Philipp Mösta.

    When a dying star goes supernova, it explodes with such ferocity that it outshines the entire galaxy in which it lived, spewing material and energy across unimaginable distances at near-light speed.

    In some cases, these cosmic cataclysms defy expectations, blasting not symmetrically in all directions – as an exploding firework might – but instead launching two narrow beams, known as jets, in opposite directions.


    Understanding how these jets are created is a vexing challenge, but an international research team has recently employed powerful computer simulations to sleuth out some answers.

    The team – led by Philipp Mösta (NASA Einstein Fellow at UC Berkeley), with Caltech researchers Christian Ott, David Radice and Luke Roberts, Perimeter Institute computational scientist Erik Schnetter, and Roland Haas of the Max-Planck Institute for Gravitational Physics – published their findings Nov. 30 in Nature.

    Their work sheds light on an explosive chain reaction that creates jets and, over time, helps create the structure of the universe as we know it.

    “We were looking for the basic mechanism, the core engine, behind how a collapsing star could lead to the formation of jets,” said Schnetter, who designed computer programs for the simulations employed by the research team to model dying stars.

    That core engine, the team discovered, is a highly turbulent place. Any turbulent system – like an aging car with a deteriorating suspension on a bumpy road – is bound to get progressively more chaotic. In certain types of supernovae, that turbulence is caused by what is known as magnetorotational instability – a type of rapid change within the magnetic field of a spinning system, like some stars.


    Supercomputer visualization of the toroidal magnetic field in a collapsed, massive star, showing how in a span of 10 milliseconds the rapid differential rotation revs up the star’s magnetic field to a million billion times that of our sun (yellow is positive, light blue is negative). Red and blue represent weaker positive and negative magnetic fields, respectively. Simulations and visualization by Philipp Mösta.

    Prior to the work of Schnetter and colleagues, this instability was believed to be a possible driver of jet-formation in supernovae, but the evidence to support that belief was scant.

    Uncovering such evidence, Schnetter says, required something of a scientific perfect storm.

    “You need to have the right people, with the right expertise and the right chemistry between them, you need to have the right understanding of physics and mathematics and computer science, and in the end you need the computer hardware that can actually run the experiment.”

    They assembled the right people and found the computational horsepower they needed at the University of Illinois at Urbana-Champaign.

    The team used Blue Waters, one of the world’s most powerful supercomputers, to run simulations of supernova explosions – simulations so complex that no typical computer could handle the number-crunching required. On Blue Waters, the simulations provided an unprecedented glimpse into the extreme magnetic forces at play in stellar explosions.

    Cray Blue Waters supercomputer

    The 3D simulations revealed an inverse cascade of magnetic energy in the core of spinning stars, which builds up with enough intensity to launch jets from the stellar poles.

    Though the simulations do not take into account every chaotic variable inside a real supernova, they achieve a new level of understanding that will drive follow-up research with more specialized simulations.

    Deepening our understanding of supernova explosions is an ongoing process, Schnetter says, and one that may help us better understand the origins of – to borrow a phrase from Douglas Adams – life, the universe, and everything.

    The formation of galaxies, stars, and even life itself is fundamentally connected to energy and matter blasted outward in exploding stars. Even our own Sun, which supports all life on our planet, is known to be the descendant of earlier supernovae.

    So the study of stellar explosions is, Schnetter says, deeply connected to some of the most fundamental questions humans can ask about the universe. A nice bonus, he adds, is that supernovae are also really awesome explosions.

    “These are some of the most powerful events in the universe,” he says. “Who wouldn’t want to know more about that?”


    Supercomputer visualization of the toroidal magnetic field in a collapsed, massive star, showing how in a span of 10 milliseconds the rapid differential rotation revs up the star’s magnetic field to a million billion times that of our sun (yellow is positive, light blue is negative). Red and blue represent weaker positive and negative magnetic fields, respectively. From left to right are shown: 500m, 200m, 100m, and 50m simulations. Simulations and visualization by Philipp Mösta.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Perimeter

    Perimeter Institute is a leading centre for scientific research, training and educational outreach in foundational theoretical physics. Founded in 1999 in Waterloo, Ontario, Canada, its mission is to advance our understanding of the universe at the most fundamental level, stimulating the breakthroughs that could transform our future. Perimeter also trains the next generation of physicists through innovative programs, and shares the excitement and wonder of science with students, teachers and the general public.

     
  • richardmitnick 4:50 pm on December 1, 2015 Permalink | Reply
    Tags: Supercomputing

    From LBL: “Berkeley Lab Opens State-of-the-Art Facility for Computational Science” 

    Berkeley Logo

    Berkeley Lab

    November 12, 2015 [This just became available]
    Jon Weiner

    A new center for advancing computational science and networking at research institutions and universities across the country opened today at the Department of Energy’s (DOE) Lawrence Berkeley National Laboratory (Berkeley Lab).

    Berkeley Lab’s Shyh Wang Hall

    Named Shyh Wang Hall, the facility will house the National Energy Research Scientific Computing Center, or NERSC, one of the world’s leading supercomputing centers for open science, which serves nearly 6,000 researchers in the U.S. and abroad. Wang Hall will also be the center of operations for DOE’s Energy Sciences Network, or ESnet, the fastest network dedicated to science, which connects tens of thousands of scientists as they collaborate on solving some of the world’s biggest scientific challenges.

    Complementing NERSC and ESnet in the facility will be research programs in applied mathematics and computer science, which develop new methods for advancing scientific discovery. Researchers from UC Berkeley will also share space in Wang Hall as they collaborate with Berkeley Lab staff on computer science programs.

    The ceremonial “connection” marking the opening of Shyh Wang Hall.

    The 149,000 square foot facility built on a hillside overlooking the UC Berkeley campus and San Francisco Bay will house one of the most energy-efficient computing centers anywhere, tapping into the region’s mild climate to cool the supercomputers at the National Energy Research Scientific Computing Center (NERSC) and eliminating the need for mechanical cooling.

    “With over 5,000 computational users each year, Berkeley Lab leads in providing scientific computing to the national energy and science user community, and the dedication of Wang Hall for the Computing program at Berkeley Lab will allow this community to continue to flourish,” said DOE Under Secretary for Science and Energy Lynn Orr.

    Modern science increasingly relies on high performance computing to create models and simulate problems that are otherwise too big, too small, too fast, too slow or too expensive to study. Supercomputers are also used to analyze growing mountains of data generated by experiments at specialized facilities. High speed networks are needed to move the scientific data, as well as allow distributed teams to share and analyze the same datasets.

    Shyh Wang

    Wang Hall is named in honor of Shyh Wang, a professor at UC Berkeley for 34 years who died in 1992. Well-known for his research in semiconductors, magnetic resonances and semiconductor lasers, which laid the foundation for optoelectronics, he supervised a number of students who are now well-known in their own right, and authored two graduate-level textbooks, “Solid State Electronics” and “Fundamentals of Semi-conductor Theory and Device Physics.” Dila Wang, Shyh Wang’s widow, was the founding benefactor of the Berkeley Lab Foundation.

    Solid state electronics, semiconductors and optical networks are at the core of the supercomputers at NERSC—which will be located on the second level of Wang Hall—and the networking routers and switches supporting the Energy Sciences Network (ESnet), both of which are managed by Berkeley Lab from Wang Hall. The Computational Research Division (CRD), which develops advanced mathematics and computing methods for research, will also have a presence in the building.

    NERSC’s Cray Cori supercomputer’s graphic panels being installed at Wang Hall.

    “Berkeley Lab is the most open, sharing, networked, and connected National Lab, with over 10,000 visiting scientists using our facilities and leveraging our expertise each year, plus about 1,000 UC graduate students and postdocs actively involved in the Lab’s world-leading research,” said Berkeley Lab Director Paul Alivisatos. “Wang Hall will allow us to serve more scientists in the future, expanding this unique role we play in the national innovation ecosystem. The computational power housed in Wang Hall will be used to advance research that helps us better understand ourselves, our planet, and our universe. When you couple the combined experience and expertise of our staff with leading-edge systems, you unlock amazing potential for solving the biggest scientific challenges.”

    The $143 million structure financed by the University of California provides an open, collaborative environment bringing together nearly 300 staff members from three lab divisions and colleagues from UC Berkeley to encourage new ideas and new approaches to solving some of the nation’s biggest scientific challenges.

    UC President Janet Napolitano at the Shyh Wang Hall opening.

    “All of our University of California campuses rely on high performance computing for their scientific research,” said UC President Janet Napolitano. “The collaboration between UC Berkeley and Berkeley Lab to make this building happen will go a long ways towards advancing our knowledge of the world around us.”

    The building features unique, large, open windows on the lowest level, facing west toward the Pacific Ocean, which will draw in natural air conditioning for the computing systems. Heat captured from those systems will in turn be used to heat the building. The building will house two leading-edge Cray supercomputers – Edison and Cori [pictured above] – which operate around the clock 52 weeks a year to keep up with the computing demands of users.

    Edison supercomputer

    The disassembly of our Edison supercomputer has begun at NERSC. Edison is relocating to Berkeley from Oakland and into our all-new Shyh Wang Hall.

    Wang Hall will be occupied by Berkeley Lab’s Computing Sciences organization, which comprises three divisions:

    NERSC, the DOE Office of Science’s leading supercomputing center for open science. NERSC supports nearly 6,000 researchers at national laboratories and universities across the country. NERSC’s flagship computer is Edison, a Cray XC30 system capable of performing more than two quadrillion calculations per second. The first phase of Cori, a new Cray XC40 supercomputer designed for data-intensive science, has already been installed in Wang Hall.

    ESnet, which links 40 DOE sites across the country and scientists at universities and other research institutions via a 100 gigabits-per-second backbone network. ESnet also connects researchers in the U.S. and Europe over connections with a combined capacity of 340 Gbps. To support the transition of NERSC from its 15-year home in downtown Oakland to Berkeley Lab, NERSC and ESnet have developed and deployed a 400 Gbps link for moving massive datasets. This is the first-ever 400 Gbps production network deployed by a research and education network; a quick estimate of what that bandwidth means in practice follows this list.

    The Computational Research Division, the center for one of DOE’s strongest research programs in applied mathematics and computer science, where more efficient computer architectures are developed alongside more effective algorithms and applications that help scientists make the most effective use of supercomputers and networks to tackle problems in energy, the environment and basic science.
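
    To put that 400 Gbps figure in perspective, here is the quick estimate promised above of how long a large dataset would take to move at that rate; the one-petabyte dataset size and the assumption of a fully sustained line rate are illustrative, not figures from the article.

    # Transfer time for a hypothetical 1 PB dataset over a 400 Gbps link,
    # assuming the full line rate is sustained (an idealization).
    dataset_pb = 1.0
    link_gbps = 400.0

    bits = dataset_pb * 1e15 * 8              # petabytes -> bits
    seconds = bits / (link_gbps * 1e9)
    print(f"~{seconds / 3600:.1f} hours")     # ~5.6 hours at full rate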

    About Berkeley Lab Computing Sciences
    The Berkeley Lab Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy’s research missions. ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 6,000 scientists at national laboratories and universities. The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California

    University of California Seal

    DOE Seal

     
  • richardmitnick 9:15 pm on November 29, 2015 Permalink | Reply
    Tags: Supercomputing

    From PNNL: “CENATE: A Computing Proving Ground” 

    PNNL BLOC
    PNNL Lab

    October 2015 [This was just made available.]

    New center at PNNL will shape future extreme-scale computing systems

    PNNL’s Seapearl compute cluster can closely monitor and measure power and temperature in high-performance computing systems

    High-performance computing systems at the embedded or extreme scales are a union of technologies and hardware subsystems, including memories, networks, processing elements, and inputs/outputs, with system software that ensures smooth interaction among components and provides a user environment to make the system productive. The recently launched Center for Advanced Technology Evaluation, dubbed CENATE, at Pacific Northwest National Laboratory is a first-of-its-kind computing proving ground. Before next-generation, extreme-scale supercomputers are set to work solving some of the nation’s biggest problems, CENATE will evaluate early technologies to predict their overall potential and guide their designs, helping hone future technology, systems, and applications before these high-cost machines make it to production. Funding for CENATE at PNNL is being provided by the U.S. Department of Energy’s Office of Advanced Scientific Computing Research.

    Modern computing systems are increasingly complex, incorporating a multitude of leading-edge technologies and nonlinear interactions. This complexity has led to a growing and continuous need for advanced methods to reassess, prototype, measure, and anticipate the life cycle of new technologies, using performance and power modeling and simulation, as well as to co-design new computing systems and applications.

    CENATE uses a multitude of “tools of the trade,” depending on the maturity of the technology under investigation. The scientists in CENATE will conduct research in a complex measurement laboratory setting that allows for measuring performance, power, reliability, and thermal effects. When actual hardware is not available for technologies early in their life cycle, modeling and simulation techniques for power, performance, and thermal modeling will be used. Through its Performance and Architecture Laboratory (PAL)—a key technical capability of the Laboratory—PNNL can offer a unique modeling environment for high-performance computing systems and applications. In a near-turnkey way, CENATE will evaluate both complete system solutions and individual subsystem component technologies, from pre-production boards and technologies to full nodes and systems that pave the way to larger-scale production. CENATE will focus on technology evaluations in the context of workloads of interest to DOE’s Office of Science and build on instrumentation and expertise already gleaned from other programs and PNNL institutional investments.

    CENATE stands apart because its overarching goal is to take these advanced technology evaluations out of isolation. CENATE will provide the central point for these once-fragmented investigations, incorporating a user facility type of model where other national laboratories and technology providers will have the opportunity to access CENATE resources and share in the integrated evaluation and prediction processes that can benefit computing research.

    “A central focus on examining the prediction of potential future extreme-scale high-performance computing systems has been missing from DOE’s HPC research community,” said Adolfy Hoisie, PNNL’s chief scientist for computing and the principal investigator and director of CENATE. “In PAL, we already have applicable resources and experience amid its considerable modeling and simulation of systems and applications portfolio to undertake the empirical evaluations, and we have steadily invested in dedicated infrastructure. CENATE will allow us to bolster our dedicated laboratory with leading-edge testbeds and measurement equipment for rapidly evolving technology evaluations. We also will make the most of our industry connections, adding ‘loaner’ equipment to CENATE’s technology mix as appropriate.”

    CENATE evaluations will mostly concern processors; memory; networks; storage; input/output; and the physical aspects of certain systems, such as sizing and thermal effects. All associated system software will be included in the evaluation and analyses, with some investigations emphasizing system software. Multiple types of testbeds will be employed to examine advanced multi-core designs and memory component technologies, as well as smaller-scale technologies with a small number of nodes that can be interconnected using a commodity-type or proprietary network. CENATE’s testbeds also will accommodate larger-scale advanced scalability platforms with hybrid or homogeneous processor technologies and state-of-the-art network infrastructures, as well as disruptive technologies not typically evaluated as physical testbeds, such as Silicon Photonics or quantum computing.

    “We’ll strive for CENATE to become the premier destination for technology evaluation, measurement facilities, testbeds, and predictive exploration—driven by transparency and collaboration—that will shape the design and capabilities of future exascale computing systems and beyond,” Hoisie added.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Pacific Northwest National Laboratory (PNNL) is one of the United States Department of Energy National Laboratories, managed by the Department of Energy’s Office of Science. The main campus of the laboratory is in Richland, Washington.

    PNNL scientists conduct basic and applied research and development to strengthen U.S. scientific foundations for fundamental research and innovation; prevent and counter acts of terrorism through applied research in information analysis, cyber security, and the nonproliferation of weapons of mass destruction; increase the U.S. energy capacity and reduce dependence on imported oil; and reduce the effects of human activity on the environment. PNNL has been operated by Battelle Memorial Institute since 1965.


     
  • richardmitnick 2:54 pm on November 25, 2015 Permalink | Reply
    Tags: Supercomputing

    From Science Node: “Supercomputers put photosynthesis in the spotlight” 

    Science Node bloc
    Science Node

    11.25.15
    David Lugmayer

    Courtesy Julia Schwab, Pixabay (CC0 Public Domain).

    Photosynthesis is one of the most important processes on Earth, essential to the existence of much life on our planet. But for all its importance, scientists still do not understand some of the small-scale processes of how plants absorb light.

    An international team, led by researchers from the University of the Basque Country (UPV/EHU) in Spain, has conducted detailed simulations of the processes behind photosynthesis. Working in collaboration with several other universities and institutions, the researchers are using supercomputers to better understand how photosynthesis functions at the most basic level.

    Photosynthesis is fundamental to much life on earth. The process of converting energy from our sun into a chemical form that can be stored enables the plethora of plant life that covers the globe to live. Without photosynthesis, plants — along with the animals that depend on them for food and oxygen — would not exist. During photosynthesis, carbon dioxide and water are converted into carbohydrates and oxygen. However, this process requires energy to function; energy that sunlight provides.

    Over half of the sunlight that green plants capture for use in photosynthesis is absorbed by a complex of chlorophyll molecules and proteins called the light-harvesting complex (LHC II). Yet the scientific community still does not fully understand how this molecule acts when it absorbs photons of light.

    The LHC II molecule, visualized here, is a complex of proteins and chlorophyll molecules. It is responsible for capturing over 50% of the solar energy absorbed for the process of photosynthesis. Image courtesy Joaquim Jornet-Somoza and colleagues (CC BY 3.0)

    To help illuminate this mystery, the team at UPV/EHU is simulating the LHC II molecule using a quantum mechanical theory called ‘real-space time-dependent density functional theory’ (TDDFT), implemented in a special software package called ‘Octopus’. Simulating LHC II is an impressive feat considering that the molecule is comprised of over 17,000 atoms, each of which must be simulated individually.

    Because of the size and complexity of the study, some of the TDDFT calculations required significant computing resources. Two supercomputers, MareNostrum III and Hydra, played an important role in the experiment. Joaquim Jornet-Somoza, a postdoctoral researcher from the University of Barcelona in Spain, explains why: “The memory storage needed to solve the equations, and the number of algorithmic operations increases exponentially with the number of electrons that are involved. For that reason, the use of supercomputers is essential for our goal. The use of parallel computing reduces the execution time and makes resolving quantum mechanical equations feasible.” In total 2.6 million core hours have been used for the study.
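
    A crude way to see why the resources balloon with system size: in real-space TDDFT, each occupied electronic state has to be stored on the full real-space grid. A rough memory estimate with assumed counts (the number of states, grid points, and precision below are illustrative guesses, not the actual Octopus discretization used in the study):

    # Rough memory footprint of real-space TDDFT: one complex value per state
    # per grid point. State and grid counts below are illustrative assumptions.
    n_states = 10_000            # assumed number of occupied Kohn-Sham states
    n_grid_points = 50_000_000   # assumed real-space grid points for a large complex
    bytes_per_value = 16         # complex double precision

    total_tb = n_states * n_grid_points * bytes_per_value / 1e12
    print(f"~{total_tb:.0f} TB just to hold the time-dependent states")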

    MareNostrum III

    Hydra

    However, to run these simulations, several issues had to first be sorted out, and the Octopus software code had to be extensively optimized to cope with the experiment. “Our group has worked on the enhancement of the Octopus package to run in parallel-computing systems,” says Jornet.

    The simulations, comprising thousands of atoms, are reported to be the biggest of their kind performed to date. Nevertheless, the team is still working towards simulating the full 17,000 atoms of the LHC II complex. “The maximum number of atoms simulated in our calculations was 6,025, all of them treated at TDDFT level. These calculations required the use of 5,120 processors, and around 10TB of memory,” explains Jornet.

    The implications of the study are twofold, says Jornet. From a photosynthetic perspective, it shows that the LHC II complex has evolved to optimize the capture of light energy. From a computational perspective, the team successfully applied quantum mechanical simulations on a system comprised of thousands of atoms, paving the way for similar studies on large systems.

    The study, published in the journal Physical Chemistry Chemical Physics, proposed that studying the processes behind photosynthesis could also yield applied benefits. One such benefit is the optimization of crop production. Enhanced understanding of photosynthesis could also potentially be used to improve solar power technologies or the production of hydrogen fuel.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon

    Stem Education Coalition

    Science Node is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, Science Node is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read Science Node via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 11:38 am on November 20, 2015 Permalink | Reply
    Tags: Supercomputing

    From BNL: “Supercomputing the Strange Difference Between Matter and Antimatter” 

    Brookhaven Lab

    November 20, 2015
    Karen McNulty Walsh, (631) 344-8350
    Peter Genzer, (631) 344-3174

    Members of the “RIKEN-Brookhaven-Columbia” Collaboration who participated in this work (seated L to R): Taku Izubuchi (RIKEN BNL Research Center, or RBRC, and Brookhaven Lab), Christoph Lehner (Brookhaven), Robert Mawhinney (Columbia University), Amarjit Soni (Brookhaven), Norman Christ (Columbia), Christopher Kelly (RBRC), Chulwoo Jung (Brookhaven); (standing L to R): Sergey Syritsyn (RBRC), Tomomi Ishikawa (RBRC), Luchang Jin (Columbia), Shigemi Ohta (RBRC), and Seth Olsen (Columbia). Mawhinney, Soni, and Christ were the founding members of the collaboration, along with Thomas Blum (not shown, now at the University of Connecticut).

    Supercomputers such as Brookhaven Lab’s Blue Gene/Q were essential for completing the complex calculation of direct CP symmetry violation. The same calculation would have required two thousand years using a laptop.

    An international team of physicists including theorists from the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory has published the first calculation of direct “CP” symmetry violation—how the behavior of subatomic particles (in this case, the decay of kaons) differs when matter is swapped out for antimatter. Should the prediction represented by this calculation not match experimental results, it would be conclusive evidence of new, unknown phenomena that lie outside of the Standard Model—physicists’ present understanding of the fundamental particles and the forces between them.

    The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.

    The current result—reported in the November 20 issue of Physical Review Letters—does not yet indicate such a difference between experiment and theory, but scientists expect the precision of the calculation to improve dramatically now that they’ve proven they can tackle the task. With increasing precision, such a difference—and new physics—might still emerge.

    “This so called ‘direct’ symmetry violation is a tiny effect, showing up in just a few particle decays in a million,” said Brookhaven physicist Taku Izubuchi, a member of the team performing the calculation. Results from the first, less difficult part of this calculation were reported by the same group in 2012. However, it is only now, with completion of the second part of this calculation—which was hundreds of times more difficult than the first—that a comparison with the measured size of direct CP violation can be made. This final part of the calculation required more than 200 million core processing hours on supercomputers, “and would have required two thousand years using a laptop,” Izubuchi said.

    The calculation determines the size of the symmetry violating effect as predicted by the Standard Model, and was compared with experimental results that were firmly established in 2000 at the European Center for Nuclear Research (CERN) and Fermi National Accelerator Laboratory.

    “This is an especially important place to compare with the Standard Model because the small size of this effect increases the chance that other, new phenomena may become visible,” said Robert Mawhinney of Columbia University.

    “Although the result from this direct CP violation calculation is consistent with the experimental measurement, revealing no inconsistency with the Standard Model, the calculation is on-going with an accuracy that is expected to increase two-fold within two years,” said Peter Boyle of the University of Edinburgh. “This leaves open the possibility that evidence for new phenomena, not described by the Standard Model, may yet be uncovered.”

    Matter-antimatter asymmetry

    Physicists’ present understanding of the universe requires that particles and their antiparticles (which have the same mass but opposite charge) behave differently. Only with matter-antimatter asymmetry can they hope to explain why the universe, which was created with equal parts of matter and antimatter, is filled mostly with matter today. Without this asymmetry, matter and antimatter would have annihilated one another leaving a cold, dim glow of light with no material particles at all.

    The first experimental evidence for the matter-antimatter asymmetry known as CP violation was discovered in 1964 at Brookhaven Lab. This Nobel-Prize-winning experiment also involved the decays of kaons, but demonstrated what is now referred to as “indirect” CP violation. This violation arises from a subtle imperfection in the two distinct types of neutral kaons.

    The target of the present calculation is a phenomenon that is even more elusive: a one-part-in-a-million difference between the matter and antimatter decay probabilities. The small size of this “direct” CP violation made its experimental discovery very difficult, requiring 36 years of intense experimental effort following the 1964 discovery of “indirect” CP violation.

    While these two examples of matter-antimatter asymmetry are of very different size, they are related by a remarkable theory for which physicists Makoto Kobayashi and Toshihide Maskawa were awarded the 2008 Nobel Prize in physics. The theory provides an elegant and simple explanation of CP violation that manages to explain both the 1964 experiment and later CP-violation measurements in experiments at the KEK laboratory in Japan and the SLAC National Accelerator Laboratory in California.

    “This new calculation provides another test of this theory—a test that the Standard Model passes, at least at the present level of accuracy,” said Christoph Lehner, a Brookhaven Lab member of the team.

    Although the Standard Model does successfully relate the matter-antimatter asymmetries seen in the 1964 and later experiments, this Standard-Model asymmetry is insufficient to explain the preponderance of matter over antimatter in the universe today.

    “This suggests that a new mechanism must be responsible for the preponderance of matter of which we are made,” said Christopher Kelly, a member of the team from the RIKEN BNL Research Center (RBRC). “This one-part-per-million, direct CP violation may be a good place to first see it. The approximate agreement between this new calculation and the 2000 experimental results suggests that we need to look harder, which is exactly what the team performing this calculation plans to do.”

    This calculation was carried out on the Blue Gene/Q supercomputers at the RIKEN BNL Research Center (RBRC), at Brookhaven National Laboratory, at the Argonne Leadership Class Computing Facility (ALCF) at Argonne National Laboratory, and at the DiRAC facility at the University of Edinburgh. The research was carried out by Ziyuan Bai, Norman Christ, Robert Mawhinney, and Daiqian Zhang of Columbia University; Thomas Blum of the University of Connecticut; Peter Boyle and Julien Frison of the University of Edinburgh; Nicolas Garron of Plymouth University; Chulwoo Jung, Christoph Lehner, and Amarjit Soni of Brookhaven Lab; Christopher Kelly, and Taku Izubuchi of the RBRC and Brookhaven Lab; and Christopher Sachrajda of the University of Southampton. The work was funded by the U.S. Department of Energy’s Office of Science, by the RIKEN Laboratory of Japan, and the U.K. Science and Technology Facilities Council. The ALCF is a DOE Office of Science User Facility.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 4:02 pm on November 19, 2015 Permalink | Reply
    Tags: Supercomputing

    From LLNL: “Tri-lab collaboration that will bring Sierra supercomputer to Lab recognized” 


    Lawrence Livermore National Laboratory

    Sierra is the next in a long line of supercomputers at Lawrence Livermore National Laboratory.

    The collaboration of Oak Ridge, Argonne and Lawrence Livermore (CORAL) that will bring the Sierra supercomputer to the Lab in 2018 has been recognized by HPCWire with an Editor’s Choice Award for Best HPC Collaboration between Government and Industry.

    The award was received in the DOE booth at Supercomputing 2015 (SC15) by Doug Wade, head of the Advanced Simulation and Computing (ASC) program, and representatives from Oak Ridge and Argonne. HPCWire is an online news service that covers the high performance computing (HPC) industry.

    CORAL represents an innovative procurement strategy pioneered by Livermore that couples acquisition with R&D non-recurring engineering (NRE) contracts that make it possible for vendors to assume greater risks in their proposals than they would otherwise for an HPC system that is several years out. Delivery of Sierra is expected in late 2017 with full deployment in 2018. This procurement strategy has since been widely adopted by DOE labs.

    CORAL’s industry partners include IBM, NVIDIA and Mellanox. In addition to bringing Sierra to Livermore, CORAL will bring an HPC system called Summit to Oak Ridge National Laboratory and a system called Aurora to Argonne National Laboratory.

    Summit supercomputer

    Aurora supercomputer

    Sierra will be an IBM system expected to exceed 120 petaflops (120 quadrillion floating point operations per second) and will serve NNSA’s ASC program, an integral part of stockpile stewardship.

    In other SC15 news, LLNL’s 20-petaflop (20 quadrillion floating point operations per second) IBM Blue Gene/Q Sequoia system was again ranked No. 3 on the Top500 list of the world’s most powerful supercomputers released Tuesday. For the third year running, the Chinese Tianhe-2 (Milky Way-2) supercomputer holds the No. 1 ranking on the list, followed by Titan at Oak Ridge National Laboratory. LLNL’s 5-petaflop Vulcan, also a Blue Gene/Q system, dropped out of the top 10 on the list and is now ranked No. 12.

    IBM Blue Gene/Q Sequoia system

    Tianhe-2 supercomputer

    Titan supercomputer

    The United States has five of the top 10 supercomputers on the Top500 and four of those are DOE and NNSA systems. In addition to China, other countries with HPC systems in the top 10 include Germany, Japan, Switzerland and Saudi Arabia.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    LLNL Campus

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security
    Administration
    DOE Seal
    NNSA

     
  • richardmitnick 9:48 am on November 16, 2015 Permalink | Reply
    Tags: Q Continuum, Supercomputing

    From ANL: “Researchers model birth of universe in one of largest cosmological simulations ever run” 

    News from Argonne National Laboratory

    October 29, 2015
    Louise Lerner

    This series shows the evolution of the universe as simulated by a run called the Q Continuum, performed on the Titan supercomputer and led by Argonne physicist Katrin Heitmann. These images give an impression of the detail in the matter distribution in the simulation. At first the matter is very uniform, but over time gravity acts on the dark matter, which begins to clump more and more, and in the clumps, galaxies form. Image by Heitmann et al.

    Researchers are sifting through an avalanche of data produced by one of the largest cosmological simulations ever performed, led by scientists at the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory.

    The simulation, run on the Titan supercomputer at DOE’s Oak Ridge National Laboratory, modeled the evolution of the universe from just 50 million years after the Big Bang to the present day — from its earliest infancy to its current adulthood. Over the course of 13.8 billion years, the matter in the universe clumped together to form galaxies, stars, and planets; but we’re not sure precisely how.

    Cray/Titan

    These kinds of simulations help scientists understand dark energy, a form of energy that affects the expansion rate of the universe, including the distribution of galaxies, composed of ordinary matter, as well as dark matter, a mysterious kind of matter that no instrument has directly measured so far.

    Galaxies have halos surrounding them, which may be composed of both dark and regular matter. This image shows a substructure within a halo in the Q Continuum simulation, with “subhalos” marked in different colors. Image by Heitmann et al.

    Intensive sky surveys with powerful telescopes, like the Sloan Digital Sky Survey and the new, more detailed Dark Energy Survey, show scientists where galaxies and stars were when their light was first emitted.

    SDSS Telescope
    SDSS telescope at Apache Point, NM, USA

    Dark Energy Camera
    CTIO Victor M Blanco 4m Telescope
    DECam and the Blanco telescope in Chile, where it is housed.

    And surveys of the Cosmic Microwave Background [CMB], light remaining from when the universe was only 300,000 years old, show us how the universe began — “very uniform, with matter clumping together over time,” said Katrin Heitmann, an Argonne physicist who led the simulation.

    Cosmic Microwave Background  Planck
    CMB

    The simulation fills in the temporal gap to show how the universe might have evolved in between: “Gravity acts on the dark matter, which begins to clump more and more, and in the clumps, galaxies form,” said Heitmann.

    Called the Q Continuum, the simulation involved half a trillion particles — dividing the universe up into cubes with sides 100,000 kilometers long. This makes it one of the largest cosmology simulations at such high resolution. It ran using more than 90 percent of the supercomputer. For perspective, typically less than one percent of jobs use 90 percent of the Mira supercomputer at Argonne, said officials at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility. Staff at both the Argonne and Oak Ridge computing facilities helped adapt the code for its run on Titan.
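
    Half a trillion particles also gives a feel for the memory such a run must hold. A minimal estimate, assuming each particle carries three position and three velocity values in single precision plus an eight-byte identifier (the per-particle layout is an assumption for illustration, not HACC’s actual data structures):

    # Memory just to hold the particle data for a half-trillion-particle run,
    # assuming 3 position + 3 velocity floats (4 bytes each) plus an 8-byte ID
    # per particle. The byte count per particle is an illustrative assumption.
    n_particles = 0.5e12
    bytes_per_particle = 6 * 4 + 8          # 32 bytes

    total_tb = n_particles * bytes_per_particle / 1e12
    print(f"~{total_tb:.0f} TB of particle data alone")   # ~16 TB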

    “This is a very rich simulation,” Heitmann said. “We can use this data to look at why galaxies clump this way, as well as the fundamental physics of structure formation itself.”

    Analysis has already begun on the two and a half petabytes of data that were generated, and will continue for several years, she said. Scientists can pull information on such astrophysical phenomena as strong lensing, weak lensing shear, cluster lensing and galaxy-galaxy lensing.

    The code to run the simulation is called Hardware/Hybrid Accelerated Cosmology Code (HACC), which was first written in 2008, around the time scientific supercomputers broke the petaflop barrier (a quadrillion operations per second). HACC is designed with an inherent flexibility that enables it to run on supercomputers with different architectures.

    Details of the work are included in the study, The Q continuum simulation: harnessing the power of GPU accelerated supercomputers, published in August in the Astrophysical Journal Supplement Series by the American Astronomical Society. Other Argonne scientists on the study included Nicholas Frontiere, Salman Habib, Adrian Pope, Hal Finkel, Silvio Rizzi, Joe Insley and Suman Bhattacharya, as well as Chris Sewell at DOE’s Los Alamos National Laboratory.

    This work was supported by the DOE Office of Science (Scientific Discovery through Advanced Computing (SciDAC), jointly by High Energy Physics and Advanced Scientific Computing Research) and used resources of the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory, a DOE Office of Science User Facility. The work presented here results from an award of computer time provided by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program at the OLCF.

    See the full article here .

    Please help promote STEM in your local schools.
    STEM Icon
    Stem Education Coalition
    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    The Advanced Photon Source at Argonne National Laboratory is one of five national synchrotron radiation light sources supported by the U.S. Department of Energy’s Office of Science to carry out applied and basic research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels, provide the foundations for new energy technologies, and support DOE missions in energy, environment, and national security. To learn more about the Office of Science X-ray user facilities, visit http://science.energy.gov/user-facilities/basic-energy-sciences/.

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     