Tagged: Supercomputing

  • richardmitnick 5:51 pm on February 13, 2018 Permalink | Reply
    Tags: Fermilab joins CERN openlab on data reduction, Supercomputing

    From CERN Courier: “Fermilab joins CERN openlab on data reduction” 


    CERN Courier

    Jan 15, 2018

    Computing centre

    In November, Fermilab became a research member of CERN openlab – a public-private partnership between CERN and major ICT companies established in 2001 to meet the demands of particle-physics research. Fermilab researchers will now collaborate with members of the LHC’s CMS experiment and the CERN IT department to improve technologies related to physics data reduction, which is vital for gaining insights from the vast amounts of data produced by high-energy physics experiments.

    The work will take place within an existing CERN openlab project with Intel on big-data analytics. The goal is to use industry-standard big-data tools to create a new tool for filtering many petabytes of heterogeneous collision data to create manageable, but still rich, datasets of a few terabytes for analysis. Using current systems, this kind of targeted data reduction can often take weeks, but the Intel-CERN project aims to reduce it to a matter of hours.
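
    As a rough illustration of the kind of reduction step involved, here is a minimal sketch using Apache Spark, one example of an industry-standard big-data tool; it is not the project’s actual code, and the dataset paths, column names, and cut values are hypothetical.

```python
# Minimal sketch of a Spark-style data-reduction job (illustrative only).
# Dataset paths, column names, and cut values are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("collision-data-reduction").getOrCreate()

# Read a large set of collision events stored in a columnar format.
events = spark.read.parquet("/data/collisions/")

# Keep only the events and columns a given analysis actually needs,
# turning a very large input into a much smaller analysis dataset.
reduced = (
    events
    .filter(F.col("n_jets") >= 2)
    .filter(F.col("missing_et") > 100.0)
    .select("run", "event", "jet_pt", "jet_eta", "missing_et")
)

reduced.write.mode("overwrite").parquet("/data/collisions-reduced/")
```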

    The team plans to first create a prototype capable of processing 1 PB of data with about 1000 computer cores. Based on current projections, this is about one twentieth of the scale of the final system that would be needed to handle the data produced when the High-Luminosity LHC comes online in 2026. “This kind of work, investigating big-data analytics techniques, is vital for high-energy physics — both in terms of physics data and data from industrial control systems on the LHC,” says Maria Girone, CERN openlab CTO.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    THE FOUR MAJOR PROJECT COLLABORATIONS

    ATLAS
    CERN ATLAS New

    ALICE
    CERN ALICE New

    CMS
    CERN CMS New

    LHCb
    CERN LHCb New II

    LHC

    CERN LHC Map
    CERN LHC Grand Tunnel

    CERN LHC particles

     
  • richardmitnick 5:29 pm on February 1, 2018 Permalink | Reply
    Tags: IllustrisTNG project, Modeling the universe, Supercomputing

    From MIT: “Modeling the universe” 

    MIT News


    January 31, 2018
    Julia Keller

    Rendering of the gas velocity in a thin slice of 100 kiloparsec thickness (in the viewing direction), centered on the second most massive galaxy cluster in the TNG100 calculation. Where the image is black, the gas is hardly moving, while white regions have velocities that exceed 1,000 kilometers per second. The image contrasts the gas motions in cosmic filaments against the fast chaotic motions triggered by the deep gravitational potential well and the supermassive black hole sitting at its center. Image courtesy of the IllustrisTNG collaboration

    Thin slice through the cosmic large-scale structure in the largest simulation of the IllustrisTNG project. The image brightness indicates the mass density, and color visualizes the mean gas temperature of ordinary (“baryonic”) matter. The displayed region extends by about 1.2 billion light years from left to right. The underlying simulation is presently the largest magneto-hydrodynamic simulation of galaxy formation, containing more than 30 billion volume elements and particles. Image courtesy of the IllustrisTNG collaboration

    The background image shows the dark matter in the TNG300 simulation over large scales, highlighting the backbone of cosmic structure. In the upper right inset, the distribution of stellar mass across the somewhat smaller TNG100 volume is displayed, while the panels on the left show galaxy-galaxy interactions and the fine-grained structure of extended stellar light around galaxies. Image courtesy of the IllustrisTNG collaboration

    Visualization of the intensity of shock waves in the cosmic gas (blue) around collapsed dark matter structures (orange/white). Similar to a sonic boom, the gas in these shock waves is accelerated with a jolt when impacting on the cosmic filaments and galaxies. Image courtesy of the IllustrisTNG collaboration

    MIT’s Mark Vogelsberger and an international astrophysics team have created a new model pointing to black holes’ role in galaxy formation.

    A supercomputer simulation of the universe has produced new insights into how black holes influence the distribution of dark matter, how heavy elements are produced and distributed throughout the cosmos, and where magnetic fields originate.

    Astrophysicists from MIT, Harvard University, the Heidelberg Institute for Theoretical Studies, the Max-Planck Institutes for Astrophysics and for Astronomy, and the Center for Computational Astrophysics gained new insights into the formation and evolution of galaxies by developing and programming a new simulation model for the universe — “Illustris – The Next Generation” or IllustrisTNG.

    Mark Vogelsberger, an assistant professor of physics at MIT and the MIT Kavli Institute for Astrophysics and Space Research, has been working to develop, test, and analyze the new IllustrisTNG simulations. Along with postdocs Federico Marinacci and Paul Torrey, Vogelsberger has been using IllustrisTNG to study the observable signatures from large-scale magnetic fields that pervade the universe.

    Vogelsberger used the IllustrisTNG model to show that the turbulent motions of hot, dilute gases drive small-scale magnetic dynamos that can exponentially amplify the magnetic fields in the cores of galaxies — and that the model accurately predicts the observed strength of these magnetic fields.

    “The high resolution of IllustrisTNG combined with its sophisticated galaxy formation model allowed us to explore these questions of magnetic fields in more detail than with any previous cosmological simulation,” says Vogelsberger, an author on the three papers reporting the new work, published today in the Monthly Notices of the Royal Astronomical Society [ https://doi.org/10.1093/mnras/stx3304 , https://doi.org/10.1093/mnras/stx3112 , https://doi.org/10.1093/mnras/stx3040 ].

    Modeling a (more) realistic universe

    The IllustrisTNG project is a successor to the original Illustris simulation developed by the same research team, but it has been updated to include some of the physical processes that play crucial roles in the formation and evolution of galaxies.

    Like Illustris, the project models a cube-shaped piece of the universe. This time, the project followed the formation of millions of galaxies in a representative region of the universe measuring nearly 1 billion light years on a side (up from 350 million light years on a side just four years ago). IllustrisTNG is the largest hydrodynamic simulation project to date for the emergence of cosmic structures, says Volker Springel, principal investigator of IllustrisTNG and a researcher at the Heidelberg Institute for Theoretical Studies, Heidelberg University, and the Max-Planck Institute for Astrophysics.
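
    A quick back-of-envelope comparison of the two box sizes quoted above shows how much larger the new simulated volume is (simple arithmetic on the figures in the text):

```python
# Back-of-envelope comparison of the simulated volumes quoted above.
old_side_ly = 350e6   # original Illustris box side, in light years
new_side_ly = 1.0e9   # IllustrisTNG's largest box side, in light years

volume_ratio = (new_side_ly / old_side_ly) ** 3
print(f"Volume increase: roughly {volume_ratio:.0f}x")   # about a 23-fold larger volume
```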

    The cosmic web of gas and stars predicted by IllustrisTNG produces galaxies quite similar to the shape and size of real galaxies. For the first time, hydrodynamical simulations could directly compute the detailed clustering pattern of galaxies in space. In comparison with observational data — including the newest large galaxy surveys such as the Sloan Digital Sky Survey — IllustrisTNG demonstrates a high degree of realism, says Springel.

    In addition, the simulations predict how the cosmic web changes over time, in particular in relation to the underlying backbone of the dark matter cosmos. “It is particularly fascinating that we can accurately predict the influence of supermassive black holes on the distribution of matter out to large scales,” says Springel. “This is crucial for reliably interpreting forthcoming cosmological measurements.”

    Astrophysics via code and supercomputers

    For the project, the researchers developed a particularly powerful version of their highly parallel moving-mesh code AREPO and used it on the “Hazel-Hen” machine at the High-Performance Computing Center Stuttgart, Germany’s fastest supercomputer.

    Cray XC40 “Hazel-Hen” machine at the High-Performance Computing Center Stuttgart, Germany

    To compute one of the two main simulation runs, more than 24,000 processors were used over the course of more than two months.

    “The new simulations produced more than 500 terabytes of simulation data,” says Springel. “Analyzing this huge mountain of data will keep us busy for years to come, and it promises many exciting new insights into different astrophysical processes.”
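
    For a sense of the compute budget those figures imply, a rough estimate (assuming an average month of about 730 hours; order of magnitude only):

```python
# Rough estimate of the compute budget from the figures quoted above.
cores = 24_000            # processors used for one of the main runs
months = 2
hours_per_month = 730     # assumed average month length in hours

core_hours = cores * months * hours_per_month
print(f"Roughly {core_hours / 1e6:.0f} million core-hours")   # about 35 million core-hours
```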


    Supermassive black holes squelch star formation

    In another study, Dylan Nelson, researcher at the Max-Planck Institute for Astrophysics, was able to demonstrate the important impact of black holes on galaxies.

    Star-forming galaxies shine brightly in the blue light of their young stars until a sudden evolutionary shift quenches the star formation, such that the galaxy becomes dominated by old, red stars, and joins a graveyard full of old and dead galaxies.

    “The only physical entity capable of extinguishing the star formation in our large elliptical galaxies are the supermassive black holes at their centers,” explains Nelson. “The ultrafast outflows of these gravity traps reach velocities up to 10 percent of the speed of light and affect giant stellar systems that are billions of times larger than the comparably small black hole itself.”

    New findings for galaxy structure

    IllustrisTNG also improves researchers’ understanding of the hierarchical structure formation of galaxies. Theorists argue that small galaxies should form first, and then merge into ever-larger objects, driven by the relentless pull of gravity. The numerous galaxy collisions literally tear some galaxies apart and scatter their stars onto wide orbits around the newly created large galaxies, which should give them a faint background glow of stellar light.

    These predicted pale stellar halos are very difficult to observe due to their low surface brightness, but IllustrisTNG was able to simulate exactly what astronomers should be looking for.

    “Our predictions can now be systematically checked by observers,” says Annalisa Pillepich, a researcher at Max-Planck Institute for Astronomy, who led a further IllustrisTNG study. “This yields a critical test for the theoretical model of hierarchical galaxy formation.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    MIT Seal

    The mission of MIT is to advance knowledge and educate students in science, technology, and other areas of scholarship that will best serve the nation and the world in the twenty-first century. We seek to develop in each member of the MIT community the ability and passion to work wisely, creatively, and effectively for the betterment of humankind.

    MIT Campus

     
  • richardmitnick 12:26 pm on January 30, 2018 Permalink | Reply
    Tags: NCI - National Computational Infrastructure, Supercomputing

    From NCI: “NCI welcomes $70M investment in HPC capability” 

    NCI

    18 December 2017

    The Board of Australia’s National Computational Infrastructure (NCI), based at The Australian National University (ANU), welcomes the Australian Government’s announcement that it will invest $70 million to replace Australia’s highest performance research supercomputer, Raijin, which is rapidly nearing the end of its service life.

    NCI Raijin supercomputer.

    The funding, through the Department of Education and Training, will be provided as $69.2 million in 2017-18 and $800,000 in 2018-19.

    Chair of the NCI Board, Emeritus Professor Michael Barber, said NCI was crucial to Australia’s future research needs.

    “This announcement is very welcome. NCI plays a pivotal role in the national research landscape, and the supercomputer is the centrepiece of NCI’s renowned and tightly integrated, high-performance computing and data environment,” he said.

    “The Government’s announcement is incredibly important for the national research endeavour.

    “It means NCI can continue to provide Australian researchers with a world-class advanced computing environment that is a fusion of powerful computing, high-performance ‘big data’, and world-leading expertise that enables cutting-edge Australian research and innovation.

    “The NCI supercomputer is one of the most important pieces of research infrastructure in Australia. It is critical to the competitiveness of Australian research and development in every field of scientific and technological endeavour, spanning the national science and research priorities.”

    ANU Vice-Chancellor Professor Brian Schmidt said the funding would ensure NCI remains at the centre of Australia’s research needs.

    “The new NCI supercomputer will be a valuable tool for Australian researchers and industry, and will be central to scientific developments in medical research, climate and weather, engineering and all fields that require analysis of so-called big data, including, of course, astronomy,” Professor Schmidt said.

    Australia’s Chief Scientist Dr Alan Finkel said high-performance computing is a national priority.

    “Throughout our consultations to develop the 2016 National Research Infrastructure Roadmap the critical importance of Australia’s two high performance computers was manifestly clear,” Dr Finkel said.

    “Our scientific community will be overwhelmingly delighted by the Australian Government’s decision today to support the modernisation of the NCI computer hosted at ANU.”

    The announcement of funding ensures that researchers at 35 universities, five national science agencies, three medical research institutes, and industry will benefit from a boost in computational horsepower once the new supercomputer is commissioned in early 2019, enabling new research that is more ambitious and more innovative than ever before.

    NCI anticipates the resulting supercomputer will be ranked in the top 25 internationally.

    The Australian Government’s 2016 National Research Infrastructure Roadmap specifically recognised the critical importance of such a resource, and the need for an urgent upgrade.

    The new supercomputer will ensure NCI can continue to provide essential support for research funded and sustained by the national research councils (the Australian Research Council and the National Health and Medical Research Council), and the national science agencies—notably CSIRO, the Bureau of Meteorology and Geoscience Australia.

    This research will drive innovation that is critical to Australia’s future economic development and the wellbeing of Australians.

    To view a video about NCI and the supercomputer, click here.

    See the full article here .

    History

    NCI can trace its lineage back through three stages of the evolution of high-end computing services in Australia.

    These are:

    The Early Years: the initiation of high-performance computing services through the Australian National University Supercomputing Facility (ANUSF) from 1987;
    The APAC Years: the extension of that role to a national one under the Australian Partnership for Advanced Computing (APAC), hosted by ANU from 2000–07, during which a national HPC service was provided from ANUSF, a national partnership was formed, services were broadened to include a range of outreach activities to build uptake, and a national grid program and nascent data services were established;
    The NCI Years: the current stage of advanced computing services, developed from 2007 onwards under the badge of NCI and again hosted by ANU, characterised by the broadening and integration of services, the evolution of a strong sustaining partnership, and the transition from high-terascale to petascale computational and data infrastructure to support Australian science.

     
  • richardmitnick 4:08 pm on January 26, 2018 Permalink | Reply
    Tags: 2017 Workshop on Open Source Supercomputing, ORNL Researchers Explore Supercomputing Workflow Best Practices, Supercomputing

    From HPCwire: “ORNL Researchers Explore Supercomputing Workflow Best Practices” 

    HPC Wire

    January 25, 2018
    Scientists at the Department of Energy’s Oak Ridge National Laboratory are examining the diverse supercomputing workflow management systems in use in the United States and around the world to help supercomputers work together more effectively and efficiently.

    Because supercomputers have largely developed in isolation from each other, existing modeling and simulation, grid/data analysis, and optimization workflows meet highly specific needs and therefore cannot easily be transferred from one computing environment to another.

    Divergent workflow management systems can make it difficult for research scientists at national laboratories to collaborate with partners at universities and international supercomputing centers to create innovative workflow-based solutions that are the strength and promise of supercomputing.

    Led by Jay Jay Billings, team lead for the Scientific Software Development group in ORNL’s Computing and Computational Sciences Directorate, the scientists have proposed a “building blocks” approach in which individual components from multiple workflow management systems are combined in specialized workflows.

    Billings worked with Shantenu Jha of the Computational Science Initiative at Brookhaven National Laboratory and Rutgers University, and Jha presented their research at the 2017 Workshop on Open Source Supercomputing in Denver in November 2017. Their article appears in the workshop’s proceedings.

    The researchers began by analyzing how existing workflow management systems work—the tasks and data they process, the order of execution, and the components involved. Workflow management systems can be characterized by whether a workflow is long- or short-running, whether it runs in internal cycles or linearly to an endpoint, and whether it requires human intervention to complete. Long used to understand business processes, the workflow concept was introduced in scientific contexts where automation is useful for research tasks such as setting up and running problems on supercomputers and then analyzing the resulting data.

    Viewed through the prism of today’s complex research endeavors, supercomputers’ workflows clearly have disconnects that can hamper scientific advancement. For example, Billings pointed out that a project might draw on multiple facilities’ work while acquiring data from experimental equipment, performing modeling and simulation on supercomputers, and conducting data analysis using grid computers or supercomputers. Workflow management systems with few common building blocks would require installation of one or more additional workflow management systems—a burdensome level of effort that also causes work to slow down.

    “Poor or nonexistent interoperability is almost certainly a consequence of the ‘Wild West’ state of the field,” Billings said. “And lack of interoperability limits reusability, so it may be difficult to replicate data analysis to verify research results or adapt the workflow for new problems.”

    The open building blocks workflows concept being advanced by ORNL’s Scientific Software Development group will enable supercomputers around the world to work together to address larger scientific problems that require workflows to run on multiple systems for complete execution.
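
    As a hedged sketch of what such a composition could look like in practice, the snippet below chains small, interchangeable components into a site-specific workflow. The function names and steps are hypothetical illustrations, not components from the paper.

```python
from typing import Callable, Dict, List

# Hypothetical "building blocks" composition: each block is a small, reusable
# step, and a site-specific workflow simply wires the blocks together.
Step = Callable[[Dict], Dict]

def acquire_experiment_data(ctx: Dict) -> Dict:
    ctx["raw_files"] = ["detector_run_001.dat"]   # placeholder file name
    return ctx

def run_simulation(ctx: Dict) -> Dict:
    ctx["sim_output"] = "simulation_results.h5"   # placeholder output
    return ctx

def analyze_results(ctx: Dict) -> Dict:
    ctx["report"] = "summary.json"                # placeholder report
    return ctx

def run_workflow(steps: List[Step]) -> Dict:
    ctx: Dict = {}
    for step in steps:
        ctx = step(ctx)        # each block only sees the shared context
    return ctx

# Different facilities could reuse the same blocks in different orders or subsets.
result = run_workflow([acquire_experiment_data, run_simulation, analyze_results])
print(result)
```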

    Future work includes testing the hypothesis that the group’s approach is more scalable, more sustainable, and a better practice overall.

    This research is supported by DOE and ORNL’s Laboratory Directed Research and Development program.

    ORNL is managed by UT–Battelle for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov/.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    HPCwire is the #1 news and information resource covering the fastest computers in the world and the people who run them. With a legacy dating back to 1987, HPCwire has delivered world-class editorial content and top-notch journalism, making it the portal of choice for science, technology and business professionals interested in high-performance and data-intensive computing. For topics ranging from late-breaking news and emerging technologies in HPC, to new trends, expert analysis, and exclusive features, HPCwire delivers it all and remains the HPC community’s most reliable and trusted resource. Don’t miss a thing – subscribe now to HPCwire’s weekly newsletter recapping the previous week’s HPC news, analysis and information at: http://www.hpcwire.com.

     
  • richardmitnick 1:12 pm on January 19, 2018 Permalink | Reply
    Tags: Supercomputing

    From OLCF: “Optimizing Miniapps for Better Portability” 


    Oak Ridge National Laboratory

    OLCF

    January 17, 2018
    Rachel Harken

    When scientists run their scientific applications on massive supercomputers, the last thing they want to worry about is optimizing their codes for new architectures. Computer scientist Sunita Chandrasekaran at the University of Delaware is taking steps to make sure they don’t have a reason to worry.

    Chandrasekaran collaborates with a team at the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) to optimize miniapps, smaller pieces of large applications that can be extracted and fine-tuned to run on GPU architectures. Chandrasekaran and her PhD student, Robert Searles, have taken on the task of porting (adapting) one such miniapp, Minisweep, to OpenACC—a directive-based programming model that allows users to run a code on multiple computing platforms without having to change or rewrite it.

    Minisweep performs a “sweep” computation across a grid (pictured)—representative of a 3D volume in space—to calculate the positions, energies, and flows of neutrons in a nuclear reactor. The yellow cube marks the beginning location of the sweep. The green cubes are dependent upon information from the yellow cube, the blue cubes are dependent upon information from the green cubes, and so forth. In practice, sweeps are performed from each of the eight corners of the cube simultaneously.

    Minisweep is particularly important because it represents approximately 80–99 percent of the computation time of Denovo, a 3D code for radiation transport in nuclear reactors being used in a current DOE Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, project. Minisweep is also being used in benchmarking for the Oak Ridge Leadership Computing Facility’s (OLCF’s) new Summit supercomputer.

    ORNL IBM Summit supercomputer depiction

    Summit is scheduled to be in full production in 2019 and will be the next leadership-class system at the OLCF, a DOE Office of Science User Facility located at ORNL.

    Created from Denovo by OLCF computational scientist Wayne Joubert, Minisweep works by “sweeping” diagonally across grid cells that represent points in space, allowing it to track the positions, flows, and energies of neutrons in a nuclear reactor. Cubes in the grid cell represent a number of these qualities and depend on information from previous cubes in the grid.
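
    The dependency pattern described above can be pictured as a wavefront: every cell on the diagonal plane i + j + k = w depends only on cells from plane w - 1, so all cells within a plane can in principle be updated in parallel, for example on a GPU. The sketch below is a schematic Python illustration of a single-corner sweep, not Minisweep’s actual algorithm.

```python
import numpy as np

# Schematic wavefront sweep from one corner of an n x n x n grid
# (an illustration of the dependency ordering, not Minisweep itself).
# Cells on the plane i + j + k = w depend only on cells from plane w - 1,
# so every cell within a plane could be updated in parallel, e.g. on a GPU.
def sweep_from_corner(n):
    value = np.zeros((n, n, n))
    for w in range(3 * (n - 1) + 1):            # wavefront (diagonal plane) index
        for i in range(n):
            for j in range(n):
                k = w - i - j
                if 0 <= k < n:
                    upstream = 0.0
                    if i > 0:
                        upstream += value[i - 1, j, k]
                    if j > 0:
                        upstream += value[i, j - 1, k]
                    if k > 0:
                        upstream += value[i, j, k - 1]
                    value[i, j, k] = 1.0 + upstream / 3.0   # placeholder update rule
    return value

grid = sweep_from_corner(8)
```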

    “Scientists need to know how neutrons are flowing in a reactor because it can help them figure out how to build the radiation shield around it,” Chandrasekaran said. “Using Denovo, physicists can simulate this flow of neutrons, and with a faster code, they can compute many different configurations quickly and get their work done faster.”

    Minisweep has already been ported to multicore platforms using the OpenMP programming interface and to GPU accelerators using the lower-level programming language CUDA. ORNL computer scientists and ORNL Miniapps Port Collaboration organizers Tiffany Mintz and Oscar Hernandez knew that porting these kinds of codes to OpenACC would equip them for use on different high-performance computing architectures.

    Chandrasekaran and Searles have been using the Summit early access system, Summitdev, and the Cray XK7 Titan supercomputer at the OLCF to test Minisweep since mid-2017.

    ORNL Cray XK7 Titan Supercomputer

    Visualization of a nuclear reactor simulation on Titan.

    Now, they’ve successfully enabled Minisweep to run on parallel architectures using OpenACC for fast execution on the targeted computer. An option to port to these types of systems without compromising performance didn’t previously exist.

    Whereas the code typically sweeps in eight directions from diagonal corners of a cube inward, the team saw that with only one sweep, the OpenACC version performed on par with CUDA.

    “We saw OpenACC performing as well as CUDA on an NVIDIA Volta GPU, which is a state-of-the-art GPU card,” Searles said. “That’s huge for us to take away, because we are normally lucky to get performance that’s even 85 percent of CUDA. That one sweep consistently showed us about 0.3 or 0.4 seconds faster, which is significant at the problem size we used for measuring performance.”

    Chandrasekaran and the team at ORNL will continue optimizing Minisweep to get the application up and “sweeping” from all eight corners of a grid cell. Other radiation transport applications and one for DNA sequencing may be able to take advantage of Minisweep for multiple GPU architectures such as Summit—and even exascale systems—in the future.

    “I’m constantly trying to look at how I can package these kinds of tools from a user’s perspective,” Chandrasekaran said. “I take applications that are essential for these scientists’ research and try to find out how to make them more accessible. I always say: write once, reuse multiple times.”

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science. DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.


    The Oak Ridge Leadership Computing Facility (OLCF) was established at Oak Ridge National Laboratory in 2004 with the mission of accelerating scientific discovery and engineering progress by providing outstanding computing and data management resources to high-priority research and development projects.

    ORNL’s supercomputing program has grown from humble beginnings to deliver some of the most powerful systems in the world. On the way, it has helped researchers deliver practical breakthroughs and new scientific knowledge in climate, materials, nuclear science, and a wide range of other disciplines.

    The OLCF delivered on that original promise in 2008, when its Cray XT “Jaguar” system ran the first scientific applications to exceed 1,000 trillion calculations a second (1 petaflop). Since then, the OLCF has continued to expand the limits of computing power, unveiling Titan in 2013, which is capable of 27 petaflops.


    ORNL Cray XK7 Titan Supercomputer

    Titan is one of the first hybrid architecture systems—a combination of graphics processing units (GPUs) and the more conventional central processing units (CPUs) that have served as number crunchers in computers for decades. The parallel structure of GPUs makes them uniquely suited to process an enormous number of simple computations quickly, while CPUs are capable of tackling more sophisticated computational algorithms. The complementary combination of CPUs and GPUs allows Titan to reach its peak performance.

    The OLCF gives the world’s most advanced computational researchers an opportunity to tackle problems that would be unthinkable on other systems. The facility welcomes investigators from universities, government agencies, and industry who are prepared to perform breakthrough research in climate, materials, alternative energy sources and energy storage, chemistry, nuclear physics, astrophysics, quantum mechanics, and the gamut of scientific inquiry. Because it is a unique resource, the OLCF focuses on the most ambitious research projects—projects that provide important new knowledge or enable important new technologies.

     
  • richardmitnick 11:23 am on January 8, 2018 Permalink | Reply
    Tags: Scientist’s Work May Provide Answer to Martian Mountain Mystery, Supercomputing

    From U Texas Dallas: “Scientist’s Work May Provide Answer to Martian Mountain Mystery” 

    U Texas Dallas

    Jan. 8, 2018
    Stephen Fontenot, UT Dallas
    (972) 883-4405
    stephen.fontenot@utdallas.edu

    By seeing which way the wind blows, a University of Texas at Dallas fluid dynamics expert has helped propose a solution to a Martian mountain mystery.

    Dr. William Anderson

    Dr. William Anderson, an assistant professor of mechanical engineering in the Erik Jonsson School of Engineering and Computer Science, co-authored a paper published in the journal Physical Review E that explains the common Martian phenomenon of a mountain positioned downwind from the center of an ancient meteorite impact zone.

    Anderson’s co-author, Dr. Mackenzie Day, worked on the project as part of her doctoral research at The University of Texas at Austin, where she earned her PhD in geology in May 2017. Day is a postdoctoral scholar at the University of Washington in Seattle.

    Gale Crater was formed by meteorite impact early in the history of Mars, and it was subsequently filled with sediments transported by flowing water. This filling preceded massive climate change on the planet, which introduced the arid, dusty conditions that have been prevalent for the past 3.5 billion years. This chronology indicates wind must have played a role in sculpting the mountain.

    “On Mars, wind has been the only driver of landscape change for over 3 billion years,” Anderson said. “This makes Mars an ideal planetary laboratory for aeolian morphodynamics — wind-driven movement of sediment and dust. We’re studying how Mars’ swirling atmosphere sculpted its surface.”

    Wind vortices blowing across the crater slowly formed a radial moat in the sediment, eventually leaving only the off-center Mount Sharp, a 3-mile-high peak similar in height to the rim of the crater. The mountain was skewed to one side of the crater because the wind excavated one side faster than the other, the research suggests.

    Day and Anderson first advanced the concept in an initial publication on the topic in Geophysical Research Letters. Now, they have shown via computer simulation that, given more than a billion years, Martian winds were capable of digging up tens of thousands of cubic kilometers of sediment from the crater — largely thanks to turbulence, the swirling motion within the wind stream.
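
    For a sense of scale, a back-of-envelope estimate of the excavation rate implied by those figures (the assumed volume and crater size below are illustrative round numbers, not values from the paper):

```python
import math

# Order-of-magnitude excavation rate implied by the figures in the text.
# Assumes roughly 30,000 km^3 removed over about 1 billion years from a
# Gale-sized crater; the exact volume and crater diameter are illustrative.
volume_km3 = 30_000
years = 1.0e9
crater_diameter_km = 154        # Gale Crater is roughly 154 km across

volume_m3 = volume_km3 * 1e9
area_m2 = math.pi * (crater_diameter_km * 1e3 / 2) ** 2

rate_m3_per_year = volume_m3 / years
lowering_mm_per_year = rate_m3_per_year / area_m2 * 1e3

print(f"~{rate_m3_per_year:,.0f} cubic meters of sediment removed per year")
print(f"~{lowering_mm_per_year:.4f} mm of average surface lowering per year")
```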

    A digital elevation model of Gale Crater shows the pattern of mid-latitude Martian craters with interior sedimentary mounds.

    “The role of turbulence cannot be overstated,” Anderson said. “Since sediment movement increases non-linearly with drag imposed by the aloft winds, turbulent gusts literally amplify sediment erosion and transport.”

    The location — and mid-latitude Martian craters in general — became of interest as NASA’s Curiosity rover landed in Gale Crater in 2012, where it has gathered data since then.

    “The rover is digging and cataloging data housed within Mount Sharp,” Anderson said. “The basic science question of what causes these mounds has long existed, and the mechanism we simulated has been hypothesized. It was through high-fidelity simulations and careful assessment of the swirling eddies that we could demonstrate efficacy of this model.”

    The theory Anderson and Day tested via computer simulations involves counter-rotating vortices — picture in your mind horizontal dust devils — spiraling around the crater to dig up sediment that had filled the crater in a warmer era, when water flowed on Mars.

    “These helical spirals are driven by winds in the crater, and, we think, were foremost in churning away at the dry Martian landscape and gradually scooping sediment from within the craters, leaving behind these off-center mounds,” Anderson said.

    That simulations have demonstrated that wind erosion could explain these geographical features offers insight into Mars’ distant past, as well as context for the samples collected by Curiosity.

    “It’s further indication that turbulent winds in the atmosphere could have excavated sediment from the craters,” Anderson said. “The results also provide guidance on how long different surface samples have been exposed to Mars’ thin, dry atmosphere.”

    This understanding of the long-term power of wind can be applied to Earth as well, although there are more variables on our home planet than Mars, Anderson said.

    “Swirling, gusty winds in Earth’s atmosphere affect problems at the nexus of landscape degradation, food security and epidemiological factors affecting human health,” Anderson said. “On Earth, however, landscape changes are also driven by water and plate tectonics, which are now absent on Mars. These drivers of landscape change generally dwarf the influence of air on Earth.”

    Computational resources for the study were provided by the Texas Advanced Computing Center at UT Austin.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer


    TACC DELL EMC Stampede2 supercomputer


    Day’s role in the research was supported by a Graduate Research Fellowship from the National Science Foundation.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The University of Texas at Dallas is a Carnegie R1 classification (Doctoral Universities – Highest research activity) institution, located in a suburban setting 20 miles north of downtown Dallas. The University enrolls more than 27,600 students — 18,380 undergraduate and 9,250 graduate — and offers a broad array of bachelor’s, master’s, and doctoral degree programs.

    Established by Eugene McDermott, J. Erik Jonsson and Cecil Green, the founders of Texas Instruments, UT Dallas is a young institution driven by the entrepreneurial spirit of its founders and their commitment to academic excellence. In 1969, the public research institution joined The University of Texas System and became The University of Texas at Dallas.

    A high-energy, nimble, innovative institution, UT Dallas offers top-ranked science, engineering and business programs and has gained prominence for a breadth of educational paths from audiology to arts and technology. UT Dallas’ faculty includes a Nobel laureate, six members of the National Academies and more than 560 tenured and tenure-track professors.

     
  • richardmitnick 10:32 am on December 20, 2017 Permalink | Reply
    Tags: Computation combined with experimentation helped advance work in developing a model of osteoregeneration, Genes could be activated in human stem cells that initiate biomineralization a key step in bone formation, SDSC Dell Comet supercomputer, Silk has been shown to be a suitable scaffold for tissue regeneration, Silky Secrets to Make Bones, Stampede1, Supercomputing

    From TACC: “Silky Secrets to Make Bones” 

    TACC bloc

    Texas Advanced Computing Center

    December 19, 2017
    Jorge Salazar

    Scientists used supercomputers and fused golden orb weaver spider web silk with silica to activate genes in human stem cells that initiated biomineralization, a key step in bone formation. (devra/flickr)

    Some secrets to repair our skeletons might be found in the silky webs of spiders, according to recent experiments guided by supercomputers. Scientists involved say their results will help researchers understand the details of osteoregeneration, or how bones regenerate.

    A study found that genes could be activated in human stem cells that initiate biomineralization, a key step in bone formation. Scientists achieved these results with engineered silk derived from the dragline of golden orb weaver spider webs, which they combined with silica. The study appeared in September 2017 in the journal Advanced Functional Materials and is the result of a combined effort by three institutions: Tufts University, the Massachusetts Institute of Technology, and Nottingham Trent University.

    XSEDE supercomputers Stampede at TACC and Comet at SDSC helped study authors simulate the head piece domain of the cell membrane protein receptor integrin in solution, based on molecular dynamics modeling. (Davoud Ebrahimi)

    SDSC Dell Comet supercomputer

    Study authors used the supercomputers Stampede1 at the Texas Advanced Computing Center (TACC) and Comet at the San Diego Supercomputer Center (SDSC) at the University of California San Diego through an allocation from XSEDE, the eXtreme Science and Engineering Discovery Environment, funded by the National Science Foundation. The supercomputers helped scientists model how the cell membrane protein receptor called integrin folds and activates the intracellular pathways that lead to bone formation. The research will help larger efforts to cure bone growth diseases such as osteoporosis or calcific aortic valve disease.

    “This work demonstrates a direct link between silk-silica-based biomaterials and intracellular pathways leading to osteogenesis,” said study co-author Zaira Martín-Moldes, a post-doctoral scholar at the Kaplan Lab at Tufts University. She researches the development of new biomaterials based on silk. “The hybrid material promoted the differentiation of human mesenchymal stem cells, the progenitor cells from the bone marrow, to osteoblasts as an indicator of osteogenesis, or bone-like tissue formation,” Martín-Moldes said.

    “Silk has been shown to be a suitable scaffold for tissue regeneration, due to its outstanding mechanical properties,” Martín-Moldes explained. It’s biodegradable. It’s biocompatible. And it’s fine-tunable through bioengineering modifications. The experimental team at Tufts University modified the genetic sequence of silk from golden orb weaver spiders (Nephila clavipes) and fused it with the silica-promoting peptide R5, derived from the silaffin gene of the diatom Cylindrotheca fusiformis.

    The bone formation study targeted biomineralization, a critical process in materials biology. “We would love to generate a model that helps us predict and modulate these responses both in terms of preventing the mineralization and also to promote it,” Martín-Moldes said.

    “High performance supercomputing simulations are utilized along with experimental approaches to develop a model for the integrin activation, which is the first step in the bone formation process,” said study co-author Davoud Ebrahimi, a postdoctoral associate at the Laboratory for Atomistic and Molecular Mechanics of the Massachusetts Institute of Technology.

    Integrin embeds itself in the cell membrane and mediates signals between the inside and the outside of cells. In its dormant state, the head unit sticking out of the membrane is bent over like a nodding sleeper. This inactive state prevents cellular adhesion. In its activated state, the head unit straightens out and is available for chemical binding at its exposed ligand region.

    “Sampling different states of the conformation of integrins in contact with silicified or non-silicified surfaces could predict activation of the pathway,” Ebrahimi explained. Sampling the folding of proteins remains a classically computationally expensive problem, despite recent and large efforts in developing new algorithms.

    The derived silk–silica chimera they studied weighed in around a hefty 40 kilodaltons. “In this research, what we did in order to reduce the computational costs, we have only modeled the head piece of the protein, which is getting in contact with the surface that we’re modeling,” Ebrahimi said. “But again, it’s a big system to simulate and can’t be done on an ordinary system or ordinary computers.”
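
    A purely illustrative estimate suggests why even a construct of roughly this size becomes expensive once it is solvated for molecular dynamics; the per-residue figures and solvation multiplier below are generic assumptions, not numbers from the study.

```python
# Very rough size estimate for a ~40 kDa protein construct (generic assumptions).
mass_kda = 40
avg_residue_mass_da = 110        # typical average amino-acid residue mass
atoms_per_residue = 15           # rough average including hydrogens
solvation_multiplier = 20        # water and ions usually dominate the atom count

residues = mass_kda * 1000 / avg_residue_mass_da
protein_atoms = residues * atoms_per_residue
solvated_atoms = protein_atoms * solvation_multiplier

print(f"~{residues:.0f} residues, ~{protein_atoms:,.0f} protein atoms, "
      f"~{solvated_atoms:,.0f} atoms once solvated")
```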

    The computational team at MIT used the molecular dynamics package GROMACS, software for chemical simulation available on both the Stampede1 and Comet supercomputing systems. “We could perform those large simulations by having access to XSEDE computational clusters,” he said.

    “I have a very long-standing positive experience using XSEDE resources,” said Ebrahimi. “I’ve been using them for almost 10 years now for my projects during my graduate and post-doctoral experiences. And the staff at XSEDE are really helpful if you encounter any problems. If you need software that should be installed and it’s not available, they help and guide you through the process of doing your research. I remember exchanging a lot of emails the first time I was trying to use the clusters, and I was not so familiar. I got a lot of help from XSEDE resources and people at XSEDE. I really appreciate the time and effort that they put in order to solve computational problems that we usually encounter during our simulation,” Ebrahimi reflected.

    Computation combined with experimentation helped advance work in developing a model of osteoregeneration. “We propose a mechanism in our work,” explained Martín-Moldes, “that starts with the silica-silk surface activating a specific cell membrane protein receptor, in this case integrin αVβ3.” She said this activation triggers a cascade in the cell through three mitogen-activated protein kinase (MAPK) pathways, the main one being the c-Jun N-terminal kinase (JNK) cascade.

    Proposed mechanism for hMSC osteogenesis induction on silica surfaces. A) The binding of integrin αVβ3 to the silica surface promotes its activation, which triggers an activation cascade involving the three MAPK pathways, ERK, p38, and mainly JNK (reflected as a wider arrow); JNK promotes AP-1 activation and translocation to the nucleus to activate the Runx2 transcription factor. Runx2 is ultimately responsible for the induction of bone extracellular matrix proteins and other osteoblast differentiation genes. B) In the presence of a neutralizing antibody against αVβ3, there is no activation and induction of MAPK cascades, thus no induction of bone extracellular matrix genes and hence no differentiation. (Davoud Ebrahimi)

    She added that other factors are also involved in this process, such as Runx2, the main transcription factor related to osteogenesis. According to the study, the control samples did not show any response, and neither did cells in which integrin was blocked with an antibody, confirming the receptor’s involvement in this process. “Another important outcome was the correlation between the amount of silica deposited in the film and the level of induction of the genes that we analyzed,” Martín-Moldes said. “These factors also provide an important feature to control in future material design for bone-forming biomaterials.”

    “We are doing a basic research here with our silk-silica systems,” Martín-Moldes explained. “But we are helping in building the pathway to generate biomaterials that could be used in the future. The mineralization is a critical process. The final goal is to develop these models that help design the biomaterials to optimize the bone regeneration process, when the bone is required to regenerate or to minimize it when we need to reduce the bone formation.”

    These results help advance the research and are useful in larger efforts to help cure and treat bone diseases. “We could help in curing disease related to bone formation, such as calcific aortic valve disease or osteoporosis, which we need to know the pathway to control the amount of bone formed, to either reduce or increase it,” Ebrahimi said.

    “Intracellular Pathways Involved in Bone Regeneration Triggered by Recombinant Silk–Silica Chimeras,” DOI: 10.1002/adfm.201702570, appeared September 2017 in the journal Advanced Functional Materials. The National Institutes of Health funded the study, and the National Science Foundation through XSEDE provided computational resources. The study authors are Zaira Martín-Moldes, Nina Dinjaski, David L. Kaplan of Tufts University; Davoud Ebrahimi and Markus J. Buehler of the Massachusetts Institute of Technology; Robyn Plowright and Carole C. Perry of Nottingham Trent University.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    The Texas Advanced Computing Center (TACC) designs and operates some of the world’s most powerful computing resources. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies.

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell Poweredge U Texas Austin Stampede Supercomputer. Texas Advanced Computer Center 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer


    TACC DELL EMC Stampede2 supercomputer

     
  • richardmitnick 4:02 pm on December 11, 2017 Permalink | Reply
    Tags: HPC centers, Supercomputing

    From ESnet: “ESnet’s Petascale DTN Project Speeds up Data Transfers between Leading HPC Centers” 

    ESnet map

    ESnet

    2017-12-11

    Operations staff monitor the network in the ESnet/NERSC control room. (Photo by Marilyn Chung, Berkeley Lab)

    The Department of Energy’s (DOE) Office of Science operates three of the world’s leading supercomputing centers, where massive data sets are routinely imported, analyzed, used to create simulations and exported to other sites. Fortunately, DOE also runs a networking facility, ESnet (short for Energy Sciences Network), the world’s fastest network for science, which is managed by Lawrence Berkeley National Laboratory.

    Over the past two years, ESnet engineers have been working with staff at DOE labs to fine tune the specially configured systems called data transfer nodes (DTNs) that move data in and out of the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and the leadership computing facilities at Argonne National Laboratory in Illinois and Oak Ridge National Laboratory in Tennessee. All three of the computing centers and ESnet are DOE Office of Science User Facilities used by thousands of researchers across the country.

    NERSC Cray XC40 Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer


    The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

    NERSC PDSF


    PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

    ANL ALCF Cetus IBM supercomputer

    ANL ALCF Theta Cray supercomputer

    ANL ALCF Cray Aurora supercomputer

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Currently at the ORNL OLCF

    Titan Cray XK7 at the OLCF

    Soon to come at the ORNL OLCF

    ORNL IBM Summit supercomputer depiction

    The collaboration, named the Petascale DTN project, also includes the National Center for Supercomputing Applications (NCSA) at the University of Illinois in Urbana-Champaign, a leading center funded by the National Science Foundation (NSF). Together, the collaboration aims to achieve regular disk-to-disk, end-to-end transfer rates of one petabyte per week between major facilities, which translates to achievable throughput rates of about 15 Gbps on real world science data sets.
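
    A quick check of that conversion, treating one petabyte per week as a sustained bit rate:

```python
# Check of the quoted figure: one petabyte per week as a sustained bit rate.
petabyte_bits = 1e15 * 8          # 1 PB (decimal) in bits
seconds_per_week = 7 * 24 * 3600

rate_gbps = petabyte_bits / seconds_per_week / 1e9
print(f"1 PB/week is about {rate_gbps:.1f} Gbps sustained")   # ~13 Gbps, consistent with ~15 Gbps
```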

    Blue Waters Cray supercomputer at the University of Illinois at Urbana-Champaign

    Performance data from March 2016 showing transfer rates between facilities. (Image credit: Eli Dart, ESnet)

    Research projects such as cosmology and climate have very large (multi-petabyte) datasets and scientists typically compute at multiple HPC centers, moving data between facilities in order to take full advantage of the computing and storage allocations available at different sites.

    Since data transfers traverse multiple networks, the slowest link determines the overall speed. Tuning the data transfer nodes and the border router where a center’s internal network connects to ESnet can smooth out virtual speedbumps. Because transfers over the wide area network have high latency between sender and receiver, getting the highest speed requires careful configuration of all the devices along the data path, not just the core network.
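
    One reason the end systems need such careful configuration is the bandwidth-delay product: at high latency, send and receive buffers must be large enough to keep the whole path full. A minimal sketch with an assumed round-trip time (the latency value below is illustrative):

```python
# Bandwidth-delay product for a long-haul path (illustrative numbers).
bandwidth_gbps = 15           # per-transfer target rate from the text
round_trip_ms = 70            # assumed cross-country round-trip time

bdp_bytes = (bandwidth_gbps * 1e9 / 8) * (round_trip_ms / 1e3)
print(f"Buffering needed to keep the path full: about {bdp_bytes / 1e6:.0f} MB")
```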

    In the past few weeks, the project has shown sustained data transfers at well over the target rate of 1 petabyte per week. The number of sites with this base capability is also expanding, with Brookhaven National Laboratory in New York now testing its transfer capabilities with encouraging results. Future plans include bringing the NSF-funded San Diego Supercomputer Center and other big data sites into the mix.

    SDSC Triton HP supercomputer

    SDSC Gordon-Simons supercomputer

    SDSC Dell Comet supercomputer

    “This increase in data transfer capability benefits projects across the DOE mission science portfolio,” said Eli Dart, an ESnet network engineer and leader of the project. “HPC facilities are central to many collaborations, and they are becoming more important to more scientists as data rates and volumes increase. The ability to move data in and out of HPC facilities at scale is critical to the success of an ever-growing set of projects.”

    When it comes to moving data, there are many factors to consider, including the number of transfer nodes and their speeds, their utilization, the file systems connected to these transfer nodes on both sides, and the network path between them, according to Daniel Pelfrey, a high performance computing network administrator at the Oak Ridge Leadership Computing Facility.

    The actual improvements being made range from updating software on the DTNs to changing the configuration of existing DTNs to adding new nodes at the centers.

    Performance measurements from November 2017 at the end of the Petascale DTN project. All of the sites met or exceeded project goals. (Image Credit: Eli Dart, ESnet)

    “Transfer node operating systems and applications need to be configured to allow for WAN transfer,” Pelfrey said. “The connection is only going to be as fast as the slowest point in the path allows. A heavily utilized server, or a misconfigured server, or a heavily utilized network, or heavily utilized file system can degrade the transfer and make it take much longer.”

    At NERSC, the DTN project resulted in adding eight more nodes, tripling the number, in order to achieve enough internal bandwidth to meet the project’s goals. “It’s a fairly complicated thing to do,” said Damian Hazen, head of NERSC’s Storage Systems Group. “It involves adding infrastructure and tuning as we connected our border routers to internal routers to the switches connected to the DTNs. Then we needed to install the software, get rid of some bugs and tune the entire system for optimal performance.”

    The work spanned two months and involved NERSC’s Storage Systems, Networking, and Data and Analytics Services groups, as well as ESnet, all working together, Hazen said.

    At the Argonne Leadership Computing Facility, the DTNs were already in place, and with minor tuning, transfer speeds were increased to the 15 Gbps target.

    “One of our users, Katrin Heitmann, had a ton of cosmology data to move and she saw a tremendous benefit from the project,” said Bill Allcock, who was director of operations at the ALCF during the project. “The project improved the overall end-to-end transfer rates, which is especially important for our users who are either moving their data to a community archive outside the center or are using data archived elsewhere and need to pull it in to compute with it at the ALCF.”

    As a result of the Petascale DTN project, the OLCF now has 28 transfer nodes in production on 40-Gigabit Ethernet. The nodes are deployed under a new model—a diskless boot—which makes it easy for OLCF staff to move resources around, reallocating as needed to respond to users’ needs.

    “The Petascale DTN project basically helped us increase the ‘horsepower under the hood’ of network services we provide and make them more resilient,” said Jason Anderson, an HPC UNIX/storage systems administrator at OLCF. “For example, we recently moved 12TB of science data from OLCF to NCSA in less than 30 minutes. That’s fast!”

    Anderson recalled that a user at the May 2017 OLCF user meeting said that she was very pleased with how quickly and easily she was able to move her data to take advantage of the breadth of the Department of Energy’s computing resources.

    “When the initiative started we were in the process of implementing a Science DMZ and upgrading our network,” Pelfrey said. “At the time, we could move a petabyte internally in 6-18 hours, but moving a petabyte externally would have taken just a bit over a week. With our latest upgrades, we have the ability to move a petabyte externally in about 48 hours.”
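
    For comparison with the project’s roughly 15 Gbps per-transfer target, the throughput implied by those two transfers works out as follows (simple arithmetic on the figures quoted above):

```python
# Throughput implied by the two transfers described above.
bits_12tb = 12e12 * 8
print(f"12 TB in 30 minutes is about {bits_12tb / (30 * 60) / 1e9:.0f} Gbps")   # ~53 Gbps

bits_1pb = 1e15 * 8
print(f"1 PB in 48 hours is about {bits_1pb / (48 * 3600) / 1e9:.0f} Gbps")     # ~46 Gbps
```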

    The fourth site in the project is the NSF-funded NCSA in Illinois, where senior network engineer Matt Kollross said it’s important for NCSA, the only non-DOE participant, to collaborate with other DOE HPC sites to develop common practices and speed up adoption of new technologies.

    “The participation in this project helped confirm that the design and investments in network and storage that we made when building Blue Waters five years ago were solid investments and will help in the design of future systems here and at other centers,” Kollross said. “It’s important that real-world benchmarks which test many aspects of an HPC system, such as storage, file systems and networking, be considered in evaluating overall performance of an HPC compute system and help set reasonable expectations for scientists and researchers.”

    Origins of the project

    The project grew out of a Cross-Connects Workshop on “Improving Data Mobility & Management for International Cosmology,” held at Berkeley Lab in February 2015 and co-sponsored by ESnet and Internet2.

    Salman Habib, who leads the Computational Cosmology Group at Argonne National Laboratory, gave a talk at the workshop, noting that large-scale simulations are critical for understanding observational data and that the size and scale of simulation datasets far exceed those of observational data. “To be able to observe accurately, we need to create accurate simulations,” he said.

    During the workshop, Habib and other attendees spoke about the need to routinely move these large data sets between computing centers and agreed that it would be important to be able to move at least a petabyte a week. As the Argonne lead for DOE’s High Energy Physics Center for Computational Excellence project, Habib had been working with ESnet and other labs on data transfer issues.

    To get the project moving, Katrin Heitmann, who works in cosmology at Argonne, created a data package of small and medium files totaling about 4.4 terabytes. The data would then be used to test network links between the leadership computing facilities at Argonne and Oak Ridge national labs, the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory, and the National Center for Supercomputing Applications (NCSA) at the University of Illinois in Urbana-Champaign, a leading center funded by the National Science Foundation.

    “The idea was to use the data as a test, to send it over and over and over between the centers,” Habib said. “We wanted to establish a performance baseline, then see if we could improve the performance by eliminating any choke points.”

    Habib admitted that moving a petabyte in a week would use only a fraction of ESnet’s total bandwidth, but the goal was to automate the transfers using Globus Online, a primary tool researchers use to share data rapidly over high-performance networks like ESnet and to reach remote computing facilities.
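    For readers curious what such automation looks like in practice, here is a minimal sketch using the present-day globus-sdk Python package (the article refers to Globus Online, the earlier name for the hosted service). The endpoint IDs, paths, and token handling are placeholders, not details from the project:

        import globus_sdk

        # Placeholders: the endpoint UUIDs, paths, and token are assumptions for illustration.
        SOURCE_ENDPOINT = "uuid-of-source-dtn"
        DEST_ENDPOINT = "uuid-of-destination-dtn"
        TRANSFER_TOKEN = "access-token-from-globus-auth"

        tc = globus_sdk.TransferClient(
            authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
        )

        # One repeatable benchmark job: ship the test data package and verify it with checksums.
        tdata = globus_sdk.TransferData(
            tc,
            source_endpoint=SOURCE_ENDPOINT,
            destination_endpoint=DEST_ENDPOINT,
            label="Petascale DTN benchmark run",
            sync_level="checksum",
        )
        tdata.add_item("/data/dtn-test-package/", "/scratch/dtn-test-package/", recursive=True)

        task = tc.submit_transfer(tdata)
        print("Submitted Globus transfer task:", task["task_id"])

    Scheduling a script like this to run repeatedly is what turns a one-off copy into the kind of routine, baseline-setting transfer the project needed.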

    “For our research, it’s very important that we have the ability to transfer large amounts of data,” Habib said. “For example, we may run a simulation at one of the large DOE computing centers, but often where we run the simulation is not where we want to do the analysis. Each center has different capabilities and we have various accounts at the centers, so the data gets moved around to take advantage of this. It happens all the time.”

    Although the project’s roots are in cosmology, the Petascale DTN project will help all DOE scientists who have a need to transfer data to, from, or between the DOE computing facilities to take advantage of rapidly advancing data analytics techniques. In addition, the increase in data transfer capability at the HPC facilities will improve the performance of data portals, such as the Research Data Archive at the National Center for Atmospheric Research, that use Globus to transfer data from their storage systems.

    “As scientists deal with the data deluge and more research disciplines depend on high-performance computing, data movement between computing centers needs to be a no-brainer for scientists so they can take advantage of the compute cycles at all DOE Office of Science user facilities and the extreme heterogeneity of systems in the future,” said ESnet Director Inder Monga.

    This work was supported by the HEP Center for Computational Excellence. ESnet is funded by DOE’s Office of Science.

    Not included in this Center:

    Ohio Supercomputer Center

    Ohio Oakley HP supercomputer

    Ohio Ruby HP supercomputer

    Ohio Dell Owens supercomputer

    TACC Maverick HP NVIDIA supercomputer

    TACC Lonestar Cray XC40 supercomputer

    Dell PowerEdge U Texas Austin Stampede supercomputer, Texas Advanced Computing Center, 9.6 PF

    TACC HPE Apollo 8000 Hikari supercomputer

    TACC Dell EMC Stampede2 supercomputer


    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    Created in 1986, the U.S. Department of Energy’s (DOE’s) Energy Sciences Network (ESnet) is a high-performance network built to support unclassified science research. ESnet connects more than 40 DOE research sites—including the entire National Laboratory system, supercomputing facilities and major scientific instruments—as well as hundreds of other science networks around the world and the Internet.

     
  • richardmitnick 4:32 pm on November 28, 2017 Permalink | Reply
    Tags: , , , , , , NERSC Cori II XC40 supercomputer, , , , Supercomputing   

    From BNL: “High-Performance Computing Cuts Particle Collision Data Prep Time” 

    Brookhaven Lab

    November 28, 2017
    Karen McNulty Walsh
    kmcnulty@bnl.gov

    New approach to raw data reconstruction has potential to turn particle tracks into physics discoveries faster.

    1
    Mark Lukascsyk, Jérôme Lauret, and Levente Hajdu standing beside a tape silo at the RHIC & ATLAS Computing Facility at Brookhaven National Laboratory. Data sets from RHIC runs are stored on tape and were transferred from Brookhaven to NERSC.

    For the first time, scientists have used high-performance computing (HPC) to reconstruct the data collected by a nuclear physics experiment—an advance that could dramatically reduce the time it takes to make detailed data available for scientific discoveries.

    The demonstration project used the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC), a high-performance computing center at Lawrence Berkeley National Laboratory in California, to reconstruct multiple datasets collected by the STAR detector during particle collisions at the Relativistic Heavy Ion Collider (RHIC), a nuclear physics research facility at Brookhaven National Laboratory in New York.

    NERSC Cray Cori II XC40 supercomputer at NERSC at LBNL

    BNL/RHIC Star Detector


    BNL RHIC Campus

    “The reason why this is really fantastic,” said Brookhaven physicist Jérôme Lauret, who manages STAR’s computing needs, “is that these high-performance computing resources are elastic. You can call to reserve a large allotment of computing power when you need it—for example, just before a big conference when physicists are in a rush to present new results.” According to Lauret, preparing raw data for analysis typically takes many months, making it nearly impossible to provide such short-term responsiveness. “But with HPC, perhaps you could condense that many months’ production time into a week. That would really empower the scientists!”

    The accomplishment showcases the synergistic capabilities of RHIC and NERSC—U.S. Department of Energy (DOE) Office of Science User Facilities located at DOE-run national laboratories on opposite coasts—connected by one of the most extensive high-performance data-sharing networks in the world, DOE’s Energy Sciences Network (ESnet), another DOE Office of Science User Facility.

    “This is a key usage model of high-performance computing for experimental data, demonstrating that researchers can get their raw data processing or simulation campaigns done in a few days or weeks at a critical time instead of spreading out over months on their own dedicated resources,” said Jeff Porter, a member of the data and analytics services team at NERSC.

    NERSC Cray XC40 Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer


    The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

    NERSC PDSF


    PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

    Billions of data points

    To make physics discoveries at RHIC, scientists must sort through hundreds of millions of collisions between ions accelerated to very high energy. STAR, a sophisticated, house-sized electronic instrument, records the subatomic debris streaming from these particle smashups. In the most energetic events, many thousands of particles strike detector components, producing firework-like displays of colorful particle tracks. But to figure out what these complex signals mean, and what they can tell us about the intriguing form of matter created in RHIC’s collisions, scientists need detailed descriptions of all the particles and the conditions under which they were produced. They must also compare huge statistical samples from many different types of collision events.

    Cataloging that information requires sophisticated algorithms and pattern recognition software to combine signals from the various readout electronics, and a seamless way to match that data with records of collision conditions. All the information must then be packaged in a way that physicists can use for their analyses.

    By running multiple computing jobs simultaneously on the allotted supercomputing cores, the team transformed 4.73 petabytes of raw data into 2.45 petabytes of “physics-ready” data in a fraction of the time it would have taken using in-house high-throughput computing resources, even with a two-way transcontinental data journey.

    Since RHIC started running in the year 2000, this raw data processing, or reconstruction, has been carried out on dedicated computing resources at the RHIC and ATLAS Computing Facility (RACF) at Brookhaven. High-throughput computing (HTC) clusters crunch the data, event-by-event, and write out the coded details of each collision to a centralized mass storage space accessible to STAR physicists around the world.

    But the challenge of keeping up with the data has grown with RHIC’s ever-improving collision rates and as new detector components have been added. In recent years, STAR’s annual raw data sets have reached billions of events with data sizes in the multi-petabyte range. So the STAR computing team investigated the use of external resources to meet the demand for timely access to physics-ready data.

    Many cores make light work

    Unlike the high-throughput computers at the RACF, which analyze events one-by-one, HPC resources like those at NERSC break large problems into smaller tasks that can run in parallel. So the first challenge was to “parallelize” the processing of STAR event data.

    “We wrote workflow programs that achieved the first level of parallelization—event parallelization,” Lauret said. That means they submit fewer jobs made of many events that can be processed simultaneously on the many HPC computing cores.

    3
    In high-throughput computing, a workload made up of data from many STAR collisions is processed event-by-event in a sequential manner to give physicists “reconstructed data” —the product they need to fully analyze the data. High-performance computing breaks the workload into smaller chunks that can be run through separate CPUs to speed up the data reconstruction. In this simple illustration, breaking a workload of 15 events into three chunks of five events processed in parallel yields the same product in one-third the time as the high-throughput method. Using 32 CPUs on a supercomputer like Cori can greatly reduce the time it takes to transform the raw data from a real STAR dataset, with many millions of events, into useful information physicists can analyze to make discoveries.

    “Imagine building a city with 100 homes. If this was done in high-throughput fashion, each home would have one builder doing all the tasks in sequence—building the foundation, the walls, and so on,” Lauret said. “But with HPC we change the paradigm. Instead of one worker per house we have 100 workers per house, and each worker has a task—building the walls or the roof. They work in parallel, at the same time, and we assemble everything together at the end. With this approach, we will build that house 100 times faster.”

    Of course, it takes some creativity to think about how such problems can be broken up into tasks that can run simultaneously instead of sequentially, Lauret added.
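    The sketch below shows the event-parallel idea in miniature, mirroring the 15-events-in-three-chunks illustration above. It is a toy example, not STAR’s reconstruction code; reconstruct_event is a stand-in for the real track-reconstruction step:

        from multiprocessing import Pool

        def reconstruct_event(event):
            # Stand-in for real track reconstruction of one collision event.
            return {"event_id": event["event_id"], "status": "reconstructed"}

        def reconstruct_chunk(chunk):
            # Each worker processes its chunk of events sequentially, like one crew per house.
            return [reconstruct_event(ev) for ev in chunk]

        if __name__ == "__main__":
            events = [{"event_id": i} for i in range(15)]
            chunk_size = 5
            chunks = [events[i:i + chunk_size] for i in range(0, len(events), chunk_size)]

            with Pool(processes=len(chunks)) as pool:
                results = pool.map(reconstruct_chunk, chunks)  # chunks run in parallel

            reconstructed = [ev for chunk in results for ev in chunk]
            print(len(reconstructed), "events reconstructed")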

    HPC also saves time matching raw detector signals with data on the environmental conditions during each event. To do this, the computers must access a “condition database”—a record of the voltage, temperature, pressure, and other detector conditions that must be accounted for in understanding the behavior of the particles produced in each collision. In event-by-event, high-throughput reconstruction, the computers call up the database to retrieve data for every single event. But because HPC cores share some memory, events that occur close in time can use the same cached condition data. Fewer calls to the database means faster data processing.
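    Here is a minimal sketch of that caching idea, assuming a hypothetical load_conditions lookup rather than STAR’s actual conditions database:

        from functools import lru_cache

        @lru_cache(maxsize=1024)
        def load_conditions(time_bucket):
            # Stand-in for an expensive conditions-database query
            # (voltage, temperature, pressure, ...). Values here are hypothetical.
            return {"bucket": time_bucket, "voltage": 1.8, "temperature": 22.5}

        def conditions_for_event(event_time, bucket_seconds=60):
            # Events that occur close in time fall into the same bucket and reuse the
            # cached record, so the database is hit once per bucket, not once per event.
            return load_conditions(int(event_time // bucket_seconds))

    The bucket width and cache size here are arbitrary; the point is only that shared, cached lookups let nearby events reuse one database call.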

    Networking teamwork

    Another challenge in migrating the task of raw data reconstruction to an HPC environment was just getting the data from New York to the supercomputers in California and back. Both the input and output datasets are huge. The team started small with a proof-of-principle experiment—just a few hundred jobs—to see how their new workflow programs would perform.

    “We had a lot of assistance from the networking professionals at Brookhaven,” said Lauret, “particularly Mark Lukascsyk, one of our network engineers, who was so excited about the science and helping us make discoveries.” Colleagues in the RACF and ESnet also helped identify hardware issues and developed solutions as the team worked closely with Jeff Porter, Mustafa Mustafa, and others at NERSC to optimize the data transfer and the end-to-end workflow.

    Start small, scale up

    4
    This animation shows a series of collision events at STAR, each with thousands of particle tracks and the signals registered as some of those particles strike various detector components. It should give you an idea of how complex the challenge is to reconstruct a complete record of every single particle and the conditions under which it was created so scientists can compare hundreds of millions of events to look for trends and make discoveries.

    After fine-tuning their methods based on the initial tests, the team started scaling up to using 6,400 computing cores at NERSC, then up and up and up.

    “6,400 cores is already half of the size of the resources available for data reconstruction at RACF,” Lauret said. “Eventually we went to 25,600 cores in our most recent test.” With everything ready ahead of time for an advance-reservation allotment of time on the Cori supercomputer, “we did this test for a few days and got an entire data production done in no time,” Lauret said.

    According to Porter at NERSC, “This model is potentially quite transformative, and NERSC has worked to support such resource utilization by, for example, linking its center-wide high-performance disk system directly to its data transfer infrastructure and allowing significant flexibility in how job slots can be scheduled.”

    The end-to-end efficiency of the entire process—the time the program was running (not sitting idle, waiting for computing resources) multiplied by the efficiency of using the allotted supercomputing slots and getting useful output all the way back to Brookhaven—was 98 percent.
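    As a worked example of that product (the component numbers below are invented for illustration; the article reports only the combined 98 percent figure):

        # Illustrative arithmetic only; these component values are not from the article.
        running_fraction = 0.99   # time the program spent running rather than waiting
        slot_and_output = 0.99    # efficiency of using the slots and returning useful output
        end_to_end = running_fraction * slot_and_output
        print(f"end-to-end efficiency ~ {end_to_end:.0%}")  # prints ~98%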

    “We’ve proven that we can use the HPC resources efficiently to eliminate backlogs of unprocessed data and resolve temporary resource demands to speed up science discoveries,” Lauret said.

    He’s now exploring ways to generalize the workflow to the Open Science Grid—a global consortium that aggregates computing resources—so the entire community of high-energy and nuclear physicists can make use of it.

    This work was supported by the DOE Office of Science.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition
    BNL Campus

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.
    i1

     
  • richardmitnick 9:18 am on November 10, 2017 Permalink | Reply
    Tags: Computational Infrastructure for Geodynamics is headquartered at UC Davis, , Earth’s magnetic field is an essential part of life on our planet, , Supercomputing, UC Davis egghead blog   

    From UC Davis egghead blog: “Supercomputer Simulates Dynamic Magnetic Fields of Jupiter, Earth, Sun” 

    UC Davis bloc

    UC Davis

    UC Davis egghead blog

    November 9th, 2017
    Becky Oskin

    As the Juno space probe approached Jupiter in June last year, researchers with the Computational Infrastructure for Geodynamics’ Dynamo Working Group were starting to run simulations of the giant planet’s magnetic field on one of the world’s fastest computers.

    NASA/Juno

    While the timing was coincidental, the supercomputer modeling should help scientists interpret the data from Juno, and vice versa.

    “Even with Juno, we’re not going to be able to get a great physical sampling of the turbulence occurring in Jupiter’s deep interior,” Jonathan Aurnou, a geophysics professor at UCLA who leads the geodynamo working group, said in an article for Argonne National Laboratory news. “Only a supercomputer can help get us under that lid.”

    Computational Infrastructure for Geodynamics is headquartered at UC Davis.

    2

    The CIG describes itself as a community organization of scientists that disseminates software for geophysics and related fields. The CIG’s Geodynamo Working Group, led by Aurnou, includes researchers from UC Berkeley, the University of Colorado Boulder, UC Davis, UC Santa Cruz, the University of Alberta, UW-Madison and Johns Hopkins University.

    Earth’s magnetic field is an essential part of life on our planet — from guiding birds on vast migrations to shielding us from solar storms.

    3
    Representation of Earth’s Invisible Magnetic Field. NASA

    Scientists think Earth’s magnetic field is generated by the swirling liquid iron in the planet’s outer core (called the geodynamo), but many mysteries remain. For example, observations of magnetic fields encircling other planets and stars suggest there could be many ways of making a planet-sized magnetic field. And why has the field flipped polarity (swapping magnetic north and south) more than 150 times in the past 70 million years?

    “The geodynamo is one of the most challenging geophysical problems in existence — and one of the most challenging computational problems as well,” said Louise Kellogg, director of the CIG and a professor in the UC Davis Department of Earth and Planetary Sciences.

    The working group was awarded 260 million core hours on the Mira supercomputer at the U.S. Department of Energy’s Argonne National Laboratory, rated the sixth-fastest in the world, to model magnetic fields inside the Earth, Sun and Jupiter.

    ANL ALCF MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    The CIG project was funded by the Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program, which provides access to computing centers at Argonne and Oak Ridge national laboratories. Researchers from academia, government and industry will share a total of 5.8 billion core hours on two supercomputers, Titan at Oak Ridge National Laboratory and Mira at Argonne.

    ORNL Cray XK7 Titan Supercomputer

    Video: Simulation of magnetic fields inside the Earth

    More information

    The inner secrets of planets and stars (Argonne National Lab)

    Juno Mission Home (NASA)

    Computational Infrastructure for Geodynamics

    About INCITE grants

    Videos by CIG Geodynamo Working Group/U.S. Department of Energy Argonne National Lab.

    See the full article here .

    Please help promote STEM in your local schools.

    STEM Icon

    Stem Education Coalition

    About Egghead

    Egghead is a blog about research by, with or related to UC Davis. Comments on posts are welcome, as are tips and suggestions for posts. General feedback may be sent to Andy Fell. This blog is created and maintained by UC Davis Strategic Communications, and mostly edited by Andy Fell.

    UC Davis Campus

    The University of California, Davis, is a major public research university located in Davis, California, just west of Sacramento. It encompasses 5,300 acres of land, making it the second largest UC campus in terms of land ownership, after UC Merced.

     