Tagged: Supercomputing

  • richardmitnick 4:49 pm on November 26, 2014
    Tags: Supercomputing

    From isgtw: “HPC matters: Funding, collaboration, innovation” 


    international science grid this week

    November 26, 2014
    Amber Harmon

    This month US energy secretary Ernest Moniz announced the Department of Energy will spend $325m to research extreme-scale computing and build two new GPU-accelerated supercomputers. The goal: to put the nation on a fast track to exascale computing, and thereby to lead scientific research that addresses challenging issues in government, academia, and industry.

    Horst Simon, deputy director of Lawrence Berkeley National Lab in California, US. Image courtesy Amber Harmon.

    Moniz also announced funding awards, totaling $100m, for partnerships with HPC companies developing exascale technologies under the FastForward 2 program managed by Lawrence Livermore National Laboratory in California, US.

    The combined spending comes at a critical juncture, as just last week the Organization for Economic Co-operation and Development (OECD) released its 2014 Science, Technology and Industry Outlook report. With research and development budgets in advanced economies not yet fully recovered from the 2008 economic crisis, China is on track to lead the world in R&D spending by 2019.

    The DOE-sponsored collaboration of Oak Ridge, Argonne, and Lawrence Livermore (CORAL) national labs will ensure each is able to deploy a supercomputer expected to provide about five times the performance of today’s top systems.

    The Summit supercomputer will outperform Titan, the Oak Ridge Leadership Computing Facility’s (OLCF) current flagship system. Research pursuits include combustion and climate science, as well as energy storage and nuclear power. “Summit builds on the hybrid multi-core architecture that the OLCF pioneered with Titan,” says Buddy Bland, director of the Summit project.

    IBM Summit and Sierra supercomputers

    The other system, Sierra, will serve the National Nuclear Security Administration’s Advanced Simulation and Computing (ASC) program. “Sierra will allow us to begin laying the groundwork for exascale systems,” says Bob Meisner, ASC program head, “as the heterogeneous accelerated node architecture represents one of the most promising architectural paths.” Argonne is expected to finalize a contract for a system at a later date.

    IBM Sierra supercomputer

    The announcements came just ahead of the 2014 International Conference for High Performance Computing, Networking, Storage and Analysis (SC14). Also ahead of SC14, organizers launched the HPC Matters campaign and announced the first HPC Matters plenary, aimed at sharing real stories about how HPC makes an everyday difference.

    When asked why the US was pushing the HPC Matters initiative, conference advisor Wilfred Pinfold, director of research and advanced technology development at Intel Federal, focused on informing and educating a broader audience. “To a large extent, growth in the use of HPC — and the benefits that come from it — will develop as more people understand in detail those benefits.” Pinfold also noted the effort the US must make to continue to lead in HPC technology. “I think other countries are catching up and there is real competition ahead — all of which is good.”

    The HPC domain is in many ways defined by two sometimes opposing drives: the push of international collaborations to solve fundamental societal issues, and the pull of national security, innovation, and economic competitiveness — a point that Horst Simon, deputy director of Lawrence Berkeley National Lab in California, US, says we shouldn’t shy away from. Simon participated in an SC14 panel discussion of international funding strategies for HPC software, noting issues the discipline needs to overcome.

    “In principle all supercomputers are easily accessible worldwide. But while our openness as an international community in principle makes it easier, it is less of a necessity that we work out how to actually work together.” This results in very soft collaboration agreements, says Simon, that go nowhere without grassroots efforts by researchers who already have relationships and are interested in working together.

    According to Irene Qualters, division director of advanced cyberinfrastructure at the US National Science Foundation, expectations are increasing. “The community we support is not only multidisciplinary and highly internationally collaborative, but researchers expect their work to have broad societal impact.” Collective motivation is so strong, Qualters notes, that we’re moving away from a history of bilateral agreements. “The ability to do multilateral and broader umbrella agreements is an important efficiency that we’re poised for.”

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”

     
  • richardmitnick 5:32 pm on November 20, 2014
    Tags: Supercomputing

    From NSF: “A deep dive into plasma” 

    National Science Foundation

    November 20, 2014
    No Writer Credit

    Renowned physicist uses NSF-supported supercomputer and visualization resources to gain insight into plasma dynamics

    Studying the intricacies and mysteries of the sun is physicist Wendell Horton’s life’s work. A widely known authority on plasma physics, Horton studies the sun’s high-temperature gases, or plasma, and that work consistently leads him around the world to a diverse range of high-impact projects.

    Fusion energy is one key scientific question Horton is investigating, and one that has intrigued researchers for decades.

    “Fusion energy involves the same thermonuclear reactions that take place on the sun,” Horton said. “Fusing two isotopes of hydrogen to create helium releases a tremendous amount of energy–10 times greater than that of nuclear fission.”
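
    For reference, a standard textbook summary (not taken from the article) of the deuterium-tritium reaction that laboratory fusion efforts such as ITER target:

        \[
        {}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV}),
        \qquad Q \approx 17.6\ \mathrm{MeV}.
        \]

    About 17.6 MeV is released per reaction, most of it carried away by the neutron.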

    It’s no secret that the demand for energy around the world is outpacing the supply. Fusion energy has tremendous potential. However, harnessing the power of the sun for this burgeoning energy source requires extensive work.

    Through the Institute for Fusion Studies at The University of Texas at Austin, Horton collaborates with researchers at ITER, a fusion lab in France, and at the National Institute for Fusion Science in Japan to address these challenges. At ITER, Horton is working with researchers to build the world’s largest tokamak – the device that is leading the way to produce fusion energy in the laboratory.

    ITER tokamak

    “Inside the tokamak, we inject 10 to 100 megawatts of power to recreate the conditions of burning hydrogen as it occurs in the sun,” Horton said. “Our challenge is confining the plasma, since temperatures are up to 10 times hotter than the center of the sun inside the machine.”
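
    A quick back-of-the-envelope check of that comparison (the solar-core temperature used below is a standard reference value, not a number from the article):

        # Rough check of the "10 times hotter than the center of the sun" figure.
        # The solar-core value is a standard reference number (~15 million kelvin),
        # not a figure taken from the article.
        solar_core_k = 1.5e7
        tokamak_k = 10 * solar_core_k
        print(f"tokamak plasma: ~{tokamak_k:.1e} K (about 150 million degrees)")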

    Perfecting the design of the tokamak is essential to producing fusion energy, and since it is not fully developed, Horton performs supercomputer simulations on the Stampede supercomputer at the Texas Advanced Computing Center (TACC) to model plasma flow and turbulence inside the device.

    “Simulations give us information about plasma in three dimensions and in time, so that we are able to see details beyond what we would get with analytic theory and probes and high-tech diagnostic measurements,” Horton said.

    The simulations also give researchers a more holistic picture of what is needed to improve the tokamak design. Comparing simulations with fusion experiments in nuclear labs around the world helps Horton and other researchers move even closer to this breakthrough energy source.

    Plasma in the ionosphere

    Because the mathematical theories used to understand fusion reactions have numerous applications, Horton is also investigating space plasma physics, which has important implications in GPS communications.

    GPS signaling, a complex form of communication, relies on signal transmission from satellites in space, through the ionosphere, to GPS devices located on Earth.

    “The ionosphere is a layer of the atmosphere that is subject to solar radiation,” Horton explained. “Due to the sun’s high-energy solar radiation plasma wind, nitrogen and oxygen atoms are ionized, or stripped of their electrons, creating plasma gas.”

    These plasma structures can scatter signals sent between global navigation satellites and ground-based receivers resulting in a “loss-of-lock” and large errors in the data used for navigational systems.
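
    As background (standard ionospheric physics, not drawn from the article), the electron density n_e of these plasma structures sets the plasma frequency, which governs how strongly they refract a radio signal:

        \[
        f_p = \frac{1}{2\pi}\sqrt{\frac{n_e e^2}{\varepsilon_0 m_e}}
            \;\approx\; 8.98\ \mathrm{kHz} \times \sqrt{n_e\ [\mathrm{cm^{-3}}]} .
        \]

    For peak ionospheric densities of order 10^6 electrons per cubic centimeter this is roughly 9 MHz, far below the GPS carriers near 1.2-1.6 GHz, so the signals do get through; it is the irregular, turbulent density structure that produces the scintillation and loss-of-lock described here.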

    Most people who use GPS navigation have experienced “loss-of-lock,” or an instance of system inaccuracy. Although this usually results in a minor inconvenience for the casual GPS user, it can be devastating for emergency response teams in disaster situations or where issues of national security are concerned.

    To better understand how plasma in the ionosphere scatters signals and affects GPS communications, Horton is modeling plasma turbulence as it occurs in the ionosphere on Stampede. He is also sharing this knowledge with research institutions in the United States and abroad including the UT Space and Geophysics Laboratory.

    Seeing is believing

    Although Horton is a long-time TACC partner and Stampede user, he only recently began using TACC’s visualization resources to gain deeper insight into plasma dynamics.

    “After partnering with TACC for nearly 10 years, Horton inquired about creating visualizations of his research,” said Greg Foss, TACC Research Scientist Associate. “I teamed up with TACC research scientist, Anne Bowen, to develop visualizations from the myriad of data Horton accumulated on plasmas.”

    Since plasma behaves similarly inside a fusion-generating tokamak and in the ionosphere, Foss and Bowen developed visualizations representing generalized plasma turbulence. The team used Maverick, TACC’s interactive visualization and data analysis system, to create the visualizations, allowing Horton to see the full 3-D structure and dynamics of plasma for the first time in his 40-year career.

    This image visualizes the effect of gravity waves on an initially relatively stable rotating column of electron density, twisting into a turbulent vortex on the verge of complete chaotic collapse. These computer-generated graphics are visualizations of data from a simulation of plasma turbulence in Earth’s ionosphere. The same physics are also applied to the research team’s investigations of turbulence in the tokamak, a device used in nuclear fusion experiments. Credit: visualization: Greg Foss, TACC; visualization software support: Anne Bowen, Greg Abram, TACC; science: Wendell Horton, Lee Leonard, U. of Texas at Austin.

    “It was very exciting and revealing to see how complex these plasma structures really are,” said Horton. “I also began to appreciate how the measurements we get from laboratory diagnostics are not adequate enough to give us an understanding of the full three-dimensional plasma structure.”

    Word of the plasma visualizations soon spread and Horton received requests from physics researchers in Brazil and researchers at AMU in France to share the visualizations and work to create more. The visualizations were also presented at the XSEDE’14 Visualization Showcase and will be featured at the upcoming SC’14 conference.

    Horton plans to continue working with Bowen and Foss to learn even more about these complex plasma structures, which will allow him to further disseminate that knowledge nationally and internationally. It also proves that, no matter your experience level, it’s never too late to learn something new.
    — Makeda Easter, Texas Advanced Computing Center (512) 471-8217 makeda@tacc.utexas.edu
    — Aaron Dubrow, NSF (703) 292-4489 adubrow@nsf.gov

    Investigators
    Wendell Horton
    Daniel Stanzione

    Related Institutions/Organizations
    Texas Advanced Computing Center
    University of Texas at Austin

    Locations
    Austin, Texas

    Related Programs
    Leadership-Class System Acquisition – Creating a Petascale Computing Environment for Science and Engineering

    See the full article here.

    Please help promote STEM in your local schools.

    STEM Education Coalition

    The National Science Foundation (NSF) is an independent federal agency created by Congress in 1950 “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense…” We are the funding source for approximately 24 percent of all federally supported basic research conducted by America’s colleges and universities. In many fields such as mathematics, computer science and the social sciences, NSF is the major source of federal backing.



     
  • richardmitnick 4:46 pm on November 19, 2014
    Tags: Supercomputing

    From LLNL: “Lawrence Livermore tops Graph 500”


    Lawrence Livermore National Laboratory

    Nov. 19, 2014

    Don Johnston
    johnston19@llnl.gov
    925-784-3980

    Lawrence Livermore National Laboratory scientists’ search for new ways to solve large complex national security problems led to the top ranking on Graph 500 and new techniques for solving large graph problems on small high performance computing (HPC) systems, all the way down to a single server.

    “To fulfill our missions in national security and basic science, we explore different ways to solve large, complex problems, most of which include the need to advance data analytics,” said Dona Crawford, associate director for Computation at Lawrence Livermore. “These Graph 500 achievements are a product of that work performed in collaboration with our industry partners. Furthermore, these innovations are likely to benefit the larger scientific computing community.”

    Photo from left: Robin Goldstone, Dona Crawford and Maya Gokhale with the Graph 500 certificate. Missing is Scott Futral.

    Lawrence Livermore’s Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system, achieved the world’s best performance on the Graph 500 data analytics benchmark, announced Tuesday at SC14. LLNL and IBM computer scientists attained the No. 1 ranking by completing the largest problem scale ever attempted — scale 41 — with a performance of 23.751 teraTEPS (trillions of traversed edges per second). The team employed a technique developed by IBM.
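
    A quick sketch (illustrative only, not the benchmark code) of what those Graph 500 numbers mean, using the benchmark's standard convention of 2^scale vertices and roughly 16 edges per vertex:

        # Graph 500 problem sizes follow the benchmark's convention:
        # scale N means 2**N vertices and, by default, 16 edges per vertex.
        def graph500_size(scale, edgefactor=16):
            vertices = 2 ** scale
            edges = edgefactor * vertices
            return vertices, edges

        for scale in (37, 40, 41):
            vertices, edges = graph500_size(scale)
            print(f"scale {scale}: {vertices:.2e} vertices, {edges:.2e} edges")
        # scale 40 -> ~1.76e+13 edges, the article's "17.6 trillion"
        # scale 41 -> ~3.52e+13 edges; 23.751 teraTEPS means Sequoia traversed
        #             about 2.3751e+13 of those edges per second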

    LLNL Sequoia supercomputer, a 20-petaflop IBM Blue Gene/Q system

    The Graph 500 offers performance metrics for data intensive computing or ‘big data,’ an area of growing importance to the high performance computing (HPC) community.

    In addition to achieving the top Graph 500 ranking, Lawrence Livermore computer scientists also have demonstrated scalable Graph 500 performance on small clusters and even a single node. To achieve these results, Livermore computational researchers have combined innovative research in graph algorithms and data-intensive runtime systems.

    Robin Goldstone, a member of LLNL’s HPC Advanced Technologies Office, said: “These are really exciting results that highlight our approach of leveraging HPC to solve challenging large-scale data science problems.”

    The results achieved demonstrate, at two different scales, the ability to solve very large graph problems on modest sized computing platforms by integrating flash storage into the memory hierarchy of these systems. Enabling technologies were provided through collaborations with Cray, Intel, Saratoga Speed and Mellanox.

    A scale-40 graph problem, containing 17.6 trillion edges, was solved on 300 nodes of LLNL’s Catalyst cluster. Catalyst, designed in partnership with Intel and Cray, augments a standard HPC architecture with additional capabilities targeted at data-intensive computing. Each Catalyst compute node features 128 gigabytes (GB) of dynamic random access memory (DRAM) plus an additional 800 GB of high-performance flash storage and uses the LLNL DI-MMAP runtime that integrates flash into the memory hierarchy. With the HavoqGT graph traversal framework, Catalyst was able to store and process the 217 TB scale-40 graph, a feat that is otherwise only achievable on the world’s largest supercomputers. The Catalyst run was No. 4 in size on the list.
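
    A back-of-the-envelope capacity check using the node figures quoted above (a sketch of the arithmetic only, not of how DI-MMAP actually manages data):

        # Why flash matters: aggregate DRAM alone cannot hold the 217 TB scale-40 graph.
        nodes = 300
        dram_tb = nodes * 128 / 1000      # 128 GB of DRAM per node  -> ~38 TB total
        flash_tb = nodes * 800 / 1000     # 800 GB of flash per node -> ~240 TB total
        graph_tb = 217                    # size of the scale-40 graph, from the article
        print(f"aggregate DRAM:  {dram_tb:.0f} TB")
        print(f"aggregate flash: {flash_tb:.0f} TB")
        print(f"graph:           {graph_tb} TB  (fits only with flash in the hierarchy)")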

    DI-MMAP and HavoqGT also were used to solve a smaller, but equally impressive, scale-37 graph problem on a single server with 50 TB of network-attached flash storage. The server, equipped with four Intel E7-4870 v2 processors and 2 TB of DRAM, was connected to two Altamont XP all-flash arrays from Saratoga Speed Inc. over a high-bandwidth Mellanox FDR InfiniBand interconnect. The other scale-37 entries on the Graph 500 list required clusters of 1,024 nodes or larger to process the 2.2 trillion edges.

    “Our approach really lowers the barrier of entry for people trying to solve very large graph problems,” said Roger Pearce, a researcher in LLNL’s Center for Applied Scientific Computing (CASC).

    “These results collectively demonstrate LLNL’s preeminence as a full service data intensive HPC shop, from single server to data intensive cluster to world class supercomputer,” said Maya Gokhale, LLNL principal investigator for data-centric computing architectures.

    See the full article here.

    Please help promote STEM in your local schools.


    STEM Education Coalition


    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     
  • richardmitnick 1:51 pm on November 18, 2014
    Tags: Supercomputing

    From Scientific American: “Next Wave of U.S. Supercomputers Could Break Up Race for Fastest” 

    Scientific American


    November 17, 2014
    Alexandra Witze and Nature magazine

    Once locked in an arms race with each other for the fastest supercomputers, US national laboratories are now banding together to buy their next-generation machines.

    On November 14, the Oak Ridge National Laboratory (ORNL) in Tennessee and the Lawrence Livermore National Laboratory in California announced that they will each acquire a next-generation IBM supercomputer that will run at up to 150 petaflops. That means that the machines can perform 150 million billion floating-point operations per second, at least five times as fast as the current leading US supercomputer, the Titan system at the ORNL.
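
    The "at least five times" figure follows directly from the peak numbers quoted in the article (a quick sanity check, nothing more):

        # Peak-performance ratios from the figures quoted in the article.
        new_system_pf = 150   # Summit/Sierra, "up to 150 petaflops"
        titan_pf = 27         # Titan
        tianhe2_pf = 55       # Tianhe-2 (mentioned later in the article)
        print(f"vs Titan:    {new_system_pf / titan_pf:.1f}x")    # ~5.6x
        print(f"vs Tianhe-2: {new_system_pf / tianhe2_pf:.1f}x")  # ~2.7x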

    Cray Titan

    The new supercomputers, which together will cost $325 million, should enable new types of science for thousands of researchers who model everything from climate change to materials science to nuclear-weapons performance.

    “There is a real importance of having the larger systems, and not just to do the same problems over and over again in greater detail,” says Julia White, manager of a grant program that awards supercomputing time at the ORNL and Argonne National Laboratory in Illinois. “You can actually take science to the next level.” For instance, climate modellers could use the faster machines to link together ocean and atmospheric-circulation patterns in a regional simulation to get a much more accurate picture of how hurricanes form.

    A learning experience

    Building the most powerful supercomputers is a never-ending race. Almost as soon as one machine is purchased and installed, lab managers begin soliciting bids for the next one. Vendors such as IBM and Cray use these competitions to develop the next generation of processor chips and architectures, which shapes the field of computing more generally.

    In the past, the US national labs pursued separate paths to these acquisitions. Hoping to streamline the process and save money, clusters of labs have now joined together to put out a shared call — even those that perform classified research, such as Livermore. “Our missions differ, but we share a lot of commonalities,” says Arthur Bland, who heads the ORNL computing facility.

    In June, after the first such coordinated bid, Cray agreed to supply one machine to a consortium from the Los Alamos and Sandia national labs in New Mexico, and another to the National Energy Research Scientific Computing (NERSC) Center at the Lawrence Berkeley National Laboratory in Berkeley, California. Similarly, the ORNL and Livermore have banded together with Argonne.

    The joint bids have been a learning experience, says Thuc Hoang, programme manager for high-performance supercomputing research and operations with the National Nuclear Security Administration in Washington DC, which manages Los Alamos, Sandia and Livermore. “We thought it was worth a try,” she says. “It requires a lot of meetings about which requirements are coming from which labs and where we can make compromises.”

    At the moment, the world’s most powerful supercomputer is the 55-petaflop Tianhe-2 machine at the National Super Computer Center in Guangzhou, China. Titan is second, at 27 petaflops. An updated ranking of the top 500 supercomputers will be announced on November 18 at the 2014 Supercomputing Conference in New Orleans, Louisiana.

    When the new ORNL and Livermore supercomputers come online in 2018, they will almost certainly vault to near the top of the list, says Barbara Helland, facilities-division director of the Advanced Scientific Computing Research program at the Department of Energy (DOE) Office of Science in Washington DC.

    But more important than rankings is whether scientists can get more performance out of the new machines, says Sudip Dosanjh, director of the NERSC. “They’re all being inundated with data,” he says. “People have a desperate need to analyse that.”

    A better metric than pure calculating speed, Dosanjh says, is how much better computing codes perform on a new machine. That is why the latest machines were selected not on total speed but on how well they will meet specific computing benchmarks.

    Dual paths

    The new supercomputers, to be called Summit and Sierra, will be structurally similar to the existing Titan supercomputer. They will combine two types of processor chip: central processing units, or CPUs, which handle the bulk of everyday calculations, and graphics processing units, or GPUs, which generally handle three-dimensional computations. Combining the two means that a supercomputer can direct the heavy work to GPUs and operate more efficiently overall. And because the ORNL and Livermore will have similar machines, computer managers should be able to share lessons learned and ways to improve performance, Helland says.

    Still, the DOE wants to preserve a little variety. The third lab of the trio, Argonne, will be making its announcement in the coming months, Helland says, but it will use a different architecture from the combined CPU–GPU approach. It will almost certainly be like Argonne’s current IBM machine, which uses a lot of small but identical processors networked together. The latter approach has been popular for biological simulations, Helland says, and so “we want to keep the two different paths open”.

    Ultimately, the DOE is pushing towards supercomputers that could work at the exascale, or 1,000 times more powerful than the current petascale. Those are expected around 2023. But the more power the DOE labs acquire, the more scientists seem to want, says Katie Antypas, head of the services department at the NERSC.

    “There are entire fields that didn’t used to have a computational component to them,” such as genomics and bioimaging, she says. “And now they are coming to us asking for help.”

    See the full article here.

    Please help promote STEM in your local schools.


    STEM Education Coalition

    Scientific American, the oldest continuously published magazine in the U.S., has been bringing its readers unique insights about developments in science and technology for more than 160 years.


     
  • richardmitnick 5:39 pm on November 12, 2014
    Tags: Supercomputing

    From LBL: “Latest Supercomputers Enable High-Resolution Climate Models, Truer Simulation of Extreme Weather” 


    Berkeley Lab

    November 12, 2014
    Julie Chao (510) 486-6491

    Not long ago, it would have taken several years to run a high-resolution simulation on a global climate model. But using some of the most powerful supercomputers now available, Lawrence Berkeley National Laboratory (Berkeley Lab) climate scientist Michael Wehner was able to complete a run in just three months.

    What he found was that not only were the simulations much closer to actual observations, but the high-resolution models were far better at reproducing intense storms, such as hurricanes and cyclones. The study, “The effect of horizontal resolution on simulation quality in the Community Atmospheric Model, CAM5.1,” has been published online in the Journal of Advances in Modeling Earth Systems.

    “I’ve been calling this a golden age for high-resolution climate modeling because these supercomputers are enabling us to do gee-whiz science in a way we haven’t been able to do before,” said Wehner, who was also a lead author for the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). “These kinds of calculations have gone from basically intractable to heroic to now doable.”

    Michael Wehner, Berkeley Lab climate scientist

    Using version 5.1 of the Community Atmospheric Model, developed by the Department of Energy (DOE) and the National Science Foundation (NSF) for use by the scientific community, Wehner and his co-authors conducted an analysis for the period 1979 to 2005 at three spatial resolutions: 25 km, 100 km, and 200 km. They then compared those results to each other and to observations.

    One simulation generated 100 terabytes of data, or 100,000 gigabytes. The computing was performed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility. “I’ve literally waited my entire career to be able to do these simulations,” Wehner said.


    The higher resolution was particularly helpful in mountainous areas, since the models average the altitude over each grid cell (cells roughly 25 km across in the high-resolution runs versus 200 km across in the low-resolution runs). With more accurate representation of mountainous terrain, the higher-resolution model is better able to simulate snow and rain in those regions.
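
    A rough sense of why finer resolution is so much more expensive (an illustrative estimate that ignores the model's actual grid layout and vertical levels):

        import math

        # Rough count of horizontal grid columns needed to tile the globe.
        earth_area_km2 = 4 * math.pi * 6371**2     # ~5.1e8 km^2
        for dx_km in (25, 100, 200):               # resolutions used in the study
            columns = earth_area_km2 / dx_km**2
            print(f"{dx_km:>3} km grid: ~{columns:,.0f} columns")
        # 200 km -> ~13,000 columns; 25 km -> ~816,000 columns (64x more),
        # and the time step must shrink roughly in proportion to the spacing.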

    “High resolution gives us the ability to look at intense weather, like hurricanes,” said Kevin Reed, a researcher at the National Center for Atmospheric Research (NCAR) and a co-author on the paper. “It also gives us the ability to look at things locally at a lot higher fidelity. Simulations are much more realistic at any given place, especially if that place has a lot of topography.”

    The high-resolution model produced stronger storms and more of them, which was closer to the actual observations for most seasons. “In the low-resolution models, hurricanes were far too infrequent,” Wehner said.

    The IPCC chapter on long-term climate change projections that Wehner was a lead author on concluded that a warming world will cause some areas to be drier and others to see more rainfall, snow, and storms. Extremely heavy precipitation was projected to become even more extreme in a warmer world. “I have no doubt that is true,” Wehner said. “However, knowing it will increase is one thing, but having a confident statement about how much and where as a function of location requires the models do a better job of replicating observations than they have.”

    Wehner says the high-resolution models will help scientists to better understand how climate change will affect extreme storms. His next project is to run the model for a future-case scenario. Further down the line, Wehner says scientists will be running climate models with 1 km resolution. To do that, they will have to have a better understanding of how clouds behave.

    “A cloud system-resolved model can reduce one of the greatest uncertainties in climate models, by improving the way we treat clouds,” Wehner said. “That will be a paradigm shift in climate modeling. We’re at a shift now, but that is the next one coming.”

    The paper’s other co-authors include Fuyu Li, Prabhat, and William Collins of Berkeley Lab; and Julio Bacmeister, Cheng-Ta Chen, Christopher Paciorek, Peter Gleckler, Kenneth Sperber, Andrew Gettelman, and Christiane Jablonowski from other institutions. The research was supported by the Biological and Environmental Division of the Department of Energy’s Office of Science.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California



     
  • richardmitnick 2:59 pm on October 22, 2014
    Tags: Supercomputing

    From isgtw: “Laying the groundwork for data-driven science” 


    international science grid this week

    October 22, 2014
    Amber Harmon

    The ability to collect and analyze massive amounts of data is rapidly transforming science, industry, and everyday life — but many of the benefits of big data have yet to surface. Interoperability, tools, and hardware are still evolving to meet the needs of diverse scientific communities.

    Image courtesy istockphoto.com.

    One of the US National Science Foundation’s (NSF’s) goals is to improve the nation’s capacity in data science by investing in the development of infrastructure, building multi-institutional partnerships to increase the number of data scientists, and augmenting the usefulness and ease of using data.

    As part of that effort, the NSF announced $31 million in new funding to support 17 innovative projects under the Data Infrastructure Building Blocks (DIBBs) program. Now in its second year, the 2014 DIBBs awards support research in 22 states and touch on research topics in computer science, information technology, and nearly every field of science supported by the NSF.

    “Developed through extensive community input and vetting, NSF has an ambitious vision and strategy for advancing scientific discovery through data,” says Irene Qualters, division director for Advanced Cyberinfrastructure. “This vision requires a collaborative national data infrastructure that is aligned to research priorities and that is efficient, highly interoperable, and anticipates emerging data policies.”

    Of the 17 awards, two support early implementations of research projects that are more mature; the others support pilot demonstrations. Each is a partnership between researchers in computer science and other science domains.

    One of the two early implementation grants will support a research team led by Geoffrey Fox, a professor of computer science and informatics at Indiana University, US. Fox’s team plans to create middleware and analytics libraries that enable large-scale data science on high-performance computing systems. Fox and his team plan to test their platform with several different applications, including geospatial information systems (GIS), biomedicine, epidemiology, and remote sensing.

    “Our innovative architecture integrates key features of open source cloud computing software with supercomputing technology,” Fox said. “And our outreach involves ‘data analytics as a service’ with training and curricula set up in a Massive Open Online Course or MOOC.”

    Among others, US institutions collaborating on the project include Arizona State University in Phoenix; Emory University in Atlanta, Georgia; and Rutgers University in New Brunswick, New Jersey.

    Ken Koedinger, professor of human computer interaction and psychology at Carnegie Mellon University in Pittsburgh, Pennsylvania, US, leads the other early implementation project. Koedinger’s team concentrates on developing infrastructure that will drive innovation in education.

    The team will develop a distributed data infrastructure, LearnSphere, that will make more educational data accessible to course developers, while also motivating more researchers and companies to share their data with the greater learning sciences community.

    “We’ve seen the power that data has to improve performance in many fields, from medicine to movie recommendations,” Koedinger says. “Educational data holds the same potential to guide the development of courses that enhance learning while also generating even more data to give us a deeper understanding of the learning process.”

    The DIBBs program is part of a coordinated strategy within NSF to advance data-driven cyberinfrastructure. It complements other major efforts like the DataOne project, the Research Data Alliance, and Wrangler, a groundbreaking data analysis and management system for the national open science community.

    See the full article here.

    iSGTW is an international weekly online publication that covers distributed computing and the research it enables.

    “We report on all aspects of distributed computing technology, such as grids and clouds. We also regularly feature articles on distributed computing-enabled research in a large variety of disciplines, including physics, biology, sociology, earth sciences, archaeology, medicine, disaster management, crime, and art. (Note that we do not cover stories that are purely about commercial technology.)

    In its current incarnation, iSGTW is also an online destination where you can host a profile and blog, and find and disseminate announcements and information about events, deadlines, and jobs. In the near future it will also be a place where you can network with colleagues.

    You can read iSGTW via our homepage, RSS, or email. For the complete iSGTW experience, sign up for an account or log in with OpenID and manage your email subscription from your account preferences. If you do not wish to access the website’s features, you can just subscribe to the weekly email.”


     
  • richardmitnick 2:47 pm on October 22, 2014
    Tags: Supercomputing

    From BNL: “Brookhaven Lab Launches Computational Science Initiative” 

    Brookhaven Lab

    October 22, 2014
    Karen McNulty Walsh, (631) 344-8350 or Peter Genzer, (631) 344-3174

    Leveraging computational science expertise and investments across the Laboratory to tackle “big data” challenges

    Building on its capabilities in computational science and data management, the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory is embarking upon a major new Computational Science Initiative (CSI). This program will leverage computational science expertise and investments across multiple programs at the Laboratory—including the flagship facilities that attract thousands of scientific users each year—further establishing Brookhaven as a leader in tackling the “big data” challenges at the frontiers of scientific discovery. Key partners in this endeavor include nearby universities such as Columbia, Cornell, New York University, Stony Brook, and Yale, and IBM Research.

    Blue Gene/Q supercomputer at Brookhaven National Laboratory

    “The CSI will bring together under one umbrella the expertise that drives [the success of Brookhaven’s scientific programs] to foster cross-disciplinary collaboration and make optimal use of existing technologies, while also leading the development of new tools and methods that will benefit science both within and beyond the Laboratory.”
    — Robert Tribble

    “Advances in computational science and management of large-scale scientific data developed at Brookhaven Lab have been a key factor in the success of the scientific programs at the Relativistic Heavy Ion Collider (RHIC), the National Synchrotron Light Source (NSLS), the Center for Functional Nanomaterials (CFN), and in biological, atmospheric, and energy systems science, as well as our collaborative participation in international research endeavors, such as the ATLAS experiment at Europe’s Large Hadron Collider,” said Robert Tribble, Brookhaven Lab’s Deputy Director for Science and Technology, who is leading the development of the new initiative. “The CSI will bring together under one umbrella the expertise that drives this success to foster cross-disciplinary collaboration and make optimal use of existing technologies, while also leading the development of new tools and methods that will benefit science both within and beyond the Laboratory.”

    RHIC at BNL

    NSLS at BNL

    A centerpiece of the initiative will be a new Center for Data-Driven Discovery (C3D) that will serve as a focal point for this activity. Within the Laboratory it will drive the integration of intellectual, programmatic, and data/computational infrastructure with the goals of accelerating and expanding discovery by developing critical mass in key disciplines, enabling nimble response to new opportunities for discovery or collaboration, and ultimately integrating the tools and capabilities across the entire Laboratory into a single scientific resource. Outside the Laboratory C3D will serve as a focal point for recruiting, collaboration, and communication.

    The people and capabilities of C3D are also integral to the success of Brookhaven’s key scientific facilities, including those named above, the new National Synchrotron Light Source II (NSLS-II), and a possible future electron ion collider (EIC) at Brookhaven. Hundreds of scientists from Brookhaven and thousands of facility users from universities, industry, and other laboratories around the country and throughout the world will benefit from the capabilities developed by C3D personnel to make sense of the enormous volumes of data produced at these state-of-the-art research facilities.

    NSLS-II at BNL

    The CSI in conjunction with C3D will also host a series of workshops/conferences and training sessions in high-performance computing—including annual workshops on extreme-scale data and scientific knowledge discovery, extreme-scale networking, and extreme-scale workflow for integrated science. These workshops will explore topics at the frontier of data-centric, high-performance computing, such as the combination of efficient methodologies and innovative computer systems and concepts to manage and analyze scientific data generated at high volumes and rates.

    “The missions of C3D and the overall CSI are well aligned with the broad missions and goals of many agencies and industries, especially those of DOE’s Office of Science and its Advanced Scientific Computing Research (ASCR) program,” said Robert Harrison, who holds a joint appointment as director of Brookhaven Lab’s Computational Science Center (CSC) and Stony Brook University’s Institute for Advanced Computational Science (IACS) and is leading the creation of C3D.

    The CSI at Brookhaven will specifically address the challenge of developing new tools and techniques to deliver on the promise of exascale science—the ability to compute at a rate of 10^18 floating point operations per second (exaFLOPS), to handle the copious amount of data created by computational models and simulations, and to employ exascale computation to interpret and analyze exascale data anticipated from experiments in the near future.

    “Without these tools, scientific results would remain hidden in the data generated by these simulations,” said Brookhaven computational scientist Michael McGuigan, who will be working on data visualization and simulation at C3D. “These tools will enable researchers to extract knowledge and share key findings.”

    Through the initiative, Brookhaven will establish partnerships with leading universities, including Columbia, Cornell, Stony Brook, and Yale to tackle “big data” challenges.

    “Many of these institutions are already focusing on data science as a key enabler to discovery,” Harrison said. “For example, Columbia University has formed the Institute for Data Sciences and Engineering with just that mission in mind.”

    Computational scientists at Brookhaven will also seek to establish partnerships with industry. “As an example, partnerships with IBM have been successful in the past with co-design of the QCDOC and BlueGene computer architectures,” McGuigan said. “We anticipate more success with data-centric computer designs in the future.”

    An area that may be of particular interest to industrial partners is how to interface big-data experimental problems (such as those that will be explored at NSLS-II, or in the fields of high-energy and nuclear physics) with high-performance computing using advanced network technologies. “The reality of ‘computing system on a chip’ technology opens the door to customizing high-performance network interface cards and application program interfaces (APIs) in amazing ways,” said Dantong Yu, a group leader and data scientist in the CSC.

    “In addition, the development of asynchronous data access and transports based on remote direct memory access (RDMA) techniques and improvements in quality of service for network traffic could significantly lower the energy footprint for data processing while enhancing processing performance. Projects in this area would be highly amenable to industrial collaboration and lead to an expansion of our contributions beyond system and application development and designing programming algorithms into the new arena of exascale technology development,” Yu said.

    “The overarching goal of this initiative will be to bring under one umbrella all the major data-centric activities of the Lab to greatly facilitate the sharing of ideas, leverage knowledge across disciplines, and attract the best data scientists to Brookhaven to help us advance data-centric, high-performance computing to support scientific discovery,” Tribble said. “This initiative will also greatly increase the visibility of the data science already being done at Brookhaven Lab and at its partner institutions.”

    See the full article here.


    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.


     
  • richardmitnick 3:41 pm on September 27, 2014
    Tags: CO2 studies, Supercomputing

    From LBL: “Pore models track reactions in underground carbon capture” 


    Berkeley Lab

    September 25, 2014

    Using tailor-made software running on top-tier supercomputers, a Lawrence Berkeley National Laboratory team is creating microscopic pore-scale simulations that complement or push beyond laboratory findings.

    Computed pH on calcite grains at 1 micron resolution. The iridescent grains mimic crushed material geoscientists extract from saline aquifers deep underground to study with microscopes. Researchers want to model what happens to the crystals’ geochemistry when the greenhouse gas carbon dioxide is injected underground for sequestration. Image courtesy of David Trebotich, Lawrence Berkeley National Laboratory.

    The models of microscopic underground pores could help scientists evaluate ways to store carbon dioxide produced by power plants, keeping it from contributing to global climate change.

    The models could be a first, says David Trebotich, the project’s principal investigator. “I’m not aware of any other group that can do this, not at the scale at which we are doing it, both in size and computational resources, as well as the geochemistry.” His evidence is a colorful portrayal of jumbled calcite crystals derived solely from mathematical equations.

    The iridescent menagerie is intended to act just like the real thing: minerals geoscientists extract from saline aquifers deep underground. The goal is to learn what will happen when fluids pass through the material should power plants inject carbon dioxide underground.

    Lab experiments can only measure what enters and exits the model system. Now modelers would like to identify more of what happens within the tiny pores that exist in underground materials, as chemicals are dissolved in some places but precipitate in others, potentially resulting in preferential flow paths or even clogs.

    Geoscientists give Trebotich’s group of modelers microscopic computerized tomography (CT, similar to the scans done in hospitals) images of their field samples. That lets both camps probe an anomaly: reactions in the tiny pores happen much more slowly in real aquifers than they do in laboratories.

    Going deep

    Deep saline aquifers are underground formations of salty water found in sedimentary basins all over the planet. Scientists think they’re the best deep geological feature to store carbon dioxide from power plants.

    But experts need to know whether the greenhouse gas will stay bottled up as more and more of it is injected, spreading a fluid plume and building up pressure. “If it’s not going to stay there, (geoscientists) will want to know where it is going to go and how long that is going to take,” says Trebotich, who is a computational scientist in Berkeley Lab’s Applied Numerical Algorithms Group.

    He hopes their simulation results ultimately will translate to field scale, where “you’re going to be able to model a CO2 plume over a hundred years’ time and kilometers in distance.” But for now his group’s focus is at the microscale, with attention toward the even smaller nanoscale.

    At such tiny dimensions, flow, chemical transport, mineral dissolution and mineral precipitation occur within the pores where individual grains and fluids commingle, says a 2013 paper Trebotich coauthored with geoscientists Carl Steefel (also of Berkeley Lab) and Sergi Molins in the journal Reviews in Mineralogy and Geochemistry.

    These dynamics, the paper added, create uneven conditions that can produce new structures and self-organized materials – nonlinear behavior that can be hard to describe mathematically.

    Modeling at 1 micron resolution, his group has achieved “the largest pore-scale reactive flow simulation ever attempted” as well as “the first-ever large-scale simulation of pore-scale reactive transport processes on real-pore-space geometry as obtained from experimental data,” says the 2012 annual report of the lab’s National Energy Research Scientific Computing Center (NERSC).

    The simulation required about 20 million processor hours using 49,152 of the 153,216 computing cores in Hopper, a Cray XE6 that at the time was NERSC’s flagship supercomputer.
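
    Those figures imply a long run on roughly a third of the machine. A back-of-the-envelope estimate (assuming the processor hours were spread evenly over all of the allocated cores; the actual wall-clock time is not reported):

        # Implied wall-clock time, assuming the 20 million processor hours were
        # accumulated evenly across all 49,152 cores for the whole run.
        core_hours = 20e6
        cores_used = 49_152
        total_cores = 153_216
        print(f"fraction of Hopper used: {cores_used / total_cores:.0%}")            # ~32%
        wall_hours = core_hours / cores_used
        print(f"wall-clock: ~{wall_hours:.0f} hours (~{wall_hours / 24:.0f} days)")  # ~407 h, ~17 days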

    Cray Hopper at NERSC

    “As CO2 is pumped underground, it can react chemically with underground minerals and brine in various ways, sometimes resulting in mineral dissolution and precipitation, which can change the porous structure of the aquifer,” the NERSC report says. “But predicting these changes is difficult because these processes take place at the pore scale and cannot be calculated using macroscopic models.

    “The dissolution rates of many minerals have been found to be slower in the field than those measured in the laboratory. Understanding this discrepancy requires modeling the pore-scale interactions between reaction and transport processes, then scaling them up to reservoir dimensions. The new high-resolution model demonstrated that the mineral dissolution rate depends on the pore structure of the aquifer.”

    Trebotich says “it was the hardest problem that we could do for the first run.” But the group redid the simulation about 2½ times faster in an early trial of Edison, a Cray XC-30 that succeeded Hopper. Edison, Trebotich says, has larger memory bandwidth.

    Cray Edison at NERSC

    Rapid changes

    Generating 1-terabyte data sets for each microsecond time step, the Edison run demonstrated how quickly conditions can change inside each pore. It also provided a good workout for the combination of interrelated software packages the Trebotich team uses.

    The first, Chombo, takes its name from a Swahili word meaning “toolbox” or “container” and was developed by a different Applied Numerical Algorithms Group team. Chombo is a supercomputer-friendly platform that’s scalable: “You can run it on multiple processor cores, and scale it up to do high-resolution, large-scale simulations,” he says.

    Trebotich modified Chombo to add flow and reactive transport solvers. The group also incorporated the geochemistry components of CrunchFlow, a package Steefel developed, to create Chombo-Crunch, the code used for their modeling work. The simulations produce resolutions “very close to imaging experiments,” the NERSC report said, combining simulation and experiment to achieve a key goal of the Department of Energy’s Energy Frontier Research Center for Nanoscale Control of Geologic CO2.

    Now Trebotich’s team has three huge allocations on DOE supercomputers to make their simulations even more detailed. The Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program is providing 80 million processor hours on Mira, an IBM Blue Gene/Q at Argonne National Laboratory. Through the Advanced Scientific Computing Research Leadership Computing Challenge (ALCC), the group has another 50 million hours on NERSC computers and 50 million on Titan, a Cray XK7 at Oak Ridge National Laboratory’s Leadership Computing Center. The team also held an ALCC award last year for 80 million hours at Argonne and 25 million at NERSC.

    Mira at Argonne

    Titan at Oak Ridge

    With the computer time, the group wants to refine their image resolutions to half a micron (half of a millionth of a meter). “This is what’s known as the mesoscale: an intermediate scale that could make it possible to incorporate atomistic-scale processes involving mineral growth at precipitation sites into the pore scale flow and transport dynamics,” Trebotich says.

    Meanwhile, he thinks their micron-scale simulations already are good enough to provide “ground-truthing” in themselves for the lab experiments geoscientists do.

    See the full article here.

    A U.S. Department of Energy National Laboratory Operated by the University of California



     
  • richardmitnick 4:33 pm on August 25, 2014
    Tags: Supercomputing

    From Livermore Lab: “Calculating conditions at the birth of the universe” 


    Lawrence Livermore National Laboratory

    08/25/2014
    Anne M Stark, LLNL, (925) 422-9799, stark8@llnl.gov

    Using a calculation originally proposed seven years ago to be performed on a petaflop computer, Lawrence Livermore researchers computed conditions that simulate the birth of the universe.

    When the universe was less than one microsecond old and hotter than one trillion degrees, it transformed from a plasma of quarks and gluons into bound states of quarks – also known as protons and neutrons, the fundamental building blocks of ordinary matter that make up most of the visible universe.

    The theory of quantum chromodynamics (QCD) governs the interactions of the strong nuclear force and predicts that this transition should occur when such conditions arise.

    In a paper appearing in the Aug. 18 edition of Physical Review Letters, Lawrence Livermore scientists Chris Schroeder, Ron Soltz and Pavlos Vranas calculated the properties of the QCD phase transition using LLNL’s Vulcan, a five-petaflop machine. This work was done within the LLNL-led HotQCD Collaboration, involving Los Alamos National Laboratory, Institute for Nuclear Theory, Columbia University, Central China Normal University, Brookhaven National Laboratory and Universität Bielefeld in Germany.

    Vulcan, a five-petaflop IBM Blue Gene/Q supercomputer

    This is the first time that this calculation has been performed in a way that preserves a certain fundamental symmetry of QCD, in which right- and left-handed quarks (a property scientists call chirality) can be interchanged without altering the equations. These important symmetries are easy to describe, but they are computationally very challenging to implement.

    “But with the invention of petaflop computing, we were able to calculate the properties with a theory proposed years ago when petaflop-scale computers weren’t even around yet,” Soltz said.

    The research has implications for our understanding of the evolution of the universe during the first microsecond after the Big Bang, when the universe expanded and cooled to a temperature below 10 trillion degrees.

    Below this temperature, quarks and gluons are confined, existing only in hadronic bound states such as the familiar proton and neutron. Above this temperature, these bound states cease to exist and quarks and gluons instead form plasma, which is strongly coupled near the transition and coupled more and more weakly as the temperature increases.
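
    For scale (standard lattice-QCD values and unit conversion, not figures taken from the release), the transition temperature is usually quoted at roughly 150-170 MeV, which converts to trillions of kelvin:

        \[
        T_c \approx (150\text{--}170)\ \mathrm{MeV} \times 1.16\times 10^{10}\ \tfrac{\mathrm{K}}{\mathrm{MeV}}
            \approx 1.7\text{--}2.0 \times 10^{12}\ \mathrm{K},
        \]

    consistent with the “more than one trillion degrees” figure above.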

    “The result provides an important validation of our understanding of the strong interaction at high temperatures, and aids us in our interpretation of data collected at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory and the Large Hadron Collider at CERN,” Soltz said.

    RHIC at Brookhaven

    LHC at CERN

    Soltz and Pavlos Vranas, along with former colleague Thomas Luu, wrote an essay predicting that if there were powerful enough computers, the QCD phase transition could be calculated. The essay was published in Computing in Science & Engineering in 2007, “back when a petaflop really did seem like a lot of computing,” Soltz said. “With the invention of petaflop computers, the calculation took us several months to complete, but the 2007 estimate turned out to be pretty close.”

    The extremely computationally intensive calculation was made possible through a Grand Challenge allocation of time on the Vulcan Blue Gene/Q Supercomputer at Lawrence Livermore National Laboratory.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     
  • richardmitnick 10:00 pm on August 19, 2014
    Tags: Supercomputing

    From Livermore Lab: “New project is the ACME of addressing climate change” 


    Lawrence Livermore National Laboratory

    08/19/2014
    Anne M Stark, LLNL, (925) 422-9799, stark8@llnl.gov

    High performance computing (HPC) will be used to develop and apply the most complete climate and Earth system model to address the most challenging and demanding climate change issues.

    Eight national laboratories, including Lawrence Livermore, are combining forces with the National Center for Atmospheric Research, four academic institutions and one private-sector company in the new effort. Other participating national laboratories include Argonne, Brookhaven, Lawrence Berkeley, Los Alamos, Oak Ridge, Pacific Northwest and Sandia.

    The project, called Accelerated Climate Modeling for Energy, or ACME, is designed to accelerate the development and application of fully coupled, state-of-the-science Earth system models for scientific and energy applications. The plan is to exploit advanced software and new high performance computing machines as they become available.


    The initial focus will be on three climate change science drivers and corresponding questions to be answered during the project’s initial phase:

    Water Cycle: How do the hydrological cycle and water resources interact with the climate system on local to global scales? How will more realistic portrayals of features important to the water cycle (resolution, clouds, aerosols, snowpack, river routing, land use) affect river flow and associated freshwater supplies at the watershed scale?
    Biogeochemistry: How do biogeochemical cycles interact with global climate change? How do carbon, nitrogen and phosphorus cycles regulate climate system feedbacks, and how sensitive are these feedbacks to model structural uncertainty?
    Cryosphere Systems: How do rapid changes in cryospheric systems, or areas of the earth where water exists as ice or snow, interact with the climate system? Could a dynamical instability in the Antarctic Ice Sheet be triggered within the next 40 years?

    Over a planned 10-year span, the project aim is to conduct simulations and modeling on the most sophisticated HPC machines as they become available, i.e., 100-plus petaflop machines and eventually exascale supercomputers. The team initially will use U.S. Department of Energy (DOE) Office of Science Leadership Computing Facilities at Oak Ridge and Argonne national laboratories.

    “The grand challenge simulations are not yet possible with current model and computing capabilities,” said David Bader, LLNL atmospheric scientist and chair of the ACME council. “But we developed a set of achievable experiments that make major advances toward answering the grand challenge questions using a modeling system, which we can construct to run on leading computing architectures over the next three years.”

    To address the water cycle, the project plan (link below) hypothesized that: 1) changes in river flow over the last 40 years have been dominated primarily by land management, water management and climate change associated with aerosol forcing; 2) during the next 40 years, greenhouse gas (GHG) emissions in a business-as-usual scenario may drive changes to river flow.

    “A goal of ACME is to simulate the changes in the hydrological cycle, with a specific focus on precipitation and surface water in orographically complex regions such as the western United States and the headwaters of the Amazon,” the report states.

    To address biogeochemistry, ACME researchers will examine how more complete treatments of nutrient cycles affect carbon-climate system feedbacks, with a focus on tropical systems, and investigate the influence of alternative model structures for below-ground reaction networks on global-scale biogeochemistry-climate feedbacks.

    For cryosphere, the team will examine the near-term risks of initiating the dynamic instability and onset of the collapse of the Antarctic Ice Sheet due to rapid melting by warming waters adjacent to the ice sheet grounding lines.

    The experiment would be the first fully-coupled global simulation to include dynamic ice shelf-ocean interactions for addressing the potential instability associated with grounding line dynamics in marine ice sheets around Antarctica.

    Other LLNL researchers involved in the program leadership are atmospheric scientist Peter Caldwell (co-leader of the atmospheric model and coupled model task teams) and computer scientists Dean Williams (council member and workflow task team leader) and Renata McCoy (project engineer).

    Initial funding for the effort has been provided by DOE’s Office of Science.

    More information can be found in the Accelerated Climate Modeling For Energy: Project Strategy and Initial Implementation Plan.

    See the full article here.

    Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration.

     