From Lawrence Livermore National Laboratory: “Spack, a Lab-developed ‘app store for supercomputers,’ becoming standard-bearer”

From Lawrence Livermore National Laboratory

Sept. 18, 2018
Jeremy Thomas
thomas244@llnl.gov
925-422-5539

In July, Lawrence Livermore National Laboratory computer scientists (from left) Todd Gamblin and Greg Becker met with HPC Application Expert Massimiliano Culpo at the École polytechnique fédérale de Lausanne (EPFL) in Lausanne, Switzerland. Culpo is an EPFL scientist and longtime Spack contributor who uses Spack to manage software on EPFL’s supercomputers.

Spack, a Lawrence Livermore National Laboratory-developed open source package manager optimized for high performance computing (HPC), is making waves throughout the HPC community, including internationally, as evidenced by a recent tour of European HPC facilities by the tool’s developers.

Despite its niche status, Spack (short for Supercomputer PACKage manager) is one of the most popular pieces of software the Lab has ever released to the GitHub open source community. Described by its developers as “an app store for supercomputers,” Spack was started by LLNL computer scientist Todd Gamblin in 2013 and has quickly become the go-to package manager at LLNL and at Argonne, Oak Ridge, Los Alamos and Sandia national laboratories, as well as at Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC).

Not only is it being used on the Department of Energy’s (DOE) latest and greatest flagship systems, Oak Ridge’s Summit and LLNL’s Sierra, it’s also become the official deployment tool for the Exascale Computing Project, the “glue” for coordinating exascale software releases and deploying them to HPC facilities.

ORNL IBM AC922 Summit supercomputer. Credit: Carlos Jones, Oak Ridge National Laboratory/U.S. Dept. of Energy

LLNL Sierra IBM supercomputer

Depiction of the ANL ALCF Cray Shasta Aurora exascale supercomputer

“It’s been pretty amazing,” Gamblin said of Spack’s rise to broad acceptance. “It wrecks my inbox — I get 200 emails a day about Spack from GitHub and the mailing list — but the momentum is great. We continue to drive development, and we review features and merge bug fixes, but the community helps tremendously with new ideas, new features and regular maintenance. I don’t think we could sustain a project of this scale without their help.”

Perhaps nothing has epitomized Spack’s growing reach more than the month of July, which began with Gamblin presenting Spack at the Platform for Advanced Scientific Computing (PASC) Conference in Basel, Switzerland, piquing interest from the French Alternative Energies and Atomic Energy Commission (CEA) and other institutions. From there, Gamblin took a day trip to the Technical University of Munich (TUM), where he discussed potential collaborations with former LLNL computer scientist Martin Schulz, who is now TUM’s chair professor for Computer Architecture and Parallel Systems, as well as with staff at the affiliated Leibniz Supercomputing Centre (LRZ).

LRZ is deploying a 26-petaflop supercomputer called SuperMUC-NG and is planning to use Spack to set up the machine’s software.

Gamblin then drove to Lausanne, Switzerland, to visit École polytechnique fédérale de Lausanne (EPFL) on July 6, where he was joined by fellow LLNL computer scientist Greg Becker, who is part of the Spack team and has been instrumental to its development. While there, the pair met with longtime Spack contributor Massimiliano Culpo, who uses Spack to manage software on EPFL’s supercomputers. From Lausanne, they drove to Paris for a visit at CEA facilitated by LLNL computer scientist Edgar Leon, who is on a yearlong visiting assignment at the facility. CEA is interested in using Spack to modernize its developer workflow, Gamblin said, and the group discussed adding features to support the institute’s work and ways that CEA and LLNL could work together on future Spack features.

After enjoying a festive evening in Paris as the French celebrated their win over Belgium in the World Cup, Gamblin returned to the States, and Becker went on to London and the Atomic Weapons Establishment (AWE), which is exploring package deployment with Spack. Becker spent more than a week at AWE and met with British scientists involved in the Joint Working Group, a treaty-based high performance computing partnership between LLNL and the U.K.’s Science and Technology Facilities Council aimed at improving industry, promoting collaborations and boosting economic competitiveness.

Gamblin and Becker said the trip helped them see what other HPC sites are attempting to do with Spack, decide which features to focus on next and start conversations about new collaborations. It also convinced them that they needed to expand community outreach. Since the trip, Gamblin and collaborators from CEA and LRZ have had a birds-of-a-feather session accepted at the upcoming Supercomputing Conference (SC18) in Dallas, where they will hold a larger face-to-face community meeting. Gamblin and others will also present a Spack tutorial at SC18.

“I think we got a lot of feedback that was some version of ‘Wow, this fills a use case that nothing else really does for me, and it would be great if it had these features, too,’” Becker said. “People definitely weren’t shy about letting us know what they hoped we were planning on doing or what they were planning on submitting, but they were very clear that they had looked at everything they could find out there and there wasn’t anything else that was going this direction.”

Spack has come a long way in the few short years since Gamblin first started coding it on weekends in coffee shops. He built the first version, a Python-based program that would automatically build libraries on the Lab’s Linux machines, to help his summer students by freeing them up to focus on their work. Subsequent Lab hackathons attracted additional contributors and more packages, and after Gamblin presented a paper on Spack at the Supercomputing Conference (SC15), interest began pouring in from other Department of Energy national laboratories, academia and companies with HPC resources.

“After SC15 my inbox exploded,” Gamblin said. “There were days where I would check my mail and think ‘how am I going to sustain this?’”

Through the open source repository GitHub, Spack has attracted hundreds of users who have added software packages (Spack now supports 2,800 of them), and HPC centers like NERSC, EPFL, Fermilab and the European Organization for Nuclear Research (CERN) have contributed significant features.

Gamblin, Becker, and Peter Scheibel (GS) work to evaluate contributions from all of these organizations. The three also have appeared on HPC-related podcasts and at conferences, including giving tutorials at SC16 and SC17, to spread the word about Spack’s usefulness and versatility.

“It’s like the app store for HPC, but the tricky bit of HPC is that we want 15 different configurations of the same app at once,” Becker said. “One of the key things for Spack is that the underlying model allows us to satisfy that need.”
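
As a rough illustration of that underlying model (a hypothetical sketch, not an excerpt from a real recipe): each Spack package is described by an ordinary Python class, and every concrete configuration of it installs into its own hash-stamped prefix, so many differently configured builds of the same library can coexist. The package name “mylib,” its URLs and its checksum below are placeholders, and directive details may vary between Spack releases.

# Hypothetical package.py -- a minimal sketch of Spack's configuration model.
from spack import *  # classic import used in Spack package files


class Mylib(Package):
    """Placeholder library used to illustrate Spack variants."""

    homepage = "https://example.com/mylib"        # placeholder
    url = "https://example.com/mylib-1.0.tar.gz"  # placeholder

    version('1.0', sha256='0' * 64)               # placeholder checksum

    # One recipe can describe many configurations ("variants");
    # users choose one at install time, e.g. "mylib+mpi" or "mylib~mpi".
    variant('mpi', default=True, description='Build with MPI support')
    variant('shared', default=True, description='Build shared libraries')

    # Dependencies can be conditional on the chosen variants.
    depends_on('mpi', when='+mpi')

    def install(self, spec, prefix):
        # Each concrete spec gets its own unique install prefix,
        # so differently configured builds never collide.
        configure('--prefix={0}'.format(prefix))
        make()
        make('install')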

The reasons for Spack’s popularity among the HPC community, Gamblin said, are twofold. Most system package managers require users to run with superuser privileges, which is fine for most developers because they own their machines. But HPC machines are shared, he explained, and Spack lets regular users install a great deal of low-level software in their own home directories.

“For the HPC space it definitely fills a gap,” Gamblin said. “People needed something that could install custom packages in their own directory. The fact that you can run as a user is a big deal. There are other systems, like EasyBuild, that also have traction in this space, but they are very much targeted at system administrators rather than computational scientists. Spack gives you additional flexibility that both administrators and developers need.”

Another advantage, Gamblin said, is that other developer-oriented package managers are specific to a single programming language, such as npm for JavaScript or Bundler for Ruby. HPC software crosses languages (C++, Python, Fortran, etc.), so the relationships between packages are inherently more complex.

“Integrating so many packages into one application from so many different software ecosystems makes HPC particularly hard,” Gamblin said. “HPC software is more complicated today than 10 years ago. There are more dependencies, libraries and integration, so the need became more acute.”
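
A hedged sketch of what that cross-ecosystem integration looks like inside a recipe: the application “myapp” below is hypothetical, but the dependency names are typical of packages in Spack’s built-in repository, and a single recipe can mix build tools, compiled libraries and Python packages in one dependency list.

# Hypothetical recipe whose dependencies span several language ecosystems.
from spack import *


class Myapp(CMakePackage):
    """Placeholder application mixing C++, Fortran and Python pieces."""

    homepage = "https://example.com/myapp"        # placeholder
    url = "https://example.com/myapp-2.0.tar.gz"  # placeholder

    version('2.0', sha256='0' * 64)               # placeholder checksum

    depends_on('cmake@3.9:', type='build')          # C/C++ build system
    depends_on('mpi')                               # MPI, any provider
    depends_on('hdf5+mpi')                          # C/Fortran I/O library
    depends_on('python@3:', type=('build', 'run'))  # Python interpreter
    depends_on('py-numpy', type=('build', 'run'))   # Python ecosystem

Spack resolves (“concretizes”) the whole dependency graph at once, which is what keeps the C++, Fortran and Python pieces of an application consistent with one another.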

Also working in Spack’s favor is that a lot of HPC labor involves porting software to new machines, as LLNL is currently doing with Sierra. While most package managers are tied to a single system configuration, Spack packages are templated, so if developers write a package for one machine, Becker said, the likelihood is higher that it will work on another.

“If you get on a platform that no one’s ever tried to build this on before, Spack will at least make a best effort,” Becker said. “If that platform is really weird, it might not get very far, but in many cases, the best effort works.” That flexibility is something other systems don’t offer.
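
In practice (again a hypothetical sketch, with the same caveats as above), that templating means a recipe’s build logic queries the concretized spec rather than hard-coding compilers or paths, so the same file can drive builds on very different machines. The package “mysolver” and its configure options are invented for illustration; spec['mpi'].mpicc follows Spack’s documented convention for MPI providers.

# Hypothetical build logic that adapts to whatever was concretized
# on a given machine rather than hard-coding site-specific details.
from spack import *


class Mysolver(AutotoolsPackage):
    """Placeholder solver illustrating machine-independent recipes."""

    homepage = "https://example.com/mysolver"        # placeholder
    url = "https://example.com/mysolver-3.1.tar.gz"  # placeholder

    version('3.1', sha256='0' * 64)                  # placeholder checksum

    variant('mpi', default=True, description='Build with MPI support')
    depends_on('mpi', when='+mpi')
    depends_on('blas')  # satisfied by whichever BLAS the site provides

    def configure_args(self):
        spec = self.spec
        # Point configure at the BLAS implementation chosen on this system.
        args = ['--with-blas={0}'.format(spec['blas'].prefix)]
        if '+mpi' in spec:
            # Use the MPI implementation concretized for this machine
            # (Open MPI, MVAPICH, IBM Spectrum MPI on Sierra, ...).
            args.append('CC={0}'.format(spec['mpi'].mpicc))
        return args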

Today, Spack is used by 40 to 50 people at LLNL, mostly developers in Livermore Computing (LC) and other parts of the Lab, as well as code teams who use it as the interface for installing scientific packages on machines including Linux clusters, Blue Gene/Q and Sierra. Spack has reduced the time needed to deploy complex codes on certain Lab supercomputers from weeks to days.

“We’re moving toward using Spack exclusively to deploy user-facing software in LC, but we’re moving from our current process, which uses Spack to generate RPM packages for the system package manager,” Becker said. “We have a fair number of people in the development environment group who use Spack to feed packages into that process. I think we’re collectively using it at every level in the hierarchy: single-user, application teams and system deployments.”

Gamblin and the Spack team, including its outside contributors, are working on new improvements and features with hopes of releasing version 1.0 in November, possibly at SC18. Gamblin said that in the coming year, they plan to add features that enable facilities to deploy extremely large suites of software easily, as well as features that simplify the workflow for individual developers working on multiple projects at once. The team is calling these features “Spack Stacks” and “Spack Environments,” respectively.

While optimized for supercomputers, Spack also can be used on home computers and laptops, where Gamblin and others see the potential for wider acceptance. Gamblin said he wants to include more machine learning libraries, to allow users to combine those workflows with HPC using the same tool. The Spack team also is looking to focus on greater reproducibility from one stack to another, polishing workflows and working on better support for binary software packages.

Additionally, Gamblin said he would like to expand community engagement and explore forming a steering committee that could govern future Spack-related decisions. Gamblin, Becker and others want Spack to eventually be part of the general deployment strategy for libraries across DOE. Spack has been adopted as the deployment tool for the U.S. Exascale Computing Project’s (ECP’s) software stack, and other DOE national labs are gradually joining the fray.


“It’s nice to have industry standards where possible, and it would be great if we could fill that role in terms of getting everyone on the same page,” Becker said. “Spack is already good at the individual level of avoiding duplication of work and if we could keep on extending that so that large HPC sites are able to share work with each other, that would be great as well.”

“I’d like it if Spack were the way people use supercomputers and if it were part of everyone’s development environment. Good package management helps to grease the wheels,” Gamblin added. “The dream is to take the grunt work out of HPC: users get on a machine, assemble a stack of hundreds of libraries in minutes, then get back to focusing on the science.”

For more about open source software from LLNL, visit the Lab’s open source software website.

See the full article here.



Please help promote STEM in your local schools.

STEM Education Coalition

LLNL Campus

Operated by Lawrence Livermore National Security, LLC, for the Department of Energy’s National Nuclear Security Administration
Lawrence Livermore National Laboratory (LLNL) is an American federal research facility in Livermore, California, United States, founded by the University of California, Berkeley in 1952. A Federally Funded Research and Development Center (FFRDC), it is primarily funded by the U.S. Department of Energy (DOE) and managed and operated by Lawrence Livermore National Security, LLC (LLNS), a partnership of the University of California, Bechtel, BWX Technologies, AECOM, and Battelle Memorial Institute in affiliation with the Texas A&M University System. In 2012, the laboratory had the synthetic chemical element livermorium named after it.

LLNL is self-described as “a premier research and development institution for science and technology applied to national security.” Its principal responsibility is ensuring the safety, security and reliability of the nation’s nuclear weapons through the application of advanced science, engineering and technology. The Laboratory also applies its special expertise and multidisciplinary capabilities to preventing the proliferation and use of weapons of mass destruction, bolstering homeland security and solving other nationally important problems, including energy and environmental security, basic science and economic competitiveness.

The Laboratory is located on a one-square-mile (2.6 km²) site at the eastern edge of Livermore. It also operates a 7,000-acre (28 km²) remote experimental test site, called Site 300, situated about 15 miles (24 km) southeast of the main lab site. LLNL has an annual budget of about $1.5 billion and a staff of roughly 5,800 employees.

LLNL was established in 1952 as the University of California Radiation Laboratory at Livermore, an offshoot of the existing UC Radiation Laboratory at Berkeley. It was intended to spur innovation and provide competition to the nuclear weapon design laboratory at Los Alamos in New Mexico, home of the Manhattan Project that developed the first atomic weapons. Edward Teller and Ernest Lawrence,[2] director of the Radiation Laboratory at Berkeley, are regarded as the co-founders of the Livermore facility.

The new laboratory was sited at a former naval air station of World War II. It was already home to several UC Radiation Laboratory projects that were too large for its location in the Berkeley Hills above the UC campus, including one of the first experiments in the magnetic approach to confined thermonuclear reactions (i.e. fusion). About half an hour southeast of Berkeley, the Livermore site provided much greater security for classified projects than an urban university campus.

Lawrence tapped 32-year-old Herbert York, a former graduate student of his, to run Livermore. Under York, the Lab had four main programs: Project Sherwood (the magnetic-fusion program), Project Whitney (the weapons-design program), diagnostic weapon experiments (both for the Los Alamos and Livermore laboratories), and a basic physics program. York and the new lab embraced the Lawrence “big science” approach, tackling challenging projects with physicists, chemists, engineers, and computational scientists working together in multidisciplinary teams. Lawrence died in August 1958 and shortly after, the university’s board of regents named both laboratories for him, as the Lawrence Radiation Laboratory.

Historically, the Berkeley and Livermore laboratories have had very close relationships on research projects, business operations, and staff. The Livermore Lab was established initially as a branch of the Berkeley laboratory. The Livermore lab was not officially severed administratively from the Berkeley lab until 1971. To this day, in official planning documents and records, Lawrence Berkeley National Laboratory is designated as Site 100, Lawrence Livermore National Lab as Site 200, and LLNL’s remote test location as Site 300.[3]

The laboratory was renamed Lawrence Livermore Laboratory (LLL) in 1971. On October 1, 2007 LLNS assumed management of LLNL from the University of California, which had exclusively managed and operated the Laboratory since its inception 55 years before. The laboratory was honored in 2012 by having the synthetic chemical element livermorium named after it. The LLNS takeover of the laboratory has been controversial. In May 2013, an Alameda County jury awarded over $2.7 million to five former laboratory employees who were among 430 employees LLNS laid off during 2008.[4] The jury found that LLNS breached a contractual obligation to terminate the employees only for “reasonable cause.”[5] The five plaintiffs also have pending age discrimination claims against LLNS, which will be heard by a different jury in a separate trial.[6] There are 125 co-plaintiffs awaiting trial on similar claims against LLNS.[7] The May 2008 layoff was the first layoff at the laboratory in nearly 40 years.[6]

On March 14, 2011, the City of Livermore officially expanded the city’s boundaries to annex LLNL and move it within the city limits. The unanimous vote by the Livermore city council expanded Livermore’s southeastern boundaries to cover 15 land parcels covering 1,057 acres (4.28 km2) that comprise the LLNL site. The site was formerly an unincorporated area of Alameda County. The LLNL campus continues to be owned by the federal government.

LLNL/NIF


DOE Seal
NNSA