From Lawrence Berkeley National Lab: “3 Sky Surveys Completed in Preparation for Dark Energy Spectroscopic Instrument”

Berkeley Logo

From Lawrence Berkeley National Lab

July 8, 2019

Glenn Roberts Jr.
geroberts@lbl.gov
(510) 486-5582

Researchers will pick 35 million galaxies and quasars to target during DESI’s 5-year mission

It took three sky surveys – conducted at telescopes on two continents, covering one-third of the visible sky, and requiring almost 1,000 observing nights – to prepare for a new project that will create the largest 3D map of the universe’s galaxies and glean new insights about the universe’s accelerating expansion.

The Dark Energy Spectroscopic Instrument (DESI) project will explore this expansion, driven by a mysterious property known as dark energy, in great detail. It could also make unexpected discoveries during its five-year mission.

LBNL/DESI spectroscopic instrument on the Mayall 4-meter telescope at Kitt Peak National Observatory starting in 2018

NOAO/Mayall 4 m telescope at Kitt Peak, Arizona, USA, Altitude 2,120 m (6,960 ft)

The surveys, which wrapped up in March, have amassed images of more than 1 billion galaxies and are essential in selecting celestial objects to target with DESI, now under construction in Arizona.

The latest batch of imaging data from these surveys, known as DR8, was publicly released July 8, and an online Sky Viewer tool provides a virtual tour of this data. A final data release from the DESI imaging surveys is planned later this year.

Scientists will select about 33 million galaxies and 2.4 million quasars from the larger set of objects imaged in the three surveys. Quasars are the brightest objects in the universe and are believed to contain supermassive black holes. DESI will target these selected objects for several measurements after its start, which is expected in February 2020.

DESI will repeatedly measure the light from each selected target across a range of wavelengths, known as its spectrum, over the course of its mission. These measurements will provide details about each object’s distance and its acceleration away from Earth.
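As a rough illustration of how a spectrum translates into those quantities, the sketch below computes a redshift from the shift of a single emission line and converts it to an approximate distance with the Hubble law. The wavelengths and the Hubble constant value are illustrative assumptions, not DESI pipeline values or code.

```python
# A minimal sketch (not DESI pipeline code) of how a measured spectrum yields a
# distance estimate: a known emission line's observed wavelength gives the
# redshift z, and for modest z the Hubble law converts that to a distance.
C_KM_S = 299_792.458          # speed of light, km/s
H0 = 70.0                     # Hubble constant, km/s per megaparsec (assumed value)

def redshift(lambda_observed_nm, lambda_rest_nm):
    """z = (observed - rest) / rest for a spectral line."""
    return lambda_observed_nm / lambda_rest_nm - 1.0

def approx_distance_mpc(z):
    """Low-redshift approximation: recession velocity cz = H0 * d."""
    return C_KM_S * z / H0

z = redshift(lambda_observed_nm=814.4, lambda_rest_nm=656.3)  # H-alpha line, made-up observation
print(f"z = {z:.3f}, distance ~ {approx_distance_mpc(z):.0f} Mpc")
```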

A collection of 5,000 swiveling robots, each carrying a fiber-optic cable, will point at sets of pre-selected sky objects to gather their light (see a related video [below]) so it can be split into different colors and analyzed using a series of devices called spectrographs.

Three surveys, 980 nights

“Typically, when you apply for time on a telescope you get up to five nights,” said David Schlegel, a DESI project scientist at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), which is the lead institution in the DESI collaboration. “These three imaging surveys totaled 980 nights, which is a pretty big number.”

The three imaging surveys for DESI include:

The Mayall z-band Legacy Survey (MzLS), carried out at the Mayall Telescope at the National Science Foundation’s Kitt Peak National Observatory near Tucson, Arizona, over 401 nights. DESI is now under installation at the Mayall Telescope.

The Dark Energy Camera Legacy Survey (DECaLS) at the Victor Blanco Telescope at NSF’s Cerro Tololo Inter-American Observatory in Chile, which lasted 204 nights.

Dark Energy Survey


Dark Energy Camera [DECam], built at FNAL


NOAO/CTIO Victor M. Blanco 4-meter Telescope, which houses DECam, at Cerro Tololo, Chile, at an altitude of 7,200 feet

Timeline of the Inflationary Universe WMAP

The Dark Energy Survey (DES) is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure that will reveal the nature of the mysterious dark energy that is accelerating the expansion of our Universe. DES began searching the Southern skies on August 31, 2013.

According to Einstein’s theory of General Relativity, gravity should lead to a slowing of the cosmic expansion. Yet, in 1998, two teams of astronomers studying distant supernovae made the remarkable discovery that the expansion of the universe is speeding up. To explain cosmic acceleration, cosmologists are faced with two possibilities: either 70% of the universe exists in an exotic form, now called dark energy, that exhibits a gravitational force opposite to the attractive gravity of ordinary matter, or General Relativity must be replaced by a new theory of gravity on cosmic scales.

DES is designed to probe the origin of the accelerating universe and help uncover the nature of dark energy by measuring the 14-billion-year history of cosmic expansion with high precision. More than 400 scientists from over 25 institutions in the United States, Spain, the United Kingdom, Brazil, Germany, Switzerland, and Australia are working on the project. The collaboration built and is using an extremely sensitive 570-Megapixel digital camera, DECam, mounted on the Blanco 4-meter telescope at Cerro Tololo Inter-American Observatory, high in the Chilean Andes, to carry out the project.

Over six years (2013-2019), the DES collaboration used 758 nights of observation to carry out a deep, wide-area survey to record information from 300 million galaxies that are billions of light-years from Earth. The survey imaged 5,000 square degrees of the southern sky in five optical filters to obtain detailed information about each galaxy. A fraction of the survey time was used to observe smaller patches of sky roughly once a week to discover and study thousands of supernovae and other astrophysical transients.

The Beijing-Arizona Sky Survey (BASS), which used the Steward Observatory’s Bok telescope at Kitt Peak National Observatory and lasted 375 nights.

2.3-metre Bok Telescope at the Steward Observatory at Kitt Peak in Arizona, USA, altitude 2,096 m (6,877 ft)

This map shows the sky areas covered (blue) by three surveys conducted in preparation for DESI. (Credit: University of Arizona)

On-site survey crews – typically two DESI project researchers per observing night for each of the surveys – served in a sort of “lifeguard” role, Schlegel said. “When something went wrong they were there to fix it – to keep eyes on the sky,” and researchers working remotely also aided in troubleshooting.

On the final night of the final survey …

In early March, Eva-Maria Mueller, a postdoctoral researcher at the U.K.’s University of Portsmouth, and Robert Blum, former deputy director at the National Optical Astronomy Observatory (NOAO) that manages the survey sites, were on duty with a small team in the control room of the NSF’s Victor Blanco Telescope on a mile-high Chilean mountain for the final night of DECaLS survey imaging.

Seated several stories beneath the telescope, Mueller and Blum viewed images in real time to verify the telescope’s position and focus. Mueller, who was participating in a five-night shift that was her first observing stint for the DESI surveys, said, “This was always kind of a childhood dream.”

Blum, who had logged many evenings at the Blanco telescope for DECaLS, said, “It’s really exciting to think about finishing this phase.” He noted that this final night was focused on “cleaning up little holes” in the previous imaging. Blum is now serving in a new role as acting operations director for the Large Synoptic Survey Telescope under installation in Chile.

New software designed for the DESI surveys, and precise positioning equipment on the telescopes, have helped to automate the image-taking process, setting the exposure time and filters and compensating for atmospheric distortions and other factors that can affect the imaging quality, Blum noted. During a productive evening, it was common to produce about 150 to 200 images for the DECaLS survey.
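As a purely hypothetical sketch of that kind of automation (not the actual DESI survey software), the snippet below scales a nominal exposure time so images reach roughly the same depth as seeing, transparency, and sky brightness change; the scaling law and all numbers are assumptions for illustration.

```python
# Hypothetical exposure-time logic: lengthen the exposure when conditions are
# worse so every image reaches roughly the same depth.
def exposure_time_s(nominal_s=80.0, seeing_arcsec=1.3, nominal_seeing=1.3,
                    transparency=1.0, sky_brightness_factor=1.0):
    """Longer exposures for worse seeing, clouds (low transparency), or bright sky."""
    scale = (seeing_arcsec / nominal_seeing) ** 2   # blurrier images spread light out
    scale /= transparency ** 2                      # clouds dim every source
    scale *= sky_brightness_factor                  # brighter sky adds noise
    return nominal_s * scale

print(exposure_time_s())                                      # good conditions: 80 s
print(exposure_time_s(seeing_arcsec=1.8, transparency=0.8))   # poor night: ~240 s
```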

Cool cosmic cartography experiment

The data from the surveys was routed to supercomputers at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), which will be the major storehouse for DESI data.

NERSC

NERSC Cray Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

NERSC Hopper Cray XE6 supercomputer


LBL NERSC Cray XC30 Edison supercomputer


The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

NERSC PDSF


PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

Future:

Cray Shasta Perlmutter SC18 AMD Epyc Nvidia pre-exascale supercomputer

NERSC is a DOE Office of Science User Facility.

More than 100 researchers participated in night shifts to conduct the surveys, said Arjun Dey, the NOAO project scientist for DESI. Dey served as a lead scientist for the MzLS survey and a co-lead scientist on the DECaLS survey with Schlegel.

“We are building a detailed map of the universe and measuring its expansion history over the last 10 to 12 billion years,” Dey said. “The DESI experiment represents the most detailed – and definitely the coolest – cosmic cartography experiment undertaken to date. Although the imaging was carried out for the DESI project, the data are publicly available so everyone can enjoy the sky and explore the cosmos.”

BASS survey supported by global team

Xiaohui Fan, a University of Arizona astronomy professor who was a co-lead on the BASS survey conducted at Kitt Peak’s Bok Telescope, coordinated viewing time by an international group that included co-leads Professor Zhou Xu and Associate Professor Zou Hu, other scientists from the National Astronomical Observatories of China (NAOC), and researchers from the University of Arizona and from across the DESI collaboration.

The Bok (left) and Mayall telescopes at Kitt Peak National Observatory near Tucson, Arizona. DESI is currently under installation at the Mayall telescope. (Credit: Michael A. Stecker)

BASS produced about 100,000 images during its four-year run. It scanned a section of sky about 13 times larger than the Big Dipper, part of the Ursa Major constellation.

“This is a good example of how a collaboration is done,” Fan said. “Through this international partnership we were bringing in people from around the world. This is a nice preview of what observing with DESI will be like.”

Fan noted the DESI team’s swift response in updating the telescope’s hardware and software during the course of the survey.

“It improved a lot in terms of automated controls and focusing and data reduction,” he said. Most of the BASS survey imaging concluded in February, with some final images taken in March.

Next steps toward DESI’s completion

All of the images gathered will be processed with a code called Tractor, which identifies all of the galaxies surveyed and measures their brightness.
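For readers curious about what such a code does, here is a minimal, hypothetical sketch of the forward-modeling idea behind it: build a model of each catalogued source, fit the summed model to the image pixels, and read off each source’s brightness. The Gaussian point-spread function and function names are assumptions for illustration; this is not the Tractor’s actual code or interface.

```python
# Toy forward-model photometry: given known source positions and a Gaussian
# PSF, solve for the flux of each source by linear least squares.
import numpy as np

def gaussian_psf(shape, x0, y0, sigma=1.5):
    """Unit-flux Gaussian point-spread function centered at (x0, y0)."""
    y, x = np.indices(shape)
    psf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def fit_fluxes(image, positions, sigma=1.5):
    """Fit one flux per source so the summed model best matches the image."""
    templates = [gaussian_psf(image.shape, x0, y0, sigma).ravel()
                 for x0, y0 in positions]
    A = np.column_stack(templates)                      # pixels x sources design matrix
    fluxes, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    return fluxes

# Toy usage: two overlapping sources plus noise.
rng = np.random.default_rng(0)
positions = [(12.0, 14.0), (18.0, 16.0)]
img = 500 * gaussian_psf((32, 32), *positions[0]) + 300 * gaussian_psf((32, 32), *positions[1])
img += rng.normal(0, 0.5, img.shape)
print(fit_fluxes(img, positions))                       # approximately [500, 300]
```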

With the initial testing of the massive corrector barrel, which houses DESI’s package of six large lenses, completed in early April, the next major milestone for the project will be the delivery, installation, and testing of its focal plane, which caps the telescope and houses the robotic positioners.

Dey, who participated in formative discussions about the need for an experiment like DESI almost 20 years ago, said, “It’s pretty amazing that our small and dedicated team was able to pull off such a large survey in such a short time. We are excited to be turning to the next phase of this project!”

NERSC is a DOE Office of Science User Facility.

More:

Explore Galaxies Far, Far Away at Internet Speeds

Scientists have released an “expansion pack” for a virtual tour of the universe that you can enjoy from the comfort of your own computer. The latest version of the publicly accessible images of the sky roughly doubles the size of the searchable universe from the project’s original release in May.

News Center


In this video, Dark Energy Spectroscopic Instrument (DESI) project participants share their insight and excitement about the project and its potential for new and unexpected discoveries.

DESI is supported by the U.S. Department of Energy’s Office of Science; the U.S. National Science Foundation, Division of Astronomical Sciences under contract to the National Optical Astronomy Observatory; the Science and Technologies Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the National Council of Science and Technology of Mexico; the Ministry of Economy of Spain; the French Alternative Energies and Atomic Energy Commission (CEA); and DESI member institutions. The DESI scientists are honored to be permitted to conduct astronomical research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation. View the full list of DESI collaborating institutions, and learn more about DESI here: desi.lbl.gov.

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

Bringing Science Solutions to the World

In the world of science, Lawrence Berkeley National Laboratory (Berkeley Lab) is synonymous with “excellence.” Thirteen Nobel prizes are associated with Berkeley Lab. Seventy Lab scientists are members of the National Academy of Sciences (NAS), one of the highest honors for a scientist in the United States. Thirteen of our scientists have won the National Medal of Science, our nation’s highest award for lifetime achievement in fields of scientific research. Eighteen of our engineers have been elected to the National Academy of Engineering, and three of our scientists have been elected into the Institute of Medicine. In addition, Berkeley Lab has trained thousands of university science and engineering students who are advancing technological innovations across the nation and around the world.

Berkeley Lab is a member of the national laboratory system supported by the U.S. Department of Energy through its Office of Science. It is managed by the University of California (UC) and is charged with conducting unclassified research across a wide range of scientific disciplines. Located on a 202-acre site in the hills above the UC Berkeley campus that offers spectacular views of the San Francisco Bay, Berkeley Lab employs approximately 3,232 scientists, engineers and support staff. The Lab’s total costs for FY 2014 were $785 million. A recent study estimates the Laboratory’s overall economic impact through direct, indirect and induced spending on the nine counties that make up the San Francisco Bay Area to be nearly $700 million annually. The Lab was also responsible for creating 5,600 jobs locally and 12,000 nationally. The overall economic impact on the national economy is estimated at $1.6 billion a year. Technologies developed at Berkeley Lab have generated billions of dollars in revenues, and thousands of jobs. Savings as a result of Berkeley Lab developments in lighting and windows, and other energy-efficient technologies, have also been in the billions of dollars.

Berkeley Lab was founded in 1931 by Ernest Orlando Lawrence, a UC Berkeley physicist who won the 1939 Nobel Prize in physics for his invention of the cyclotron, a circular particle accelerator that opened the door to high-energy physics. It was Lawrence’s belief that scientific research is best done through teams of individuals with different fields of expertise, working together. His teamwork concept is a Berkeley Lab legacy that continues today.

A U.S. Department of Energy National Laboratory Operated by the University of California.

University of California Seal

DOE Seal

#astronomy, #astrophysics, #basic-research, #cosmology, #dark-energy-survey, #desi-spectroscopic-instrument, #lbnl, #nersc-national-energy-research-for-scientific-computing-center

From Lawrence Berkeley National Lab: “The ‘Little’ Computer Cluster That Could”

Berkeley Logo

From Lawrence Berkeley National Lab

May 1, 2019
Glenn Roberts Jr.
geroberts@lbl.gov
(510) 486-5582

Decades before “big data” and “the cloud” were a part of our everyday lives and conversations, a custom computer cluster based at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) enabled physicists around the world to remotely and simultaneously analyze and visualize data.

The PDSF computer cluster in 2003. (Credit: Berkeley lab)

The Parallel Distributed Systems Facility (PDSF) cluster, which had served as a steady workhorse in supporting groundbreaking and even Nobel-winning research around the world since the 1990s, switched off last month.

NERSC PDSF

During its lifetime the cluster and its dedicated support team racked up many computing achievements and innovations in support of large collaborative efforts in nuclear physics and high-energy physics. Some of these innovations have persisted and evolved in other systems.

The cluster handled data for experiments that produce a primordial “soup” of subatomic particles to teach us about the makings of matter, search for intergalactic particle signals deep within Antarctic ice, and hunt for dark matter with a tank of liquid xenon a mile underground at a former mine site. It also handled data for a space observatory mapping the universe’s earliest light, and for Earth-based observations of supernovas.

It supported research leading to the discoveries of the morphing abilities of ghostly particles called neutrinos, the existence of the Higgs boson and the related Higgs field that generates mass through particle interactions, and the accelerating expansion rate of the universe that is attributed to a mysterious force called dark energy.

CERN CMS Higgs Event


CERN ATLAS Higgs Event

Lambda-Cold Dark Matter, Accelerated Expansion of the Universe, Big Bang-Inflation (timeline of the universe). Credit: Alex Mittelmann, Coldcreation, 2010

Dark Energy Camera Enables Astronomers a Glimpse at the Cosmic Dawn. CREDIT National Astronomical Observatory of Japan

Some of PDSF’s collaboration users have transitioned to the Cori supercomputer at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), with other participants moving to other systems. The transition to Cori gives users access to more computing power in an era of increasingly hefty and complex datasets and demands.

NERSC

NERSC Cray Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

NERSC Hopper Cray XE6 supercomputer


LBL NERSC Cray XC30 Edison supercomputer


The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

NERSC PDSF


PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

Future:

Cray Shasta Perlmutter SC18 AMD Epyc Nvidia pre-exascale supercomputer

“A lot of great physics and science was done at PDSF,” said Richard Shane Canon, a project engineer at NERSC who served as a system lead for PDSF from 2003-05. “We learned a lot of cool things from it, and some of those things even became part of how we run our supercomputers today. It was also a unique partnership between experiments and a supercomputing facility – it was the first of its kind.”

PDSF was small compared to its supercomputer counterparts, which have many more processors and handle far more data and users, but it had developed a reputation for being responsive and adaptable, and its support crew over the years often included physicists who understood the science as well as the hardware and software capabilities and limitations.

“It was ‘The Little Engine That Could,’” said Iwona Sakrejda, a nuclear physicist who supported PDSF and its users for over a decade in a variety of roles at NERSC and retired from Berkeley Lab in 2015. “It was the ‘boutique’ computer cluster.”

PDSF, because it was small and flexible, offered an R&D environment that allowed researchers to test out new ideas for analyzing and visualizing data. Such an environment may have been harder to find on larger systems, she said. Its size also afforded a personal touch.

“When things didn’t work, they had more handholding,” she added, recalling the numerous researchers that she guided through the PDSF system – including early career researchers working on their theses.

“It was gratifying. I developed a really good relationship with the users,” Sakrejda said. “I understood what they were trying to do and how their programs worked, which was important in creating the right architecture for what they were trying to accomplish.”

She noted that because the PDSF system was constantly refreshed, it sometimes led to an odd assortment of equipment put together from different generations of hardware, in sharp contrast to the largely homogenous architecture of today’s supercomputers.

PDSF participants included collaborations for the Sudbury Neutrino Observatory (SNO) in Canada, the Solenoid Tracker at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (STAR), IceCube near the South Pole, Daya Bay in China, the Cryogenic Underground Observatory for Rare Events (CUORE) in Italy, the Large Underground Xenon (LUX), LUX-ZEPLIN (LZ), and MAJORANA experiments in South Dakota, the Collider Detector at Fermilab (CDF), and the ATLAS Experiment and A Large Ion Collider Experiment (ALICE) at Europe’s CERN laboratory, among others. The most data-intensive experiments use a distributed system of clusters like PDSF.

SNOLAB, a Canadian underground physics laboratory at a depth of 2 km in Vale’s Creighton nickel mine in Sudbury, Ontario

BNL/RHIC Star Detector

U Wisconsin ICECUBE neutrino detector at the South Pole

Daya Bay, approximately 52 kilometers northeast of Hong Kong and 45 kilometers east of Shenzhen, China

CUORE experiment, at the Italian National Institute for Nuclear Physics’ (INFN’s) Gran Sasso National Laboratories (LNGS) in Italy, a search for neutrinoless double beta decay

LBNL LZ project at SURF, Lead, SD, USA

U Washington Majorana Demonstrator Experiment at SURF

FNAL/Tevatron CDF detector

CERN ATLAS Image Claudia Marcelloni ATLAS CERN

CERN/ALICE Detector

This chart shows the physics collaborations that used PDSF over the years, with the heaviest usage by the STAR and ALICE collaborations. (Credit: Berkeley Lab)

The STAR collaboration was the original participant and had by far the highest overall use of PDSF, and the ALICE collaboration had grown to become one of the largest PDSF users by 2010. Both experiments have explored the formation and properties of an exotic superhot particle soup known as the quark-gluon plasma by colliding heavy particles.

SNO researchers’ findings about neutrinos’ mass and ability to change into different forms or flavors led to the 2015 Nobel Prize in physics. And PDSF played a notable role in the early analyses of SNO data.

Art McDonald, who shared that Nobel as director of the SNO Collaboration, said, “The PDSF computing facility was used extensively by the SNO Collaboration, including our collaborators at Berkeley Lab.”

He added, “This resource was extremely valuable in simulations and data analysis over many years, leading to our breakthroughs in neutrino physics and resulting in the award of the 2015 Nobel Prize and the 2016 Breakthrough Prize in Fundamental Physics to the entire SNO Collaboration. We are very grateful for the scientific opportunities provided to us through access to the PDSF facility.”

PDSF’s fast processing of data from the Daya Bay nuclear reactor-based experiment was also integral in precise measurements of neutrino properties.

The cluster was a trendsetter for a so-called condo model in shared computing. This model allowed collaborations to buy a share of computing power and dedicated storage space that was customized for their own needs, and a participant’s allocated computer processors on the system could also be temporarily co-opted by other cluster participants when they were not active.

In this condo analogy, “You could go use your neighbor’s house if your neighbor wasn’t using it,” said Canon, a former experimental physicist. “If everybody else was idle you could take advantage of the free capacity.” Canon noted that many universities have adopted this kind of model for their computer users.
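A toy sketch of that condo arrangement is below, purely to make the idea concrete: each group first receives up to its purchased share of cores, and any cores left idle can be borrowed by groups with waiting work. The group names, share sizes, and allocation rule are invented for illustration and are not PDSF’s actual batch policy.

```python
# Hypothetical "condo" allocation: every collaboration owns a share of cores,
# and idle owned cores can be borrowed by whoever has waiting work.
owned = {"STAR": 600, "ALICE": 500, "IceCube": 200}      # purchased shares (cores)
demand = {"STAR": 900, "ALICE": 100, "IceCube": 0}       # cores each group wants right now

def condo_allocation(owned, demand):
    # Every group first gets up to its own share.
    alloc = {g: min(owned[g], demand[g]) for g in owned}
    spare = sum(owned[g] - alloc[g] for g in owned)       # idle cores in the shared pool
    # Groups with unmet demand borrow from the idle pool, first come first served.
    for g in owned:
        if spare <= 0:
            break
        borrow = min(demand[g] - alloc[g], spare)
        alloc[g] += borrow
        spare -= borrow
    return alloc

print(condo_allocation(owned, demand))
# -> {'STAR': 900, 'ALICE': 100, 'IceCube': 0}: STAR borrows 300 idle cores
```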

Importantly, the PDSF system was also designed to provide easy access and support for individual collaboration members rather than requiring access to be funneled through one account per project or experiment. “If everybody had to log in to submit their jobs, it just wouldn’t work in these big collaborations,” Canon said.

The original PDSF cluster, called the Physics Detector Simulation Facility, was launched in March 1991 to support analyses and simulations for a planned U.S. particle collider project known as the Superconducting Super Collider. It was set up in Texas, the planned home for the collider, though the collider project was ultimately canceled in 1993.

Superconducting Super Collider map, in the vicinity of Waxahachie, Texas, Cancelled by The U.S. Congress in 1993 because it showed no “immediate economic benefit”

A diagram showing the Phase 3 design of the original PDSF system. (Credit: “Superconducting Super Collider: A Retrospective Summary 1989-1993,” Superconducting Super Collider Laboratory, Dallas, Texas)

A 1994 retrospective report on the collider project notes that the original PDSF had been built up to perform a then-impressive 7 billion instructions per second and that the science need for PDSF to simulate complex particle collisions had driven “substantial technological advances” in the nation’s computer industry.

At the time, PDSF was “the world’s most powerful high-energy physics computing facility,” the report also noted, and was built using non-proprietary systems and equipment from different manufacturers “at a fraction of the cost” of supercomputers.

Longtime Berkeley Lab physicist Stu Loken, who had led the Lab’s Information and Computing Sciences Division from 1988-2000, had played a pivotal role in PDSF’s development and in siting the cluster at Berkeley Lab.

PDSF moved to Berkeley Lab’s Oakland Scientific Facility in 2000 before returning to the lab’s main site. (Credit: Berkeley Lab)

PDSF moved to Berkeley Lab in 1996 with a new name and a new role. It was largely rebuilt with new hardware and was moved to a computer center in Oakland, Calif., in 2000 before returning once again to the Berkeley Lab site.

“A lot of the tools that we deployed to facilitate the data processing on PDSF are now being used by data users at NERSC,” said Lisa Gerhardt, a big-data architect at NERSC who worked on the PDSF system. She previously had served as a neutrino astrophysicist for the IceCube experiment.

Gerhardt noted that the cluster was nimble and responsive because of its focused user community. “Having a smaller and cohesive user pool made it easier to have direct relationships,” she said.

And Jan Balewski, computing systems engineer at NERSC who worked to transition PDSF users to the new system, said the scientific background of PDSF staff through the years was beneficial for the cluster’s users.

Balewski, a former experimental physicist, said, “Having our background, we were able to discuss with users what they really needed. And maybe, in some cases, what they were asking for was not what they really needed. We were able to help them find a solution.”

R. Jefferson “Jeff” Porter, a computer systems engineer and physicist in Berkeley Lab’s Nuclear Science Division who began working with the PDSF cluster and users as a postdoctoral researcher at Berkeley Lab in the mid-1990s, said, “PDSF was a resource that dealt with big data – many years before big data became a big thing for the rest of the world.”

It had always used off-the-shelf hardware and was steadily upgraded – typically twice a year. Even so, it was dwarfed by its supercomputer counterparts. About seven years ago the PDSF cluster had about 1,500 computer cores, compared to about 100,000 on a neighboring supercomputer at NERSC at the time. A core is the part of a computer processor that performs calculations.

Porter was later hired by NERSC to support grid computing, a distributed form of computing in which computers in different locations can work together to perform larger tasks. He returned to the Nuclear Science Division to lead the ALICE USA computing project, which established PDSF as one of about 80 grid sites for CERN’s ALICE experiment. Use of PDSF by ALICE was an easy fit, since the PDSF community “was at the forefront of grid computing,” Porter said.

In some cases, the unique demands of PDSF cluster users would also lead to the adoption of new tools at supercomputer systems. “Our community would push NERSC in ways they hadn’t been thinking,” he said. CERN developed a system to distribute software that was adopted by PDSF about five years ago, and that has also been adopted by many scientific collaborations. NERSC put in a big effort, Porter said, to integrate this system into larger machines: Cori and Edison.

PDSF’s configuration in 2017. (Credit: Berkeley Lab)

Supporting multiple projects on a single system was a challenge for PDSF since each project had unique software needs, so Canon led the development of a system known as Chroot OS (CHOS) to enable each project to have a custom computing environment.

Porter explained that CHOS was an early form of “container computing” that has since enjoyed widespread adoption.
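In spirit, the mechanism looked something like the hypothetical sketch below: choose a per-project root filesystem image and chroot into it before launching a job, so each collaboration sees its own operating-system image and software stack. The image paths and helper function are made up, and the real CHOS involved considerably more machinery.

```python
# A stripped-down, hypothetical illustration of the CHOS idea, not the real tool.
import os
import subprocess  # not strictly needed here; jobs are launched via exec below

PROJECT_ROOTS = {
    "star":  "/chos/images/sl5-star",     # assumed example image paths
    "alice": "/chos/images/sl6-alice",
}

def run_in_project_env(project, command):
    """Run `command` inside the project's own root filesystem."""
    root = PROJECT_ROOTS[project]
    pid = os.fork()
    if pid == 0:                          # child: enter the project's environment
        os.chroot(root)                   # requires root privileges
        os.chdir("/")
        os.execvp(command[0], command)    # replace child with the user's job
        os._exit(1)                       # only reached if exec fails
    _, status = os.waitpid(pid, 0)        # parent waits for the job to finish
    return status

# Example (would need root privileges and real image paths to actually run):
# run_in_project_env("star", ["/usr/bin/env", "root4star", "analysis.C"])
```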

PDSF was run by a Berkeley Lab-based steering committee that typically had a member from each participating experiment and a member from NERSC, and Porter had served for about five years as the committee chair. He had been focused for the past year on how to transition users to the Cori supercomputer and other computing resources, as needed.

Balewski said that the leap of users from PDSF to Cori brings them access to far greater computing power, and allows them to “ask questions they could never ask on a smaller system.”

He added, “It’s like moving from a small town – where you know everyone but resources are limited – to a big city that is more crowded but also offers more opportunities.”

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

Bringing Science Solutions to the World

#the-little-computer-cluster-that-could-the-parallel-distributed-systems-facility-pdsf-cluster, #applied-research-technology, #basic-research, #lbnl, #nersc-national-energy-research-for-scientific-computing-center, #supercomputing

From insideHPC: “ExaLearn Project to bring Machine Learning to Exascale”

From insideHPC

March 24, 2019

As supercomputers become ever more capable in their march toward exascale levels of performance, scientists can run increasingly detailed and accurate simulations to study problems ranging from cleaner combustion to the nature of the universe. Enter ExaLearn: a new machine learning project supported by DOE’s Exascale Computing Project (ECP) that aims to develop new tools to help scientists cope with the enormous computational cost of these simulations by applying machine learning to very large experimental datasets and simulations.

The first research area for ExaLearn’s surrogate models will be in cosmology to support projects such as the LSST (Large Synoptic Survey Telescope), now under construction in Chile and shown here in an artist’s rendering. (Todd Mason, Mason Productions Inc. / LSST Corporation)

The challenge is that these powerful simulations require lots of computer time. That is, they are “computationally expensive,” consuming 10 to 50 million CPU hours for a single simulation. For example, running a 50-million-hour simulation on all 658,784 compute cores of the Cori supercomputer at NERSC would take more than three days.
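That figure is easy to verify with a quick back-of-the-envelope calculation using the numbers above:

```python
# Quick check of the arithmetic: 50 million CPU-hours spread evenly across all
# 658,784 Cori cores.
cpu_hours = 50_000_000
cores = 658_784
wall_hours = cpu_hours / cores
print(f"{wall_hours:.1f} hours = {wall_hours / 24:.1f} days")  # ~75.9 hours, ~3.2 days
```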

NERSC

NERSC Cray Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

NERSC Hopper Cray XE6 supercomputer


LBL NERSC Cray XC30 Edison supercomputer


The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

NERSC PDSF


PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

Future:

Cray Shasta Perlmutter SC18 AMD Epyc Nvidia pre-exascale supercomputer

Running thousands of these simulations, which are needed to explore wide ranges in parameter space, would be intractable.

One of the areas ExaLearn is focusing on is surrogate models. Surrogate models, often known as emulators, are built to provide rapid approximations of more expensive simulations. This allows a scientist to generate additional simulations more cheaply – running much faster on many fewer processors. To do this, the team will need to run thousands of computationally expensive simulations over a wide parameter space to train the computer to recognize patterns in the simulation data. This then allows the computer to create a computationally cheap model, easily interpolating between the parameters it was initially trained on to fill in the blanks between the results of the more expensive models.

“Training can also take a long time, but then we expect these models to generate new simulations in just seconds,” said Peter Nugent, deputy director for science engagement in the Computational Research Division at LBNL.

From Cosmology to Combustion

Nugent is leading the effort to develop the so-called surrogate models as part of ExaLearn. The first research area will be cosmology, followed by combustion. But the team expects the tools to benefit a wide range of disciplines.

“Many DOE simulation efforts could benefit from having realistic surrogate models in place of computationally expensive simulations,” ExaLearn Principal Investigator Frank Alexander of Brookhaven National Lab said at the recent ECP Annual Meeting.

“These can be used to quickly flesh out parameter space, help with real-time decision making and experimental design, and determine the best areas to perform additional simulations.”

The surrogate models and related simulations will aid in cosmological analyses to reduce systematic uncertainties in observations by telescopes and satellites. Such observations generate massive datasets that are currently limited by systematic uncertainties. Since we only have a single universe to observe, the only way to address these uncertainties is through simulations, so creating cheap but realistic and unbiased simulations greatly speeds up the analysis of these observational datasets. A typical cosmology experiment now requires sub-percent level control of statistical and systematic uncertainties. This then requires the generation of thousands to hundreds of thousands of computationally expensive simulations to beat down the uncertainties.

These parameters are critical in light of two upcoming programs:

The Dark Energy Spectroscopic Instrument, or DESI, is an advanced instrument on a telescope located in Arizona that is expected to begin surveying the universe this year.

LBNL/DESI Dark Energy Spectroscopic Instrument for the Nicholas U. Mayall 4-meter telescope at Kitt Peak National Observatory near Tucson, Ariz, USA


NOAO/Mayall 4 m telescope at Kitt Peak, Arizona, USA, Altitude 2,120 m (6,960 ft)

DESI seeks to map the large-scale structure of the universe over an enormous volume and a wide range of look-back times (based on “redshift,” or the shift in the light of distant objects toward redder wavelengths of light). Targeting about 30 million pre-selected galaxies across one-third of the night sky, scientists will use DESI’s redshift data to construct 3D maps of the universe. There will be about 10 terabytes (TB) of raw data per year transferred from the observatory to NERSC. After running the data through the pipelines at NERSC (using millions of CPU hours), about 100 TB per year of data products will be made available as data releases approximately once a year throughout DESI’s five years of operations.

The Large Synoptic Survey Telescope, or LSST, is currently being built on a mountaintop in Chile.

LSST


LSST Camera, built at SLAC



LSST telescope, currently under construction on the El Peñón peak of Cerro Pachón, a 2,682-meter-high mountain in Coquimbo Region, northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes.


LSST Data Journey, Illustration by Sandbox Studio, Chicago with Ana Kova

When completed in 2021, the LSST will take more than 800 panoramic images each night with its 3.2 billion-pixel camera, recording the entire visible sky twice each week. Each patch of sky it images will be visited 1,000 times during the survey, and each of its 30-second observations will be able to detect objects 10 million times fainter than those visible to the human eye. A powerful data system will compare new images with previous ones to detect changes in the brightness and position of objects as big as far-distant galaxy clusters and as small as nearby asteroids.

For these programs, the ExaLearn team will first target large-scale structure simulations of the universe since the field is more developed than others and the scale of the problem size can easily be ramped up to an exascale machine learning challenge.

As an example of how ExaLearn will advance the field, Nugent said a researcher could run a suite of simulations with the parameters of the universe consisting of 30 percent dark energy and 70 percent dark matter, then a second simulation with 25 percent and 75 percent, respectively. Each of these simulations generates three-dimensional maps of tens of billions of galaxies in the universe and shows how they cluster and spread apart as time goes by. Using a surrogate model trained on these simulations, the researcher could then quickly generate the output of a simulation in between those values, at 27.5 and 72.5 percent, without needing to run a new, costly simulation; that output, too, would show the evolution of the galaxies in the universe as a function of time. The goal of the ExaLearn software suite is that such results, and their uncertainties and biases, would be a byproduct of the training, so that one would know the generated models are consistent with a full simulation.
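To make that workflow concrete, here is a toy illustration, built on assumptions rather than ExaLearn software: run the “expensive” simulation at a few dark-energy fractions, fit a cheap model to those outputs, and then query it at 27.5 percent without running a new simulation. The stand-in simulation function and the polynomial surrogate are invented for the example.

```python
# Toy surrogate-model workflow: train on a few expensive runs, then interpolate.
import numpy as np

def expensive_simulation(dark_energy_fraction):
    # Stand-in for a full N-body run: returns a made-up summary statistic
    # (e.g. a clustering amplitude) for this cosmology.
    return 0.8 + 0.5 * (dark_energy_fraction - 0.7) ** 2

# "Training" runs at 25% and 30% dark energy, plus a few more grid points.
grid = np.array([0.25, 0.30, 0.35, 0.40])
outputs = np.array([expensive_simulation(x) for x in grid])

# Cheap surrogate: fit a low-order polynomial to the training outputs.
surrogate = np.poly1d(np.polyfit(grid, outputs, deg=2))

print(surrogate(0.275))                 # instant estimate at 27.5% dark energy
print(expensive_simulation(0.275))      # what the full simulation would give
```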

Toward this end, Nugent’s team will build on two projects already underway at Berkeley Lab: CosmoFlow and CosmoGAN. CosmoFlow is a deep learning 3D convolutional neural network that can predict cosmological parameters with unprecedented accuracy using the Cori supercomputer at NERSC. CosmoGAN is exploring the use of generative adversarial networks to create cosmological weak lensing convergence maps — maps of the matter density of the universe as would be observed from Earth — at lower computational costs.
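As a rough, hypothetical sketch of the kind of network CosmoFlow represents, rather than its actual architecture or code, the snippet below defines a small 3D convolutional model in PyTorch that maps a simulated density volume to a couple of cosmological parameters; the layer sizes and parameter choices are arbitrary assumptions.

```python
# Minimal 3D CNN sketch: a density cube in, a few cosmological parameters out.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),                    # global average over the volume
    nn.Flatten(),
    nn.Linear(32, 2),                           # e.g. (Omega_m, sigma_8)
)

volume = torch.randn(4, 1, 64, 64, 64)          # batch of simulated density cubes
params = model(volume)                          # predicted parameters, shape (4, 2)
print(params.shape)
```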

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

Founded on December 28, 2006, insideHPC is a blog that distills news and events in the world of HPC and presents them in bite-sized nuggets of helpfulness as a resource for supercomputing professionals. As one reader said, we’re sifting through all the news so you don’t have to!

If you would like to contact me with suggestions, comments, corrections, errors or new company announcements, please send me an email at rich@insidehpc.com. Or you can send me mail at:

insideHPC
2825 NW Upshur
Suite G
Portland, OR 97239

Phone: (503) 877-5048

#astronomy, #astrophysics, #basic-research, #cosmology, #desi-dark-energy-spectroscopic-instrument, #ecp-exascale-computing-project, #exalearn, #insidehpc, #lsst-large-synoptic-survey-telescope, #nersc-national-energy-research-for-scientific-computing-center, #particle-physics, #physics

From insideHPC: “NERSC taps NVIDIA compiler team for Perlmutter Supercomputer”

From insideHPC

March 22, 2019

Dr. Saul Perlmutter (left) holds an animated conversation with John Kirkley at SC13. Photo by Sharan Kalwani, Fermilab

NERSC has signed a contract with NVIDIA to enhance GPU compiler capabilities for Berkeley Lab’s next-generation Perlmutter supercomputer.

Cray Shasta Perlmutter SC18 AMD Epyc Nvidia pre-exascale supercomputer

NERSC

NERSC Cray Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

NERSC Hopper Cray XE6 supercomputer


LBL NERSC Cray XC30 Edison supercomputer


The Genepool system is a cluster dedicated to the DOE Joint Genome Institute’s computing needs. Denovo is a smaller test system for Genepool that is primarily used by NERSC staff to test new system configurations and software.

NERSC PDSF


PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

DOE and Cray announced on Oct. 30, 2018 that NERSC’s next supercomputer will be a Cray pre-exascale system to be delivered in 2020.

To highlight NERSC’s commitment to advancing research, the new system will be named “Perlmutter” in honor of Saul Perlmutter, an astrophysicist at Berkeley Lab and a professor of physics at the University of California, Berkeley who shared the 2011 Nobel Prize in Physics for his contributions to research showing that the expansion of the universe is accelerating. Dr. Perlmutter is also director of the Berkeley Institute for Data Science and leads the international Supernova Cosmology Project. He has been a NERSC user for many years, and part of his Nobel Prize winning work was carried out on NERSC machines.

Perlmutter, a Cray system code-named “Shasta,” will be a heterogeneous system comprising both CPU-only and GPU-accelerated nodes, with more than three times the performance of Cori, NERSC’s current platform. It will include a number of innovations designed to meet the diverse computational and data analysis needs of NERSC’s user base and speed their scientific productivity. The new system derives performance from advances in hardware and software, including a new Cray system interconnect, code-named Slingshot, which is designed for data-centric computing. Slingshot’s Ethernet compatibility, advanced adaptive routing, first-of-a-kind congestion control, and sophisticated quality-of-service capabilities improve system utilization and the performance and scalability of supercomputing and AI applications and workflows. The system will also feature NVIDIA GPUs with new Tensor Core technology and direct liquid cooling, and will be NERSC’s first supercomputer with an all-flash scratch filesystem. Developed by Cray to accelerate I/O, the 30-petabyte Lustre filesystem will move data at a rate of more than 4 terabytes per second.

“We are excited to work with NVIDIA to enable OpenMP GPU computing using their PGI compilers,” said Nick Wright, the Perlmutter chief architect. “Many NERSC users are already successfully using the OpenMP API to target the manycore architecture of the NERSC Cori supercomputer. This project provides a continuation of our support of OpenMP and offers an attractive method to use the GPUs in the Perlmutter supercomputer. We are confident that our investment in OpenMP will help NERSC users meet their application performance portability goals.”

Under the new non-recurring engineering contract with NVIDIA, worth approximately $4 million, Berkeley Lab researchers will work with NVIDIA engineers to enhance NVIDIA’s PGI C, C++ and Fortran compilers to enable OpenMP applications to run on NVIDIA GPUs. This collaboration will help NERSC users, and the HPC community as a whole, efficiently port suitable applications to target GPU hardware in the Perlmutter system.

Programming using compiler directives of any form is an important part of code portability and developer productivity. NERSC participation in both OpenMP and OpenACC organizations helps advance the entire ecosystem of important tools and the specifications on which they rely.

“Together with OpenACC, this OpenMP collaboration gives HPC developers more options for directives-based programming from a single compiler on GPUs and CPUs,” said Doug Miles, senior director of PGI compilers and tools at NVIDIA. “Our joint effort on programming tools for the Perlmutter supercomputer highlights how NERSC and NVIDIA are simplifying migration and development of science and engineering applications to pre-exascale systems and beyond.”

The Perlmutter Supercomputer will be based on the Cray Shasta architecture.

In addition, through this partnership, NERSC and NVIDIA will develop a set of GPU-based high performance data analytic tools using Python, the primary language used for data analytics at NERSC and a robust platform for machine learning and deep learning libraries. The new Python tools will allow NERSC to train staff and users through hack-a-thons where NERSC users will be able to work directly with NVIDIA personnel on their codes.

“NERSC supports thousands of researchers in diverse sciences at universities, national laboratories, and in industry,” commented Data Architect Rollin Thomas, who is leading the partnership at NERSC. “Our users increasingly want productive high-performance tools for interacting with their data, whether it comes from a massively parallel simulation or an experimental or observational science facility like a particle accelerator, astronomical observatory, or genome sequencer. We look forward to working with NVIDIA to accelerate discovery across all these disciplines.”

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

#nersc-taps-nvidia-compiler-team-for-perlmutter-supercomputer, #cray-shasta-architecture, #dr-saul-perlmutter-uc-berkeley-nobel-laureate, #insidehpc, #nersc-national-energy-research-for-scientific-computing-center

From Motherboard: “New Supercomputer Simulations Show How Plasma Jets Escape Black Holes”

motherboard

From Motherboard

Jan 30 2019
Daniel Oberhaus

Black holes swallow everything that comes in contact with them, so how do plasma jets manage to escape their intense gravity?

Visualization of a general-relativistic collisionless plasma simulation. Image: Parfrey/LBNL

Researchers used one of the world’s most powerful supercomputers to better understand how jets of high-energy plasma escape the intense gravity of a black hole, which swallows everything else in its path—including light.

Before stars and other matter cross a black hole’s point of no return—a boundary known as the “event horizon”—and get consumed by the black hole, they get swept up in the black hole’s rotation. A question that has vexed physicists for decades was how some energy managed to escape the process and get channeled into streams of plasma that travel through space near the speed of light.

As detailed in a paper published last week in Physical Review Letters, researchers affiliated with the Department of Energy and the University of California Berkeley used a supercomputer at the DoE’s Lawrence Berkeley National Laboratory to simulate the jets of plasma, an electrically charged gas-like substance.

NERSC PDSF


PDSF is a networked distributed computing cluster designed primarily to meet the detector simulation and data analysis requirements of physics, astrophysics and nuclear science collaborations.

The simulations ultimately reconciled two decades-old theories that attempt to explain how energy can be extracted from a rotating black hole.

The first theory describes how electric currents around a black hole twist its magnetic field to create a jet, which is known as the Blandford-Znajek mechanism. This theory posits that material caught in the gravity of a rotating black hole will become increasingly magnetized the closer it gets to the event horizon. The black hole acts like a massive conductor spinning in a huge magnetic field, which will cause an energy difference (voltage) between the poles of the black hole and its equator. This energy difference is then released as jets at the poles of the black hole.

“There is a region around a rotating black hole, called the ergosphere, inside of which all particles are forced to rotate in the same direction as the black hole,” Kyle Parfrey, the lead author of the paper and a theoretical astrophysicist at NASA, told me in an email. “In this region it’s possible for a particle to effectively have negative energy in some sense, if it tries to orbit against the hole’s rotation.”

In the classical picture of the Penrose process, a particle falling into this region splits in two: if one half is launched against the spin of the black hole, it will reduce the black hole’s angular momentum, or rotation. But that rotational energy has to go somewhere. In this case, it’s converted into energy that propels the other half of the particle away from the black hole.

According to Parfrey, the Penrose process observed in their simulations was a bit different from the classical situation of a particle splitting that was described above, however. Rather than particles splitting, charged particles in the plasma are acted on by electromagnetic forces, some of which are propelled against the rotation of the black hole on a negative energy trajectory. It is in this sense, Parfrey told me, that they are still considered a type of Penrose process.

The surprising part of the simulation, Parfrey told me, was that it appeared to establish a link between the Penrose process and the Blandford-Znajek mechanism, which had never been seen before.

Creating the twisting magnetic fields that extract energy from the black hole in the Blandford-Znajek mechanism requires an electric current carried by particles inside the plasma, and a substantial number of these particles had the negative-energy property characteristic of the Penrose process.

“So it appears that, at least in some cases, the two mechanisms are linked,” Parfrey said.

Parfrey and his colleagues hope that their models will provide much needed context for photos from the Event Horizon Telescope, an array of telescopes that aims to directly image the event horizon where these plasma jets form. Until that first image is produced, however, Parfrey said he and his colleagues want to refine these simulations so that they conform even better to existing observations.

See the full article here.

Please help promote STEM in your local schools.

Stem Education Coalition

The future is wonderful, the future is terrifying. We should know, we live there. Whether on the ground or on the web, Motherboard travels the world to uncover the tech and science stories that define what’s coming next for this quickly-evolving planet of ours.

Motherboard is a multi-platform, multimedia publication, relying on longform reporting, in-depth blogging, and video and film production to ensure every story is presented in its most gripping and relatable format. Beyond that, we are dedicated to bringing our audience honest portraits of the futures we face, so you can be better informed in your decision-making today.

#astronomy, #astrophysics, #basic-research, #blandford-znajek-mechanism, #cosmology, #motherboard, #nersc-national-energy-research-for-scientific-computing-center, #new-supercomputer-simulations-show-how-plasma-jets-escape-black-holes, #penrose-process

From Lawrence Berkeley National Lab: “Toward a New Light: Advanced Light Source Upgrade Project Moves Forward”

Berkeley Logo

From Lawrence Berkeley National Lab

September 25, 2018
Glenn Roberts Jr.
geroberts@lbl.gov
(510) 486-5582


VIDEO: Berkeley Lab’s Advanced Light Source takes a next step toward a major upgrade. (Credit: Berkeley Lab)

The Advanced Light Source (ALS), a scientific user facility at the Department of Energy’s (DOE) Lawrence Berkeley National Laboratory (Berkeley Lab), has received federal approval to proceed with preliminary design, planning and R&D work for a major upgrade project that will boost the brightness of its X-ray beams at least a hundredfold.

LBNL/ALS

The upgrade will give the ALS, which this year celebrates its 25th anniversary, brighter beams with a more ordered structure – like evenly spaced ripples in a pond – that will better reveal nanoscale details in complex chemical reactions and in new materials, expanding the envelope for scientific exploration.

“This upgrade will make it possible for Berkeley Lab to be the leader in soft X-ray research for another 25 years, and for the ALS to remain at the center of this Laboratory for that time,” said Berkeley Lab Director Mike Witherell.

Steve Kevan, ALS Director, added, “The upgrade will transform the ALS. It will expand our scientific frontiers, enabling studies of materials and phenomena that are at the edge of our understanding today. And it will renew the ALS’s innovative spirit, attracting the best researchers from around the world to our facility to conduct their experiments in collaboration with our scientists.”

This computer rendering provides a top view of the ALS and shows equipment that will be installed during the ALS-U project. (Credit: Berkeley Lab)

The latest approval by the DOE, known as Critical Decision 1 or CD-1, authorizes the start of engineering and design work to increase the brightness and to more precisely focus the beams of light produced at the ALS that drive a broad range of science experiments. The upgrade project is dubbed ALS-U.

The dozens of beamlines maintained and operated by Berkeley Lab staff and scientists at the ALS host experiments simultaneously at all hours, and the facility attracts more than 2,000 researchers each year from across the country and around the globe through its role in the network of DOE Office of Science User Facilities.

This upgrade is intended to make the ALS the brightest storage ring-based source of soft X-rays in the world. Soft X-rays have an energy range that is especially useful for observing chemistry in action and for studying a material’s electronic and magnetic properties in microscopic detail.

A slideshow in the full article chronicles the history of the Advanced Light Source and the building that houses it, which was formerly home to a 184-inch cyclotron – another type of particle accelerator. It also shows the science conducted at the ALS and includes computer renderings of new equipment that will be installed as part of the ALS-U project. (Credit: Berkeley Lab)

The planned upgrade will significantly increase the brightness of the ALS by focusing more light on a smaller spot. X-ray beams that today are about 100 microns (roughly four thousandths of an inch) across – smaller than the diameter of a human hair – will be squeezed down to just a few microns after the upgrade.
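
As a rough, illustrative check of those numbers (back-of-the-envelope arithmetic added here, not the project’s own brightness calculation, which also involves the beam’s divergence and other factors), concentrating the same flux from a roughly 100-micron spot into a spot a few microns across gives:

# Back-of-the-envelope only: assumes the same photon flux lands in the smaller
# spot and ignores beam divergence, which also enters the formal definition of
# brightness. The "few microns" value below is an assumed example figure.
current_spot_um = 100.0   # approximate beam size today, in microns
upgraded_spot_um = 3.0    # "just a few microns" after the upgrade (assumed)

area_ratio = (current_spot_um / upgraded_spot_um) ** 2
print(f"Flux concentrated by a factor of roughly {area_ratio:.0f}")
# ~1,100x from the spot size alone, comfortably consistent with the promised
# "at least a hundredfold" increase in brightness.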

“That’s very exciting for us,” said Elke Arenholz, a senior staff scientist at the ALS. The upgrade will imbue the X-rays with a property known as “coherence” that will allow scientists to explore more complex and disordered samples with high precision. The high coherence of the soft X-ray light generated by the ALS-U will approach a theoretical limit.

“We can take materials that are more in their natural state, resolve any fluctuations, and look much more closely at the structure of materials, down to the nanoscale,” Arenholz said.

Among the many applications of these more precise beams are smaller-scale explorations of magnetic properties in multilayer data-storage materials, she said, and new observations of battery chemistry and other reactions as they occur. The upgrade should also enable faster data collection, which can allow researchers to speed up their experiments, she noted.

“We will have a lot of very interesting, new data that we couldn’t acquire before,” she said. Analyzing that data and feeding it back into new experiments will also draw upon other Berkeley Lab capabilities, including sample fabrication, complementary study techniques, and theory work at the Lab’s Molecular Foundry; as well as data processing, simulation and analysis work at the Lab’s National Energy Research Scientific Computing Center (NERSC).

William Chueh, an assistant professor of materials science at Stanford University who also heads up the users’ association for researchers who use the ALS or are interested in using the ALS, said that the upgrade will aid his studies by improving the resolution in tracking how charged particles move through batteries and fuel cells, for example.

“I am very excited by the science that the ALS-U project will enable. Such a tool will provide insights and design rules that help us to develop tomorrow’s materials,” Chueh said.

The upgrade project is a massive undertaking that will draw upon most areas at the Lab, said ALS-U Project Director David Robin, requiring the expertise of accelerator physicists, mechanical and electrical engineers, computer scientists, beamline optics and controls specialists, and safety and project management personnel, among a long list.

Berkeley Lab’s pioneering history of innovation and achievements in accelerator science, beginning with Lab founder Ernest Lawrence’s construction of the first cyclotron particle accelerator in 1930, has well prepared the Lab for this latest project, Robin said.

He noted the historic contribution by the late Klaus Halbach, a Berkeley Lab scientist whose design of compact, powerful magnetic instruments known as permanent magnet insertion devices paved the way for the design of the current ALS and other so-called third-generation light sources of its kind.

An interior view of the Advanced Light Source. (Credit: Berkeley Lab)

The ALS-U project will remove more than 400 tons of equipment associated with the existing ALS storage ring, which is used to circulate electrons at nearly the speed of light to generate the synchrotron radiation that is ultimately emitted as X-rays and other forms of light.

A new magnetic array known as a “multi-bend achromat lattice” will take its place, and a secondary, “accumulator” ring will be added that will enhance beam brightness. Also, several new ALS beamlines are already optimized for the high brightness and coherence of the ALS-U beams, and there are plans for additional beamline upgrades.

This 1940s photograph shows the original building that housed a 184-inch cyclotron and that now contains the ALS. (Credit: Berkeley Lab)

The iconic domed building that houses the ALS – which was designed in the 1930s by Arthur Brown Jr., the architect for San Francisco landmark Coit Tower – will be preserved in the upgrade project. The ALS dome originally housed an accelerator known as the 184-inch cyclotron.

Robin credited the ALS-U project team, with support from all areas of the Lab, in the continuing progress toward the upgrade. “They have done a tremendous job in getting us to the point that we are at today,” he said.

Witherell said, “The fact that we will have this upgraded Advanced Light Source is an enormous vote of confidence in us by the federal government and the taxpayers.”

Berkeley Lab’s ALS, Molecular Foundry, and NERSC are all DOE Office of Science user facilities.

More information:

ALS-U Overview
Transformational X-ray Project Takes a Step Forward, Oct. 3, 2016
A Brief History of the ALS

See the full article here.


Please help promote STEM in your local schools.

Stem Education Coalition

A U.S. Department of Energy National Laboratory Operated by the University of California

University of California Seal

DOE Seal

#applied-research-technology, #basic-research, #berkeley-lab-molecular-foundry, #coherence, #critical-decision-1-or-cd-1, #lab-founder-ernest-lawrences-construction-of-the-first-cyclotron-particle-accelerator-in-1930, #lbnl-als, #nanotechnology, #nersc-national-energy-research-for-scientific-computing-center, #smaller-scale-explorations-of-magnetic-properties-in-multilayer-data-storage-materials, #the-dozens-of-beamlines-maintained-and-operated-by-berkeley-lab-staff-and-scientists-at-the-als-conduct-experiments-simultaneously-at-all-hours, #the-upgrade-project-is-dubbed-als-u, #toward-a-new-light-advanced-light-source-upgrade-project-moves-forward, #x-ray-technology

From Fermilab: “Fermilab computing experts bolster NOvA evidence, 1 million cores consumed”

FNAL II photo

FNAL Art Image
FNAL Art Image by Angela Gonzales

From Fermilab, an enduring source of strength for the US contribution to scientific research worldwide.

July 3, 2018
No writer credit found

How do you arrive at the physical laws of the universe when you’re given experimental data on a renegade particle that interacts so rarely with matter, it can cruise through light-years of lead? You call on the power of advanced computing.

The NOvA neutrino experiment, in collaboration with the Department of Energy’s Scientific Discovery through Advanced Computing (SciDAC-4) program and the HEPCloud program at DOE’s Fermi National Accelerator Laboratory, was able to perform the largest-scale analysis ever to support the recent evidence of antineutrino oscillation, a phenomenon that may hold clues to how our universe evolved.

FNAL/NOvA experiment map


FNAL NOvA detector in northern Minnesota


Schematic of the 15-metric-kiloton NOvA far detector in Minnesota, just south of the U.S.-Canada border


NOvA Far Detector Block


FNAL Near Detector

Using Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory, NOvA harnessed over 1 million computing cores, or CPUs, between May 14 and 15, and again for a short period one week later.

The Cori supercomputer at NERSC was used to perform a complex computational analysis for NOvA. NOvA used over 1 million computing cores, the largest amount ever used concurrently in a 54-hour period. Photo: Roy Kaltschmidt, Lawrence Berkeley National Laboratory
NERSC CRAY Cori II supercomputer at NERSC at LBNL, named after Gerty Cori, the first American woman to win a Nobel Prize in science

This is the largest number of CPUs ever used concurrently over this duration — about 54 hours — for a single high-energy physics experiment. This unprecedented amount of computing enabled scientists to carry out some of the most complicated techniques used in neutrino physics, allowing them to dig deeper into the seldom seen interactions of neutrinos. This Cori allocation was more than 400 times the amount of Fermilab computing allocated to the NOvA experiment and 50 times the total computing capacity at Fermilab allocated for all of its rare-physics experiments. A continuation of the analysis was performed on NERSC’s Cori and Edison supercomputers one week later.

LBL NERSC Cray XC30 Edison supercomputer

In total, nearly 35 million core-hours were consumed by NOvA in the 54-hour period. Executing the same analysis on a single desktop computer would take 4,000 years.
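
Those figures hang together arithmetically; here is a quick consistency check added for illustration (not part of the NOvA analysis itself):

# Sanity check of the numbers quoted above.
core_hours = 35_000_000        # "nearly 35 million core-hours"
wall_clock_hours = 54          # "about 54 hours"

avg_concurrent_cores = core_hours / wall_clock_hours
print(f"Average concurrency: ~{avg_concurrent_cores:,.0f} cores")
# ~648,000 cores on average; the instantaneous peak reported in the article
# topped 1 million cores.

hours_per_year = 24 * 365
single_core_years = core_hours / hours_per_year
print(f"One core working nonstop: ~{single_core_years:,.0f} years")
# ~4,000 years, matching the desktop-computer comparison above (treating a
# desktop as a single core).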

“The special thing about NERSC is that it enabled NOvA to do the science at a new level of precision, a much finer resolution with greater statistical accuracy within a finite amount of time,” said Andrew Norman, NOvA physicist at Fermilab. “It facilitated doing analysis of real data coming off the detector at a rate 50 times faster than that achieved in the past. The first round of analysis was done within 16 hours. Experimenters were able to see what was coming out of the data, and in less than six hours everyone was looking at it. Without these types of resources, we, as a collaboration, could not have turned around results as quickly and understood what we were seeing.”

The experiment presented the latest finding from the recently collected data at the Neutrino 2018 conference in Germany on June 4.

“The speed with which NERSC allowed our analysis team to run sophisticated and intense calculations needed to produce our final results has been a game-changer,” said Fermilab scientist Peter Shanahan, NOvA co-spokesperson. “It accelerated our time-to-results on the last step in our analysis from weeks to days, and that has already had a huge impact on what we were able to show at Neutrino 2018.”

In addition to the state-of-the-art NERSC facility, NOvA relied on work done within the SciDAC HEP Data Analytics on HPC (high-performance computers) project and the Fermilab HEPCloud facility. Both efforts are led by Fermilab scientific computing staff, and both worked together with researchers at NERSC to be able to support NOvA’s antineutrino oscillation evidence.

The current standard practice for Fermilab experimenters is to perform similar analyses using less complex calculations through a combination of both traditional high-throughput computing and the distributed computing provided by Open Science Grid, a national partnership between laboratories and universities for data-intensive research. These are substantial resources, but they use a different model: Both use a large amount of computing resources over a long period of time. For example, some resources are offered only at a low priority, so their use may be preempted by higher-priority demands. But for complex, time-sensitive analyses such as NOvA’s, researchers need the faster processing enabled by modern, high-performance computing techniques.

SciDAC-4 is a DOE Office of Science program that funds collaboration between experts in mathematics, physics and computer science to solve difficult problems. The HEP on HPC project was funded specifically to explore computational analysis techniques for doing large-scale data analysis on DOE-owned supercomputers. Running the NOvA analysis at NERSC, the mission supercomputing facility for the DOE Office of Science, was a task perfectly suited for this project. Fermilab’s Jim Kowalkowski is the principal investigator for HEP on HPC, which also has collaborators from DOE’s Argonne National Laboratory, Berkeley Lab, University of Cincinnati and Colorado State University.

“This analysis forms a kind of baseline. We’re just ramping up, just starting to exploit the other capabilities of NERSC at an unprecedented scale,” Kowalkowski said.

The project’s goal for its first year is to take compute-heavy analysis jobs like NOvA’s and enable them on supercomputers. That means not just running the analysis, but also changing how calculations are done and learning how to revamp the tools that manipulate the data, all in an effort to improve the techniques used for these analyses and to leverage the full computational power and unique capabilities of modern high-performance computing facilities. In addition, the project seeks to consume all of the available computing cores at once to shorten the time to results.

The Fermilab HEPCloud facility provides cost-effective access to compute resources by optimizing usage across all available types and elastically expanding the resource pool on short notice by, for example, renting temporary resources on commercial clouds or using high-performance computers. HEPCloud enables NOvA and physicists from other experiments to use these compute resources in a transparent way.
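
As a purely illustrative sketch of that “elastic pool” idea (the names and numbers below are invented for this example and are not HEPCloud’s actual code or API), the basic decision is to fill local slots first and then provision outside capacity in blocks until the demand fits:

# Toy model of elastic provisioning; not HEPCloud's real software.
def provision(total_jobs: int, local_slots: int, remote_block: int = 100) -> dict:
    """Fill on-site slots first, then 'rent' remote blocks until everything fits."""
    remote_slots = 0
    while local_slots + remote_slots < total_jobs:
        remote_slots += remote_block  # expand the pool on short notice,
                                      # e.g. an HPC allocation or cloud nodes
    return {
        "jobs_on_local": min(total_jobs, local_slots),
        "jobs_on_remote": max(0, total_jobs - local_slots),
        "remote_slots_provisioned": remote_slots,
    }

print(provision(total_jobs=1_250, local_slots=1_000))
# {'jobs_on_local': 1000, 'jobs_on_remote': 250, 'remote_slots_provisioned': 300}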

For this analysis, “NOvA experimenters didn’t have to change much in terms of business as usual,” said Burt Holzman, HEPCloud principal investigator. “With HEPCloud, we simply expanded our local on-site-at-Fermilab facilities to include Cori and Edison at NERSC.”

At the Neutrino 2018 conference, Fermilab’s NOvA neutrino experiment announced that it had seen strong evidence of muon antineutrinos oscillating into electron antineutrinos over long distances. NOvA collaborated with the Department of Energy’s Scientific Discovery through Advanced Computing program and Fermilab’s HEPCloud program to perform the largest-scale analysis ever to support the recent evidence. Photo: Reidar Hahn

Building on work the Fermilab HEPCloud team has been doing with researchers at NERSC to optimize high-throughput computing in general, the HEPCloud team was able to leverage the facility to achieve the million-core milestone. HEPCloud thus holds the record for the most resources ever provisioned concurrently at a single facility to run experimental HEP workflows.

“This is the culmination of more than a decade of R&D we have done at Fermilab under SciDAC and the first taste of things to come, using these capabilities and HEPCloud,” said Panagiotis Spentzouris, head of the Fermilab Scientific Computing Division and HEPCloud sponsor.

“NOvA is an experimental facility located more than 2,000 miles away from Berkeley Lab, where NERSC is located. The fact that we can make our resources available to the experimental researchers near real-time to enable their time-sensitive science that could not be completed otherwise is very exciting,” said Wahid Bhimji, a NERSC data architect at Berkeley Lab who worked with the NOvA team. “Led by colleague Lisa Gerhardt, we’ve been working closely with the HEPCloud team over the last couple of years, also to support physics experiments at the Large Hadron Collider. The recent NOvA results are a great example of how the infrastructure and capabilities that we’ve built can benefit a wide range of high energy experiments.”

Going forward, Kowalkowski, Holzman and their associated teams will continue building on this achievement.

“We’re going to keep iterating,” Kowalkowski said. “The new facilities and procedures were enthusiastically received by the NOvA collaboration. We will accelerate other key analyses.”

NERSC is a DOE Office of Science user facility.

See the full article here.



Please help promote STEM in your local schools.

Stem Education Coalition

FNAL Icon

Fermi National Accelerator Laboratory (Fermilab), located just outside Batavia, Illinois, near Chicago, is a US Department of Energy national laboratory specializing in high-energy particle physics. Fermilab is America’s premier laboratory for particle physics and accelerator research, funded by the U.S. Department of Energy. Thousands of scientists from universities and laboratories around the world collaborate at Fermilab on experiments at the frontiers of discovery.


FNAL/MINERvA

FNAL DAMIC

FNAL Muon g-2 studio

FNAL Short-Baseline Near Detector under construction

FNAL Mu2e solenoid

Dark Energy Camera [DECam], built at FNAL

FNAL DUNE Argon tank at SURF

FNAL/MicrobooNE

FNAL Don Lincoln

FNAL/MINOS

FNAL Cryomodule Testing Facility

FNAL Minos Far Detector

FNAL LBNF/DUNE from FNAL to SURF, Lead, South Dakota, USA

FNAL/NOvA experiment map

FNAL NOvA Near Detector

FNAL ICARUS

FNAL Holometer

#fnal, #fnal-nova, #hep, #nersc-national-energy-research-for-scientific-computing-center, #neutrinos, #particle-physics