Tagged: CERN HL-LHC-High-Luminosity LHC

  • richardmitnick 3:39 pm on September 25, 2018
    Tags: Argonne’s Theta supercomputer, Aurora exascale supercomputer, CERN HL-LHC-High-Luminosity LHC

    From Argonne National Laboratory ALCF: “Argonne team brings leadership computing to CERN’s Large Hadron Collider” 

    ALCF supercomputers: Cetus, Theta, Aurora, and Mira

    September 25, 2018
    Madeleine O’Keefe

    CERN’s Large Hadron Collider (LHC), the world’s largest particle accelerator, expects to produce around 50 petabytes of data this year. This is equivalent to nearly 15 million high-definition movies—an amount so enormous that analyzing it all poses a serious challenge to researchers.
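
    For scale, the movie comparison works out if one assumes roughly 3.5 gigabytes per high-definition film; that per-movie figure is an assumption for illustration, not from the article.

        # Back-of-envelope check of the "nearly 15 million movies" comparison.
        data_pb = 50                        # petabytes of LHC data per year
        movie_gb = 3.5                      # assumed size of one HD movie, in GB
        movies = data_pb * 1e6 / movie_gb   # 1 PB = 1,000,000 GB
        print(f"{movies:,.0f} movies")      # roughly 14 million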

    CERN LHC: site map, tunnel, and particle collisions

    A team of collaborators from the U.S. Department of Energy’s (DOE) Argonne National Laboratory is working to address this issue with computing resources at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility. Since 2015, this team has worked with the ALCF on multiple projects to explore ways supercomputers can help meet the growing needs of the LHC’s ATLAS experiment.

    The efforts are especially important given what is coming up for the accelerator. In 2026, the LHC will undergo an ambitious upgrade to become the High-Luminosity LHC (HL-LHC). The aim of this upgrade is to increase the LHC’s luminosity—the number of events detected per second—by a factor of 10. “This means that the HL-LHC will be producing about 20 times more data per year than what ATLAS will have on disk at the end of 2018,” says Taylor Childers, a member of the ATLAS collaboration and computer scientist at the ALCF who is leading the effort at the facility. “CERN’s computing resources are not going to grow by that factor.”

    Luckily for CERN, the ALCF already operates some of the world’s most powerful supercomputers for science, and the facility is in the midst of planning for an upgrade of its own. In 2021, Aurora—the ALCF’s next-generation system, and the first exascale machine in the country—is scheduled to come online.

    It will provide the ATLAS experiment with an unprecedented resource for analyzing the data coming out of the LHC—and soon, the HL-LHC.

    CERN/ATLAS detector

    Why ALCF?

    CERN may be best known for smashing particles, which physicists do to study the fundamental laws of nature and gather clues about how the particles interact. This involves a lot of computationally intense calculations that benefit from the use of the DOE’s powerful computing systems.

    The ATLAS detector is an 82-foot-tall, 144-foot-long cylinder with magnets, detectors, and other instruments layered around the central beampipe like an enormous 7,000-ton Swiss roll. When protons collide in the detector, they send a spray of subatomic particles flying in all directions, and this particle debris generates signals in the detector’s instruments. Scientists can use these signals to discover important information about the collision and the particles that caused it in a computational process called reconstruction. Childers compares this process to arriving at the scene of a car crash that has nearly completely obliterated the vehicles and trying to figure out the makes and models of the cars and how fast they were going. Reconstruction is also performed on simulated data in the ATLAS analysis framework, called Athena.

    An ATLAS physics analysis consists of three steps. First, in event generation, researchers use the physics that they know to model the kinds of particle collisions that take place in the LHC. In the next step, simulation, they generate the subsequent measurements the ATLAS detector would make. Finally, reconstruction algorithms are run on both simulated and real data, the output of which can be compared to see differences between theoretical prediction and measurement.
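
    In code terms, the chain can be pictured with the toy Python sketch below. The function names, file formats, and stand-in “physics” are invented for illustration only; the real steps run inside Athena and the ATLAS production system.

        # A toy three-stage pipeline in the spirit of an ATLAS analysis.
        # Function names, file formats, and the "physics" are illustrative
        # stand-ins, not the real Athena interfaces.
        import json
        import random

        def generate_events(n, path):
            # Event generation: model collisions using known physics (random stand-ins here).
            events = [{"id": i, "energy": random.uniform(10, 1000)} for i in range(n)]
            with open(path, "w") as f:
                json.dump(events, f)

        def simulate_detector(gen_path, sim_path):
            # Simulation: predict what the detector would measure for each generated event.
            with open(gen_path) as f:
                events = json.load(f)
            hits = [{"id": e["id"], "signal": e["energy"] * random.gauss(1.0, 0.05)} for e in events]
            with open(sim_path, "w") as f:
                json.dump(hits, f)

        def reconstruct(hits_path, reco_path):
            # Reconstruction: infer particle properties from detector signals.
            # The same step runs on simulated hits and on real detector data.
            with open(hits_path) as f:
                hits = json.load(f)
            reco = [{"id": h["id"], "reco_energy": h["signal"]} for h in hits]
            with open(reco_path, "w") as f:
                json.dump(reco, f)

        # Chain the steps: generation -> simulation -> reconstruction.
        generate_events(1000, "events.gen.json")
        simulate_detector("events.gen.json", "hits.sim.json")
        reconstruct("hits.sim.json", "reco.sim.json")

    Because reconstruction is shared between simulated and real data, its output on both sides can be compared directly, which is what the final comparison step relies on.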

    “If we understand what’s going on, we should be able to simulate events that look very much like the real ones,” says Tom LeCompte, a physicist in Argonne’s High Energy Physics division and former physics coordinator for ATLAS.

    “And if we see the data deviate from what we know, then we know we’re either wrong, we have a bug, or we’ve found new physics,” says Childers.

    Some of these simulations, however, are too complicated for the Worldwide LHC Computing Grid, which LHC scientists have used to handle data processing and analysis since 2002.

    MonALISA LHC Computing GridMap: http://monalisa.caltech.edu/ml/_client.beta

    The Grid is an international distributed computing infrastructure that links 170 computing centers across 42 countries, allowing data to be accessed and analyzed in near real-time by an international community of more than 10,000 physicists working on various LHC experiments.

    The Grid has served the LHC well so far, but as demand for new science increases, so does the required computing power.

    That’s where the ALCF comes in.

    In 2011, when LeCompte returned to Argonne after serving as ATLAS physics coordinator, he started looking for the next big problem he could help solve. “Our computing needs were growing faster than it looked like we would be able to fulfill them, and we were beginning to notice that there were problems we were trying to solve with existing computing that just weren’t able to be solved,” he says. “It wasn’t just an issue of having enough computing; it was an issue of having enough computing in the same place. And that’s where the ALCF really shines.”

    LeCompte worked with Childers and ALCF computer scientist Tom Uram to use Mira, the ALCF’s 10-petaflops IBM Blue Gene/Q supercomputer, to carry out calculations to improve the performance of the ATLAS software.

    MIRA IBM Blue Gene Q supercomputer at the Argonne Leadership Computing Facility

    Together they scaled Alpgen, a Monte Carlo-based event generator, to run efficiently on Mira, enabling the generation of millions of particle collision events in parallel. “From start to finish, we ended up processing events more than 20 times as fast, and used all of Mira’s 49,152 processors to run the largest-ever event generation job,” reports Uram.
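
    The pattern behind that scaling is simple to picture: event generation is embarrassingly parallel, so each processor can produce its own slice of the sample independently. A minimal mpi4py sketch of the idea, with a random placeholder standing in for the actual Alpgen call, might look like this:

        # Sketch of embarrassingly parallel event generation with MPI (mpi4py).
        # Each rank produces its own share of the total sample with an independent
        # random seed, so no communication is needed until the final bookkeeping.
        from mpi4py import MPI
        import random

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        total_events = 1_000_000
        my_share = total_events // size + (1 if rank < total_events % size else 0)

        random.seed(12345 + rank)  # distinct seed per rank
        # Stand-in for the real generator call (e.g. running Alpgen on this rank's share).
        my_events = [random.expovariate(0.01) for _ in range(my_share)]

        # Gather simple per-rank counts on rank 0; the real workflow writes event files instead.
        counts = comm.gather(len(my_events), root=0)
        if rank == 0:
            print(f"generated {sum(counts)} events across {size} ranks")

    Launched under mpiexec (for example, mpiexec -n 8 python generate.py, where generate.py is just a name for this sketch), each rank works through its share with no communication until the end, which is what lets a single job spread across all of a machine’s processors.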

    But they weren’t going to stop there. Simulation, which takes up around five times more Grid computing than event generation, was the next challenge to tackle.

    Moving forward with Theta

    In 2017, Childers and his colleagues were awarded a two-year allocation from the ALCF Data Science Program (ADSP), a pioneering initiative designed to explore and improve computational and data science methods that help researchers gain insights into the very large datasets produced by experimental, simulation, or observational methods. The goal is to deploy Athena on Theta, the ALCF’s 11.69-petaflops Intel-Cray supercomputer, and to develop an end-to-end workflow that couples all the steps together, improving on the current execution model for ATLAS jobs, which involves a many-step workflow executed on the Grid.

    ANL ALCF Theta Cray XC40 supercomputer

    “Each of those steps—event generation, simulation, and reconstruction—has input data and output data, so if you do them in three different locations on the Grid, you have to move the data with it,” explains Childers. “Ideally, you do all three steps back-to-back on the same machine, which reduces the amount of time you have to spend moving data around.”

    Enabling portions of this workload on Theta promises to expedite the production of simulation results, discovery, and publications, as well as increase the collaboration’s data analysis reach, thus moving scientists closer to new particle physics.

    One challenge the group has encountered so far is that, unlike other computers on the Grid, Theta cannot reach out to the job server at CERN to receive computing tasks. To solve this, the ATLAS software team developed Harvester, a Python edge service that can retrieve jobs from the server and submit them to Theta. In addition, Childers developed Yoda, an MPI-enabled wrapper that launches these jobs on each compute node.
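
    The division of labor can be pictured with the simplified sketch below. The job-server URL, job format, and function names are hypothetical, and the two roles are collapsed into one program for brevity; in the real setup, Harvester runs on a node that has outbound network access, while Yoda launches the payloads inside the MPI job on the compute nodes.

        # Simplified sketch of the edge-service pattern: pull work from a remote
        # job server, then fan it out to compute nodes with MPI. The endpoint,
        # payload format, and functions are invented, not the actual Harvester/Yoda code.
        import time
        import requests
        from mpi4py import MPI

        JOB_SERVER = "https://example.cern.ch/jobs"  # placeholder endpoint

        def fetch_jobs(max_jobs):
            # Harvester-like step: retrieve job descriptions from the central server.
            resp = requests.get(JOB_SERVER, params={"limit": max_jobs}, timeout=30)
            resp.raise_for_status()
            return resp.json()  # assumed to be a list of {"id": ...} dicts

        def run_payload(job):
            # Stand-in for launching the actual simulation payload for one job.
            time.sleep(0.1)
            return {"job_id": job["id"], "status": "done"}

        comm = MPI.COMM_WORLD
        size = comm.Get_size()
        if comm.Get_rank() == 0:
            jobs = fetch_jobs(max_jobs=size)
            jobs = (jobs + [None] * size)[:size]  # pad so every rank gets an entry
        else:
            jobs = None

        # Yoda-like step: distribute one job per rank and run it on that compute node.
        job = comm.scatter(jobs, root=0)
        result = run_payload(job) if job else None
        results = comm.gather(result, root=0)
        if comm.Get_rank() == 0:
            print("completed:", [r for r in results if r])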

    Harvester and Yoda are now being integrated into the ATLAS production system. The team has just started testing this new workflow on Theta, where it has already simulated over 12 million collision events. Simulation is the only step that is “production-ready,” meaning it can accept jobs from the CERN job server.

    The team also has a running end-to-end workflow—which includes event generation and reconstruction—for ALCF resources. For now, the local ATLAS group is using it to run simulations investigating whether machine learning techniques can be used to improve the way they identify particles in the detector. If it works, machine learning could provide a more efficient, less resource-intensive method for handling this vital part of the LHC scientific process.
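
    As a rough illustration of what such a classifier involves, the toy example below trains a standard gradient-boosted model to separate two synthetic “particle” populations using a few detector-style features. The features, labels, and data are invented; the actual studies use ATLAS detector inputs and the collaboration’s own software stack.

        # Toy machine-learning particle identification: classify electron-like vs.
        # pion-like objects from synthetic detector-style features.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5000

        # Synthetic features: calorimeter energy fraction, track momentum (GeV), shower width.
        electrons = rng.normal([0.9, 30.0, 0.02], [0.05, 10.0, 0.01], size=(n, 3))
        pions     = rng.normal([0.4, 25.0, 0.08], [0.15, 12.0, 0.03], size=(n, 3))
        X = np.vstack([electrons, pions])
        y = np.array([1] * n + [0] * n)  # 1 = electron-like, 0 = pion-like

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        clf = GradientBoostingClassifier().fit(X_train, y_train)
        print(f"test accuracy: {clf.score(X_test, y_test):.3f}")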

    “Our traditional methods have taken years to develop and have been highly optimized for ATLAS, so it will be hard to compete with them,” says Childers. “But as new tools and technologies continue to emerge, it’s important that we explore novel approaches to see if they can help us advance science.”

    Upgrade computing, upgrade science

    As CERN’s quest for new science intensifies, as it will with the HL-LHC upgrade in 2026, the computational requirements for handling the influx of data grow ever more demanding.

    “With the scientific questions that we have right now, you need that much more data,” says LeCompte. “Take the Higgs boson, for example. To really understand its properties and whether it’s the only one of its kind out there takes not just a little bit more data but takes a lot more data.”

    This makes the ALCF’s resources—especially its next-generation exascale system, Aurora—more important than ever for advancing science.

    Depiction of ANL ALCF Cray Shasta Aurora exascale supercomputer

    Aurora, scheduled to come online in 2021, will be capable of one billion billion calculations per second—that’s 100 times more computing power than Mira. It is just starting to be integrated into the ATLAS efforts through a new project selected for the Aurora Early Science Program (ESP) led by Jimmy Proudfoot, an Argonne Distinguished Fellow in the High Energy Physics division. Proudfoot says that the effective utilization of Aurora will be key to ensuring that ATLAS continues delivering discoveries on a reasonable timescale. Because added computing capacity expands the range of analyses that can be done, systems like Aurora may even enable analyses not yet envisioned.

    The ESP project, which builds on the progress made by Childers and his team, has three components that will help prepare Aurora for effective use in the search for new physics: enable ATLAS workflows for efficient end-to-end production on Aurora, optimize ATLAS software for parallel environments, and update algorithms for exascale machines.

    “The algorithms apply complex statistical techniques which are increasingly CPU-intensive and which become more tractable—and perhaps only possible—with the computing resources provided by exascale machines,” explains Proudfoot.

    In the years leading up to Aurora’s run, Proudfoot and his team, which includes collaborators from the ALCF and Lawrence Berkeley National Laboratory, aim to develop the workflow to run event generation, simulation, and reconstruction. Once Aurora becomes available in 2021, the group will bring their end-to-end workflow online.

    The stated goals of the ATLAS experiment—from searching for new particles to studying the Higgs boson—only scratch the surface of what this collaboration can do. Along the way to groundbreaking science advancements, the collaboration has developed technology for use in fields beyond particle physics, like medical imaging and clinical anesthesia.

    These contributions and the LHC’s quickly growing needs reinforce the importance of the work that LeCompte, Childers, Proudfoot, and their colleagues are doing with ALCF computing resources.

    “I believe DOE’s leadership computing facilities are going to play a major role in the processing and simulation of the future rounds of data that will come from the ATLAS experiment,” says LeCompte.

    This research is supported by the DOE Office of Science. ALCF computing time and resources were allocated through the ASCR Leadership Computing Challenge, the ALCF Data Science Program, and the Early Science Program for Aurora.

    See the full article here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science. For more visit http://www.anl.gov.

    About ALCF

    The Argonne Leadership Computing Facility’s (ALCF) mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

    We help researchers solve some of the world’s largest and most complex problems with our unique combination of supercomputing resources and expertise.

    ALCF projects cover many scientific disciplines, ranging from chemistry and biology to physics and materials science. Examples include modeling and simulation efforts to:

    Discover new materials for batteries
    Predict the impacts of global climate change
    Unravel the origins of the universe
    Develop renewable energy technologies

    Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science

    Argonne Lab Campus

     
  • richardmitnick 12:47 pm on September 4, 2018
    Tags: CERN HL-LHC-High-Luminosity LHC, CISE-NSF's Office of Advanced Cyberinfrastructure in the Directorate for Computer and Information Science and Engineering, IRIS-HEP-Institute for Research and Innovation in Software for High-Energy Physics, Molecular Sciences Software Institute and the Science Gateways Community Institute, MPS-NSF Division of Physics in the Directorate for Mathematical and Physical Sciences, SCAILFIN-Scalable Cyberinfrastructure for Artificial Intelligence and Likelihood-Free Inference

    From University of Illinois Physics: “University of Illinois part of $25 million software institute to enable discoveries in high-energy physics” 

    September 4, 2018
    Siv Schwink

    A data visualization from a simulation of a collision between two protons that will occur at the High-Luminosity Large Hadron Collider (HL-LHC). On average, up to 200 collisions will be visible in the collider’s detectors at the same time. Shown here is a design for the Inner Tracker of the ATLAS detector, one of the hardware upgrades planned for the HL-LHC. Image courtesy of the ATLAS Experiment © 2018 CERN

    CERN/ATLAS detector

    Today, the National Science Foundation (NSF) announced its launch of the Institute for Research and Innovation in Software for High-Energy Physics (IRIS-HEP).

    The $25 million software-focused institute will tackle the unprecedented torrent of data that will come from the high-luminosity running of the Large Hadron Collider (LHC), the world’s most powerful particle accelerator located at CERN near Geneva, Switzerland.

    CERN LHC: site map, tunnel, and particle collisions

    The High-Luminosity LHC (HL-LHC) will provide scientists with a unique window into the subatomic world to search for new phenomena and to study the properties of the Higgs boson in great detail.

    CERN CMS and ATLAS Higgs event displays

    The 2012 discovery at the LHC of the Higgs boson—a particle central to our fundamental theory of nature—led to the Nobel Prize in physics a year later and has provided scientists with a new tool for further discovery.

    The HL-LHC will begin operations around 2026, continuing into the 2030s. It will produce more than 1 billion particle collisions every second, from which only a tiny fraction will reveal new science, because the phenomena that physicists want to study have a very low probability per collision of occurring. The HL-LHC’s tenfold increase in luminosity—a measure of the number of particle collisions occurring in a given amount of time—will enable physicists to study familiar processes at an unprecedented level of detail and observe rare new phenomena present in nature.

    But the increased luminosity also leads to more complex collision data. The tenfold increase in required data processing and storage cannot be met without new software tools for intelligent data filtering that record only the most interesting collision events, enabling scientists to analyze the data more efficiently.
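
    The filtering idea itself can be illustrated with a toy selection like the one below, which streams synthetic collision events and keeps only those passing an “interesting” cut before anything reaches storage. The event fields and thresholds are invented; real trigger and filtering systems are far more sophisticated.

        # Toy sketch of intelligent event filtering: keep only events that pass a
        # simple selection, discarding the rest before they reach storage.
        import random

        def read_event_stream(n):
            # Stand-in for the detector read-out: most events are routine,
            # a few carry high-energy signatures worth keeping.
            for i in range(n):
                yield {"id": i,
                       "max_jet_et": random.expovariate(1 / 40.0),  # GeV, synthetic
                       "n_leptons": random.choices([0, 1, 2], weights=[90, 9, 1])[0]}

        def is_interesting(event):
            # Hypothetical selection: a very hard jet or at least two leptons.
            return event["max_jet_et"] > 200.0 or event["n_leptons"] >= 2

        kept = [e for e in read_event_stream(1_000_000) if is_interesting(e)]
        print(f"kept {len(kept)} of 1,000,000 events ({100 * len(kept) / 1e6:.2f}%)")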

    Over the next five years, IRIS-HEP will focus on developing innovative software for use in particle physics research with the HL-LHC as the key science driver. It will also create opportunities for training and education in related areas of computational and data science and outreach to the general public. The institute will also work to increase participation from women and minorities who are underrepresented in high-energy physics research.

    IRIS-HEP brings together multidisciplinary teams of researchers and educators from 17 universities, including Mark Neubauer, a professor of physics at the University of Illinois at Urbana-Champaign and a faculty affiliate with the National Center for Supercomputing Applications (NCSA) in Urbana.

    Neubauer is a member of the ATLAS Experiment, which generates and analyzes data from particle collisions at the LHC. Neubauer will serve on the IRIS-HEP Executive Committee and will coordinate the institute’s activities to develop and evolve its strategic vision.

    Neubauer, along with colleagues Peter Elmer (Princeton) and Michael Sokoloff (Cincinnati), led a community-wide, NSF-funded effort to conceptualize the institute and was a key member of the group that developed the IRIS-HEP proposal. The conceptualization process, which involved 18 workshops over the last two years, brought together key national and international partners from the high-energy physics, computer science, industry, and data-science communities to produce more than eight community position papers, most notably a strategic plan for the institute and a roadmap for HEP software and computing R&D over the next decade. The participants reviewed two decades of approaches to LHC data processing and analysis and developed strategies to address the challenges and opportunities that lie ahead. IRIS-HEP emerged from that effort.

    “IRIS-HEP will serve as a new intellectual hub of software development for the international high-energy physics community,” comments Neubauer. “The founding of this Institute will do much more than fund software development to support the HL-LHC science; it will provide fertile ground for new ideas and innovation, empower early-career researchers interested in software and computing aspects of data-enabled science through mentoring and training to support their professional development, and will redefine the traditional boundaries of the high-energy physics community.”

    Neubauer will receive NSF funding through IRIS-HEP to contribute to the institute’s efforts in software research and innovation. He plans to collaborate with Daniel S. Katz, NCSA’s assistant director for scientific software and applications, to put together a team to research new approaches and systems for data analysis and innovative algorithms that apply machine learning and other approaches to accelerate computation on modern computing architectures.

    In related research also beginning this fall semester, Neubauer and Katz, through a separate NSF award with Kyle Cranmer (NYU), Heiko Mueller (NYU), and Michael Hildreth (Notre Dame), will collaborate on the Scalable Cyberinfrastructure for Artificial Intelligence and Likelihood-Free Inference (SCAILFIN) project. SCAILFIN aims to maximize the potential of artificial intelligence and machine learning to improve new-physics searches at the LHC, while addressing current issues in software and data sustainability by making data analyses more reusable and reproducible.
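
    Likelihood-free (simulation-based) inference can be illustrated, in its most bare-bones form, by rejection sampling against a simulator: draw parameters, simulate, and keep only the draws whose output resembles the observed data. The sketch below uses a trivial Gaussian stand-in for the physics; SCAILFIN targets far more capable machine-learning-based versions of this idea.

        # Bare-bones likelihood-free inference by rejection sampling (ABC):
        # when the likelihood cannot be written down but the process can be
        # simulated, keep the parameter draws whose simulated output looks
        # like the observed data. The "physics" is a trivial Gaussian stand-in.
        import numpy as np

        rng = np.random.default_rng(1)

        def simulator(theta, n=200):
            # Stand-in for an expensive physics simulation with no tractable likelihood.
            return rng.normal(loc=theta, scale=1.0, size=n)

        observed = simulator(theta=2.5)      # pretend this is the measured data
        obs_summary = observed.mean()

        accepted = []
        for _ in range(20_000):
            theta = rng.uniform(0.0, 5.0)    # draw from the prior
            if abs(simulator(theta).mean() - obs_summary) < 0.05:
                accepted.append(theta)       # accept if the summaries match closely

        print(f"posterior estimate of theta: {np.mean(accepted):.2f} "
              f"(from {len(accepted)} accepted draws)")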

    Katz says he is looking forward to delving into these projects: “How to build tools that make more sense of the data, how to make the software more sustainable so there is less rewriting, how to write software that is portable across different systems and compatible with future hardware changes—these are tremendous challenges. And these questions really are timely. They fit into the greater dialogue that is ongoing in both the computer science and the information science communities. I’m excited for this opportunity to meld the most recent work from these complementary fields together with work in physics.”

    Neubauer concludes, “The quest to understand the fundamental building blocks of nature and their interactions is one of the oldest and most ambitious of human scientific endeavors. The HL-LHC will represent a big step forward in this quest and is a top priority for the US particle physics community. As is common in frontier-science experiments pushing at the boundaries of knowledge, it comes with daunting challenges. The LHC experiments are making large investments to upgrade their detectors to be able to operate in the challenging HL-LHC environment.

    “A significant investment in R&D for software used to acquire, manage, process and analyze the huge volume of data that will be generated during the HL-LHC era will be critical to maximize the scientific return on investment in the accelerator and detectors. This is not a problem that could be solved by gains from hardware technology evolution or computing resources alone. The institute will support early-career scientists to develop innovative software over the next five to ten years, to get us where we need to be to do our science during the HL-LHC era. I am elated to see such a large investment by the NSF in this area for high-energy physics.”

    IRIS-HEP is co-funded by NSF’s Office of Advanced Cyberinfrastructure in the Directorate for Computer and Information Science and Engineering (CISE) and the NSF Division of Physics in the Directorate for Mathematical and Physical Sciences (MPS). IRIS-HEP is the latest NSF contribution to the 40-nation LHC effort. It is the third OAC software institute, following the Molecular Sciences Software Institute and the Science Gateways Community Institute.

    See the full University of Illinois article on this subject here.
    See the full Cornell University article on this subject here.
    See the full Princeton University article on this subject here.

    Please help promote STEM in your local schools.

    Stem Education Coalition

    U Illinois campus

    The University of Illinois at Urbana-Champaign community of students, scholars, and alumni is changing the world.

    With our land-grant heritage as a foundation, we pioneer innovative research that tackles global problems and expands the human experience. Our transformative learning experiences, in and out of the classroom, are designed to produce alumni who desire to make a significant, societal impact.

     