Tagged: NERSC-National Energy Research Scientific Computing Center

  • richardmitnick 4:33 pm on July 1, 2021
    Tags: "Department of Energy Awards 22 Million Node-Hours of Computing Time to Support Cutting-Edge Research", Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC) program, , , , , , NERSC-National Energy Research Scientific Computing Center, , , ,   

    From U.S. Department of Energy Office of Science: “Department of Energy Awards 22 Million Node-Hours of Computing Time to Support Cutting-Edge Research” 


    From U.S. Department of Energy Office of Science

    Department of Energy Awards 22 Million Node-Hours of Computing Time to Support Cutting-Edge Research
    The U.S. Department of Energy’s (DOE) Office of Science today announced the award of 22 million node-hours of computing time to 41 scientific projects under the Advanced Scientific Computing Research (ASCR) Leadership Computing Challenge (ALCC) program. The projects, with applications ranging from nuclear forensics to advanced energy systems to climate change, will use DOE supercomputers to uncover insights about scientific problems that would be impossible to obtain through experimental approaches alone.

    Selected projects will receive computational time, also known as node-hours, on one or more DOE supercomputers to conduct research that would take years to complete on a standard desktop computer. A node-hour is the use of one node (or computing unit) on a supercomputer for one hour. A project allocated 1,000,000 node-hours could run a simulation on 1,000 compute nodes for 1,000 hours – vastly reducing the total time required to complete the simulation. Three DOE supercomputers – the Oak Ridge Leadership Computing Facility’s “Summit” system at DOE’s Oak Ridge National Laboratory (US), the Argonne Leadership Computing Facility’s “Theta” system at DOE’s Argonne National Laboratory (US), and the National Energy Research Scientific Computing Center’s “Cori” system at DOE’s Lawrence Berkeley National Laboratory (US) – are among the fastest computers in the nation. “Summit” currently ranks as the second-fastest computer in the world.
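
    To make the arithmetic concrete, here is a minimal sketch in C – with illustrative numbers only, not figures from any specific award – of how an allocation converts to wall-clock time at a given node count:

    ```c
    /* Node-hour bookkeeping: node-hours / nodes = wall-clock hours.
       The numbers below are illustrative, not from a specific award. */
    #include <stdio.h>

    int main(void)
    {
        const double allocation_node_hours = 1.0e6; /* example allocation */
        const double nodes_per_job = 1000.0;        /* nodes used at once */

        double wall_clock_hours = allocation_node_hours / nodes_per_job;

        printf("%.0f node-hours on %.0f nodes buys %.0f wall-clock hours\n",
               allocation_node_hours, nodes_per_job, wall_clock_hours);
        return 0;
    }
    ```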

    “The Department of Energy is committed to providing the advanced scientific tools needed to move U.S. science forward. Supercomputers allow us to explore scientific problems in ways we haven’t been able to in the past – modeling dangerous, large, or costly experiments, safely and quickly,” said Barb Helland, DOE Associate Director for DOE Office of Science Advanced Scientific Computing Research (US). “The ALCC awards are just one example of how the DOE’s investments in supercomputing benefit researchers all across our nation to advance our nation’s scientific competitiveness, accelerate clean energy options, and to understand and mitigate the impacts of climate change.”

    The ASCR Leadership Computing Challenge (ALCC) program supports efforts to broaden community access to DOE’s computing facilities. ALCC focuses on high-risk, high-payoff simulations in areas directly related to the DOE mission and seeks to broaden the community of researchers who use DOE’s advanced computing resources. The 2021 awardees receive computing time at DOE’s high-performance computing facilities at Oak Ridge National Laboratory in Tennessee, Argonne National Laboratory in Illinois, and the National Energy Research Scientific Computing Center (US) at Lawrence Berkeley National Laboratory in California. Of the 41 projects, three are from industry, 19 are led by universities, and 19 are led by national laboratories.
    The projects cover a variety of topics, including:
    • Climate change research, including improving climate models, studying the effects of turbulence in oceans, characterizing the impact of low-level jets on wind farms, improving the simulation of biochemical processes, and simulating clouds on a global scale.
    • Energy research, including AI and deep learning prediction for fusion energy systems, modeling materials for energy storage, studying wind turbine mechanics, and research into the properties of lithium battery electrolytes.
    • Medical research, such as deep learning for medical natural language processing, modeling cancer screening strategies, and modeling cancer initiation pathways.
    Learn more about the 2021 ALCC awardees by visiting the ASCR website. The ALCC application period will re-open for the 2022-23 allocation cycle in Fall 2021.

    See the full article here.


    Please help promote STEM in your local schools.

    Stem Education Coalition
    The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions.

    Science Programs Organization

    The Office of Science manages its research portfolio through six program offices:

    Advanced Scientific Computing Research
    Basic Energy Sciences
    Biological and Environmental Research
    Fusion Energy Sciences
    High Energy Physics
    Nuclear Physics

    The Science Programs organization also includes the following offices:

    The Department of Energy’s Small Business Innovation Research and Small Business Technology Transfer Programs, which the Office of Science manages for the Department;
    The Workforce Development for Teachers and Students program, which sponsors programs that help develop the next generation of scientists and engineers to support the DOE mission, administer programs, and conduct research; and
    The Office of Project Assessment provides independent advice to the SC leadership regarding those activities essential to constructing and operating major research facilities.

     
  • richardmitnick 9:22 am on April 6, 2018
    Tags: Hackathon for research and computational scientists code developers and computing hardware experts, NERSC-National Energy Research Scientific Computing Center

    From BNL: “Accelerating Scientific Discovery Through Code Optimization on Many-Core Processors” 


    April 6, 2018
    Ariana Tantillo
    atantillo@bnl.gov

    Brookhaven Lab hosted a hackathon for research and computational scientists, code developers, and computing hardware experts to optimize scientific application codes for high-performance computing.

    At the Brookhaven Lab-hosted Xeon Phi hackathon, (left to right) mentor Bei Wang, a high-performance-computing software engineer at Princeton University; mentor Hideki Saito, a principal engineer at Intel; and participant Han Aung, a graduate student in the Department of Physics at Yale University, optimize an application code that simulates the formation of structures in the universe. Aung and his fellow team members sought to increase the numerical resolution of their simulations so they can more realistically model the astrophysical processes in galaxy clusters.

    Supercomputers are enabling scientists to study problems they could not otherwise tackle – from understanding what happens when two black holes collide, to figuring out how to make tiny carbon nanotubes that clean up oil spills, to determining the binding sites of proteins associated with cancer. Such problems involve datasets that are too large or complex for human analysis.

    The Intel Xeon Phi processor is patterned using a 14-nanometer (nm) lithography process. The 14 nm refers to the size of the transistors on the chip – only about seven times the roughly 2-nm width of a DNA double helix.

    In 2016, Intel released the second generation of its many-integrated-core architecture targeting high-performance computing (HPC): the Intel Xeon Phi processor (formerly code-named “Knights Landing”). With up to 72 processing units, or cores, per chip, Xeon Phi is designed to carry out multiple calculations at the same time (in parallel). This architecture is ideal for handling the large, complex computations that are characteristic of scientific applications.

    Other features that make Xeon Phi appealing for such applications include its fast memory access; its ability to simultaneously execute multiple processes, or threads, that follow the same instructions while sharing some computing resources (multithreading); and its support of efficient vectorization, a form of parallel programming in which the processor performs the same operation on multiple elements (vectors) of independent data in a single processing cycle. All of these features can greatly enhance performance, enabling scientists to solve problems more quickly and with greater efficiency than ever before.
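
    As a minimal illustration of those two features – a generic OpenMP sketch assumed for this post, not code from any of the systems described here – the loop below is split across threads by “parallel for,” while “simd” asks the compiler to vectorize each thread’s chunk:

    ```c
    /* Multithreading plus vectorization with OpenMP.
       Compile with, e.g., gcc -O3 -fopenmp saxpy.c */
    #include <stddef.h>
    #include <stdio.h>

    /* y = a*x + y over independent elements: threads share the iteration
       space, and each thread's portion can be executed with vector
       instructions that process several elements per cycle. */
    static void saxpy(float a, const float *restrict x,
                      float *restrict y, size_t n)
    {
        #pragma omp parallel for simd
        for (size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        enum { N = 8 };
        float x[N], y[N];
        for (size_t i = 0; i < N; ++i) { x[i] = (float)i; y[i] = 1.0f; }
        saxpy(2.0f, x, y, N);
        printf("y[7] = %.1f\n", y[7]); /* expect 2*7 + 1 = 15.0 */
        return 0;
    }
    ```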

    NERSC’s Cray “Cori II” supercomputer at Lawrence Berkeley National Laboratory, named after Gerty Cori, the first American woman to win a Nobel Prize in science.

    Currently, several supercomputers in the United States are based on Intel’s Xeon Phi processors, including Cori at the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy (DOE) Office of Science User Facility at Lawrence Berkeley National Laboratory; Theta at the Argonne Leadership Computing Facility, another DOE Office of Science User Facility; and Stampede2 at the University of Texas at Austin’s Texas Advanced Computing Center. Smaller-scale systems, such as the computing cluster at DOE’s Brookhaven National Laboratory, also rely on this architecture. But to take full advantage of its capabilities, users need to adapt and optimize their applications accordingly.

    ANL ALCF Theta Cray XC40 supercomputer

    TACC DELL EMC Stampede2 supercomputer

    To facilitate that process, Brookhaven Lab’s Computational Science Initiative (CSI) hosted a five-day coding marathon, or hackathon, in partnership with the High-Energy Physics (HEP) Center for Computational Excellence—which Brookhaven joined last July—and collaborators from the SOLLVE software development project funded by DOE’s Exascale Computing Project.

    “The goal of this hands-on workshop was to help participants optimize their application codes to exploit the different levels of parallelism and memory hierarchies in the Xeon Phi architecture,” said CSI computational scientist Meifeng Lin, who co-organized the hackathon with CSI Director Kerstin Kleese van Dam, CSI Computer Science and Mathematics Department Head Barbara Chapman, and CSI computational scientist Martin Kong. “By the end of the hackathon, the participants had not only made their codes run more efficiently on Xeon Phi–based systems, but also learned about strategies that could be applied to other CPU [central processing unit]-based systems to improve code performance.”

    Last year, Lin was part of the committee that organized Brookhaven’s first hackathon, at which teams learned how to program their scientific applications on computing devices called graphics processing units (GPUs). As with that hackathon, this one was open to any current or potential user of the hardware. In the end, five teams of three to four members each – representing Brookhaven Lab, the Institute of Mathematical Sciences in India, McGill University, Stony Brook University, University of Miami, University of Washington, and Yale University – were accepted to participate in the Intel Xeon Phi hackathon.

    Xinmin Tian, a senior principal engineer at Intel, gives a presentation on vector programming to help the teams optimize their scientific codes for the Xeon Phi processors.

    From February 26 through March 2, nearly 20 users of Xeon Phi–based supercomputers came together at Brookhaven Lab to be mentored by computing experts from Brookhaven and Lawrence Berkeley national labs, Indiana University, Princeton University, University of Bielefeld in Germany, and University of California–Berkeley. The hackathon organizing committee selected the mentors based on their experience in Xeon Phi optimization and shared-memory parallel programming with the OpenMP (Open Multi-Processing) industry standard.

    Participants did not need to have prior Xeon Phi experience to attend. Several weeks prior to the hackathon, the teams were assigned to mentors with scientific backgrounds relevant to the respective application codes. The mentors and teams then held a series of meetings to discuss the limitations of their existing codes and goals at the hackathon. In addition to their specific mentors, the teams had access to four Intel technical experts with backgrounds in programming and scientific domains. These Intel experts served as floating mentors during the event to provide expertise in hardware architecture and performance optimization.

    “The hackathon provided an excellent opportunity for application developers to talk and work with Intel experts directly,” said mentor Bei Wang, an HPC software engineer at Princeton University. “The result was a significant speed-up in the time it takes to optimize code, thus helping application teams achieve their science goals at a faster pace. Events like this hackathon are of great value to both scientists and vendors.”

    The five codes that were optimized cover a wide variety of applications:

    A code for tracking particle-device and particle-particle interactions that has the potential to be used as the design platform for future particle accelerators
    A code for simulating the evolution of the quark-gluon plasma (a hot, dense state of matter thought to have been present for a few millionths of a second after the Big Bang) produced through high-energy collisions at Brookhaven’s Relativistic Heavy Ion Collider (RHIC)—a DOE Office of Science User Facility
    An algorithm for sorting records from databases, such as DNA sequences, to identify inherited genetic variations and disorders
    A code for simulating the formation of structures in the universe, particularly galaxy clusters
    A code for simulating the interactions between quarks and gluons in real time

    “Large-scale numerical simulations are required to describe the matter created at the earliest times after the collision of two heavy ions,” said team member Mark Mace, a PhD candidate in the Nuclear Theory Group in the Physics and Astronomy Department at Stony Brook University and the Nuclear Theory Group in the Physics Department at Brookhaven Lab. “My team had a really successful week—we were able to make our code run much faster (20x), and this improvement is a game changer as far as the physics we can study with the resources we have. We will now be able to more accurately describe the matter created after heavy-ion collisions, study a larger array of macroscopic phenomena observed in such collisions, and make quantitative predictions for experiments at RHIC and the Large Hadron Collider in Europe.”

    “With the new memory subsystem recently released by Intel, we can order a huge number of elements faster than with conventional memory because more data can be transferred at a time,” said team member Sergey Madaminov, who is pursuing his PhD in computer science in the Computer Architecture at Stony Brook (COMPAS) Lab at Stony Brook University. “However, this high-bandwidth memory is physically located close to the processor, limiting its capacity. To mitigate this limitation, we apply smart algorithms that split data into smaller chunks that can then fit into high-bandwidth memory and be sorted inside it. At the hackathon, our goal was to demonstrate our theoretical results—our algorithms speed up sorting—in practice. We ended up finding many weak places in our code and were able to fix them with the help of our mentor and experts from Intel, improving our initial code more than 40x. With this improvement, we expect to sort much larger datasets faster.”

    One hackathon team worked on taking advantage of the high-bandwidth memory in Xeon Phi processors to optimize their code to more quickly sort datasets of increasing size. The team members applied smart algorithms that split the original data into “blocks” (equally sized chunks), which are moved into “buckets” (sets of elements) that can fit inside high-bandwidth memory for sorting, as shown in the illustration above.
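
    A minimal sketch of that block-and-bucket pattern appears below. It assumes the memkind library’s hbwmalloc interface for allocating from Xeon Phi’s on-package high-bandwidth memory (MCDRAM); the bucket size, data, and qsort-based sort are illustrative stand-ins, not the team’s actual algorithm:

    ```c
    /* Sort one bucket inside high-bandwidth memory (MCDRAM) using the
       memkind hbwmalloc API. Link with -lmemkind. Illustrative only. */
    #include <hbwmalloc.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int compare_ints(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* Copy a bucket that fits in MCDRAM into high-bandwidth memory,
       sort it there, and copy the result back to ordinary DRAM. */
    static void sort_bucket_in_hbm(int *bucket, size_t n)
    {
        int *hbm = hbw_malloc(n * sizeof *hbm);
        if (!hbm) { /* fall back to sorting in ordinary DRAM */
            qsort(bucket, n, sizeof *bucket, compare_ints);
            return;
        }
        memcpy(hbm, bucket, n * sizeof *hbm);
        qsort(hbm, n, sizeof *hbm, compare_ints);
        memcpy(bucket, hbm, n * sizeof *hbm);
        hbw_free(hbm);
    }

    int main(void)
    {
        int bucket[] = { 5, 2, 9, 1, 7 };
        sort_bucket_in_hbm(bucket, 5);
        for (int i = 0; i < 5; ++i) printf("%d ", bucket[i]); /* 1 2 5 7 9 */
        putchar('\n');
        return 0;
    }
    ```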

    According to Lin, the hackathon was highly successful—all five teams improved the performance of their codes, achieving from 2x to 40x speedups.

    “It is expected that Intel Xeon Phi–based computing resources will continue operating until the next-generation exascale computers come online,” said Lin. “It is important that users can make these systems work to their full potential for their specific applications.”

    Follow @BrookhavenLab on Twitter or find us on Facebook.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition
    BNL Campus

    BNL RHIC Campus

    BNL/RHIC Star Detector

    BNL RHIC PHENIX

    One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. The Laboratory’s almost 3,000 scientists, engineers, and support staff are joined each year by more than 5,000 visiting researchers from around the world. Brookhaven is operated and managed for DOE’s Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit, applied science and technology organization.

     
  • richardmitnick 12:55 pm on June 26, 2017
    Tags: 1T’-WTe2, NERSC-National Energy Research Scientific Computing Center

    From LBNL: “2-D Material’s Traits Could Send Electronics R&D Spinning in New Directions” 


    Berkeley Lab

    June 26, 2017
    Glenn Roberts Jr.
    geroberts@lbl.gov
    (510) 486-5582

    This animated rendering shows the atomic structure of a 2-D material known as 1T’-WTe2 that was created and studied at Berkeley Lab’s Advanced Light Source. (Credit: Berkeley Lab.)

    An international team of researchers, working at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, fabricated an atomically thin material and measured its exotic and durable properties that make it a promising candidate for a budding branch of electronics known as “spintronics.”

    The material – known as 1T’-WTe2 – bridges two flourishing fields of research: so-called 2-D materials, which include monolayer materials such as graphene that behave in different ways than their thicker forms; and topological materials, in which electrons can zip around in predictable ways with next to no resistance, regardless of defects that would ordinarily impede their movement.

    At the edges of this material, the spin of electrons – a particle property that functions a bit like a compass needle pointing either north or south – and their momentum are closely tied and predictable.
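
    In the textbook picture of a 2-D topological insulator (the quantum spin Hall state – standard theory, not a result specific to this paper), each edge carries two counter-propagating channels whose spin is locked to the direction of motion, with a linear dispersion near the crossing point:

    ```latex
    % Idealized helical edge states: spin-up and spin-down electrons
    % travel in opposite directions along the edge.
    E_{\uparrow}(k) \approx +\hbar v\,k , \qquad
    E_{\downarrow}(k) \approx -\hbar v\,k
    ```

    Here v is the edge-state velocity and k the momentum along the edge; reversing an electron’s direction would require flipping its spin, which is why ordinary, non-magnetic defects do not backscatter these edge electrons.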

    A scanning tunneling microscopy image of a 2-D material created and studied at Berkeley Lab’s Advanced Light Source (orange, background). In the upper right corner, the blue dots represent the layout of tungsten atoms and the red dots represent tellurium atoms. (Credit: Berkeley Lab.)

    This latest experimental evidence could elevate the material’s use as a test subject for next-gen applications, such as a new breed of electronic devices that manipulate its spin property to carry and store data more efficiently than present-day devices. These traits are fundamental to spintronics.

    The material is called a topological insulator because its interior does not conduct electricity; its electrical conductivity (the flow of electrons) is restricted to its edges.

    “This material should be very useful for spintronics studies,” said Sung-Kwan Mo, a physicist and staff scientist at Berkeley Lab’s Advanced Light Source (ALS) who co-led the study, published today in Nature Physics.

    LBNL/ALS

    “We’re excited about the fact that we have found another family of materials where we can both explore the physics of 2-D topological insulators and do experiments that may lead to future applications,” said Zhi-Xun Shen, a professor in Physical Sciences at Stanford University and the Advisor for Science and Technology at SLAC National Accelerator Laboratory who also co-led the research effort.

    “This general class of materials is known to be robust and to hold up well under various experimental conditions, and these qualities should allow the field to develop faster,” he added.

    The material was fabricated and studied at the ALS, an X-ray research facility known as a synchrotron. Shujie Tang, a visiting postdoctoral researcher at Berkeley Lab and Stanford University and a co-lead author of the study, was instrumental in growing 3-atom-thick crystalline samples of the material in a highly purified, vacuum-sealed compartment at the ALS, using a process known as molecular beam epitaxy.

    The high-purity samples were then studied at the ALS using a technique known as ARPES (or angle-resolved photoemission spectroscopy), which provides a powerful probe of materials’ electron properties.
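
    In outline (standard ARPES kinematics, not specific to this study), the technique converts each photoelectron’s measured kinetic energy and emission angle into a binding energy and an in-plane crystal momentum:

    ```latex
    % Photoemission kinematics: energy conservation gives the binding
    % energy; the emission angle gives the in-plane momentum.
    E_{\mathrm{B}} = h\nu - \phi - E_{\mathrm{kin}} , \qquad
    k_{\parallel} = \frac{\sqrt{2 m_e E_{\mathrm{kin}}}}{\hbar}\,\sin\theta
    ```

    where hν is the photon energy and φ the work function; mapping intensity as a function of binding energy and in-plane momentum traces out the material’s band structure.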

    Beamline 10.0.1 at Berkeley Lab’s Advanced Light Source enables researchers to both create and study atomically thin materials. (Credit: Roy Kaltschmidt/Berkeley Lab.)

    “After we refined the growth recipe, we measured it with ARPES. We immediately recognized the characteristic electronic structure of a 2-D topological insulator,” Tang said; the signature matched theoretical predictions. “We were the first ones to perform this type of measurement on this material.”

    But because the conducting part of this material, at its outermost edge, is only a few nanometers wide – thousands of times narrower than the X-ray beam’s focus – it was difficult to positively identify all of the material’s electronic properties.

    So collaborators at UC Berkeley performed additional measurements at the atomic scale using a technique known as STM, or scanning tunneling microscopy. “STM measured its edge state directly, so that was a really key contribution,” Tang said.

    The research effort, which began in 2015, involved more than two dozen researchers in a variety of disciplines. The research team also benefited from computational work at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC).

    NERSC Cray Cori II supercomputer

    LBL NERSC Cray XC30 Edison supercomputer

    Two-dimensional materials have unique electronic properties that are considered key to adapting them for spintronics applications, and there is a very active worldwide R&D effort focused on tailoring these materials for specific uses by selectively stacking different types.

    “Researchers are trying to sandwich them on top of each other to tweak the material as they wish – like Lego blocks,” Mo said. “Now that we have experimental proof of this material’s properties, we want to stack it up with other materials to see how these properties change.”

    A typical problem in creating such designer materials from atomically thin layers is that the layers tend to have nanoscale defects that can be difficult to eliminate and that can affect performance. But because 1T’-WTe2 is a topological insulator, its electronic properties are by nature resilient.

    “At the nanoscale it may not be a perfect crystal,” Mo said, “but the beauty of topological materials is that even when you have less than perfect crystals, the edge states survive. The imperfections don’t break the key properties.”

    Going forward, researchers aim to develop larger samples of the material and to discover how to selectively tune and accentuate specific properties. Beyond its topological properties, the material has “sister materials” with similar properties – also studied by the research team – that are known to be light-sensitive and to have useful properties for solar cells and for optoelectronics, which control light for use in electronic devices.

    The ALS and NERSC are DOE Office of Science User Facilities. Researchers from Stanford University, the Chinese Academy of Sciences, Shanghai Tech University, POSTECH in Korea, and Pusan National University in Korea also participated in this study. This work was supported by the Department of Energy’s Office of Science, the National Science Foundation, the National Science Foundation of China, the National Research Foundation (NRF) of Korea, and the Basic Science Research Program in Korea.

    See the full article here.

    Please help promote STEM in your local schools.


    Stem Education Coalition

    A U.S. Department of Energy National Laboratory Operated by the University of California


     